At Luma, the Applied Research team brings our most advanced generative models to life. We sit at the intersection of Research and Product - bridging raw innovation with real creative tools used by millions. Our mission is to make Luma's multimodal foundation models more expressive, controllable, and reliable, turning frontier research into magical, production-ready experiences.
Job Responsibilities:
Develop and maintain model variants purpose-built for specific product features and partner applications - adapting architectures, datasets, and fine-tuning strategies
Drive continual improvements to Luma's core model-powered experiences, leading iterations that push quality, reliability, and creative depth across versions
Collaborate closely with Product, Research, and Design to translate creative intent and user feedback into refined model behavior, intuitive controls, and new capabilities
Build internal tools and workflows that accelerate model iteration and evaluation - enabling faster experimentation, deeper insight, and tighter feedback loops
Contribute to applied research in safety, authenticity, and control - spanning topics like moderation, watermarking, fairness, and color science
Requirements:
Strong engineering skills in Python and deep learning frameworks (preferably PyTorch), with comfort moving between research prototypes and production systems
Hands-on experience with modern visual generative models (diffusion, transformers, or related architectures)
Demonstrated ability to tune, refine, and deploy models in real products using human feedback and creative evaluation
Curiosity and passion for multimodal AI - understanding how models perceive, generate, and evolve in the wild
Nice to have:
Familiarity with large-scale training infrastructure (e.g., SLURM, Ray, or Kubernetes)