Scale's LLM post-training platform team builds our internal distributed framework for large language model training. The platform enables MLEs, researchers, data scientists, and operators to train and evaluate LLMs quickly and automatically, and it also serves as the underlying training framework for the data quality evaluation pipeline. Scale is uniquely positioned at the heart of the field of AI as an indispensable provider of training and evaluation data and end-to-end solutions for the ML lifecycle. You will work closely with Scale’s ML teams and researchers to build the foundational platform that supports all of our ML research and development. You will build and optimize the platform to enable our next-generation LLM training, inference, and data curation.
Job Responsibilities:
Build, profile, and optimize our training and inference framework
Collaborate with ML and research teams to accelerate their research and development, and enable them to develop the next generation of models and data curation
Research and integrate state-of-the-art technologies to optimize our ML system
Requirements:
Passionate about system optimization
Experience with multi-node LLM training and inference
Experience with developing large-scale distributed ML systems
Experience with post-training methods like RLHF/RLVR and related algorithms like PPO/GRPO etc.
Strong software engineering skills; proficient with frameworks and tools such as CUDA, PyTorch, transformers, FlashAttention, etc.
Strong written and verbal communication skills to operate in a cross-functional team environment
Nice to have:
Demonstrated expertise in post-training methods and/or next-generation use cases for large language models, such as instruction tuning, RLHF, tool use, reasoning, agents, and multimodality