Join our team as a Sr MLOps Engineer to help us bring the current and next generations of Pod ML models to life. You'll be part of a small team designing and implementing solutions with a high level of autonomy to bring our members better sleep. Your work will go directly to our fleet of existing Pods with low friction and a direct impact on the business. We are a fast-moving and fast-growing company, and we embrace individuals with a growth mindset and a strong desire to help us achieve our mission: improving people's lives through optimal sleep.
Job Responsibilities:
Pioneer Cutting-Edge Technology: Introduce and implement state-of-the-art ML technologies, integrating them into our products and processes to enable the future of health monitoring
End-to-End Ownership: Own the design and operation of robust ML infrastructure – building scalable data, model, and deployment pipelines that ensure reliable delivery of models to production
Cross-functional Collaboration: Partner with R&D, firmware, data, and backend teams to ensure ML inference operates reliably and scales to Pods everywhere
Optimize for Performance: Drive cost-effective, scalable, and high-performance ML systems by optimizing compute, storage, and deployment resources across training and inference
Enhance Tooling and Platforms: Develop tooling, microservices, and frameworks to streamline data processing, experimentation, and deployment
Effective Remote Communication: Thrive in a remote work environment, ensuring clear and direct communication
Requirements:
5+ years of software engineering experience with a focus on ML infrastructure, distributed systems, or large-scale data processing in Python (e.g., PyTorch, TensorFlow, or similar)
Hands-on experience with ML workflow orchestration and CI/CD pipelines for model deployment
Demonstrated success shipping ML models to production at scale, handling telemetry, monitoring, and feedback loops across large device fleets or user populations
Strong experience with AWS (Lambda, ECS, DynamoDB, CloudWatch) or equivalent cloud platforms for serving and monitoring ML systems
A fast-paced, collaborative, and iterative approach to tackling complex problems
Nice to have:
Expertise in real-time ML workflows and streaming systems (e.g., Kinesis, Kafka, Flink)
Demonstrated expertise in optimizing ML infrastructure for efficiency, latency, and cloud cost at scale
Understanding of secure ML operations, privacy practices, and compliance considerations, particularly for health-related or IoT data
Familiarity with health, wellness, or IoT domains, especially wearables or medical-grade devices
What we offer:
Equity participation
Periodic equity refreshes based on performance
Your own Pod
Full access to health, vision, and dental insurance for you and your dependents