We're looking for a backend software engineer with strong data analysis skills to join our camera fleet management team. You'll build the data infrastructure and analytical tools that power our safe release operations across more than a million camera devices. This role combines traditional backend engineering with data pipeline development, log analysis, and metrics-driven insights. Camera firmware releases include critical updates like new AI models, and understanding their impact requires sophisticated data analysis at scale. You'll develop the pipelines, dashboards, and analytical tools that help us detect anomalies, measure release health, and ensure every deployment is successful. Your work will directly support data-driven decision making for releases that impact our customers and our reputation.
Job Responsibilities:
Build data pipelines: Design and implement data workflows using technologies like Kafka, Firehose, or Spark to process release metrics and device telemetry at scale
Develop analytical tools: Create Python-based analysis tools using pandas and SQL to identify release issues, detect anomalies, and measure fleet health
High-volume log analysis: Build systems to ingest, process, and analyze logs from millions of devices using technologies like OpenSearch, text clustering, and AI-based techniques
Create monitoring infrastructure: Develop Grafana dashboards and alerts that surface critical metrics and anomalies in real-time
Support release operations: Provide data-driven insights during releases, helping the team make informed decisions about rollout speed and risk
Design test infrastructure: Build test bench setups and CI pipelines that validate releases before they reach production
Query and optimize: Write efficient SQL queries against timeseries databases to extract insights from large-scale device data
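To give a flavor of the anomaly detection and pandas-based analysis described above (a minimal sketch with invented sample data, not the team's actual stack or method), one simple approach is a rolling z-score over per-device telemetry, comparing each point against the statistics of the preceding window:

```python
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 5, threshold: float = 3.0) -> pd.Series:
    """Flag points that deviate from the mean of the *preceding* window
    by more than `threshold` rolling standard deviations (z-score test).
    Shifting by one keeps the anomaly itself out of its own baseline."""
    baseline = series.shift(1).rolling(window, min_periods=window)
    z = (series - baseline.mean()) / baseline.std()
    return z.abs() > threshold

# Invented example: hourly crash counts reported by one camera device.
crashes = pd.Series([2, 3, 2, 4, 3, 2, 3, 2, 40, 3, 2])
print(crashes[flag_anomalies(crashes)])  # only the spike at index 8 is flagged
```

In production this kind of check would run per device and per metric over pipeline output rather than a hand-built series, but the shape of the computation is the same.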
Requirements:
BS/MS in Computer Science (or similar degree)
3+ years of industry experience in distributed software engineering
Strong Python skills: Proficiency in Python for data analysis, particularly with libraries like pandas
SQL expertise: Experience writing complex SQL queries, including queries for time-series analysis
Data pipeline experience: Familiarity with pipeline technologies like Kafka, Firehose, or Spark
Log analysis at scale: Experience with high-volume log analysis technologies such as OpenSearch, text clustering, or AI-based log analysis techniques
Timeseries databases: Experience working with timeseries databases and temporal data
Metrics & observability: Hands-on experience with Grafana or similar monitoring tools
Anomaly detection: Understanding of anomaly detection techniques and their practical application
Coding-based analysis: Preference for solving problems through code rather than manual analysis
Must be willing and able to work onsite five days per week.
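As a small illustration of the SQL and time-series skills listed above (a self-contained sketch using an in-memory SQLite database and an invented schema, not the team's real timeseries store), a typical release-health query buckets raw telemetry into time windows and aggregates per device:

```python
import sqlite3

# Invented schema: a tiny in-memory stand-in for a timeseries database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (device_id TEXT, ts TEXT, temp_c REAL)")
conn.executemany(
    "INSERT INTO telemetry VALUES (?, ?, ?)",
    [
        ("cam-1", "2024-01-01T10:05", 41.0),
        ("cam-1", "2024-01-01T10:35", 43.0),
        ("cam-2", "2024-01-01T10:10", 39.0),
        ("cam-2", "2024-01-01T11:20", 55.0),
    ],
)

# Hourly max temperature per device: truncate ISO timestamps to the hour,
# then aggregate -- the basic shape of many fleet-health queries.
rows = conn.execute(
    """
    SELECT device_id, substr(ts, 1, 13) AS hour, MAX(temp_c) AS max_temp
    FROM telemetry
    GROUP BY device_id, hour
    ORDER BY device_id, hour
    """
).fetchall()
print(rows)
```

A dedicated timeseries database would replace the `substr` truncation with native time-bucketing functions, but the query structure carries over directly.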
Nice to have:
Experience with Go
Background in statistics or experimental design
Familiarity with A/B testing and statistical inference
Experience with CI/CD systems
Knowledge of test automation frameworks
Understanding of distributed systems
What we offer:
Healthcare programs that can be tailored to meet personal health and financial well-being needs: premiums are 100% covered for the employee under at least one plan and 80% of family premiums under all plans
Nationwide medical, vision and dental coverage
Health Savings Account (HSA) with annual employer contributions and a Flexible Spending Account (FSA) with tax-saving options
Expanded mental health support
Paid parental leave policy & fertility benefits
Time off to relax and recharge through our paid holidays, firmwide extended holidays, flexible PTO and personal sick time