We have a 6-9 month contract, with potential for contract-to-hire, for a skilled Data Engineer to design, build, and optimize scalable data pipelines and automation for analytics solutions in a modern cloud-based environment. This role is highly hands-on and focuses on developing reliable, performant data workflows using Spark, Python, SQL, and Databricks, while following strong software engineering best practices. You'll collaborate closely with data scientists, analytics teams, and platform engineers to deliver high-quality data solutions that support reporting, analytics, and machine learning initiatives. The role is focused on foundational work supporting the data solutions initiatives.
Job Responsibilities:
Design, develop, and maintain scalable data pipelines using Apache Spark and Databricks
Build and optimize data transformations using Python, PySpark, and SQL
Ensure data quality, reliability, and performance across batch and streaming workloads
Apply strong software engineering best practices, including unit testing, integration testing, and code reviews
Manage source control using GitHub and participate in CI/CD workflows
Collaborate with cross-functional teams to support analytics and ML use cases
Troubleshoot and resolve data pipeline and performance issues
Requirements:
3 or more years of experience in Spark
3 or more years of experience in Python
3 or more years of experience in SQL
5 or more years of experience in software engineering (unit tests, integration tests, GitHub, dependency management, CI/CD)