We are seeking a highly skilled and experienced Azure Data Engineer to join our data team. The ideal candidate will have over five years of professional experience and possess deep expertise in building, managing, and optimizing scalable data pipelines and solutions within the Microsoft Azure ecosystem. This role requires a strong focus on Databricks, Python, and SQL to deliver high-quality, reliable, and performant data products.
Job Responsibilities:
Design, develop, and implement robust and scalable ETL/ELT processes using Azure services and Databricks
Act as a subject matter expert for Databricks, leveraging its capabilities for large-scale data processing, advanced analytics, and machine learning workloads
Write, optimize, and maintain high-quality code primarily in Python and SQL for data transformation, cleaning, and aggregation
Utilize a comprehensive suite of Azure services including Azure Data Lake Storage (Gen2), Azure Synapse Analytics, Azure Data Factory, and Azure Key Vault to build and manage end-to-end data solutions
Demonstrate and apply strong working knowledge of Microsoft Fabric to unify data, analytics, and AI workloads, contributing to the modernization of our data platform
Refactor legacy code for improved performance, readability, and maintainability
Write and execute comprehensive unit tests to ensure the reliability and integrity of all data pipelines and code
Implement optimization techniques to significantly improve the performance and reduce the cost of existing and new data solutions, especially within Databricks and Synapse
Apply best practices for code versioning using tools like Git (e.g., GitHub, Azure DevOps) within a structured CI/CD environment
Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical specifications
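To make the transformation, cleaning, aggregation, and unit-testing duties above concrete, here is a minimal, framework-free Python sketch. In practice this logic would typically run as PySpark on Databricks; the function name and record fields are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

def clean_and_aggregate(rows):
    """Drop records with missing fields, normalize the region key,
    and aggregate revenue per region (cleaning + aggregation)."""
    totals = defaultdict(float)
    for row in rows:
        region = row.get("region")
        revenue = row.get("revenue")
        if region is None or revenue is None:
            continue  # cleaning step: skip incomplete records
        totals[region.strip().lower()] += float(revenue)
    return dict(totals)

def test_clean_and_aggregate():
    # A minimal unit test of the kind expected for every pipeline step.
    rows = [
        {"region": " EMEA ", "revenue": 100},
        {"region": "emea", "revenue": 50},
        {"region": None, "revenue": 10},   # incomplete: dropped
        {"region": "APAC", "revenue": 75},
    ]
    assert clean_and_aggregate(rows) == {"emea": 150.0, "apac": 75.0}
```

The same shape carries over to PySpark (`filter` for cleaning, `groupBy().agg()` for aggregation), with the test asserting on a small in-memory DataFrame.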
Requirements:
5+ years of hands-on experience as a Data Engineer, primarily focused on the Microsoft Azure data stack
Expert-level proficiency in Databricks (Spark SQL/PySpark), Python, and SQL
Strong, practical knowledge of core Azure data services, including Azure Data Lake Storage (Gen2) and Azure Synapse Analytics (or Azure SQL Data Warehouse)
Deep understanding and experience with modern ETL/ELT principles and tools (e.g., Azure Data Factory)
Solid understanding of the capabilities and architecture of Microsoft Fabric
Proven experience with code versioning (Git), unit testing frameworks, and principles of writing production-ready, clean, and well-documented code
Demonstrated ability to identify and implement performance and cost optimization techniques across data storage and processing layers
Excellent analytical and problem-solving skills with a track record of successfully refactoring complex or legacy data infrastructure
Nice to have:
Certifications such as Azure Data Engineer Associate (DP-203)
Experience with streaming data technologies (e.g., Kafka, Azure Event Hubs)
Knowledge of Data Governance and Security best practices in Azure