Are you passionate about building robust data ecosystems that power advanced analytics and business insights? Join our dynamic team as a Data Engineer III, where you’ll design and implement cutting-edge data pipelines, integrate diverse data sources, and ensure the highest standards of data quality, security, and compliance. This is your opportunity to work on impactful projects that shape the future of data-driven decision-making.
Job Responsibilities:
Architect & Innovate: Design and maintain scalable data pipelines and infrastructure for ingestion, integration, and storage of structured and unstructured data
Enable Analytics: Deliver clean, reliable datasets for Business Intelligence, Advanced Analytics, and Data Science teams
Optimize Performance: Implement modern practices using Microsoft Fabric, SQL Server, SSIS, Power BI Premium, and big data frameworks like Spark
Ensure Quality: Validate solutions through rigorous testing and maintain data integrity across all systems
Collaborate & Lead: Partner with Product Owners, Architects, Analysts, and Data Scientists on medium-to-large projects, and lead small-to-medium initiatives independently
Drive Continuous Improvement: Automate processes, optimize data delivery, and redesign infrastructure for scalability
Requirements:
4-year bachelor's degree or master's degree in Computer Science, Information Technology, or Computer Engineering preferred, or equivalent experience
5+ years of related work experience
Ability to design and optimize SQL queries across cloud data warehouses, with strong proficiency in T‑SQL
Skilled in data orchestration and ETL tools such as Azure Data Factory or SSIS, and eager to adopt Fabric Data Factory (Pipelines & Dataflows Gen2)
Background in CI/CD and version control for data solutions (e.g., Git, deployment pipelines), with interest in DevOps practices for analytics platforms
Familiarity with modern data lake architectures, including Delta Lake and medallion design patterns
Knowledge of columnar storage formats such as Parquet and their role in scalable lakehouse data processing
Nice to have:
Hands-on experience with big data frameworks such as Apache Spark (PySpark or Scala) for batch workloads
Understanding of BI and semantic modeling concepts, including star schema design and performance tuning