Our client is a global jewelry manufacturer undergoing a major transformation, moving from IaaS-based solutions to a modern Azure PaaS data platform. As part of this journey, you will design and implement scalable, reusable, and high-quality data products using technologies such as Data Factory, Data Lake, Synapse, and Databricks. These solutions will enable advanced analytics, reporting, and data-driven decision-making across the organization. By collaborating with product owners, architects, and business stakeholders, you will play a key role in maximizing the value of data and driving measurable commercial impact worldwide.
Job Responsibilities:
Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform
Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models
Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies
Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance
Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions
Monitor and own the stability of delivered data products, ensuring continuous improvements and measurable business benefits
Promote a “build-once, consume-many” approach to maximize reuse and value creation across business verticals
Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering
Requirements:
5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server
Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes
Practical experience with Azure Data Factory, Databricks, and PySpark
Track record of designing, building, and delivering production-ready data products at enterprise scale
Strong analytical skills and ability to translate business requirements into technical solutions
Excellent communication skills in English, with the ability to adapt technical details for different audiences
Experience working in Agile/Scrum teams
Nice to have:
Familiarity with infrastructure tools such as Kubernetes and Helm