Ivy Partners is a Swiss consulting firm that helps companies address their strategic, technological, and organizational challenges. As a Data Engineer at Ivy Partners, you will design, develop, and maintain data ingestion pipelines, build connectors to a wide range of sources, enforce data quality, and take operational ownership of what you deliver, applying sound software engineering, CI/CD, testing, and monitoring practices with a cost-conscious mindset.
Job Responsibilities:
Designing, developing, and maintaining end-to-end data ingestion pipelines that support batch, near real-time, streaming, and event-driven patterns
Building and maintaining connectors for various sources including APIs (REST), relational and NoSQL databases, filesystems (Windows, Linux, SharePoint), SaaS platforms, and more
Implementing data quality checks, validation rules, and data contracts as outlined by the client and Data Office
Applying robust software engineering practices in Python, including modular design, packaging, unit testing, and documentation
Implementing CI/CD pipelines and automation using Docker and GitHub Actions with disciplined Git workflows and code reviews
Performing integration testing in development environments and supporting production releases in collaboration with internal teams
Implementing monitoring, alerting, and dashboards to ensure observability and efficient troubleshooting
Taking operational ownership of delivered pipelines, including incident handling and hot fixes when required
Contributing to cost optimization and efficient resource usage with a FinOps-aware mindset
Requirements:
Over 5 years of experience as a Data Engineer with considerable exposure to data ingestion in production environments
Advanced Python skills demonstrated by experience in building production-grade data pipelines
Solid experience with PySpark and Pandas
Experienced in using AWS and Databricks platforms
Strong track record with CI/CD and automation using Docker and GitHub Actions
Skilled in integrating with APIs, including all aspects from authentication to robust error handling
Experience implementing observability practices, including logging, metrics, alerting, and run management
Strong Git discipline, including daily commits, feature branches, pull requests, and code reviews
Fluent in English, both written and spoken
Strong sense of ownership and autonomy, with the ability to thrive in high-expectation, delivery-focused environments
What we offer:
Career that promotes both personal and professional development
Support for skill enhancement
Real opportunities for progression
Supportive environment where everyone is valued and empowered with training and growth prospects