Senior Data Integration Engineer

Levy Professionals

Location:
Netherlands, Amsterdam

Contract Type:
Not provided

Salary:
Not provided

Job Description:

As a Senior Data Integration Engineer, you will be part of a multidisciplinary data team working on a modern Data Lake environment. You will play a key role in designing, developing, and maintaining the data integration layer, ensuring reliable, scalable, and high-quality data delivery for downstream reporting and analytics.

Job Responsibility:

  • Design, build, and maintain a data integration layer within a large-scale Data Lake environment
  • Develop and manage complex ETL flows for batch and near real-time data ingestion (a minimal batch-load sketch follows this list)
  • Create data mapping specifications from multiple source systems
  • Work extensively with enterprise data storage platforms, including Oracle-based solutions
  • Address data modelling and integration challenges across the data lifecycle
  • Coach and mentor junior engineers from a technical and architectural perspective
  • Contribute to automation, standardisation, and continuous improvement of data delivery
  • Collaborate closely with product teams in an agile, self-steering setup
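
For illustration, a minimal Python sketch of the kind of batch ETL flow described above. The staging and target tables, watermark column, and cleanup rules are hypothetical, and SQLite stands in for the real platforms; actual flows would run inside enterprise ETL tooling against Oracle.

    import sqlite3

    # Hypothetical staging source and integration-layer target (in-memory
    # SQLite as a stand-in for the real source extract and Oracle target).
    src = sqlite3.connect(":memory:")
    tgt = sqlite3.connect(":memory:")

    src.execute("CREATE TABLE stg_customer (customer_id TEXT, name TEXT, updated_at TEXT)")
    src.executemany(
        "INSERT INTO stg_customer VALUES (?, ?, ?)",
        [("c1", " alice ", "2026-01-10"), ("c2", "BOB", "2026-01-12")],
    )
    tgt.execute("CREATE TABLE customer (customer_id TEXT PRIMARY KEY, name TEXT, updated_at TEXT)")

    def load_batch(watermark: str) -> int:
        # Extract: only rows changed since the last successful run.
        rows = src.execute(
            "SELECT customer_id, name, updated_at FROM stg_customer WHERE updated_at > ?",
            (watermark,),
        ).fetchall()
        # Transform: trivial cleanup; real flows apply the mapping specification here.
        cleaned = [(cid.strip(), name.strip().title(), ts) for cid, name, ts in rows]
        # Load: idempotent upsert so a rerun of the same batch is safe.
        tgt.executemany("INSERT OR REPLACE INTO customer VALUES (?, ?, ?)", cleaned)
        tgt.commit()
        return len(cleaned)

    print("rows loaded:", load_batch("1970-01-01"))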

Requirements:

  • Minimum 10 years’ experience as a Data Engineer / Data Integration Engineer
  • Strong experience designing and building integration layers
  • Proven experience with enterprise ETL tooling (e.g. IBM DataStage or similar)
  • Solid knowledge of data modelling (Data Vault, dimensional modelling, or similar; a hub-load sketch follows this list)
  • Strong Oracle database knowledge (Oracle Exadata or comparable platforms)
  • Deep understanding of data storage tooling and large-scale data platforms
  • Experience coaching and guiding junior engineers
  • Strong SQL skills
  • Fluent in English
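
As a sketch of the Data Vault modelling asked for above, here is a hypothetical insert-only hub load keyed on an MD5 hash of the business key; hash and naming conventions vary by team, and SQLite again stands in for Oracle.

    import hashlib
    import sqlite3
    from datetime import datetime, timezone

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE hub_customer (
        hub_customer_hk TEXT PRIMARY KEY,   -- hash key
        customer_bk     TEXT,               -- business key
        load_dts        TEXT,
        record_source   TEXT)""")

    def hash_key(business_key: str) -> str:
        # Assumed convention: MD5 over the trimmed, upper-cased business key.
        return hashlib.md5(business_key.strip().upper().encode()).hexdigest()

    def load_hub(business_keys: list[str], source: str) -> None:
        now = datetime.now(timezone.utc).isoformat()
        rows = [(hash_key(bk), bk, now, source) for bk in business_keys]
        # INSERT OR IGNORE keeps the hub insert-only and the load idempotent.
        db.executemany("INSERT OR IGNORE INTO hub_customer VALUES (?, ?, ?, ?)", rows)
        db.commit()

    load_hub(["C-1001", "C-1002"], source="crm")
    load_hub(["C-1001"], source="billing")  # already known: ignored
    print(db.execute("SELECT COUNT(*) FROM hub_customer").fetchone()[0])  # -> 2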

Nice to have:

  • Experience with scheduling tools
  • Version control (Git or similar)
  • CI/CD pipelines (e.g. Azure DevOps pipelines)
  • Reporting or analytics tooling (e.g. Cognos or similar)
  • Agile / DevOps ways of working
  • Exposure to cloud-based data platforms or cloud migration initiatives

Additional Information:

Job Posted:
January 14, 2026

Similar Jobs for Senior Data Integration Engineer

Senior Microsoft Stack Data Engineer

Hands-On Technical SENIOR Microsoft Stack Data Engineer / On Prem to Cloud Senio...
Location:
United States, West Des Moines
Salary:
155000.00 USD / Year
Robert Half
Expiration Date:
Until further notice
Requirements:
  • 5+ years of data warehouse / data lake experience
  • Advanced SQL Server skills and strong SQL experience, working with structured and unstructured data
  • Strong in SSIS ETL; proficiency in SSAS
  • Data warehouse design experience: star schema and fact & dimension structures (a minimal sketch follows this list)
  • Experience with Azure Data Lake
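
A minimal sketch of the star-schema structure this role calls for, with hypothetical dim_product and fact_sales tables; a real implementation would use SQL Server and SSIS rather than SQLite.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE dim_product (
            product_sk INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
            product_bk TEXT UNIQUE,                        -- business key
            name       TEXT);
        CREATE TABLE fact_sales (
            product_sk INTEGER REFERENCES dim_product(product_sk),
            qty        INTEGER,
            amount     REAL);
    """)

    def upsert_dim(product_bk: str, name: str) -> int:
        # Dimension upsert returning the surrogate key for the fact load.
        db.execute(
            "INSERT INTO dim_product (product_bk, name) VALUES (?, ?) "
            "ON CONFLICT(product_bk) DO UPDATE SET name = excluded.name",
            (product_bk, name),
        )
        return db.execute(
            "SELECT product_sk FROM dim_product WHERE product_bk = ?", (product_bk,)
        ).fetchone()[0]

    sk = upsert_dim("P-1", "Widget")
    db.execute("INSERT INTO fact_sales VALUES (?, ?, ?)", (sk, 3, 29.97))
    db.commit()
    print(db.execute(
        "SELECT d.name, SUM(f.amount) FROM fact_sales f "
        "JOIN dim_product d ON d.product_sk = f.product_sk GROUP BY d.name"
    ).fetchall())
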
Job Responsibility:
  • Modernize and build out a data warehouse, and lead the build-out of a data lake in the cloud
  • Rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting
  • Own all aspects of data engineering
  • Serve as the technical leader of the team
What we offer:
  • Bonus
  • 2 1/2 day weekends
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing (a minimal consumer sketch follows this list)
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timeline
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
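
A hedged sketch of the streaming work listed above: a kafka-python consumer for a hypothetical shipment-tracking topic. The broker address, topic name, and event fields are assumptions, and running it requires a reachable Kafka broker.

    import json

    from kafka import KafkaConsumer  # kafka-python package

    consumer = KafkaConsumer(
        "shipment-tracking-events",            # hypothetical topic
        bootstrap_servers="localhost:9092",    # hypothetical broker
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    for msg in consumer:
        event = msg.value
        # Stand-in for real anomaly detection: flag badly delayed shipments.
        if event.get("eta_delay_hours", 0) > 24:
            print(f"delay alert: shipment {event.get('shipment_id')} "
                  f"delayed {event['eta_delay_hours']}h")
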
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Fulltime

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying cutting-edge data engineering methods to improve the performance of data integration, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience to deploy code artifacts into production
Job Responsibility:
  • Contribute to the development of the D&A platform and analytical tools, ensuring easy and standardized access to and sharing of data
  • Act as subject matter expert for Azure Databricks, Azure Data Factory and ADLS
  • Help design, build and maintain data pipelines (accelerators); a minimal pipeline sketch follows this list
  • Document the relevant know-how and make the standard available
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility on data quality and data assets on the D&A platform
  • Identify opportunities for data assets and the D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
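
A minimal PySpark sketch of the kind of pipeline step this role involves: read raw files from an ADLS path, apply a light transform, and write a Delta table. The paths, container, and columns are hypothetical; on Databricks the session and storage credentials come from the workspace.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, to_date

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical raw zone in ADLS Gen2.
    raw = (spark.read.option("header", True)
           .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/"))

    cleaned = (raw.dropDuplicates(["order_id"])
               .withColumn("order_date", to_date(col("order_date")))
               .filter(col("amount").isNotNull()))

    # Requires a Delta-enabled runtime (standard on Databricks).
    (cleaned.write.format("delta")
     .mode("overwrite")
     .saveAsTable("analytics.sales_clean"))
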
What we offer:
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
  • Fulltime

Senior Data Engineer

Join Inetum as a Data Engineer! At Inetum, we empower innovation and growth thro...
Location:
Portugal, Lisbon
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Teradata – advanced SQL and data warehousing
  • CONTROL-M – job scheduling and automation
  • UNIX – working in a UNIX environment (directories, scripting, etc.)
  • SQL (Teradata) – strong querying and data manipulation skills (a query sketch follows this list)
  • Ab Initio – data integration and ETL development
  • DevOps – CI/CD practices and automation
  • Collaborative tools – Git, Jira, Confluence, MEGA, Zeenea
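
A short sketch of the Teradata querying listed above, using the teradatasql driver; the host, credentials, and table are hypothetical.

    import teradatasql  # Teradata's Python driver

    with teradatasql.connect(host="tdhost", user="etl_user", password="***") as con:
        with con.cursor() as cur:
            # Top ten accounts by spend this year (hypothetical fin.transactions table).
            cur.execute("""
                SELECT account_id, SUM(amount) AS total
                FROM fin.transactions
                WHERE txn_date >= DATE '2026-01-01'
                GROUP BY account_id
                QUALIFY ROW_NUMBER() OVER (ORDER BY SUM(amount) DESC) <= 10
            """)
            for row in cur.fetchall():
                print(row)
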
Job Responsibility:
  • Design, development, and optimization of data solutions that support business intelligence and analytics
  • Fulltime

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization (a load sketch follows this list)
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing, and monitoring
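
A hedged sketch of a Snowflake load step of the kind this role requires, using snowflake-connector-python; the account, warehouse, local file, and table names are all hypothetical.

    import snowflake.connector  # snowflake-connector-python package

    conn = snowflake.connector.connect(
        account="xy12345", user="etl_user", password="***",
        warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
    )
    cur = conn.cursor()
    try:
        # Stage a local file into the staging table's table stage, load it,
        # then merge into the target (assumes SALES_STG and SALES exist).
        cur.execute("PUT file:///tmp/sales.csv @%SALES_STG")
        cur.execute("COPY INTO SALES_STG FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
        cur.execute("""
            MERGE INTO SALES t USING SALES_STG s ON t.ORDER_ID = s.ORDER_ID
            WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
            WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
        """)
    finally:
        cur.close()
        conn.close()
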
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Efficient in ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Proactive in stakeholder communication, mentor/guide junior resources
  • Fulltime

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink (a streaming sketch follows this list)
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
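
The role above is Java-centric; purely for illustration, here is a PySpark Structured Streaming sketch of the low-latency Kafka pipeline pattern it describes. The broker, topic, and event schema are hypothetical, and the Spark-Kafka integration package must be available.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructType

    spark = SparkSession.builder.appName("trades-stream").getOrCreate()

    schema = (StructType()
              .add("trade_id", StringType())
              .add("symbol", StringType())
              .add("price", DoubleType()))

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical
              .option("subscribe", "trades")                        # hypothetical
              .load()
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Console sink for the sketch; a real pipeline would aggregate, enrich,
    # and land the stream in an MPP store such as Trino or Snowflake.
    query = events.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()
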
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects
  • Fulltime

Senior Data Engineer

Adswerve is looking for a Senior Data Engineer to join our Adobe Services team. ...
Location:
United States
Salary:
130000.00 - 155000.00 USD / Year
Adswerve, Inc.
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience)
  • 5+ years of experience in a data engineering, analytics, or marketing technology role
  • Hands-on expertise in Adobe Experience Platform (AEP), Real-Time CDP, Journey Optimizer, or similar tools is a big plus
  • Strong proficiency in SQL and hands-on experience with data transformation and modeling
  • Understanding of ETL/ELT workflows (e.g., dbt, Fivetran, Airflow, etc.) and cloud data platforms (e.g., GCP, Snowflake, AWS, Azure)
  • Experience with ingress/egress patterns and interacting with APIs to move data (a paging sketch follows this list)
  • Experience with Python or JavaScript in a data or scripting context
  • Experience with customer data platforms (CDPs), event-based tracking, or customer identity management
  • Understanding of Adobe Experience Cloud integrations (e.g., Adobe Analytics, Target, Campaign) is a plus
  • Strong communication skills with the ability to lead technical conversations and present to both technical and non-technical audiences
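
A sketch of an ingress/egress pattern like the one mentioned above: page through a source API and forward batches to an ingestion endpoint. Both URLs, the auth header, and the cursor field are assumptions for illustration, not a documented Adobe API.

    import requests

    SRC = "https://api.example.com/v1/events"      # hypothetical source API
    DST = "https://ingest.example.com/v1/batches"  # hypothetical ingestion endpoint
    HEADERS = {"Authorization": "Bearer <token>"}

    def move_events() -> int:
        moved, cursor = 0, None
        while True:
            params = {"limit": 500, **({"cursor": cursor} if cursor else {})}
            resp = requests.get(SRC, headers=HEADERS, params=params, timeout=30)
            resp.raise_for_status()
            page = resp.json()
            items = page.get("items", [])
            if not items:
                return moved
            # Forward the batch downstream before advancing the cursor.
            requests.post(DST, headers=HEADERS, json=items, timeout=30).raise_for_status()
            moved += len(items)
            cursor = page.get("next_cursor")
            if cursor is None:
                return moved

    print("events moved:", move_events())
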
Job Responsibility:
  • Lead the end-to-end architecture of data ingestion and transformation in Adobe Experience Platform (AEP) using Adobe Data Collection (Tags), Experience Data Model (XDM), and source connectors
  • Design and optimize data models, identity graphs, and segmentation strategies within Real-Time CDP to enable personalized customer experiences
  • Implement schema mapping, identity resolution, and data governance strategies
  • Collaborate with Data Architects to build scalable, reliable data pipelines across multiple systems
  • Conduct data quality assessments and support QA for new source integrations and activations
  • Write and maintain internal documentation and knowledge bases on AEP best practices and data workflows
  • Simplify complex technical concepts and educate team members and clients in a clear, approachable way
  • Contribute to internal knowledge sharing and mentor junior engineers in best practices around data modeling, pipeline development, and Adobe platform capabilities
  • Stay current on the latest Adobe Experience Platform features and data engineering trends to inform client strategies
What we offer:
  • Medical, dental and vision available for employees
  • Paid time off including vacation, sick leave & company holidays
  • Paid volunteer time
  • Flexible working hours
  • Summer Fridays
  • “Work From Home Light” days between Christmas and New Year’s Day
  • 401(k) Plan with 5% company match and no vesting period
  • Employer Paid Parental Leave
  • Health-care Spending Accounts
  • Dependent-care Spending Accounts
  • Fulltime

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL (a minimal DAG sketch follows this list)
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
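
A minimal Apache Airflow DAG sketch of the orchestration described above; the DAG id, schedule, and task callables are hypothetical stand-ins.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling from source")  # stand-in for a real extract

    def load():
        print("loading to BigQuery")  # stand-in; real tasks would use GCP operators

    with DAG(
        dag_id="daily_ingest",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_load
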
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program
  • Fulltime