
AI & Data Engineer


Riverflex

Location:
Netherlands, Rotterdam


Contract Type:
Not provided

Salary:
Not provided

Job Description:

A large, international company in the engineering sector is scaling production-grade AI across the organization. The Data & AI Platform team is expanding delivery capacity and is looking for an interim Data & AI Engineer who can step in immediately, take ownership of AI use cases end-to-end, and accelerate adoption across business teams on an Azure + Databricks foundation. This is a hands-on delivery role with high autonomy: you will build, ship, operate, and improve AI solutions in production—while introducing reusable components and pragmatic best practices that raise the bar across the platform.

Job Responsibility:

  • Deliver AI use cases end-to-end: from ingestion and feature engineering to model/agent development and production rollout
  • Design and operate Databricks lakehouse pipelines (batch and streaming) using Spark/SQL/Delta Lake, including monitoring and data quality controls
  • Build AI solutions on the platform, including: RAG patterns (retrieval, chunking, embeddings, evaluation), tool-using agents and orchestration approaches, prompt strategies and testing/guardrails, (where relevant) custom ML models and supporting pipelines
  • Productionize and run what you build: reliability, observability, cost control, and operational hygiene
  • Enable other teams by creating reusable components, templates, and delivery standards
  • Work with governance and compliance: align with AI governance requirements and ensure solutions are secure and auditable
  • Collaborate with stakeholders across IT and the business to translate needs into working solutions and clear delivery increments
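The RAG work named above (retrieval, chunking, embeddings, evaluation) usually starts with splitting documents into overlapping chunks before embedding. A minimal plain-Python sketch of fixed-size chunking with overlap; the function name and parameter values are illustrative assumptions, not taken from the posting:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks that overlap,
    so context at chunk boundaries is not lost at retrieval time."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk shares `overlap` characters with its predecessor.
doc = "x" * 1200
parts = chunk_text(doc, chunk_size=500, overlap=50)
```

Production pipelines typically replace character counts with token counts and split on semantic boundaries, but the overlap idea carries over unchanged.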

Requirements:

  • Proven experience as a Data Engineer / Data & AI Engineer delivering solutions into production environments
  • Strong hands-on Databricks expertise: Spark/SQL, Delta Lake, Jobs/Workflows, performance tuning
  • Strong Python + SQL for data engineering and AI/ML workflows
  • Experience building data pipelines with quality checks and operational monitoring
  • Practical experience with LLM-based solutions (RAG and/or agents), including prompt strategies and evaluation approaches
  • Comfortable working independently in an interim context: you can own delivery, communicate clearly, and unblock yourself
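The "quality checks and operational monitoring" requirement above often comes down to simple rule-based validations that gate each pipeline stage. A hypothetical sketch in plain Python; the rule (a null-rate threshold) and all names are illustrative, not from the posting:

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows: list[dict], column: str, max_null_rate: float = 0.05) -> bool:
    """Gate a pipeline stage: pass only if the null rate stays under threshold."""
    return null_rate(rows, column) <= max_null_rate

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}, {"id": 3, "amount": 7.5}]
ok = check_quality(rows, "amount", max_null_rate=0.05)  # 1/3 nulls -> fails the gate
```

On Databricks the same idea is usually expressed declaratively (e.g. expectations on a Delta table) rather than hand-rolled, but the gate-and-alert pattern is the same.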

Nice to have:

  • Azure services exposure (e.g., Azure ML, Azure OpenAI, Key Vault, Functions, ADF)
  • LLM toolkits (LangChain, Semantic Kernel), prompt evaluation frameworks, early LLMOps patterns
  • CI/CD (GitHub Actions) and Infrastructure-as-Code (Terraform)
  • ML frameworks (PyTorch, TensorFlow, scikit-learn) where needed

Additional Information:

Job Posted:
February 03, 2026

Employment Type:
Full-time
Work Type:
Hybrid work


Similar Jobs for AI & Data Engineer

Data & AI Impact Consultant Engineer

Data Consultant role in Data & AI Business Unit, designing and building modern d...
Location:
Belgium
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience in data engineering or analytics projects
  • Good knowledge of Dutch and English (French is an asset)
  • Familiarity with Azure stack (Data Factory, Synapse, Storage, Purview, Functions) and/or Databricks
  • Structural and flexible thinking
  • Interest in AI and its role in modern data products (prompt engineering, GenAI, monitoring, automation)
  • Ability to create business impact and understand outcomes
  • Team-oriented mindset
Job Responsibility:
  • Design and build modern data platforms (Azure, Databricks, Data Fabric) with focus on reusability and AI-readiness
  • Deliver value today while preparing for tomorrow
  • Help colleagues grow through coaching, feedback, or knowledge sharing
  • Provide consultancy with solution-oriented approach
  • Take initiative beyond projects to help build Inetum
What we offer:
  • Company car
  • Fuel/charging card
  • Group insurance
  • Hospitalization coverage
  • 32 days of annual leave
  • Hybrid working options
  • Satellite offices
  • Training and certification programs
  • Full-time

Data & AI Impact Consultant Engineer

As a Data Consultant, you are a cornerstone of our Data & AI Business Unit – tec...
Location:
Belgium
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience in data engineering or analytics projects
  • Good knowledge of Dutch and English (French is an asset)
  • Familiar with Azure stack (Data Factory, Synapse, Storage, Purview, Functions) and/or Databricks
  • Structural and flexible thinking
  • Interest in AI and its role in modern data products (prompt engineering, GenAI, monitoring, automation)
  • Ability to create business impact and understand outcomes
  • Team-oriented mindset
Job Responsibility:
  • Design and build modern data platforms (Azure, Databricks, Data Fabric) with focus on reusability and AI-readiness
  • Deliver customer- and future-oriented value
  • Help colleagues grow through coaching, feedback, or knowledge sharing
  • Provide consultancy with solution-oriented approach
  • Take initiative in client development, talent growth, or community engagement
What we offer:
  • Company car
  • Fuel/charging card
  • Group insurance
  • Hospitalization coverage
  • 32 days of annual leave
  • Hybrid working options
  • Satellite offices
  • Continuous learning & development
  • Training and certification programs
  • Full-time

Principal Consulting AI / Data Engineer

As a Principal Consulting AI / Data Engineer, you will design, build, and optimi...
Location:
Australia, Sydney
Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice
Requirements:
  • Proven expertise in delivering enterprise-grade data engineering and AI solutions in production environments
  • Strong proficiency in Python and SQL, plus experience with Spark, Airflow, dbt, Kafka, or Flink
  • Experience with cloud platforms (AWS, Azure, or GCP) and Databricks
  • Ability to confidently communicate and present at C-suite level, simplifying technical concepts into business impact
  • Track record of engaging senior executives and influencing strategic decisions
  • Strong consulting and stakeholder management skills with client-facing experience
  • Background in MLOps, ML pipelines, or AI solution delivery highly regarded
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility:
  • Design, build, and maintain scalable data and AI solutions using Databricks, cloud platforms, and modern frameworks
  • Lead solution architecture discussions with clients, ensuring alignment of technical delivery with business strategy
  • Present to and influence executive-level stakeholders, including boards, C-suite, and senior directors
  • Translate highly technical solutions into clear business value propositions for non-technical audiences
  • Mentor and guide teams of engineers and consultants to deliver high-quality solutions
  • Champion best practices across data engineering, MLOps, and cloud delivery
  • Build DyFlex’s reputation as a trusted partner in Data & AI through thought leadership and client advocacy
What we offer:
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP, and Joule, plus Databricks, ML/AI tools, and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with opportunities to lead large-scale client programs
  • Exposure to diverse industries and client environments, including executive-level engagement
  • Full-time

Consulting AI / Data Engineer

As a Consulting AI / Data Engineer, you will design, build, and optimise enterpr...
Location:
Australia, Sydney
Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice
Requirements:
  • Hands-on data engineering experience in production environments
  • Strong proficiency in Python and SQL
  • Experience with at least one additional language (e.g. Java, Typescript/Javascript)
  • Experience with modern frameworks such as Apache Spark, Airflow, dbt, Kafka, or Flink
  • Background in building ML pipelines, MLOps practices, or feature stores is highly valued
  • Proven expertise in relational databases, data modelling, and query optimisation
  • Demonstrated ability to solve complex technical problems independently
  • Excellent communication skills with ability to engage clients and stakeholders
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility:
  • Build and maintain scalable data pipelines for ingesting, transforming, and delivering data
  • Manage and optimise databases, warehouses, and cloud storage solutions
  • Implement data quality frameworks and testing processes to ensure reliable systems
  • Design and deliver cloud-based solutions (AWS, Azure, or GCP)
  • Take technical ownership of project components and lead small development teams
  • Engage directly with clients, translating business requirements into technical solutions
  • Champion best practices including version control, CI/CD, and infrastructure as code
What we offer:
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP, and Joule, plus Databricks, ML/AI tools, and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, remote working, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with mentoring from senior engineers
  • Exposure to diverse industries and client environments
  • Full-time

Data Engineer with Generative AI Expertise

We are looking for a skilled Data Engineer with expertise in Generative AI to jo...
Location:
India, Jaipur
Salary:
Not provided
InfoObjects
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields
  • 2-6 years of hands-on experience in Data Engineering
  • Proficiency in Generative AI frameworks (e.g., GPT, DALL-E, Stable Diffusion)
  • Strong programming skills in Python, SQL, and familiarity with Java or Scala
  • Experience with data tools and platforms such as Apache Spark, Hadoop, or similar
  • Knowledge of cloud platforms like AWS, Azure, or GCP
  • Familiarity with MLOps practices and AI model deployment
  • Excellent problem-solving and communication skills
Job Responsibility:
  • Design, develop, and maintain robust data pipelines and workflows
  • Integrate Generative AI models into existing data systems to enhance functionality
  • Collaborate with cross-functional teams to understand business needs and translate them into scalable data and AI solutions
  • Optimize data storage, processing, and retrieval systems for performance and scalability
  • Ensure data security, quality, and governance across all processes
  • Stay updated with the latest advancements in Generative AI and data engineering practices

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
  • Full-time

Data Engineer – AI Insights

We are looking for an experienced Data Engineer with AI Insights to design and d...
Location:
United States
Salary:
Not provided
Thirdeye Data
Expiration Date:
Until further notice
Requirements:
  • 5+ years of Data Engineering experience with exposure to AI/ML workflows
  • Advanced expertise in Python programming and SQL
  • Hands-on experience with Snowflake (data warehousing, schema design, performance tuning)
  • Experience building scalable ETL/ELT pipelines and integrating structured/unstructured data
  • Familiarity with LLM and RAG workflows, and how data supports these AI applications
  • Experience with reporting/visualization tools (Tableau)
  • Strong problem-solving, communication, and cross-functional collaboration skills
Job Responsibility:
  • Develop and optimize ETL/ELT pipelines using Python, SQL, and Snowflake to ensure high-quality data for analytics, AI, and LLM workflows
  • Build and manage Snowflake data models and warehouses, focusing on performance, scalability, and security
  • Collaborate with AI/ML teams to prepare datasets for model training, inference, and LLM/RAG-based solutions
  • Automate data workflows, validation, and monitoring for reliable AI/ML execution
  • Support RAG pipelines and LLM data integration, enabling AI-driven insights and knowledge retrieval
  • Partner with business and analytics teams to transform raw data into actionable AI-powered insights
  • Contribute to dashboarding and reporting using Tableau, Power BI, or equivalent tools
  • Full-time

AI Research Engineer, Data Infrastructure

As a Research Engineer in Infrastructure, you will design and implement a robust...
Location:
United States, Palo Alto
Salary:
180,000.00 - 250,000.00 USD / Year
1X Technologies
Expiration Date:
Until further notice
Requirements:
  • Strong experience in building data pipelines and ETL systems
  • Ability to design and implement systems for data collection and management from robotic fleets
  • Familiarity with architectures that span on-robot components, on-premise clusters, and cloud infrastructure
  • Experience with data labeling tools or building dataset visualization and annotation tooling
  • Proficiency in creating or applying machine learning models for dataset organization and automated labeling
Job Responsibility:
  • Optimize operational efficiency of data collection across the NEO robot fleet
  • Design intelligent triggers to determine when and what data should be uploaded from the robots
  • Automate ETL pipelines to make fleet-wide data easily queryable and training-ready
  • Collaborate with external dataset providers to prepare diverse multi-modal pre-training datasets
  • Build frontend tools for visualizing and automating the labeling of large datasets
  • Develop machine learning models for automatic dataset labeling and organization
What we offer:
  • Equity
  • Health, dental, and vision insurance
  • 401(k) with company match
  • Paid time off and holidays
  • Full-time