Data Engineer - Core Systems

N26

Location:
Germany, Berlin; Spain

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are seeking a highly skilled and experienced Data Engineer to join our Core Systems team. The ideal candidate will have a proven track record of building and maintaining robust data products and data mart infrastructure within the AWS Redshift environment. This role is critical in empowering our organization with actionable insights through efficient and reliable data solutions.

Job Responsibility:

  • Design, develop, and deploy scalable data products that provide meaningful insights to business stakeholders
  • Build and maintain a robust data mart infrastructure within AWS Redshift to support reporting needs
  • Design and implement efficient data models to optimize data storage, retrieval, and performance
  • Develop and maintain efficient ETL (Extract, Transform, Load) processes to ingest data from various sources into the data mart (see the sketch after this list)
  • Implement data quality checks and monitoring processes to ensure data accuracy and integrity
  • Continuously monitor and optimize data pipelines and queries for optimal performance
  • Work closely with data analysts, business intelligence teams, and other stakeholders to understand data requirements and deliver solutions
  • Stay abreast of emerging technologies and industry trends in data engineering and propose innovative solutions
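
A minimal sketch of what one such pipeline could look like, assuming Airflow 2.x and psycopg2, with placeholder cluster, bucket, table, and IAM role names (none of these come from the posting): a nightly COPY into a Redshift staging table, an upsert into the mart, then a simple row-count quality gate.

    # Hypothetical sketch only; connection details and table names are placeholders.
    from datetime import datetime

    import psycopg2  # Redshift speaks the PostgreSQL wire protocol
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    REDSHIFT_DSN = "host=example-cluster.redshift.amazonaws.com dbname=mart user=etl"

    def load_orders():
        # COPY from S3 into staging, then upsert into the reporting mart.
        with psycopg2.connect(REDSHIFT_DSN) as conn, conn.cursor() as cur:
            cur.execute("""
                COPY staging.orders FROM 's3://example-bucket/orders/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/etl-placeholder'
                FORMAT AS PARQUET;
            """)
            cur.execute("""
                DELETE FROM mart.orders USING staging.orders s
                    WHERE mart.orders.order_id = s.order_id;
                INSERT INTO mart.orders SELECT * FROM staging.orders;
            """)

    def check_row_counts():
        # Minimal data-quality gate: fail the run if the load produced no rows.
        with psycopg2.connect(REDSHIFT_DSN) as conn, conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM mart.orders;")
            (count,) = cur.fetchone()
            if count == 0:
                raise ValueError("mart.orders is empty after load")

    with DAG(
        dag_id="orders_mart_nightly",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow 2.4+ spelling; older versions use schedule_interval
        catchup=False,
    ) as dag:
        load = PythonOperator(task_id="load_orders", python_callable=load_orders)
        check = PythonOperator(task_id="check_row_counts", python_callable=check_row_counts)
        load >> check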

Requirements:

  • Background in Computer Science, Engineering, or a related field
  • 5+ years of experience in data engineering, with a strong emphasis on data product development and data warehousing
  • Extensive experience with AWS Redshift, including data modelling, performance tuning, and cluster management
  • Proficiency in ETL tools such as Apache Airflow, AWS Glue, or similar
  • Strong programming skills in Python or Java
  • Expertise in SQL for data manipulation and analysis
  • Experience with data modelling techniques and best practices
  • Excellent problem-solving and analytical skills
  • Strong communication and interpersonal skills to collaborate effectively with cross-functional teams

Nice to have:

  • Experience in the fintech industry
  • Familiarity with dbt
  • Knowledge of data governance and security practices
  • Experience with Infrastructure-as-Code tools such as Terraform

What we offer:
  • Accelerate your career growth by joining one of Europe’s most talked about disruptors
  • Employee benefits ranging from a competitive personal development budget and a work-from-home budget to discounts on fitness & wellness memberships, language apps, and public transportation
  • As an N26 employee you will have access to a Premium subscription on your personal N26 bank account, as well as subscriptions for friends and family members
  • Additional day of annual leave for each year of service
  • A high degree of autonomy and access to cutting edge technologies - all while working with a friendly team of peers of diverse nationalities, life experiences and backgrounds
  • A relocation package with visa support for those who need it

Additional Information:

Job Posted:
December 10, 2025

Similar Jobs for Data Engineer - Core Systems

Senior Software Engineer, Core Data

As a Senior Software Engineer on our Core Data team, you will take a leading rol...
Location: United States
Salary: 190000.00 - 220000.00 USD / Year
Company: Pomelo Care
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience building high-quality, scalable data systems and pipelines
  • Expert-level proficiency in SQL and Python, with a deep understanding of data modeling and transformation best practices
  • Hands-on experience with dbt for data transformation and Dagster, Beam, Dataflow or similar tools for pipeline orchestration
  • Experience with modern data stack tools and cloud platforms, with a strong understanding of data warehouse design principles
  • A track record of delivering elegant and maintainable solutions to complex data problems that drive real business impact
Job Responsibility:
  • Build and maintain elegant data pipelines that orchestrate ingestion from diverse sources and normalize data for company-wide consumption
  • Lead the design and development of robust, scalable data infrastructure that enables our clinical and product teams to make data-driven decisions, using dbt, Dagster, Beam and Dataflow (see the sketch after this list)
  • Write clean, performant SQL and Python to transform raw data into actionable insights that power our platform
  • Architect data models and transformations that support both operational analytics and new data-driven product features
  • Mentor other engineers, providing technical guidance on data engineering best practices and thoughtful code reviews, fostering a culture of data excellence
  • Collaborate with product, clinical and analytics teams to understand data needs and ensure we are building infrastructure that unlocks the most impactful insights
  • Optimize data processing workflows for performance, reliability and cost-effectiveness
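
A minimal sketch of the orchestration style the role describes, assuming Dagster's software-defined assets; the event fields and pandas-based transform are illustrative, not Pomelo's actual stack.

    # Hypothetical sketch only: ingest raw events, then publish a normalized table.
    import pandas as pd
    from dagster import asset, materialize

    @asset
    def raw_events() -> pd.DataFrame:
        # In practice this would pull from a source system or object store.
        return pd.DataFrame({"user_id": [1, 1, 2], "event": ["visit", "visit", "signup"]})

    @asset
    def normalized_events(raw_events: pd.DataFrame) -> pd.DataFrame:
        # Deduplicate and standardize so the table is safe for company-wide use.
        return raw_events.drop_duplicates().reset_index(drop=True)

    if __name__ == "__main__":
        # Materialize the whole asset graph once, e.g. as a local smoke test.
        materialize([raw_events, normalized_events])
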
What we offer:
  • Competitive healthcare benefits
  • Generous equity compensation
  • Unlimited vacation
  • Membership in the First Round Network (a curated and confidential community with events, guides, thousands of Q&A questions, and opportunities for 1-1 mentorship)
Contract Type: Full-time

Senior Principal Engineer Core Data Platform

As an engineer well into your career, we know you're an expert at what you do an...
Location: United States, Seattle; San Francisco; Mountain View
Salary: 198300.00 - 318600.00 USD / Year
Company: Atlassian
Expiration Date: Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field
  • 12+ years of experience in backend software development, with a focus on distributed systems and large-scale storage solutions
  • 8+ years of experience designing and managing highly available, large-scale storage architectures in cloud environments
  • 5+ years of hands-on experience working with AWS storage services (S3, EBS, EFS, FSx, Glacier, DynamoDB)
  • Proficiency in system design, performance optimization, and cost-efficient architecture for exabyte-scale storage
  • Expertise in at least one major backend programming language (Kotlin, Java, Go, Rust, or Python)
  • Experience leading technical strategy and architectural decisions in large, multi-team engineering organizations
  • Strong understanding of distributed systems principles, including consistency models, replication, sharding, and consensus algorithms (Raft, Paxos)
  • Deep knowledge of security best practices, including encryption, access control (IAM), and compliance standards (SOC2, GDPR, HIPAA)
  • Experience mentoring senior engineers and driving high-impact engineering initiatives
Job Responsibility:
  • Collaborate with partner teams and internal customers to help define technical direction and OKRs for the Core Data platform organization
  • Regularly tackle the largest and most complex problems on the team, from technical design to implementation and launch
  • Partner across engineering teams to take on company-wide initiatives spanning multiple projects
  • Routinely tackle complex architecture challenges, applying established architectural standards to new projects
  • Work with senior engineering and product leaders to build strategy and design solutions that earn customers' trust and business
  • Own key OKRs and end-to-end outcomes of critical projects in a microservices environment
  • Champion best practices and innovative techniques for scalability, reliability, and performance optimizations
  • Own engineering and operational excellence for the health of our systems and processes
  • Proactively drive opportunities for continuous improvements and own key operational metrics
  • Continually drive developer productivity initiatives to ensure that we unleash the potential of our own teams
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Contract Type: Full-time

Staff Data Engineer

We’re looking for a Staff Data Engineer to own the design, scalability, and reli...
Location: United States, San Jose
Salary: 150000.00 - 250000.00 USD / Year
Company: Figure
Expiration Date: Until further notice
Requirements:
  • Experience owning or architecting large-scale data platforms — ideally in EV, autonomous driving, or robotics fleet environments, where telemetry, sensor data, and system metrics are core to product decisions
  • Deep expertise in data engineering and architecture (data modeling, ETL orchestration, schema design, transformation frameworks)
  • Strong foundation in Python, SQL, and modern data stacks (dbt, Airflow, Kafka, Spark, BigQuery, ClickHouse, or Snowflake)
  • Experience building data quality, validation, and observability systems to detect regressions, schema drift, and missing data
  • Excellent communication skills — able to understand technical needs from domain experts (controls, perception, operations) and translate complex data patterns into clear, actionable insights for engineers and leadership
  • First-principles understanding of electrical and mechanical systems, including motors, actuators, encoders, and control loops
Job Responsibility:
  • Architect and evolve Figure’s end-to-end platform data pipeline — from robot telemetry ingestion to warehouse transformation and visualization
  • Improve and maintain existing ETL/ELT pipelines for scalability, reliability, and observability
  • Detect and mitigate data regressions, schema drift, and missing data via validation and anomaly-detection frameworks (see the sketch after this list)
  • Identify and close gaps in data coverage, ensuring high-fidelity metrics coverage across releases and subsystems
  • Define the tech stack and architecture for the next generation of our data warehouse, transformation framework, and monitoring layer
  • Collaborate with robotics domain experts (controls, perception, Guardian, fall-prevention) to turn raw telemetry into structured metrics that drive engineering/business decisions
  • Partner with fleet management, operators, and leadership to design and communicate fleet-level KPIs, trends, and regressions in clear, actionable ways
  • Enable self-service access to clean, documented datasets for engineers
  • Develop tools and interfaces that make fleet data accessible and explorable for engineers without deep data backgrounds
Contract Type: Full-time
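
A minimal sketch of the kind of validation the role calls for, in plain Python with illustrative field names; a production system would use a validation framework and richer anomaly detection.

    # Hypothetical schema-drift and missing-data check for one telemetry batch.
    EXPECTED_SCHEMA = {"robot_id": str, "joint_torque": float, "timestamp": float}

    def validate_batch(records: list[dict]) -> list[str]:
        """Return human-readable violations found in a telemetry batch."""
        violations = []
        for i, rec in enumerate(records):
            missing = EXPECTED_SCHEMA.keys() - rec.keys()
            if missing:
                violations.append(f"record {i}: missing fields {sorted(missing)}")
            for field, expected in EXPECTED_SCHEMA.items():
                if field in rec and not isinstance(rec[field], expected):
                    violations.append(
                        f"record {i}: {field} is {type(rec[field]).__name__}, "
                        f"expected {expected.__name__}"  # schema drift
                    )
        return violations

    # Example: the second record drifted (torque arrived as a string).
    batch = [
        {"robot_id": "r1", "joint_torque": 1.2, "timestamp": 1700000000.0},
        {"robot_id": "r2", "joint_torque": "1.3", "timestamp": 1700000001.0},
    ]
    for violation in validate_batch(batch):
        print(violation)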

Software Engineer, Data Infrastructure

The Data Infrastructure team at Figma builds and operates the foundational platf...
Location: United States, San Francisco; New York
Salary: 149000.00 - 350000.00 USD / Year
Company: Figma
Expiration Date: Until further notice
Requirements:
  • 5+ years of Software Engineering experience, specifically in backend or infrastructure engineering
  • Experience designing and building distributed data infrastructure at scale
  • Strong expertise in batch and streaming data processing technologies such as Spark, Flink, Kafka, or Airflow/Dagster
  • A proven track record of impact-driven problem-solving in a fast-paced environment
  • A strong sense of engineering excellence, with a focus on high-quality, reliable, and performant systems
  • Excellent technical communication skills, with experience working across both technical and non-technical counterparts
  • Experience mentoring and supporting engineers, fostering a culture of learning and technical excellence
Job Responsibility:
  • Design and build large-scale distributed data systems that power analytics, AI/ML, and business intelligence
  • Develop batch and streaming solutions to ensure data is reliable, efficient, and scalable across the company (see the sketch after this list)
  • Manage data ingestion, movement, and processing through core platforms like Snowflake, our ML Datalake, and real-time streaming systems
  • Improve data reliability, consistency, and performance, ensuring high-quality data for engineering, research, and business stakeholders
  • Collaborate with AI researchers, data scientists, product engineers, and business teams to understand data needs and build scalable solutions
  • Drive technical decisions and best practices for data ingestion, orchestration, processing, and storage
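
A minimal sketch of the streaming side, assuming the kafka-python client and placeholder topic, broker, and field names; Figma's actual systems are not described beyond the posting.

    # Hypothetical consumer that validates events before they reach the warehouse.
    import json

    from kafka import KafkaConsumer  # kafka-python client

    consumer = KafkaConsumer(
        "product-events",                    # placeholder topic
        bootstrap_servers="localhost:9092",  # placeholder broker
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # Guard reliability at the edge: skip malformed events rather than
        # letting them poison downstream tables.
        if "user_id" not in event or "event_type" not in event:
            print(f"dropped malformed event at offset {message.offset}")
            continue
        # ... hand the validated event to the warehouse loader / sink here ...
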
What we offer:
  • Equity
  • Health, dental & vision
  • Retirement with company contribution
  • Parental leave & reproductive or family planning support
  • Mental health & wellness benefits
  • Generous PTO
  • Company recharge days
  • A learning & development stipend
  • A work from home stipend
  • Cell phone reimbursement
Contract Type: Full-time

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location: Bahrain, Seef, Manama
Salary: Not provided
Company: Citi
Expiration Date: Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects
Contract Type: Full-time

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location: United States, Glendale
Salary: Not provided
Company: Enormous Enterprise
Expiration Date: Until further notice
Requirements:
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs. OLAP environments
  • Candidates must work on W2; no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
Job Responsibility:
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer:
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Data Engineer

As a Data Engineer, you’ll build and refine the pipelines, data models, and serv...
Location: United States, Redmond
Salary: 155000.00 - 175000.00 USD / Year
Company: 2A Consulting
Expiration Date: Until further notice
Requirements:
  • Proven ability to design and build end-to-end data systems, from ingestion through cleaning, structuring, storage, and serving
  • Experience building and shipping data products that deliver practical value
  • Demonstrated impact using AI models in data workflows (applied use, not ML research)
  • 5+ years of software or data engineering experience, including at least 2 years of hands-on work with data pipelines
  • Comfortable defining architecture and starting systems from scratch, working independently in a small cross-functional team
  • Proficiency in Python, SQL, or similar languages used in data engineering workflows
Job Responsibility:
  • Build and maintain core data pipelines
  • Build and maintain end-to-end ingestion pipelines for documents, datasets, code repositories, videos, transcripts, and internal knowledge sources
  • Clean, normalize, structure, and store data in formats that support both web applications and AI-driven use cases
  • Use “out of the box” Microsoft tools, such as Fabric, Azure services, Cosmos DB, or Copilot Studio, to create reliable, maintainable systems
  • Enrich and model research data
  • Use AI models to transform unstructured content into structured metadata and durable knowledge assets
  • Design the architecture and foundational data systems, establishing the patterns and infrastructure for a new, scalable environment
  • Develop and refine embeddings, vector indexes, and retrieval components to support semantic search and grounding scenarios (see the sketch after this list)
  • Build backend and data services
  • Build data services, APIs, and backend components that power internal applications and agent-supported workflows
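
A minimal sketch of a retrieval component, with a stand-in embed() function in place of a real embedding model; all names are illustrative.

    # Hypothetical brute-force vector index answering a semantic-search query.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder: a real system would call an embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        vec = rng.standard_normal(8)
        return vec / np.linalg.norm(vec)

    docs = ["quarterly research summary", "meeting transcript", "codebase README"]
    index = np.vstack([embed(d) for d in docs])  # one unit vector per document

    def search(query: str, k: int = 2) -> list[str]:
        scores = index @ embed(query)  # cosine similarity, since vectors are unit-norm
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    print(search("research notes"))
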
What we offer:
  • Flexible time-off plan
  • 100% employer-paid medical, dental, and vision insurance
  • Employer-paid life insurance for those enrolled in medical coverage
  • 401(k) plan with company match
  • Fertility, surrogacy, and adoption benefits
  • Fitness and caregiver benefits
  • Employee Assistance Program
  • 100% employer-paid short- and long-term disability coverage
Contract Type: Full-time

Data Engineer

This is a data engineer position - a programmer responsible for the design, deve...
Location: India, Chennai
Salary: Not provided
Company: Citi
Expiration Date: Until further notice
Requirements:
  • 5-8 years of experience working in data ecosystems
  • 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), plus Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Job Responsibility:
  • Ensuring high quality software development, with complete documentation and traceability
  • Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large scale financial data
  • Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
  • Ensure efficient data storage and retrieval using Big Data technologies
  • Implement best practices for Spark performance tuning, including partitioning, caching, and memory management (see the sketch after this list)
  • Maintain high code quality through testing, CI/CD pipelines and version control (Git, Jenkins)
  • Work on batch processing frameworks for Market risk analytics
  • Promoting unit/functional testing and code inspection processes
  • Work with business stakeholders and Business Analysts to understand the requirements
  • Work with other data scientists to understand and interpret complex datasets
Contract Type: Full-time
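
A minimal sketch of those tuning habits; the posting's pipelines are Spark Java, but PySpark is used here only to keep the sketch short, and all paths and column names are placeholders.

    # Hypothetical batch job: explicit partitioning, selective caching, and a
    # shuffle-partition setting sized to the cluster rather than left at default.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("risk-batch-sketch")
        .config("spark.sql.shuffle.partitions", "200")
        .getOrCreate()
    )

    trades = spark.read.parquet("/data/trades")  # placeholder path

    # Partition by the aggregation key so shuffles stay balanced.
    trades = trades.repartition(200, "book_id")

    # Cache only what several downstream aggregations reuse.
    trades.cache()

    by_book = trades.groupBy("book_id").sum("notional")
    by_day = trades.groupBy("trade_date").count()

    by_book.write.mode("overwrite").parquet("/data/marts/by_book")
    by_day.write.mode("overwrite").parquet("/data/marts/by_day")

    trades.unpersist()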