Data Engineer (Databricks)

NTT DATA

Location:
India, Bangalore

Contract Type:
Not provided

Salary:
Not provided

Job Description:

The Data Engineer (Databricks) role involves designing and optimizing data pipelines and ETL workflows. Candidates should have strong skills in Python, SQL, and Databricks, with experience in cloud platforms like AWS and Azure. Collaboration with data scientists and analysts is essential for delivering high-quality data models and supporting advanced analytics.

Job Responsibility:

  • Design, build, and optimize scalable data pipelines and ETL workflows
  • Develop high-quality, production-grade data models and ensure data reliability, performance, and governance across the platform
  • Collaborate with data scientists, analysts, and cross-functional teams to deliver clean, curated datasets and support advanced analytics and ML workloads
  • Monitor, troubleshoot, and continuously improve data processing jobs, ensuring operational excellence and adherence to best practices

Requirements:

  • Minimum of 5 years of experience in data engineering or related fields
  • Strong skills in Python, SQL, and Databricks
  • Experience with cloud platforms such as AWS and Azure
  • Must have: Python, SQL, DBT, Databricks (Spark + PySpark), Snowflake, Airflow, CI/CD
  • Must have AWS services: S3, Glue, Athena, Redshift
  • Must have Azure services: ADF, Synapse, Fabric

Nice to have:

  • Kafka
  • MongoDB
  • Palantir Foundry
  • Agentic AI
  • LangChain
  • LangGraph
  • GenAI

Additional Information:

Job Posted:
February 14, 2026

Work Type:
Hybrid work

Similar Jobs for Data Engineer (Databricks)

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest

Salary:
Not provided

Inetum

Expiration Date:
Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Expertise in implementing and optimizing the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Efficient implementation of the Lakehouse architecture on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.

Work Type: Full-time

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest

Salary:
Not provided

Inetum

Expiration Date:
Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • 5+ years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Expertise in implementing and optimizing the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings

Work Type: Full-time

Lead Data Engineer

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and...
Location:
United States

Salary:
Not provided

Rearc

Expiration Date:
Until further notice

Requirements:
  • 10+ years of experience in data engineering, data architecture, or related technical fields
  • Proven ability to design, build, and optimize large-scale data ecosystems
  • Strong track record of leading complex data engineering initiatives
  • Deep hands-on expertise in ETL/ELT design, data warehousing, and data modeling
  • Extensive experience with data integration frameworks and best practices
  • Advanced knowledge of cloud-based data services and architectures (AWS Redshift, Azure Synapse Analytics, Google BigQuery, or equivalent)
  • Strong strategic and analytical thinking
  • Proficiency with modern data engineering frameworks (Databricks, Spark, lakehouse technologies like Delta Lake)
  • Exceptional communication and interpersonal skills
Job Responsibility:
  • Engage deeply with stakeholders to understand data needs, business challenges, and technical constraints
  • Translate stakeholder needs into scalable, high-quality data solutions
  • Implement with a DataOps mindset using tools like Apache Airflow, Databricks/Spark, Kafka
  • Build reliable, automated, and efficient data pipelines and architectures
  • Lead and execute complex projects
  • Provide technical direction and set engineering standards
  • Ensure alignment with customer goals and company principles
  • Mentor and develop data engineers
  • Promote knowledge sharing and thought leadership
  • Contribute to internal and external content
What we offer:
  • Comprehensive health benefits
  • Generous time away and flexible PTO
  • Maternity and paternity leave
  • Access to educational resources with reimbursement for continued learning
  • 401(k) plan with company contribution

Lead Data Engineer

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and...
Location:
India, Bengaluru

Salary:
Not provided

Rearc

Expiration Date:
Until further notice

Requirements:
  • 10+ years of experience in data engineering, data architecture, or related fields
  • Extensive experience in writing and testing Java and/or Python
  • Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue
  • Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask
  • Proficiency with Spark and Databricks is highly desirable
  • Proven track record of leading complex data engineering projects, including designing and implementing scalable data solutions
  • Hands-on experience with ETL processes, data warehousing, and data modeling tools
  • In-depth knowledge of data integration tools and best practices
  • Strong understanding of cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery)
  • Strong strategic and analytical skills
Job Responsibility:
  • Understand Requirements and Challenges: Collaborate with stakeholders to deeply understand their data requirements and challenges
  • Implement with a DataOps Mindset: Embrace a DataOps mindset and utilize modern data engineering tools and frameworks, such as Apache Airflow, Apache Spark, or similar, to build scalable and efficient data pipelines and architectures
  • Lead Data Engineering Projects: Take the lead in managing and executing data engineering projects, providing technical guidance and oversight to ensure successful project delivery
  • Mentor Data Engineers: Share your extensive knowledge and experience in data engineering with junior team members, guiding and mentoring them to foster their growth and development in the field
  • Promote Knowledge Sharing: Contribute to our knowledge base by writing technical blogs and articles, promoting best practices in data engineering, and contributing to a culture of continuous learning and innovation

Senior Data Engineer

We’re growing our team at ELEKS in partnership with a company, the UK’s largest ...
Location:
Not provided

Salary:
Not provided

ELEKS

Expiration Date:
Until further notice

Requirements:
  • 4+ years of experience in data engineering, SQL, and ETL (data validation, data mapping, exception handling)
  • 2+ years of hands-on experience with Databricks
  • Experience with Python
  • Experience with AWS (e.g. S3, Redshift, Athena, Glue, Lambda, etc.)
  • At least an Upper-Intermediate level of English
Job Responsibility:
  • Building Databases and Pipelines: Developing databases, data lakes, and data ingestion pipelines to deliver datasets for various projects
  • End-to-End Solutions: Designing, developing, and deploying comprehensive solutions for data and data science models, ensuring usability for both data scientists and non-technical users. This includes following best engineering and data science practices
  • Scalable Solutions: Developing and maintaining scalable data and machine learning solutions throughout the data lifecycle, supporting the code and infrastructure for databases, data pipelines, metadata, and code management
  • Stakeholder Engagement: Collaborating with stakeholders across various departments, including data platforms, architecture, development, and operational teams, as well as addressing data security, privacy, and third-party coordination
What we offer:
  • Close cooperation with a customer
  • Challenging tasks
  • Competence development
  • Ability to influence project technologies
  • Team of professionals
  • Dynamic environment with low level of bureaucracy

Senior Data Engineer

At Rearc, we're committed to empowering engineers to build awesome products and ...
Location:
India, Bangalore

Salary:
Not provided

Rearc

Expiration Date:
Until further notice

Requirements:
  • 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases
  • Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments
  • Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows
  • Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue
  • Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask
  • Proficiency with Spark and Databricks is highly desirable
  • Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg and DynamoDB
  • In-depth knowledge of data architecture principles and best practices, especially in cloud environments
  • Proven experience with AWS services, including expertise in using AWS CLI, SDK, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK
  • Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders
Job Responsibility:
  • Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives
  • Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability
  • Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes
  • Technical Expertise: Apply deep expertise in ETL processes, data modelling, and data warehousing to optimize data workflows and ensure data integrity and quality
  • Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions—mentor and coach junior team members, fostering their growth and development in data engineering practices
  • Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums

Senior Data Engineer

Our client is a global jewelry manufacturer undergoing a major transformation, m...
Location:
Poland, Wroclaw

Salary:
Not provided

Zoolatech

Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server
  • Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes
  • Practical experience with Azure Data Factory, Databricks, and PySpark
  • Track record of designing, building, and delivering production-ready data products at enterprise scale
  • Strong analytical skills and ability to translate business requirements into technical solutions
  • Excellent communication skills in English, with the ability to adapt technical details for different audiences
  • Experience working in Agile/Scrum teams
Job Responsibility:
  • Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform
  • Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models
  • Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies
  • Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance
  • Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions
  • Monitor and own the stability of delivered data products, ensuring continuous improvements and measurable business benefits
  • Promote a “build-once, consume-many” approach to maximize reuse and value creation across business verticals
  • Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering
What we offer:
  • Paid Vacation
  • Sick Days
  • Sport/Insurance Compensation
  • English Classes
  • Charity
  • Training Compensation

Lead Data Engineer

Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader i...
Location:
India, Gurugram

Salary:
Not provided

Circle K

Expiration Date:
Until further notice

Requirements:
  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Certifications in Azure, Databricks, or Snowflake are a plus
  • Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
  • Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools
Job Responsibility
Job Responsibility
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions

Work Type: Full-time