
Senior Databricks Engineer


Bibby Financial Services

Location:
United Kingdom, Banbury

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
65000.00 - 75000.00 GBP / Year

Job Description:

Senior Databricks Engineer - Banbury (Hybrid) - Salary £65-75K + Benefits

Bibby Financial Services have an exciting opportunity for a reliable Senior Databricks Engineer to join our team. You will join us on a full-time, permanent basis and, in return, you will receive a competitive salary of £65,000 - £75,000 per annum.

About the role:

As our Senior Databricks Engineer, you will operate within an Agile delivery environment, working closely with the Data Product Manager and Data Architect to ensure your team maintains a pipeline of delivery against the Backlog, providing vital insight from our wide-ranging dataset to support the executive and operational decision making that will underpin sustained growth of BFS business units domestically and internationally.

You will take an active leadership role in determining and developing the shape of your team's solution delivery against business requirements, as well as helping to inform and input into the wider technical architecture and strategy. This is very much a hands-on role, where the majority of your time will be spent actively developing solutions. You will also have management responsibility for a small team of Data Engineers, whom you will coach, support and organise to ensure we sustain a predictable pipeline of delivery, whilst ensuring all appropriate governance and best practice is adhered to.

Job Responsibility:

  • Understand the business / product strategy and its supporting goals, ensuring data interpretation stays aligned with them
  • Provide technical leadership on how to break down initiatives into appropriately sized features, epics and stories that balance value and risk. Take a leadership role in setting standards, driving quality and consistency in solution delivery
  • Work closely with the Data Architect to collaborate on the design of our data architecture and interpret it into a build plan
  • Lead the build and maintenance of scalable data pipelines and ETL processes to support data integration and analytics from a diverse range of data sources, including cloud storage, databases and APIs
  • Deliver large-scale data processing workflows (ingestion, cleansing, transformation, validation, storage) using best practice tools and techniques; a PySpark sketch of such a flow follows this list
  • Collaborate with the BI Product Owner, analysts, and other business stakeholders to understand data requirements and deliver solutions that meet business needs
  • Optimize and tune data processing systems for performance, reliability, and scalability
  • Implement data quality and validation processes to ensure the accuracy and integrity of data throughout the pipelines
  • Operate an agile CI/CD environment within Azure DevOps, collaborating on Sprint cycles, code deployment, version control, and development practices
  • Develop and maintain data models, schemas, and documentation
  • Monitor and troubleshoot data pipeline issues, ensuring timely resolution
  • Stay up-to-date with the latest industry trends and technologies in data engineering and recommend improvements to existing systems or processes as appropriate
  • Ensure adherence to BFS Governance processes
  • Provide line management and technical leadership to a small team of Data Engineers
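
For a sense of what such a workflow looks like in practice, here is a minimal PySpark sketch of an ingest, cleanse, validate, and store flow. It is an illustration only: the paths, column names, and validation rule are assumptions invented for the example, not details taken from the role.

```python
# A minimal sketch, assuming CSV files land in cloud storage and Delta Lake
# is available on the cluster. Paths, columns, and rules are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Ingest: raw CSV from a landing zone.
raw = spark.read.option("header", True).csv("/mnt/landing/invoices/")

# Cleanse: normalise types and drop exact duplicates.
clean = (
    raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("invoice_date", F.to_date("invoice_date"))
       .dropDuplicates()
)

# Validate: quarantine failing rows instead of dropping them silently.
valid = clean.filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
rejected = clean.subtract(valid)

# Store: curated and quarantined sets go to separate Delta tables.
valid.write.format("delta").mode("append").save("/mnt/curated/invoices/")
rejected.write.format("delta").mode("append").save("/mnt/quarantine/invoices/")
```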

Requirements:

  • Significant Databricks experience, including Unity Catalog
  • Experience with Terraform: defining, deploying, and managing cloud infrastructure as code
  • Proficiency in programming languages and frameworks such as Python, Spark, and SQL
  • Strong experience with SQL databases
  • Expertise in data pipeline and workflow management tools (e.g., Apache Airflow, ADF); a minimal Airflow example follows this list
  • Experience with cloud platforms (Azure preferred) and related data services
  • Excellent problem-solving skills and attention to detail
  • Inclusive and curious, continuously seeks to build knowledge and understanding
  • Strong communication and collaboration skills
  • Experience of Waterfall and Agile delivery methodologies
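
As a point of reference for the workflow-management requirement, a minimal Apache Airflow DAG might look like the sketch below. Airflow 2.x is assumed, and the DAG id and callable are placeholders, not anything named in the vacancy.

```python
# A minimal Airflow DAG sketch (Airflow >= 2.4 assumed for the `schedule`
# argument; earlier 2.x versions use `schedule_interval`). Names are
# placeholders invented for the illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_pipeline(**context):
    # Placeholder for the real ingestion/transformation call.
    print(f"Running pipeline for {context['ds']}")


with DAG(
    dag_id="daily_data_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_pipeline", python_callable=run_pipeline)
```
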
What we offer:
  • Private healthcare for you and your family
  • Company car allowance
  • Company pension scheme
  • Wide range of flexible benefits, such as gym membership, technology, or health assessments
  • Access to an online wellbeing centre
  • Range of discounts from many businesses
  • 25 days holiday, which increases with service, and options to buy or sell more
  • Electric Vehicle/Plug-in Hybrid Vehicle (EV/PHEV) scheme

Additional Information:

Job Posted:
December 06, 2025

Expiration:
January 09, 2026

Employment Type:
Fulltime
Work Type:
Hybrid work


Similar Jobs for Senior Databricks Engineer

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Company: Inetum
Expiration: Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum); a MERGE sketch follows this list
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
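
For reference, here is a minimal sketch of the Delta Lake upsert pattern named above: a MERGE, followed by OPTIMIZE and VACUUM. The table paths and join key are assumptions made up for the illustration.

```python
# A minimal upsert sketch with Delta Lake. Table paths and the join key are
# assumptions; OPTIMIZE/VACUUM syntax is as available on Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/mnt/bronze/customer_updates/")
target = DeltaTable.forPath(spark, "/mnt/silver/customers/")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files, then clear snapshots older than the retention window.
spark.sql("OPTIMIZE delta.`/mnt/silver/customers/`")
spark.sql("VACUUM delta.`/mnt/silver/customers/` RETAIN 168 HOURS")
```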

Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions

What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.

Employment Type: Fulltime

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Company: Inetum
Expiration: Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • At least 5 years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale

Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT); a streaming sketch follows this list
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
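
As an illustration of the streaming responsibility above, a minimal Spark Structured Streaming job reading a Bronze Delta table into Silver might look like this. The paths and the filter are invented for the sketch.

```python
# A minimal Structured Streaming sketch: Bronze Delta table -> Silver Delta
# table with a basic filter. Paths and columns are invented for the example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.readStream.format("delta").load("/mnt/bronze/events/")

silver = (
    bronze.filter(F.col("event_type").isNotNull())
          .withColumn("ingested_at", F.current_timestamp())
)

(
    silver.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events_silver/")
    .outputMode("append")
    .start("/mnt/silver/events/")
)
```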

What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings

Employment Type: Fulltime

Senior Data Engineer

We’re growing our team at ELEKS in partnership with a company, the UK’s largest ...
Location: Not provided
Salary: Not provided
Company: ELEKS
Expiration: Until further notice

Requirements:
  • 4+ years of experience in Data Engineering, SQL, and ETL (data validation, data mapping, exception handling); a small illustration of this pattern follows the list
  • 2+ years of hands-on experience with Databricks
  • Experience with Python
  • Experience with AWS (e.g. S3, Redshift, Athena, Glue, Lambda, etc.)
  • At least an Upper-Intermediate level of English
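
Here is a tiny plain-Python illustration of the validation + mapping + exception-handling pattern the first requirement names. The field names, the column mapping, and the rules are assumptions invented for the example.

```python
# A tiny sketch of validation + mapping + exception handling. Field names,
# the mapping, and the rules are invented for the illustration.
from decimal import Decimal, InvalidOperation

COLUMN_MAP = {"InvAmt": "amount", "InvDate": "invoice_date"}  # source -> target


def transform_row(row: dict) -> dict:
    mapped = {COLUMN_MAP.get(k, k): v for k, v in row.items()}
    try:
        mapped["amount"] = Decimal(mapped["amount"])
    except (KeyError, TypeError, InvalidOperation) as exc:
        # Route bad records to an exception flow instead of failing the batch.
        raise ValueError(f"invalid amount in {row!r}") from exc
    if mapped["amount"] <= 0:
        raise ValueError(f"non-positive amount in {row!r}")
    return mapped


good, bad = [], []
for row in [{"InvAmt": "120.50"}, {"InvAmt": "oops"}]:
    try:
        good.append(transform_row(row))
    except ValueError as err:
        bad.append((row, str(err)))
```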

Job Responsibility:
  • Building Databases and Pipelines: Developing databases, data lakes, and data ingestion pipelines to deliver datasets for various projects
  • End-to-End Solutions: Designing, developing, and deploying comprehensive solutions for data and data science models, ensuring usability for both data scientists and non-technical users. This includes following best engineering and data science practices
  • Scalable Solutions: Developing and maintaining scalable data and machine learning solutions throughout the data lifecycle, supporting the code and infrastructure for databases, data pipelines, metadata, and code management
  • Stakeholder Engagement: Collaborating with stakeholders across various departments, including data platforms, architecture, development, and operational teams, as well as addressing data security, privacy, and third-party coordination

What we offer:
  • Close cooperation with a customer
  • Challenging tasks
  • Competence development
  • Ability to influence project technologies
  • Team of professionals
  • Dynamic environment with low level of bureaucracy

Senior Data Engineer

Join a leading energy sector analytics company as we expand our innovative data ...
Location: Poland
Salary: Not provided
Company: Edvantis
Expiration: Until further notice

Requirements:
  • At least 5 years of experience as a Data Engineer, with a proven track record of successful projects
  • Solid experience with relational database systems, particularly SQL Server
  • Advanced proficiency in Python and PySpark – the languages of data manipulation and analysis
  • Expertise in Databricks as a distributed data engineering platform
  • Expertise with Airflow and Grafana
  • Ability to collaborate effectively within a team environment and meet project deadlines
  • Strong communication skills and fluency in English

Job Responsibility:
  • Develop and maintain scalable data pipelines using Python, SQL, AWS services (Amazon Bedrock, S3), and Databricks
  • Build and optimize ETL jobs in Databricks using PySpark, ensuring efficient processing of large-scale distributed datasets; one common optimization is sketched after this list
  • Play a pivotal role in enhancing the breadth and depth of our courthouse data products
  • Utilize your Python expertise to parse complex datasets, manipulate intricate image data, and craft innovative data products that meet our customers’ evolving needs
  • Champion data quality, consistency, and reliability throughout our product lifecycle
  • Contribute to the development of new features and the continuous improvement of existing data systems
  • Design and implement distributed data engineering solutions in Databricks, leveraging PySpark for optimized workflows
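
One routine PySpark optimization implied by these responsibilities is broadcasting a small dimension table so the large fact table is not shuffled during the join. A minimal sketch, with table paths and the join key invented for the example:

```python
# A minimal broadcast-join sketch: hinting the small dimension table avoids
# shuffling the large fact table. Paths and the key are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.read.format("delta").load("/mnt/silver/filings/")   # large
courts = spark.read.format("delta").load("/mnt/silver/courts/")   # small

enriched = facts.join(broadcast(courts), "court_id", "left")
enriched.write.format("delta").mode("overwrite").save("/mnt/gold/filings/")
```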

What we offer:
  • Remote-first work model with flexible working hours (we provide all equipment)
  • Comfortable and fully equipped offices in Lviv and Rzeszów
  • Competitive compensation with regular performance reviews
  • 18 paid vacation days per year + all state holidays
  • 12 days of paid sick leave per year without a medical certificate + extra paid leave for blood donation
  • Medical insurance with an affordable family coverage option
  • Mental health program which includes free and confidential consultations with a psychologist
  • English, German, and Polish language courses
  • Corporate subscription to learning platforms, regular meetups and webinars
  • Friendly team that values accountability, innovation, teamwork, and customer satisfaction

Employment Type: Fulltime

Senior Data Engineer

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location: United States, San Mateo
Salary: 170000.00 - 230000.00 USD / Year
Company: Skydio
Expiration: Until further notice

Requirements:
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks or Palantir Foundry, including building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (for example, Python or Java)
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics

Job Responsibility:
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones
  • Build and evolve our Databricks and Palantir Foundry environments
  • Develop data systems that make our products truly data-driven
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring and observability, and by contributing to the on-call rotation for critical data systems

What we offer:
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Group health insurance plans
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan

Employment Type: Fulltime

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location: India, Hyderabad
Salary: Not provided
Company: Appen
Expiration: Until further notice

Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python, and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.

Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs; a minimal trigger sketch follows this list
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues

Employment Type: Fulltime
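
As a hedged sketch of the CI/CD responsibility above: a pipeline step can trigger a Databricks job through the Jobs 2.1 REST API. The host, token, and job id below are placeholder environment variables a pipeline would inject from its secret store; nothing here is taken from the vacancy itself.

```python
# A hedged sketch of triggering a Databricks job from a CI/CD step via the
# Jobs 2.1 REST API. DATABRICKS_HOST/TOKEN/JOB_ID are placeholder env vars.
import os

import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": int(os.environ["DATABRICKS_JOB_ID"])},
    timeout=30,
)
resp.raise_for_status()
print("Started run:", resp.json()["run_id"])
```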

Senior Data Engineer

Our client is a global jewelry manufacturer undergoing a major transformation, m...
Location: Turkey, Istanbul
Salary: Not provided
Company: Zoolatech
Expiration: Until further notice

Requirements:
  • 5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server
  • Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes
  • Practical experience with Azure Data Factory, Databricks, and PySpark
  • Track record of designing, building, and delivering production-ready data products at enterprise scale
  • Strong analytical skills and ability to translate business requirements into technical solutions
  • Excellent communication skills in English, with the ability to adapt technical details for different audiences
  • Experience working in Agile/Scrum teams

Job Responsibility:
  • Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform
  • Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models
  • Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies
  • Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance
  • Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions
  • Monitor and own the stability of delivered data products, ensuring continuous improvements and measurable business benefits
  • Promote a “build-once, consume-many” approach to maximize reuse and value creation across business verticals
  • Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering

What we offer:
  • Paid Vacation
  • Hybrid Work (home/office)
  • Sick Days
  • Sport/Insurance Compensation
  • Holidays Day Off
  • English Classes
  • Training Compensation
  • Transportation compensation

Senior Data Engineer

This Senior Data Engineer position is a member of 3Cloud’s Managed Services team...
Location: United States
Salary: 90200.00 - 130800.00 USD / Year
Company: 3Cloud
Expiration: December 22, 2025

Requirements:
  • Bachelor’s degree in Computer Science, Mathematics, or related field, or equivalent certification and experience
  • Experience working in a support role and understanding standard support processes (e.g., ITIL)
  • 3+ years working with data and business intelligence solutions on the Microsoft tool stack for Business Intelligence (ADF, SQL DW, Synapse, Databricks, Power BI, etc.), or other vendor Business Intelligence tools (Business Objects, MicroStrategy, Qlik, SAP)
  • Strong verbal and written communication skills
  • A passion for problem solving and learning new technologies
  • Client relationship skills and experience managing vendors
  • Ability to work in a fast paced, rapidly changing environment
  • Strong desire for personal development and learning
  • Databricks: advanced experience with Delta Lake, Unity Catalog, jobs, clusters, notebook development
  • PySpark: strong hands-on development

Job Responsibility:
  • Lead support of client’s Azure Data platform and Power BI Environment, including response to any escalations while helping to analyze and resolve incidents for customers environment
  • Consult, develop, and advise on solutions in Microsoft Azure with tools such as Synapse, Data Factory, Databricks, Azure ML, Data Lake, Data Warehouse, and Power BI
  • Consult, develop, and advise on Power BI, including governance, strategy, report development, performance
  • Consistently learn, apply, and refine skills around data engineering and data analytics
  • Proactively mentor junior team members and actively give feedback on their work and performance
  • Design and deploy data pipelines, models, and AI-driven solutions for clients in various industries
  • Understand and work with customers on licensing involving Microsoft data solutions
  • Take responsibility for the quality of deliverables and solutions
  • Support and participate in the estimating of work to be done by self and others
  • Communicate ticket status information to all associated parties and escalate cases as appropriate

What we offer:
  • Flexible work location with a virtual first approach to work
  • 401(K) with match up to 50% of your 6% contributions of eligible pay
  • Generous PTO providing a minimum of 15 days in addition to 9 paid company holidays and 2 floating personal days
  • Two medical plan options
  • Option for vision and dental coverage
  • 100% employer paid coverage for life and disability insurance
  • Paid leave for birth parents and non-birth parents
  • Option for Healthcare FSA, HSA, and Dependent Care FSA
  • $67.00 monthly tech and home office allowance
  • Utilization and/or discretionary bonus eligibility based on role

Employment Type: Fulltime