
Lead Data Engineer


Social Value Portal Ltd

Location:
United Kingdom, London


Contract Type:
Not provided

Salary:

65000.00 - 75000.00 GBP / Year

Job Description:

We are looking for a Lead Data Engineer to lead our Data Engineering & Automation capability, with a focus on delivery, technical quality and team development. This is a hands-on player/coach role, combining people leadership with meaningful technical contribution: you will have direct line management responsibility while remaining technically hands-on, contributing to design decisions, code reviews and problem-solving alongside the team.

With core data foundations already in place, this role will focus on evolving and extending our data platform to support new use cases and deliver high-quality data products that create value across the organisation. You will support engineers through code reviews, design discussions and day-to-day problem solving, while maintaining high standards for scalability, reliability and data quality.

You will report into the Associate Director of Data & AI, who owns the overall data strategy and senior stakeholder relationships, and work in close partnership to translate strategic priorities into clear delivery plans and robust, scalable systems. You are comfortable balancing short-term delivery with long-term technical health, and you enjoy guiding teams through ambiguity without losing momentum.

Job Responsibility:

  • Team Management: Providing direct line management to the data engineering & automation team. You will run code reviews, manage performance and nurture career growth
  • Stakeholder Collaboration: Working in partnership with the Associate Director of Data & AI to represent the data engineering team, supporting stakeholder conversations, setting realistic delivery expectations and protecting the team’s focus
  • Operational Stability: Supporting the team in diagnosing and resolving data issues across our integrations (APIs, CRM, automation tools) and designing systems that reduce manual intervention and single points of failure
  • Engineering Delivery: Overseeing the rollout of our Client Portal and agentic products. You will ensure these tools are stable, scalable and ready for high-volume operations
  • Governance & Quality: Establishing safe deployment practices and data quality checks. You ensure that our data is accurate, backed up and handled securely in line with GDPR

Requirements:

  • Experience mentoring, leading or managing people in a technical environment
  • A strong coaching mindset: helping others get unstuck, break down complex problems, and prioritise effectively when everything feels urgent
  • Comfortable balancing hands-on delivery with people leadership
  • Strong Python and SQL skills, with experience writing and reviewing production-grade code
  • Confidence conducting deep code reviews and setting engineering standards, including testing and deployment practices appropriate for data systems
  • Solid data modelling foundations and experience working with modern cloud data warehouses (e.g. BigQuery, Snowflake)
  • Strong Git and version control practices, including branching and merging workflows
  • Experience architecting data solutions on cloud platforms (GCP preferred)
  • Ability to troubleshoot data issues across multiple business systems, APIs and integrations
  • Ability to identify opportunities for automation, design pragmatic solutions and propose clear implementation approaches
  • Practical experience with safe deployment practices and an understanding of the importance of documentation, data quality and observability
  • Experience handling data securely and in line with GDPR and data protection principles
  • An interest in using AI tools to support engineering workflows, with a strong human-in-the-loop approach to ensure quality, accountability and sound engineering judgement

Nice to have:

  • Experience developing or owning internal data products
  • Knowledge of data governance or data quality tooling
  • Experience supporting teams that use low-code or no-code tools (such as n8n or Zapier), and introducing guardrails and best practices to ensure reliability and maintainability

What we offer:
  • Salary: £65k to £75k DOE + opportunity for up to 10% bonus, dependent on company performance
  • Annual leave: 20 days + 7 days over Christmas + 1 Special Occasion Day (+ 8 bank holiday days). Optional: Buy or sell 3 days of holiday, 1 Giving back day, option to work 2 days over the festive period and get additional days back
  • Parental leave: 16 weeks full pay for the primary caregiver, 6 weeks full pay for the secondary caregiver. Phased return to the office for the primary caregiver (work for 4 days at full pay for 2 months)
  • Flexible working: Our standard working hours are 9.30am - 5.30pm, but we offer flexibility to fit your work around your life
  • Work from Abroad: Opportunity to work from abroad for up to 90 days a year
  • Free CFG education: After working at CFG for 6 months you are able to enrol on one of our Tech Accelerator courses
  • 6% matched pension: CFG will match your pension contribution up to 6%
  • Length of service: After 3 years you get an extra day of annual leave, 2 extra days at 4 years and 3 extra days at 5 years (and 5+ years). After 5 years a 2-month sabbatical, first month paid, 2nd month unpaid is available
  • Mental health support: Free access to Spill, which offers employees free workplace support therapy sessions

Additional Information:

Job Posted:
February 12, 2026

Employment Type:
Fulltime
Work Type:
Hybrid work

Similar Jobs for Lead Data Engineer

Lead Data Engineer

Citi Fund Services is undergoing a major transformation effort to transform t...
Location:
United Kingdom, Belfast
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • Significant years of hands-on experience in software development, with proven experience in data integration / data pipeline developments
  • Exceptional technical leader with a proven background in delivery of significant projects
  • Multi-year experience in data integration development (Ab Initio, Talend, Apache Spark, AWS Glue, SSIS or equivalent) including optimization, tuning and benchmarking
  • Multi-year experience in SQL (Oracle, MSSQL and equivalents) including optimization, tuning and benchmarking
  • Expertise with Cloud-native development and Container Orchestration tools (Serverless, Docker, Kubernetes, OpenShift, etc.) a significant plus
  • Strong understanding of Agile methodologies (Scrum, Kanban) and experience working in Agile teams
  • Exposure to Continuous Integration and Continuous Delivery (CI/CD) pipelines, either on-premises or public cloud (i.e., Tekton, Harness, Jenkins, etc.)
  • Demonstrable expertise in financial services considered a plus
  • Self-starter with the ability to drive projects independently and deliver results in a fast-paced environment
Job Responsibility:
  • Architect and develop enterprise-scale data pipelines using the latest data streaming technologies
  • Implement and optimize delivered solutions, tuning for optimal performance through frequent benchmarking
  • Develop containerised solutions capable of running in private or public cloud
  • Ensure solutions are aligned to CI/CD tooling and standards
  • Ensure solutions are aligned to observability standards
  • Effectively communicate technical solutions and artifacts to non-technical stakeholders and senior leadership
  • Contribute to the journey of modernizing existing data processors and moving them to common and cloud platforms
  • Collaborate with cross-functional domain experts to translate business requirements into scalable data solutions
What we offer:
  • 27 days annual leave (plus bank holidays)
  • A discretionary annual performance-related bonus
  • Private Medical Care & Life Insurance
  • Employee Assistance Program
  • Pension Plan
  • Paid Parental Leave
  • Special discounts for employees, family, and friends
  • Access to an array of learning and development resources
Employment Type:
Fulltime

Data Engineering Lead

The Data Engineering Lead is a strategic professional who stays abreast of developments...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.) and Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data
Employment Type:
Fulltime

Big Data / Scala / Python Engineering Lead

The Applications Development Technology Lead Analyst is a senior level position ...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • At least two years of experience building and leading highly complex technical data engineering teams (10+ years of hands-on data engineering experience overall)
  • Lead data engineering team, from sourcing to closing
  • Drive strategic vision for the team and product
  • Experience managing a data-focused product or ML platform
  • Hands-on experience designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
  • Experience managing, hiring and coaching software engineering teams
  • Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
  • 7 to 10+ years of hands-on experience in big data development, focusing on Apache Spark, Scala, and distributed systems
  • Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines
  • Proficiency in Big Data Technologies: Strong experience with Apache Spark, Hadoop ecosystem tools such as Hive, HDFS, and YARN
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
Employment Type:
Fulltime

Lead Data Engineer

Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader i...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Certifications in Azure, Databricks, or Snowflake are a plus
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type:
Fulltime

Lead Data Engineer

Lead Data Engineer to serve as both a technical leader and people coach for our ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • 8-10 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
  • Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance)
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type:
Fulltime

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...
Location:
Singapore, Singapore
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.) and Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
What we offer:
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits
Employment Type:
Fulltime

Lead Data Engineer

We are seeking an experienced Senior Data Engineer to lead the development of a ...
Location:
India, Kochi; Trivandrum
Salary:
Not provided
Experion Technologies
Expiration Date
Until further notice
Requirements:
  • 5+ years experience in data engineering with analytical platform development focus
  • Proficiency in Python and/or PySpark
  • Strong SQL skills for ETL processes and large-scale data manipulation
  • Extensive AWS experience (Glue, Lambda, Step Functions, S3)
  • Familiarity with big data systems (AWS EMR, Apache Spark, Apache Iceberg)
  • Database experience with DynamoDB, Aurora, Postgres, or Redshift
  • Proven experience designing and implementing RESTful APIs
  • Hands-on CI/CD pipeline experience (preferably GitLab)
  • Agile development methodology experience
  • Strong problem-solving abilities and attention to detail
Job Responsibility:
  • Architect, develop, and maintain end-to-end data ingestion framework for extracting, transforming, and loading data from diverse sources
  • Use AWS services (Glue, Lambda, EMR, ECS, EC2, Step Functions) to build scalable, resilient automated data pipelines
  • Develop and implement automated data quality checks, validation routines, and error-handling mechanisms
  • Establish comprehensive monitoring, logging, and alerting systems for data quality issues
  • Architect and develop secure, high-performance APIs for data services integration
  • Create thorough API documentation and establish standards for security, versioning, and performance
  • Work with business stakeholders, data scientists, and operations teams to understand requirements
  • Participate in sprint planning, code reviews, and agile ceremonies
  • Contribute to CI/CD pipeline development using GitLab

Lead Data Engineer

Join our dynamic team as a Lead Data Engineer to spearhead the development, opti...
Location:
India, Hyderabad
Salary:
Not provided
Fission Labs
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • Expert-level understanding of Salesforce object model
  • Comprehensive knowledge of Salesforce integration patterns
  • Deep understanding of Salesforce data architecture
  • Experience with Salesforce Bulk API jobs
  • Ability to handle complex data transformations and migrations
  • Knowledge of Salesforce metadata and custom object relationships
  • Advanced Python programming skills
  • Proven experience with AWS cloud services, specifically: AWS Glue, AWS Step Functions, AWS Lambda, AWS S3, AWS CloudWatch, Athena
  • Salesforce data model, Bulk jobs and architecture
Job Responsibility:
  • Experience leading and driving large scale data migration projects
  • Experience in working with all the stakeholders to plan and coordinate the entire migration process
  • Proficiency in Python development
  • Experience building scalable, event-driven data migration solutions
  • Strong understanding of ETL (Extract, Transform, Load) processes
  • Familiarity with cloud-native architecture principles
What we offer:
  • Opportunity to work on business challenges from top global clientele with high impact
  • Vast opportunities for self-development, including online university access and sponsored certifications
  • Sponsored Tech Talks, industry events & seminars to foster innovation and learning
  • Generous benefits package including health insurance, retirement benefits, flexible work hours, and more
  • Supportive work environment with forums to explore passions beyond work
Employment Type:
Fulltime