
Big Data Engineering Lead

Citi

Location:
India, Chennai

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

The Senior Big Data Engineering Lead will play a pivotal role in designing, implementing, and optimizing large-scale data processing and analytics solutions. This role requires a visionary leader who can drive innovation, define the architecture strategy, and ensure the scalability and efficiency of our big data infrastructure.

Job Responsibility:

  • Lead the design and development of a robust and scalable big data architecture handling exponential data growth while maintaining high availability and resilience
  • Design complex data transformation processes with Spark and other big data technologies, using Java, PySpark, or Scala (see the illustrative sketch after this list)
  • Design and implement data pipelines that ensure data quality, integrity, and availability
  • Collaborate with cross-functional teams to understand business needs and translate them into technical requirements
  • Evaluate and select technologies that improve data efficiency, scalability, and performance
  • Oversee the deployment and management of big data tools and frameworks such as Hadoop, Spark, Kafka, and others
  • Provide technical guidance and mentorship to the development team and junior architects
  • Continuously assess and integrate emerging technologies and methodologies to enhance data processing capabilities
  • Optimize big data frameworks such as Hadoop and Spark for performance improvements and reduced processing time across distributed systems
  • Implement data governance frameworks to ensure data accuracy, consistency, and privacy across the organization, leveraging metadata management and data lineage tracking
  • Conduct benchmarking and stress testing of big data solutions to validate performance standards and operational capacity
  • Ensure compliance with data security best practices and regulations
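
As a purely illustrative sketch of the kind of Spark transformation and data-quality work the responsibilities above describe (the source path, columns, and rules below are hypothetical and not part of this posting):

```python
# Hypothetical PySpark sketch: ingest raw records, apply simple quality gates,
# aggregate, and persist for downstream retrieval.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-daily-aggregation").getOrCreate()

# Ingest raw transactions (hypothetical source path and schema).
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Basic quality gates: drop rows missing required keys, flag negative amounts.
clean = (
    raw.dropna(subset=["account_id", "txn_ts"])
       .withColumn("is_suspect", F.col("amount") < 0)
)

# Transformation: daily aggregates per account.
daily = (
    clean.filter(~F.col("is_suspect"))
         .groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("txn_count"))
)

# Persist for downstream consumers; partitioning keeps retrieval efficient.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-bucket/curated/daily_account_totals/"
)
```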

Requirements:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
  • At least 10-12 years of overall software development experience, primarily building applications that handle large-scale data volumes across ingestion, persistence, and retrieval
  • Deep understanding of big data technologies, including Hadoop, Spark, Kafka, Flink, NoSQL databases, etc.
  • Developer experience with big data technologies: Hadoop, Apache Spark, Python, PySpark
  • Strong programming skills in languages such as Java, Scala, or Python
  • Excellent problem-solving skills with a knack for innovative solutions
  • Strong communication and leadership abilities
  • Proven ability to manage multiple projects simultaneously and deliver results

Nice to have:

  • Experience with data modeling and ETL/ELT processes
  • Experience in migrating ETL frameworks from proprietary ETL technologies such as Ab Initio to Apache Spark
  • Familiarity with machine learning and data analytics tools
  • Knowledge of core banking/financial services systems

Additional Information:

Job Posted:
August 22, 2025

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Big Data Engineering Lead

Lead Data Engineer

Location:
Uzbekistan, Tashkent
Salary:
Not provided
Vention
Expiration Date
Until further notice
Requirements:
  • 4+ years in data engineering, including leading teams
  • Strong experience with Python and SQL
  • Solid knowledge of Apache Airflow, Kafka, BigQuery, and AWS/Azure
  • Strong experience with ETL processes, data warehousing, and stream processing
  • Leadership skills with proven ability to mentor and grow engineering teams
  • Experience working in an Agile environment (Scrum, Kanban, etc.)
  • B2+ English, with experience communicating with English-speaking customers
Job Responsibility:
  • Guide a team of data engineers in building and optimizing data pipelines
  • Oversee architecture for data ingestion, transformation, and storage with BigQuery and SQL, ensuring high performance and reliability
  • Collaborate with product managers and clients to define data strategies and resolve complex technical challenges
  • Stay up-to-date with the latest cloud data technologies and industry best practices, bringing innovation to our data ecosystem
What we offer:
  • EDU corporate community (300+ members): tech communities, interest clubs, events, a small R&D lab, a knowledge base, and a dedicated AI track
  • Licenses for AI tools: GitHub Copilot, Cursor, and others
  • Expanded medical support for employees in Tashkent
  • 19 working days of vacation per year, 21 after two years in the company
  • Corporate getaway & teambuilding activities
  • Support for the significant events in your life
  • Referral bonuses for bringing in new talent
  • Fulltime

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning (see the sketch after this list)
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
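
As a hedged illustration of the window-function and CTE skills this listing asks for, here is a small Spark SQL example; the table, columns, and sample rows are invented for demonstration only:

```python
# Hypothetical example: a CTE plus a window function in Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window-fn-demo").getOrCreate()

# Invented sample data registered as a temp view.
orders = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c1", "2024-01-05", 80.0), ("c2", "2024-01-02", 200.0)],
    ["customer_id", "order_date", "amount"],
)
orders.createOrReplaceTempView("orders")

# CTE plus ROW_NUMBER(): rank each customer's orders by amount.
ranked = spark.sql("""
    WITH customer_orders AS (
        SELECT customer_id, order_date, amount
        FROM orders
    )
    SELECT customer_id,
           order_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rn
    FROM customer_orders
""")
ranked.show()
```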
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
  • Fulltime

Lead Data Engineer

We are seeking an experienced Senior Data Engineer to lead the development of a ...
Location:
India, Kochi; Trivandrum
Salary:
Not provided
Experion Technologies
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience in data engineering with a focus on analytical platform development
  • Proficiency in Python and/or PySpark
  • Strong SQL skills for ETL processes and large-scale data manipulation
  • Extensive AWS experience (Glue, Lambda, Step Functions, S3)
  • Familiarity with big data systems (AWS EMR, Apache Spark, Apache Iceberg)
  • Database experience with DynamoDB, Aurora, Postgres, or Redshift
  • Proven experience designing and implementing RESTful APIs
  • Hands-on CI/CD pipeline experience (preferably GitLab)
  • Agile development methodology experience
  • Strong problem-solving abilities and attention to detail
Job Responsibility:
  • Architect, develop, and maintain end-to-end data ingestion framework for extracting, transforming, and loading data from diverse sources
  • Use AWS services (Glue, Lambda, EMR, ECS, EC2, Step Functions) to build scalable, resilient, automated data pipelines (see the sketch after this list)
  • Develop and implement automated data quality checks, validation routines, and error-handling mechanisms
  • Establish comprehensive monitoring, logging, and alerting systems for data quality issues
  • Architect and develop secure, high-performance APIs for data services integration
  • Create thorough API documentation and establish standards for security, versioning, and performance
  • Work with business stakeholders, data scientists, and operations teams to understand requirements
  • Participate in sprint planning, code reviews, and agile ceremonies
  • Contribute to CI/CD pipeline development using GitLab
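
As a minimal, hypothetical sketch of the Lambda-driven pipeline automation the responsibilities above describe (the bucket layout and Glue job name are assumptions, not details from the posting):

```python
# Hypothetical AWS Lambda handler: an S3 upload event triggers a Glue ETL job run.
import json
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Reads the S3 event payload and starts a (hypothetical) Glue job for the new object."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Start the Glue job, passing the new object as a job argument.
    run = glue.start_job_run(
        JobName="example-ingest-job",  # assumed job name for illustration
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"statusCode": 200, "body": json.dumps({"JobRunId": run["JobRunId"]})}
```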

Team Lead Data Engineer

Data Management Platform is the core system that receives, processes and provide...
Location:
Salary:
Not provided
Coherent Solutions
Expiration Date
Until further notice
Requirements:
  • Desire and readiness to perform a team lead role and a tech lead role
  • 5+ years of experience in Java
  • Strong knowledge of algorithms and data structures
  • Readiness to deep dive into legacy codebase
  • Experience with SQL DBs
  • Solid experience with Kafka, streaming systems, microservices
  • Experience in dealing with performance and high scale systems
  • Understanding of Hadoop/Spark/big data tools
  • Analytical thinking, with the ability to investigate tasks deeply and understand how system components work from the business side
  • Reliability, confidence and readiness to deal with production issues
Job Responsibility:
  • Perform the team lead / people management role for 3 of our engineers: 1:1s, ensuring high motivation and retention, working with feedback, mentoring, and tech support
  • Perform the tech lead role for a mixed team of roughly 5 customer and Coherent engineers: coordination, task distribution, technical assistance
  • End-to-end development and ownership, from design to production
  • Implement high scale Big-Data solutions and contribute to our platform infrastructure and architecture
  • Research core technologies and integrations with external APIs and services
  • Work with various stakeholders: Product, Engineering, Data providers, etc.
  • Participate in off-hours Pager Duty
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects
  • Fulltime

Big Data Lead Developer

We are seeking a highly skilled and experienced Big Data Lead Developer to estab...
Location:
Canada, Mississauga
Salary:
170.00 USD / Year
Citi
Expiration Date
Until further notice
Requirements:
  • 6+ years of relevant experience in Big Data application development or systems analysis role
  • Experience in leading and mentoring big data engineering teams
  • Strong understanding of big data concepts, architectures, and technologies (e.g., Hadoop, PySpark, Hive, Kafka, NoSQL databases)
  • Proficiency in programming languages such as Java, Scala, or Python
  • Excellent problem-solving and analytical skills
  • Strong presentation, communication and interpersonal skills
  • Experience with data warehousing and business intelligence tools
  • Experience with data visualization and reporting
  • Knowledge of cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight, Google Cloud Dataproc)
  • Proficiency in Unix/Linux environments
Job Responsibility:
  • Lead and mentor a team of big data engineers, fostering a collaborative and high-performing environment
  • Provide technical guidance, code reviews, and support for professional development
  • Design and implement scalable and robust big data architectures and pipelines to handle large volumes of data from various sources
  • Evaluate and select appropriate big data technologies and tools based on project requirements and industry best practices
  • Implement and integrate these technologies into our existing infrastructure
  • Develop and optimize data processing and analysis workflows using technologies such as Spark, Hadoop, Hive, and other relevant tools
  • Implement data quality checks and ensure adherence to data governance policies and procedures
  • Continuously monitor and optimize the performance of big data systems and pipelines to ensure efficient data processing and retrieval
  • Collaborate effectively with cross-functional teams, including data scientists, business analysts, and product managers, to understand their data needs and deliver impactful solutions
  • Stay up to date with the latest advancements in big data technologies and explore new tools and techniques to improve our data infrastructure
What we offer:
  • Global benefits designed to support your well-being, growth, and work-life balance
  • Fulltime

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...
Location:
Singapore, Singapore
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark (Java), with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), plus Scala and SQL (see the streaming sketch after this list)
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio, etc.): ETL design and build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain: Git, Bitbucket, Jira
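
As a hedged sketch of the Spark Streaming and Kafka skills named above (the broker address, topic, and checkpoint path are invented for illustration, and the Kafka connector package is assumed to be on the classpath):

```python
# Hypothetical Spark Structured Streaming read from a Kafka topic.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Subscribe to an assumed Kafka topic on an assumed broker.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "wealth-positions")
         .load()
)

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Write the stream to the console for demonstration; a real pipeline would target a lake or table.
query = (
    parsed.writeStream.outputMode("append")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/kafka-stream-demo")
          .start()
)
query.awaitTermination()
```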
Job Responsibility:
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
What we offer:
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits
  • Fulltime

Big Data Program Lead

As part of Citi’s broad transformation strategy, Data Engineering group is under...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10+ years of relevant experience building data engineering solutions for large-scale Operational and Data Warehouse implementations
  • Consistently demonstrates clear and concise written and verbal communication
  • Management and prioritization skills
  • 8+ years of experience building enterprise data warehouse systems, preferably in the finance sector
  • 8+ years of relevant experience in enterprise application development
  • Must-have technical skills: Java/Scala, Hadoop, Python, Hive, Impala, Kafka, and Elastic
  • Apache Iceberg
  • Databases: Oracle, Netezza
  • Must-have core skills: 10+ years of experience handling large teams in IT project design and development
Job Responsibility:
  • Provides architectural vision, ensuring the architecture conforms to enterprise blueprints
  • Develops architecture, strategy, planning, and problem-solving solutions at an enterprise level
  • Interfaces across several channels, acting as a visionary to proactively help define direction for future projects
  • Maintains continuous awareness of business, technical, and infrastructure issues and acts as a sounding board or consultant to aid in the development of creative solutions
  • Develops metadata-driven frameworks for data processing/transformation and builds real-time processing solutions
  • Works as a hands-on data engineer with a core skill set across the big data technology stack, including but not limited to Spark with Scala, Hive, Impala, Kafka, Solace, and Iceberg-format tables
  • Requires excellent communication skills to drive the required change, with the ability to translate the vision into technical artifacts
  • Identifies and applies optimization techniques to improve the performance of existing applications
  • Provides thought leadership in subjects that are key to the business
  • Provides senior-level technical consulting and mentoring during design and development of highly complex and critical data projects
  • Fulltime