
Big Data Engineering Developer


Citi


Location:
India, Pune

Category:
IT - Software Development


Contract Type:
Not provided

Salary:
Not provided

Job Description:

The Applications Development Senior Programmer/Lead Analyst is a senior-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. This role is for a Data Engineering Lead to work on the Vanguard Big Data Platform. The team is responsible for maintaining and developing leading Big Data initiatives and use cases that provide business value.

Job Responsibility:

  • Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
  • Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users
  • Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business processes, system processes, and industry standards, and make evaluative judgements
  • Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
  • Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
  • Ensure essential procedures are followed and help define operating standards and processes
  • Serve as advisor or coach to new or lower-level analysts
  • Operate with a limited level of direct supervision
  • Exercise independence of judgement and autonomy
  • Act as an SME to senior stakeholders and/or other team members
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
  • Interface with product teams to understand their requirements to build the ingestion pipelines and conformance layer for consumption by the business
  • Work closely with the data ingestion team to track the requirements and drive the build-out of the canonical models (a minimal conformance sketch follows this list)
  • Provide guidance to the data conformance team for implementing the requirements/changes/enhancements to the conformance model
  • Do hands-on development as part of the conformance team to deliver the business requirements
  • Manage the workload of the team and the scrum process to align it with the objectives and priorities of the product owners
  • Participate in data management activities related to the Risk and Regulatory requirements as needed
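
For illustration only, a minimal sketch of the kind of conformance work described above: mapping raw ingested records onto a hypothetical canonical model with Spark in Scala. The table names, column names, and types are invented placeholders, not the actual Vanguard platform schema.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object ConformTrades {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("conform-trades")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical raw table produced by the ingestion pipelines.
        val raw = spark.table("ingest.raw_trades")

        // Conform source columns to an assumed canonical model:
        // rename, cast, and standardise values.
        val canonical = raw
          .withColumnRenamed("trd_id", "trade_id")
          .withColumn("trade_date", to_date(col("trd_dt"), "yyyyMMdd"))
          .withColumn("notional", col("ntl_amt").cast("decimal(18,2)"))
          .withColumn("currency", upper(trim(col("ccy"))))
          .select("trade_id", "trade_date", "notional", "currency")

        // Publish the conformed layer for downstream business consumption.
        canonical.write.mode("overwrite").saveAsTable("conformed.trades")

        spark.stop()
      }
    }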

Requirements:

  • Strong understanding of Big Data architecture and the ability to troubleshoot performance and/or development issues on Hadoop (preferably Cloudera)
  • 9+ years of experience working with Hive, Impala, HBase, Kudu, and Spark for data curation/conformance related work
  • Strong proficiency in Spark for development work related to curation/conformance. Strong Scala developer (with previous Java background) preferred.
  • Experience with Spark/Storm/Kafka or equivalent streaming/batch processing and event-based messaging (a minimal streaming sketch follows this list)
  • Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
  • Experience working in an agile environment with fast-paced, changing requirements
  • Excellent planning and organizational skills
  • Strong communication skills
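
As a hedged illustration of the streaming/event-based messaging requirement above, the sketch below reads events from Kafka with Spark Structured Streaming and lands them as Parquet. Broker addresses, topic names, and paths are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

    import org.apache.spark.sql.SparkSession

    object StreamIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-ingest")
          .getOrCreate()

        // Placeholder broker list and topic; replace with real cluster settings.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "trade-events")
          .option("startingOffsets", "latest")
          .load()
          .selectExpr("CAST(key AS STRING) AS key",
                      "CAST(value AS STRING) AS payload",
                      "timestamp")

        // Append each micro-batch to a raw landing zone (placeholder paths).
        val query = events.writeStream
          .format("parquet")
          .option("path", "/data/raw/trade_events")
          .option("checkpointLocation", "/data/checkpoints/trade_events")
          .outputMode("append")
          .start()

        query.awaitTermination()
      }
    }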

Nice to have:

  • Cloudera/Hortonworks/AWS EMR, Couchbase, S3 experience a plus
  • Experience with the AWS or GCP tech stack components
  • Relational and NoSQL database integration and data distribution principles experience
  • Experience with API development and use of JSON/XML/Hypermedia data formats
  • Analysis and development across Lines of business including Payments, Digital Channels, Liquidities, Trade, Client Experience
  • Cross-train and share functional and technical knowledge across the team
  • Align to Engineering Excellence Development principles and standards
  • Promote and increase our Development Productivity scores for coding
  • Fully adhere to and evangelize a full Continuous Integration and Continuous Deployment (CI/CD) pipeline
  • Experience in systems analysis and programming of software applications
  • Experience in managing and implementing successful projects
  • Working knowledge of consulting/project management techniques/methods
  • Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Additional Information:

Job Posted:
August 08, 2025

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Big Data Engineering Developer


Big Data Engineer

We are looking for a Big Data Engineer that will work on the collecting, storing...
Location:
United States, St. Louis
Salary:
Not provided
Protocol Infotech
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Applications, Science, Engineering, or Technology
  • 2-4 years of experience
  • Proficiency with Hadoop v2, MapReduce, HDFS
  • Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
  • Experience with Spark
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
Job Responsibility:
  • Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
  • Implementing ETL process
  • Monitoring performance and advising any necessary infrastructure changes
  • Defining data retention policies
  • Ability to solve any ongoing issues with operating the cluster
  • Guides the development team in overall application technology design activities
What we offer:
  • Employee referral program
  • A referral fee of $1,000 will be paid if the referred candidate is hired
  • Fulltime

Senior Big Data Engineer

Location:
United States, Flowood
Salary:
Not provided
PhasorSoft Group
Expiration Date:
Until further notice
Requirements:
  • Proficiency in Python programming for data manipulation and analysis
  • Experience with PySpark for processing large-scale data
  • Strong understanding and practical experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • Knowledge of designing and implementing ETL processes for data integration
  • Ability to work with large datasets, perform data cleansing, transformations, and aggregations
  • Familiarity with machine learning concepts and experience implementing ML models
  • Understanding of data governance principles and experience implementing data security measures
  • Ability to create clear and concise documentation for data pipelines and processes
  • Strong teamwork and collaboration skills to work with cross-functional teams
  • Analytical and problem-solving skills to optimize data workflows and processes
Job Responsibility:
  • Design and develop scalable data pipelines and solutions using Python and PySpark
  • Utilize big data technologies such as Hadoop, Spark, Kafka, or similar tools for processing and analyzing large datasets
  • Develop and maintain ETL processes to extract, transform, and load data into data lakes or warehouses
  • Collaborate with data engineers and scientists to implement machine learning models and algorithms
  • Optimize and tune data processing workflows for performance and efficiency
  • Implement data governance and security measures to ensure data integrity and privacy
  • Create and maintain documentation for data pipelines, workflows, and processes
  • Provide technical leadership and mentorship to junior team members
  • Fulltime

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning (a small illustrative query follows this list)
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
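
A small, purely illustrative example of the advanced SQL called out above (a CTE combined with a window function), executed here through Spark SQL in Scala; the sales.orders table and its columns are assumptions.

    import org.apache.spark.sql.SparkSession

    object LatestOrderPerCustomer {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("sql-window-demo")
          .enableHiveSupport()
          .getOrCreate()

        // Table and column names are illustrative only.
        val latest = spark.sql(
          """
            |WITH ranked AS (
            |  SELECT customer_id,
            |         order_id,
            |         order_ts,
            |         ROW_NUMBER() OVER (PARTITION BY customer_id
            |                            ORDER BY order_ts DESC) AS rn
            |  FROM sales.orders
            |)
            |SELECT customer_id, order_id, order_ts
            |FROM ranked
            |WHERE rn = 1
            |""".stripMargin)

        latest.show(20, truncate = false)
        spark.stop()
      }
    }
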
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
  • Fulltime

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects
  • Fulltime

Big Data Lead Developer

We are seeking a highly skilled and experienced Big Data Lead Developer to estab...
Location:
Canada, Mississauga
Salary:
170.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 6+ years of relevant experience in Big Data application development or systems analysis role
  • Experience in leading and mentoring big data engineering teams
  • Strong understanding of big data concepts, architectures, and technologies (e.g., Hadoop, PySpark, Hive, Kafka, NoSQL databases)
  • Proficiency in programming languages such as Java, Scala, or Python
  • Excellent problem-solving and analytical skills
  • Strong presentation, communication and interpersonal skills
  • Experience with data warehousing and business intelligence tools
  • Experience with data visualization and reporting
  • Knowledge of cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight, Google Cloud Dataproc)
  • Proficiency in Unix/Linux environments
Job Responsibility:
  • Lead and mentor a team of big data engineers, fostering a collaborative and high-performing environment
  • Provide technical guidance, code reviews, and support for professional development
  • Design and implement scalable and robust big data architectures and pipelines to handle large volumes of data from various sources
  • Evaluate and select appropriate big data technologies and tools based on project requirements and industry best practices
  • Implement and integrate these technologies into our existing infrastructure
  • Develop and optimize data processing and analysis workflows using technologies such as Spark, Hadoop, Hive, and other relevant tools
  • Implement data quality checks and ensure adherence to data governance policies and procedures (a minimal check sketch follows this list)
  • Continuously monitor and optimize the performance of big data systems and pipelines to ensure efficient data processing and retrieval
  • Collaborate effectively with cross-functional teams, including data scientists, business analysts, and product managers, to understand their data needs and deliver impactful solutions
  • Stay up to date with the latest advancements in big data technologies and explore new tools and techniques to improve our data infrastructure
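
One possible shape for the data quality checks mentioned above is a small rule-based validation like the sketch below; the table name, key column, and threshold are assumptions for illustration.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DataQualityCheck {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("dq-check")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical conformed table with an assumed business key.
        val df = spark.table("conformed.trades")

        val total         = df.count()
        val nullKeys      = df.filter(col("trade_id").isNull).count()
        val duplicateKeys = total - df.dropDuplicates("trade_id").count()

        val nullRate = if (total == 0) 0.0 else nullKeys.toDouble / total
        println(f"rows=$total nullKeys=$nullKeys dupKeys=$duplicateKeys nullRate=$nullRate%.4f")

        // Illustrative threshold: fail the run if >1% of rows lack the key.
        require(nullRate <= 0.01, s"null trade_id rate $nullRate exceeds threshold")

        spark.stop()
      }
    }
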
What we offer:
  • Global benefits designed to support your well-being, growth, and work-life balance
  • Fulltime

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino); a lakehouse write sketch follows this list
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
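
As a hedged sketch of the lakehouse requirement above, the example below writes a curated DataFrame to an Apache Iceberg table using Spark's DataFrameWriterV2, shown in Scala; the catalog configuration, warehouse path, and table names are assumptions, and the Iceberg Spark runtime is assumed to be on the classpath.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object LakehouseWrite {
      def main(args: Array[String]): Unit = {
        // Hypothetical Hadoop-backed Iceberg catalog named "lakehouse".
        val spark = SparkSession.builder()
          .appName("iceberg-write")
          .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
          .config("spark.sql.catalog.lakehouse.type", "hadoop")
          .config("spark.sql.catalog.lakehouse.warehouse", "/data/warehouse")
          .getOrCreate()

        // Placeholder source data standing in for a curated pipeline output.
        val orders = spark.read.parquet("/data/curated/orders")
          .withColumn("order_date", to_date(col("order_ts")))

        // Create or replace a partitioned Iceberg table in the assumed catalog.
        orders.writeTo("lakehouse.sales.orders")
          .using("iceberg")
          .partitionedBy(col("order_date"))
          .createOrReplace()

        spark.stop()
      }
    }
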
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets
  • Fulltime

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
  • Fulltime

Big Data Scala Engineering Developer

The Applications Development Senior Programmer/Lead Analyst is a senior-level p...
Location:
India, Pune; Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Strong understanding of Big Data architecture and the ability to troubleshoot performance and/or development issues on Hadoop (preferably Cloudera)
  • 9+ years of experience working with Hive, Impala, HBase, Kudu, and Spark for data curation/conformance related work
  • Strong proficiency in Spark for development work related to curation/conformance
  • Strong Scala developer (with previous Java background) preferred
  • Experience with Spark/Storm/Kafka or equivalent streaming/batch processing and event based messaging
  • Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
  • Experience working in an agile environment with fast-paced, changing requirements
  • Excellent planning and organizational skills
  • Strong communication skills
  • Bachelor's degree/University degree or equivalent experience
Job Responsibility:
  • Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development
  • Monitor and control all phases of development process and analysis, design, construction, testing, and implementation
  • Provide user and operational support on applications to business users
  • Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues
  • Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality
  • Consult with users/clients and other technology groups on issues, recommend advanced programming solutions
  • Ensure essential procedures are followed and help define operating standards and processes
  • Serve as advisor or coach to new or lower level analysts
  • Interface with product teams to understand their requirements to build the ingestion pipelines and conformance layer
  • Work closely with the data ingestion team to track the requirements and drive the build out of the canonical models
  • Fulltime