The job involves developing high-performance data pipelines, establishing robust frameworks for data ingestion, and collaborating with various teams to meet Citi’s Enterprise Data Strategy objectives.
Job Responsibilities:
Design and develop reusable frameworks for data ingestion, extraction, report submission, etc.
Translate high-level and functional requirements and data models (dimensional, semi-structured, and transactional use cases) into technical designs
Develop batch and real-time data ingestion pipelines involving a wide range of technologies, such as messaging middleware, Kafka, SFTP, Spark, Hive, etc.
Develop programs to migrate historical data from legacy platforms to the Big Data platform
Develop programs for real-time and end-of-day (EOD) reconciliations
Provide SME support for the development of automated QA scripts
Participate in UAT/SIT test cycles and release cycles; triage and resolve issues
Set up monitoring and management for services
Partner with Project Managers, Business Analysts, and business stakeholders to prioritize the Book of Work
Perform code reviews and test case reviews, and ensure functional and non-functional requirements are met
Analyse platform and software version upgrades, and evaluate new tools and technologies for Big Data handling
Ensure adherence to, and help develop, best practices supporting Citi’s Project Management Standards
Ensure SDLC standards are followed, with artefacts to support internal and external audits
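The reconciliation duty above can be sketched in a few lines. The following is a minimal, hypothetical example (the `reconcile` helper, the `trade_id` key, and the record layout are illustrative, not Citi's actual systems): it compares records from a legacy extract against the new platform by key and reports missing records and field-level breaks.

```python
def reconcile(legacy_records, platform_records, key="trade_id"):
    """Compare two record sets by key and report differences.

    Returns records missing on either side, plus records present on
    both sides whose fields differ (a "break").
    """
    legacy = {r[key]: r for r in legacy_records}
    platform = {r[key]: r for r in platform_records}

    missing_on_platform = sorted(set(legacy) - set(platform))
    missing_on_legacy = sorted(set(platform) - set(legacy))
    # For keys present on both sides, collect fields whose values differ.
    breaks = {
        k: {f: (legacy[k][f], platform[k].get(f))
            for f in legacy[k]
            if legacy[k][f] != platform[k].get(f)}
        for k in set(legacy) & set(platform)
        if legacy[k] != platform[k]
    }
    return {
        "missing_on_platform": missing_on_platform,
        "missing_on_legacy": missing_on_legacy,
        "breaks": breaks,
    }


# Illustrative data: one matching record, one amount break, one record
# missing on each side.
legacy = [
    {"trade_id": "T1", "amount": 100.0},
    {"trade_id": "T2", "amount": 250.0},
    {"trade_id": "T3", "amount": 75.0},
]
platform = [
    {"trade_id": "T1", "amount": 100.0},
    {"trade_id": "T2", "amount": 255.0},
    {"trade_id": "T4", "amount": 10.0},
]

result = reconcile(legacy, platform)
```

A production reconciliation on this stack would typically do the same join-and-compare in Spark over the full datasets; the dictionary version above just shows the shape of the logic.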
Requirements:
5+ years of software development experience building large-scale distributed data processing systems or large-scale web applications
At least 3 years of experience designing and developing Big Data solutions, including at least one end-to-end implementation
Strong hands-on experience with the following technologies: Apache Spark; Java/Scala; XML/JSON/Parquet/Avro/Protobuf; SQL; Spring Boot/microservices; Linux; the Hadoop ecosystem (HDFS, Spark, Impala, Hive, HBase, etc.); Kafka
Performance analysis, troubleshooting, and issue resolution
Experience working with software vendor teams on open issues and resolutions
Strong experience with SQL: building, analysing, troubleshooting, and improving queries
A history of delivering against agreed objectives
Ability to multi-task and work under pressure
Enthusiastic and proactive approach, willingness to learn, and ability to pick up new concepts and apply them
Demonstrated problem-solving skills
Excellent analytical and process-based skills, with the ability to produce process flow diagrams, business models, and functional designs
The candidate is expected to be dynamic and flexible, with a high energy level, as this is a demanding and rapidly changing environment
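The SQL requirement above (building, analysing, and improving queries) can be illustrated with a small self-contained sketch. The table and index names are hypothetical; SQLite is used here only because it ships with Python, but the same plan-before-and-after workflow applies to Impala or Hive.

```python
import sqlite3

# In-memory database with a small, illustrative trades table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (trade_id TEXT, book TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(f"T{i}", "RATES" if i % 2 else "FX", i * 10.0) for i in range(1000)],
)

query = "SELECT trade_id, amount FROM trades WHERE book = 'RATES'"


def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); the
    # detail string says whether SQLite scans the table or uses an index.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]


before = plan(query)  # e.g. "SCAN trades" (full table scan)
conn.execute("CREATE INDEX idx_trades_book ON trades (book)")
after = plan(query)   # e.g. "SEARCH trades USING INDEX idx_trades_book"
```

Reading the plan before and after adding the index shows the scan being replaced by an index search, which is the basic loop of query troubleshooting and improvement.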
Nice to have:
Exposure to Cloudera offerings like Ozone, Iceberg
Enthusiastic and proactive approach with willingness to learn
Ability to pick up new concepts and apply the knowledge
What we offer:
Global benefits supporting well-being, growth, and work-life balance