
Senior Kafka Platform Engineer


Citi


Location:
United Kingdom, London

Contract Type:
Employment contract


Salary:

Not provided

Job Description:

This is a Senior Platform Engineer role on the Kafka as a Service project, responsible for building and maintaining a Kafka platform. Responsibilities include configuring new environments, managing onboarding and configuration workflows, advising partner teams on proper Kafka architecture and troubleshooting, and adhering to compliance regulations.

Job Responsibility:

  • Serve as a technology subject matter expert for internal and external stakeholders and provide direction for all firm mandated controls and compliance initiatives
  • Ensure that all integration of functions meet business goals
  • Define necessary system enhancements to deploy new products and process enhancements
  • Recommend product customization for system integration
  • Identify problem causality, business impact, and root causes
  • Exhibit knowledge of how their own specialty area contributes to the business and apply knowledge of competitors' products and services
  • Advise and mentor junior team members
  • Impact the engineering function by influencing decisions through advice, counsel or facilitating services
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Requirements:

  • Experience working in Financial Services or a large complex and/or global environment
  • Experience with the following technologies: Kafka ecosystem (Confluent distribution preferred), Kubernetes and OpenShift, Java, Python, Ansible
  • Consistently demonstrates clear and concise written and verbal communication
  • Comprehensive knowledge of design metrics, analytics tools, benchmarking activities and related reporting to identify best practices
  • Demonstrated analytic/diagnostic skills
  • Ability to work in a matrix environment and partner with virtual teams
  • Ability to work independently, multi-task, and take ownership of various parts of a project or initiative
  • Ability to work under pressure and manage to tight deadlines or unexpected changes in expectations or requirements
  • Proven track record of operational process change and improvement.
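The Kafka-ecosystem fluency asked for above includes understanding how producers map records to partitions. A simplified sketch of that deterministic key-to-partition mapping — note Kafka's default partitioner actually uses murmur2 hashing; `zlib.crc32` here is only a stand-in, and the key and partition count are invented:

```python
import zlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Hash the record key and take it modulo the partition count, so the
    # same key always lands on the same partition (preserving per-key order).
    return zlib.crc32(key) % num_partitions

p = pick_partition(b"order-42", 6)  # always the same partition for this key
```

The design point is ordering: because the mapping is a pure function of the key, all records for one key are consumed in produce order from a single partition.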

Nice to have:

  • Experience with Financial Services
  • Master’s degree.

What we offer:
  • Equal opportunity employer
  • Global benefits
  • Accessibility accommodations.

Additional Information:

Job Posted:
March 21, 2025

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Senior Kafka Platform Engineer

Senior Kafka Platform Engineer

This role is responsible for the management and operational excellence of the Ka...
Location:
Poland
Salary:
Not provided
HSBC
Expiration Date:
January 30, 2026
Requirements:
  • Must be able to communicate on technical levels with Engineers and stakeholders
  • Strong problem solving and analytical skills
  • Thorough understanding of the Kafka Architecture
  • Familiar with cluster maintenance processes and implementing changes and recommended fixes to Kafka clusters and topics to protect production
  • Experience operating in an infrastructure as code and automation first principles environment
  • Key technologies – messaging: Apache Kafka, Confluent Kafka
  • DevOps toolsets – GitHub, JIRA, Confluence, Jenkins
  • Automation – Ansible, Puppet, or similar
  • Monitoring – observability tools such as Datadog, New Relic, Prometheus, Grafana
Job Responsibility:
  • Provide expertise in Kafka brokers, ZooKeeper, Kafka Connect, Schema Registry, KSQL, REST Proxy, and Confluent Control Center
  • Provide expertise and hands-on experience with Kafka Connect and Schema Registry in very high-volume environments
  • Administer and operate the Kafka platform: provisioning, access lists, Kerberos and SSL configurations
  • Provide expertise and hands-on experience with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, as well as tasks, workers, converters, and transforms
  • Provide expertise and hands-on experience building custom connectors using Kafka core concepts and APIs
  • Create topics, set up redundant clusters, deploy monitoring tools, configure appropriate alerts, and create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms
  • Automate routine tasks using scripts or automation tools, and perform data-related benchmarking, performance analysis, and tuning
  • As a Kafka SRE, conduct root cause analysis of production incidents, document findings for reference, and put proactive measures in place to enhance system reliability
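The monitoring and alerting duties above center on metrics such as consumer lag. A minimal stdlib sketch of the per-partition lag computation — the offsets are synthetic numbers, not read from a real cluster:

```python
def consumer_lag(end_offsets, committed_offsets):
    # Lag per (topic, partition) = log-end offset minus last committed offset;
    # a partition with no committed offset counts as fully lagged from 0.
    return {tp: end_offsets[tp] - committed_offsets.get(tp, 0)
            for tp in end_offsets}

end = {("orders", 0): 1500, ("orders", 1): 980}
committed = {("orders", 0): 1420, ("orders", 1): 980}
lag = consumer_lag(end, committed)  # {("orders", 0): 80, ("orders", 1): 0}
```

In practice an SRE would feed real offsets from the admin API into a check like this and alert when lag exceeds a per-topic threshold.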
What we offer:
  • Competitive salary
  • Annual performance-based bonus
  • Additional bonuses for recognition awards
  • Multisport card
  • Private medical care
  • Life insurance
  • One-time reimbursement of home office set-up (up to 800 PLN)
  • Corporate parties & events
  • CSR initiatives
  • Nursery discounts
Employment Type: Full-time

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects
Employment Type: Full-time

Senior Data Platform Engineer

We are looking for an experienced data engineer to join our platform engineering...
Location:
United States
Salary:
141000.00 - 225600.00 USD / Year
Axon
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in data engineering, software engineering with a data focus, data science, or a related role
  • Knowledge of designing data pipelines from a variety of sources (e.g. streaming, flat files, APIs)
  • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL)
  • Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming, Flink, Pulsar, Redpanda)
  • Strong programming skills in common data-focused languages (e.g., Python, Scala)
  • Experience with data pipeline and workflow management tools (e.g., Apache Airflow, Prefect, Temporal)
  • Familiarity with AWS-based data solutions
  • Strong understanding of data warehousing concepts and technologies (Snowflake)
  • Experience documenting data dependency maps and data lineage
  • Strong communication and collaboration skills
Job Responsibility:
  • Design, implement, and maintain scalable data pipelines and infrastructure
  • Collaborate with software engineers, product managers, customer success managers, and others across the business to understand data requirements
  • Optimize and manage our data storage solutions
  • Ensure data quality, reliability, and security across the data lifecycle
  • Develop and maintain ETL processes and frameworks
  • Work with stakeholders to define data availability SLAs
  • Create and manage data models to support business intelligence and analytics
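The pipeline and ETL duties above boil down to extract, transform, and load stages with a data-quality gate. A toy stdlib sketch — record shapes are invented, and the "load" target is just an in-memory list rather than a warehouse:

```python
# Extract: pretend these lines came from a flat file or stream.
raw = ["3,ok", "7,ok", "2,bad"]

def transform(line):
    # Parse a "value,status" record into a typed dict.
    value, status = line.split(",")
    return {"value": int(value), "status": status}

sink = []  # load-target stand-in
for rec in map(transform, raw):
    if rec["status"] == "ok":  # basic data-quality filter
        sink.append(rec)
```

Real pipelines add the pieces the posting names — scheduling (e.g. Airflow), retries, and lineage tracking — around this same skeleton.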
What we offer:
  • Competitive salary and 401k with employer match
  • Discretionary time off
  • Paid parental leave for all
  • Medical, Dental, Vision plans
  • Fitness Programs
  • Emotional & Development Programs
  • Snacks in our offices

Senior Data Engineer - Platform Enablement

SoundCloud empowers artists and fans to connect and share through music. Founded...
Location:
United States, New York; Atlanta; East Coast
Salary:
160000.00 - 210000.00 USD / Year
SoundCloud
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in data engineering, analytics engineering, or similar roles
  • Expert-level SQL skills, including performance tuning, advanced joins, CTEs, window functions, and analytical query design
  • Proven experience with Apache Airflow (designing DAGs, scheduling, task dependencies, monitoring, Python)
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Excellent communication skills and a collaborative mindset
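The window-function skills listed above can be illustrated with Python's built-in sqlite3 module (window functions need SQLite 3.25+, bundled with modern Python); the table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE plays (artist TEXT, day TEXT, streams INTEGER);
INSERT INTO plays VALUES
  ('a1', '2024-01-01', 10),
  ('a1', '2024-01-02', 30),
  ('a2', '2024-01-01', 5);
""")
# Running total of streams per artist via SUM(...) OVER a partition.
rows = conn.execute("""
SELECT artist, day, streams,
       SUM(streams) OVER (PARTITION BY artist ORDER BY day) AS running_total
FROM plays ORDER BY artist, day
""").fetchall()
```

Unlike GROUP BY, the window aggregate keeps every row while adding the per-artist running total alongside it.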
Job Responsibility:
  • Develop and optimize SQL data models and queries for analytics, reporting, and operational use cases
  • Design and maintain ETL/ELT workflows using Apache Airflow, ensuring reliability, scalability, and data integrity
  • Collaborate with analysts and business teams to translate data needs into efficient, automated data pipelines and datasets
  • Own and enhance data quality and validation processes, ensuring accuracy and completeness of business-critical metrics
  • Build and maintain reporting layers, supporting dashboards and analytics tools (e.g. Looker or similar)
  • Troubleshoot and tune SQL performance, optimizing queries and data structures for speed and scalability
  • Contribute to data architecture decisions, including schema design, partitioning strategies, and workflow scheduling
  • Mentor junior engineers, advocate for best practices and promote a positive team culture
What we offer:
  • Comprehensive health benefits including medical, dental, and vision plans, as well as mental health resources
  • Robust 401k program
  • Employee Equity Plan
  • Generous professional development allowance
  • Creativity and Wellness benefit
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Various snacks, goodies, and 2 free lunches weekly when at the office
Employment Type: Full-time

Senior Software Engineer, Experience Platform Team

The Experience Platform team is looking for a full-stack/backend software engine...
Location:
United States, New York City
Salary:
Not provided
Pinecone
Expiration Date:
Until further notice
Requirements:
  • At least 5 years of experience in full-stack or backend development (NodeJS, Rust, Python, or Go)
  • Experience with queueing and streaming technologies like Kafka, Kinesis, or Pub/Sub
  • Familiarity with creating web interfaces with React or other frontend frameworks
  • Expertise in event-driven system design and distributed systems principles
  • Proficiency in building reliable data processing pipelines for usage tracking and reconciliation
  • Familiarity with integrating third-party APIs and handling inconsistent data
  • Hands-on experience with one or more major cloud providers (AWS, GCP, Azure), especially services related to data streaming, serverless compute, and data storage
  • Strong understanding of RESTful API design
Job Responsibility
Job Responsibility
  • Design event-driven architectures and distributed systems for reliable real-time and batch event processing
  • Develop queueing and streaming systems (e.g., Kafka, Kinesis) with robust event handling mechanisms
  • Build pipelines for ingesting, transforming, and aggregating usage data, ensuring accuracy and reliability
  • Integrate with external APIs and vendor systems, designing for resiliency against outages or inconsistent data
  • Create auditable and observable systems with monitoring, alerting, and verification mechanisms
  • Implement end-to-end user experiences across multiple services and web applications
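The queue-based event handling described above can be caricatured with the stdlib queue module standing in for Kafka or Kinesis; the event shapes and the usage-aggregation logic are invented for illustration:

```python
import queue

# In-memory stand-in for a durable event stream.
events = queue.Queue()
for e in [{"type": "usage", "units": 3},
          {"type": "usage", "units": 7},
          {"type": "heartbeat"}]:
    events.put(e)

total_units = 0
while not events.empty():
    evt = events.get()
    if evt.get("type") == "usage":  # route by event type; ignore heartbeats
        total_units += evt["units"]
    events.task_done()              # ack, roughly analogous to committing an offset
```

A production version would add the properties the posting asks for: durable storage, retries on handler failure, and reconciliation checks that the aggregated totals match the source.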
What we offer:
  • Comprehensive health coverage including medical, dental, vision, and mental health resources
  • 401(k) Plan
  • Equity award
  • Flexible time off
  • Paid parental leave
  • Annual Company Retreat
  • WFH Equipment Stipend

Senior Data Platform Engineer

At Fever, our engineering team powers the technology behind our apps and website...
Location:
Spain, Madrid
Salary:
Not provided
Fever
Expiration Date:
Until further notice
Requirements:
  • Expert in Python and frameworks like FastAPI or Django
  • Experience with Snowflake, PostgreSQL, and data management best practices
  • Familiar with IaC (Terraform), orchestration tools (Airflow, Metaflow), and CI/CD (Jenkins)
  • Experience with Kubernetes, Kafka, and GitOps tools like ArgoCD
  • Skilled in observability tools like Datadog, Grafana, and Prometheus
  • Strong communicator and team player in international, cross-functional environments
  • Proactive, adaptable, and solution-oriented
  • Fluent in English for effective communication
Job Responsibility:
  • Build and maintain a scalable, reliable data platform
  • Implement data governance policies to ensure data quality, consistency, and security
  • Develop observability systems for platform monitoring and reliability
  • Build and automate infrastructure: data warehouses, lakes, pipelines, and real-time systems
  • Apply Infrastructure as Code (IaC) practices using tools like Terraform
  • Promote best practices for data engineering and platform operations
  • Build internal tools that simplify data-driven application development
  • Work with teams across the company to understand and meet data needs
What we offer:
  • Attractive compensation package consisting of base salary and the potential to earn a significant bonus for top performance
  • Stock options
  • Opportunity to have a real impact in a high-growth global category leader
  • 40% discount on all Fever events and experiences
  • Home office friendly
  • Responsibility from day one and professional and personal growth
  • Great work environment with a young, international team of talented people to work with
  • Health insurance and other benefits such as Flexible remuneration with a 100% tax exemption through Cobee
  • English Lessons
  • Gympass Membership
Employment Type: Full-time

Senior Software Engineer, Search Platform

The Search Platform team is responsible for powering all of Rovo Search as well ...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Experience in building and developing backend applications
  • Bachelor's or Master's degree with a preference for Computer Science degree
  • Expertise with one or more prominent languages such as Java, Python, Kotlin, Go, or TypeScript is required
  • Understanding of SaaS, PaaS, IaaS industry with hands-on experience with public cloud offerings (e.g., AWS, GCP, or Azure)
  • Experience in Java, Spring, REST, and NoSQL databases
  • Experience building event-driven systems based on SQS, SNS, Kafka, or equivalent technologies
  • Ability to evaluate trade-offs between correctness, robustness, performance, space, and time
Job Responsibility:
  • Handle complex problems in the team from technical design to launch
  • Determine plans-of-attack on large projects
  • Solve complex architecture challenges and apply architectural standards to new projects
  • Lead code reviews & documentation and take on complex bug fixes, especially on high-risk problems
  • Set the standard for meaningful code reviews
  • Partner across engineering teams to take on org-wide programmes in multiple projects
  • Transfer your depth of knowledge from your current language to excel as a Software Engineer
  • Mentor junior members of the team
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type: Full-time

Senior Software Engineer - Transactional Data Platform

As a Senior Software Engineer, you will play a critical role in designing, build...
Location:
Australia, Sydney
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field
  • 5+ years of experience in backend software development
  • 3+ years of hands-on experience working with AWS cloud services, particularly AWS storage technologies (S3, DynamoDB, EBS, EFS, FSx, or Glacier)
  • 3+ years of experience in designing and developing distributed systems or high-scale backend services
  • Strong programming skills in Kotlin
  • Experience working in agile environments following DevOps and CI/CD best practices
  • Strong backend development skills
  • Proficiency in Kotlin and Java for backend development
  • Experience building high-performance, scalable microservices and APIs
  • Strong understanding of RESTful APIs, gRPC, and event-driven architectures
Job Responsibility:
  • Designing, building, and optimizing high-performance, scalable, and resilient backend storage solutions on AWS cloud infrastructure
  • Developing distributed storage systems, APIs, and backend services that power mission-critical applications, ensuring low-latency, high-throughput, and fault-tolerant data storage
  • Collaborating closely with principal engineers, architects, SREs, and product teams to define technical roadmaps, improve storage efficiency, and optimize access patterns
  • Driving performance tuning, data modeling, caching strategies, and cost optimization across AWS storage services like S3, DynamoDB, EBS, EFS, FSx, and Glacier
  • Contributing to infrastructure automation, security best practices, and monitoring strategies using tools like Terraform, CloudWatch, Prometheus, and OpenTelemetry
  • Troubleshooting and resolving production incidents related to data integrity, latency spikes, and storage failures, ensuring high availability and disaster recovery preparedness
  • Mentoring junior engineers, participating in design reviews and architectural discussions, and advocating for engineering best practices such as CI/CD automation, infrastructure as code, and observability-driven development
What we offer:
  • Atlassians can choose where they work – whether in an office, from home, or a combination of the two
  • Flexibility for eligible candidates to work remotely across the West US
Employment Type: Full-time