We are building our next-generation Data Enrichment Platform and we need a heavy hitter to help us scale. We aren't just moving data; we are building the real-time engine that powers our business. As a core member of our engineering team, you will architect high-throughput pipelines, solve complex latency challenges, and define the standards for a robust Lakehouse architecture. If you obsess over JVM internals, distributed consistency, and exactly-once semantics, this is the role for you.
Job Responsibilities:
Lead design of distributed streaming architectures using Kafka Streams, Spark, and Flink
Build roadmap for transition to Lakehouse architecture
Dive deep to identify system bottlenecks
Own the stability of pipelines processing massive volumes of events per second
Ensure data integrity and low latency through exactly-once processing (see the sketch after this list)
Drive culture of reliability through robust monitoring (Datadog/Prometheus), observability, and automated testing frameworks
Proactively identify opportunities to refactor and modernize platform for efficiency and cost-effectiveness
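To give candidates a concrete flavor of the exactly-once work mentioned above, here is a minimal sketch, assuming Apache Kafka 2.8+ and the kafka-streams-scala library; the application id and topic names are hypothetical placeholders, not references to our actual pipelines:

```scala
// Minimal sketch: exactly-once processing with Kafka Streams.
// Application id and topic names ("raw-events", "enriched-events") are
// hypothetical placeholders for illustration only.
import java.util.Properties

import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object ExactlyOnceSketch extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "enrichment-sketch")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  // The key line: Kafka Streams wraps each consume-process-produce cycle in a
  // transaction, so records are neither lost nor duplicated across failures.
  props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2)

  val builder = new StreamsBuilder()
  builder
    .stream[String, String]("raw-events")   // hypothetical input topic
    .mapValues(_.toUpperCase)               // stand-in for real enrichment logic
    .to("enriched-events")                  // hypothetical output topic

  new KafkaStreams(builder.build(), props).start()
}
```

If you can explain why EXACTLY_ONCE_V2 supersedes the original exactly-once mode (a single transactional producer per client instead of one per task), you will feel at home here.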
Requirements:
8+ years of software engineering experience
5+ years of deep production expertise in Scala or Java, with strong JVM knowledge (concurrency, memory management, GC tuning)
5+ years of hands-on experience with the Kafka ecosystem (Apache Kafka, Kafka Streams, Kafka Connect) and stream processing frameworks (Flink, Spark)
Proven track record of designing resilient systems under heavy load
Mastery of the AWS ecosystem (MSK, EMR, Athena, Lambda)