Staff Software Engineer role in Data Engineering for Uber's Payments data ecosystem. The candidate will provide strategic leadership, own the technical vision, architect scalable data pipelines, raise data standards, optimize systems, mentor engineers, and drive engineering excellence.
Job Responsibilities:
Own and drive the technical roadmap for the Payments data ecosystem
Actively identify strategically important problems and inefficiencies
Partner with Product, Operations, and Engineering stakeholders to translate ambiguous business goals into clear, actionable technical solutions
Drive consensus on complex technical decisions across the organization
Design and implement resilient, cost-effective, and high-scale batch and streaming pipelines
Define and enforce robust data modeling standards, data contracts, and governance frameworks
Identify opportunities to automate manual workflows and optimize infrastructure efficiency
Champion sustainable engineering practices
Serve as a humble mentor and technical advisor
Act as a role model for judgment and responsibility
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
10+ years of hands-on experience in Data Engineering, with a proven track record of delivering results at a Staff Engineer level (or equivalent scope) at a premier technology company
10+ years of hands-on, expert-level SQL experience
Extensive experience designing dimensional data models (Star/Snowflake schemas) and data warehouses
Proficiency in at least one high-level programming language (Java, Scala, Python, or Go)
10+ years of experience working with distributed data systems (Hadoop, Hive, Spark) and MPP databases (Vertica, Redshift, etc.)
Experience designing full-lifecycle data systems, including logging, ingestion (Batch/Stream), quality frameworks, and monitoring
Excellent written and verbal communication skills
A strong passion for driving engineering excellence and mentoring engineers
Nice to have:
Deep expertise in large-scale Batch Processing systems (Spark, MapReduce, Hive)
Extensive experience building real-time data platforms using Apache Kafka, Flink, or Spark Streaming
Expert hands-on understanding of designing fault-tolerant, multi-datacenter, and cloud-native architectures
Experience with Infrastructure as Code (IaC) (Terraform, Kubernetes)
Proficiency in multiple programming languages (Java, Scala, Go, Python) and deep knowledge of various storage engines (MySQL, Cassandra, Redis, Pinot)
Experience with modern open table formats like Apache Iceberg, Hudi, or Delta Lake
Experience designing end-to-end Data Observability frameworks
Ability to implement automated quality gates, anomaly detection, and SLAs
Passion for defining and enforcing Data Contracts
Track record of driving cost efficiency in big data environments
Background in Fintech, Payments, or Operations Analytics, with exposure to complex regulatory environments (GDPR, SOX) and data privacy frameworks
Passion for Data Governance and establishing engineering best practices