We are looking for an experienced Staff Data Engineer proficient in data infrastructure operations, automation, and management, as well as traditional data engineering responsibilities. As a Staff Data Engineer, you will work cross-functionally with various teams and help build an enterprise-grade Data Platform.
Job Responsibilities:
Lead the enhancement of internal processes, focusing on scaling infrastructure, streamlining data delivery, and implementing automation to replace manual operations
Design and implement advanced infrastructure for efficient data extraction, transformation, and loading (ETL) using cutting-edge AWS and SQL technologies
Develop sophisticated analytical tools that leverage the data pipeline to deliver deep insights into critical business metrics such as customer growth and operational efficiency
Architect and manage extensive, sophisticated data sets to meet complex business needs and requirements
Engage closely with a wide array of stakeholders, including executive, product, data, and design teams, providing high-level support for data infrastructure challenges and advising on technical data issues
Make a meaningful impact on our community members' lives
Collaborate with and mentor other senior engineers, providing thoughtful guidance through code, design, and architecture reviews
Contribute to defining technical direction, planning the roadmap, escalating issues, and synthesizing feedback to ensure team success
Estimate and manage team project timelines and risks
Participate in hiring and onboarding for new team members
Lead cross-team engineering initiatives
Continuously learn about new technologies and industry standards in data engineering
Requirements:
7+ years of experience designing, building, and maintaining data infrastructure, with the ability to lead complex projects and teams
Bachelor's, Master's, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience
Knowledge of Kafka
Proficiency in programming languages such as Python and Scala
Strong knowledge of distributed computing frameworks, including Apache Hadoop and Apache Spark, and of cloud platforms such as AWS, Azure, and GCP
Deep understanding of database design, SQL, and NoSQL databases. Experience in managing large datasets and optimizing database performance
Proficiency in Git and Terraform, and experience implementing continuous integration and continuous deployment (CI/CD) practices
Experience in managing event-driven systems (especially in Kafka ecosystems)
Expertise in developing and implementing data governance frameworks, policies, and procedures to ensure data quality, compliance, and effective data management practices
Deep understanding of data security principles, including encryption, decryption, and secure data storage and transfer protocols
Nice to have:
Working experience with Databricks
What we offer:
Healthcare
Internet/cell phone reimbursement
A learning and development stipend
Potential opportunities to travel to our Mountain View HQ