The Data Team at Bird is integral to every facet of the business, from strategic initiatives to tactical execution, and is a key resource in developing Bird's products and processes. As an Analytics Engineer, you will lead data-pipeline, data-modeling, architecture, and data-strategy efforts for the Data Team, working in close partnership with the engineering team. You're a seasoned analytics professional and engineer who understands not only how to use big data to answer complex business questions but also how to design semantic layers that best support self-service reporting and discovery. You will manage projects from requirements gathering through planning to the implementation of full-stack data solutions. You will work closely with cross-functional partners such as engineering, operations, and finance to ensure that business logic is properly represented in the semantic layer and production environments, where the wider Bird team can use it to drive business strategy.
Job Responsibilities:
Design and implement understandable data models that support flexible querying and data visualization
Implement algorithms and machine-learning pipelines based on complex business requirements
Advance automation efforts that help the team spend less time manipulating and validating data and more time analyzing it
Guide the Analytics Engineering roadmap, communicate timelines, and manage development cycles/sprints to deliver value
Own the creation and support of internal data architecture and governance standards and best practices
Lead the selection, implementation, optimization, and integration of data tools
Rapidly deliver on concepts through prototypes that can be presented for feedback
Train fellow employees on best practices for data standards, DAGs, code, documentation, and visualization, and help others act as successful stewards of our internal tools
Requirements:
Bachelor's degree in a quantitative field of study (Computer Science, Engineering, Mathematics, Statistics, Finance, etc.) from a top-tier institution
2–3+ years of relevant experience in data engineering
Expertise in writing SQL and in data-warehousing concepts such as star schemas, slowly changing dimensions, ELT/ETL, and MPP databases
Experience with big-data technologies (e.g. Spark, Kafka, Hive)
Experience transforming flawed or changing data into consistent, trustworthy datasets, and developing DAGs to batch-process millions of records
Experience with general-purpose programming (e.g. Python, Java, Go), dealing with a variety of data structures, algorithms, and serialization formats
Proficiency with Git (or similar version control) and CI/CD best practices
Experience in managing workflows using Agile practices
Nice to have:
MS or higher in a quantitative field (CS, Engineering, Math, Stats, Finance) from a top-tier institution
Proven ability to build complex reports and dashboards using tools like Tableau or Looker
Deep understanding of data warehouse architecture and data design principles
Ability to translate complex technical requirements into clear documentation and compelling data stories for any audience
A self-starter who thrives in ambiguity and can manage projects independently from ideation to execution
A growth mindset focused on seizing opportunities to optimize products, business processes, and team knowledge
A dedicated team player who delivers timely, high-quality work and expects the same from their peers
What we offer:
Plenty of time off to relax and recharge, plus a wellness resource to help you wind down