We are on the lookout for a Working Student, Data Engineering for our Vendor Data Team, where your work will directly support how vendors are discovered, managed, and analyzed at scale. You'll work mostly on Vendor Sales & Operations topics, contributing to data-driven initiatives such as vendor lead generation, vendor management, and competitor analysis.

As part of our Vendor Team, you'll be the driving force behind the success of thousands of restaurants, shops, and local businesses. Your contributions will empower vendors with advanced tools to manage their operations, boosting their visibility and reach. Every feature you help build will create growth opportunities for businesses of all sizes, strengthening Delivery Hero's ecosystem and impact.

This is a hands-on role designed for students who want real production exposure. You'll work with modern data tooling and collaborate closely with data engineers, analytics engineers, and business stakeholders. If you're proactive, communicative, and eager to learn how real-world data platforms are built and operated, this role will give you an excellent foundation.
Job Responsibility:
Support the maintenance and improvement of data ingestion and transformation pipelines powering Vendor Sales & Operations use cases
Work with SQL and dbt to build, test, and optimize data models in BigQuery
Help monitor and ensure data quality and reliability using tools like Monte Carlo
Assist with Airflow-based orchestration, debugging pipeline failures and improving robustness (an illustrative sketch of such a pipeline follows this list)
Contribute to ML matching systems by supporting data preparation, validation, and monitoring
Collaborate closely with data engineers, analysts, and commercial stakeholders to understand data needs and translate them into reliable datasets
Document data models, pipelines, and assumptions to improve transparency and long-term maintainability
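To give a concrete flavor of the dbt and Airflow work above, here is a minimal sketch of an Airflow DAG that runs a dbt build step and then dbt's tests. It is an assumption-laden illustration, not the team's actual setup: the DAG id, schedule, project path, and model selector are all hypothetical, and it assumes a recent Airflow 2.x with the dbt CLI available on the worker.

    # Hypothetical sketch only: DAG id, paths, and model names are invented for
    # illustration and do not describe the team's actual pipelines.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="vendor_leads_daily",          # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Build/refresh the dbt models that feed Vendor Sales & Operations datasets.
        run_dbt = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt/vendor --select vendor_leads",
        )

        # Run dbt's schema/data tests so bad data surfaces before consumers see it.
        test_dbt = BashOperator(
            task_id="test_dbt_models",
            bash_command="dbt test --project-dir /opt/dbt/vendor --select vendor_leads",
        )

        run_dbt >> test_dbt

Separating the run and test steps means a failed test halts the DAG, which is one common way teams keep broken data from propagating downstream.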
Requirements:
Currently enrolled in a Bachelor's or Master's program such as Computer Science, Computer Engineering, Software Engineering, or a related field
Strong working knowledge of SQL and Python; this is essential (the short example after this list shows the kind of everyday task we mean)
Familiarity or hands-on experience with dbt and modern data warehouses (BigQuery is a plus)
Basic understanding of data pipelines, ETL/ELT concepts, and analytical data modeling
A proactive and communicative mindset — you ask questions, flag issues early, and enjoy collaborating with others
Eagerness to learn, take ownership of tasks, and grow your technical skills in a production environment
Ability to work independently in a remote-friendly setup, with on-site collaboration twice per week
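As a rough yardstick for the SQL and Python expectations, here is a small, purely illustrative example that runs an analytical query from Python. It assumes the google-cloud-bigquery client library, and the project, dataset, and table names are invented.

    # Purely illustrative: the project, dataset, and table names are invented, and
    # this assumes the google-cloud-bigquery client (pip install google-cloud-bigquery).
    from google.cloud import bigquery

    def active_vendor_counts(project: str = "my-gcp-project") -> None:
        """Run a small analytical query and print the results."""
        client = bigquery.Client(project=project)

        # Hypothetical table: one row per vendor with a status and a city.
        sql = """
            SELECT city, COUNT(*) AS active_vendors
            FROM `my-gcp-project.vendor_data.vendors`
            WHERE status = 'active'
            GROUP BY city
            ORDER BY active_vendors DESC
            LIMIT 10
        """
        for row in client.query(sql).result():
            print(f"{row.city}: {row.active_vendors}")

    if __name__ == "__main__":
        active_vendor_counts()

If writing and reasoning about a query like this feels comfortable, you meet the baseline; dbt builds on exactly this kind of SQL.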
Nice to have:
Prior experience with Airflow or other workflow orchestration tools
Exposure to data quality or observability tools (e.g., Monte Carlo)
Basic understanding of machine learning pipelines or matching systems (a toy matching example follows this list)
Interest in vendor ecosystems, marketplaces, or competitive intelligence
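For anyone curious what "matching systems" can mean in practice, below is a deliberately toy sketch that fuzzily matches vendor names from two hypothetical sources using only Python's standard-library difflib. Real matching systems combine many more signals (name, address, geolocation, and so on); the names and threshold here are invented.

    # Toy illustration only: real matching combines many signals; names and the
    # threshold below are invented. Character-based similarity also handles word
    # reordering imperfectly, which is one reason production systems use richer features.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Normalized string similarity in [0, 1]."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    crm_vendors = ["Pizzeria Napoli", "Burger Haus Berlin", "Sushi Tokyo"]
    scraped_vendors = ["Napoli Pizzeria", "Burger Haus", "Tokyo Sushi Bar"]

    # Greedy best match per CRM record, kept only above a similarity threshold.
    THRESHOLD = 0.5
    for crm_name in crm_vendors:
        best = max(scraped_vendors, key=lambda s: similarity(crm_name, s))
        score = similarity(crm_name, best)
        if score >= THRESHOLD:
            print(f"{crm_name!r} -> {best!r} (score={score:.2f})")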