As a Senior Data Engineer, you should be an expert in data warehousing components (e.g. data modeling, ETL, and reporting), infrastructure (hardware and software), and their integration. You will be responsible for collecting data from multiple sources and building optimal pipelines to process and leverage that data to meet various business requirements. You will drive the execution of our data strategy through the design and development of our data platform, using technologies including, but not limited to, AWS, Airflow, and Snowflake, to deliver reporting, BI, and analytics solutions. You will work closely with business and technical stakeholders to aggregate, analyze, and transform data into reportable insights.
Job Responsibilities:
Create and maintain optimal data pipeline architecture
Assemble, analyze, and organize large, complex data sets that meet functional and non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using DBT, SQL, Snowflake, and AWS/GCP big data technologies
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
Assist the data science team by preparing data for prescriptive and predictive modeling
Collaborate with the data architects, analysts, and scientists on the team
Requirements:
5+ years of experience as a Data Engineer or in a similar role
Experience with data modeling, data warehousing, and building ETL pipelines
Experience in SQL
Experience with building data pipelines and applications to stream and process datasets
Sound knowledge of distributed systems and data architectures (e.g. lambda architecture): able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP execution of high-level data structures
Knowledge of engineering and operational excellence practices based on standard methodologies
Expertise in designing systems and workflows for handling big data volumes
Knowledge of data management fundamentals and data storage principles
Strong problem-solving skills and ability to prioritize conflicting requirements
Excellent written and verbal communication skills and ability to succinctly summarize key findings
Experience working with AWS big data technologies (EMR, Redshift, S3)
Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent
What we offer:
A high-growth environment, both virtual and in person, where you can do your best work
Meaningful rewards and development opportunities
Recognition of performance and a supportive working environment