Mid Level Data Engineer (Snowflake)

Parser Limited

Location:
Not provided

Contract Type:
Not provided

Salary:
Not provided

Job Description:

This position offers you the opportunity to join a fast-growing technology organization that is redefining productivity paradigms in the software engineering industry. Thanks to our flexible, distributed model of global operation and the high caliber of our experts, we have enjoyed triple-digit growth over the past five years, creating amazing career opportunities for our people. If you want to accelerate your career working with like-minded subject matter experts, solving interesting problems and building the products of tomorrow, this opportunity is for you. As a Snowflake Data Engineer at Parser, you will be part of our team and work on challenging engineering projects. You will help improve data processes and tooling, automating workloads and pipelines wherever possible. Moreover, we expect you to provide our client with your professional expertise, not only hands-on but also by contributing technical improvements to the data framework currently under development.

Job Responsibility:

  • Data pipeline design, implementation, optimization, and productionization in Snowflake
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Create and maintain datasets that support business needs and products
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability
  • Implement processes that improve quality, consistency, and reliability across pipelines (monitoring, retries, failure detection)
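The monitoring, retry, and failure-detection duties above can be illustrated with a minimal Python sketch; `run_with_retry` and `flaky_extract` are hypothetical names for this example, not part of any stated stack:

```python
import time

def run_with_retry(task, attempts=3, delay=0.0):
    """Run a pipeline step, retrying on failure and surfacing the last error."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:  # failure detection: record and retry
            last_exc = exc
            print(f"attempt {attempt}/{attempts} failed: {exc}")
            time.sleep(delay)
    raise RuntimeError(f"task failed after {attempts} attempts") from last_exc

# Usage: a flaky extract step that succeeds on the third try.
calls = {"n": 0}

def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient warehouse error")
    return "rows loaded"

result = run_with_retry(flaky_extract)
```

In practice an orchestrator such as Airflow provides this behavior declaratively (per-task retry counts and delays), but the failure-detection logic it automates is the same.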

Requirements:

  • MS or BS in CS, Engineering, Math, Statistics, or a related field, or equivalent practical experience in data engineering
  • Proven track record in a data engineering or software engineering environment where you have developed and deployed software/pipelines
  • 3-5 years of experience working in data engineering using Snowflake
  • 2-4 years of experience working in data engineering using Python or any other programming language commonly used for data engineering (Scala, Go, R, Java, etc.)
  • Proven SQL skills
  • Experience with Snowflake as a data warehousing tool
  • Understanding of tools for data transformation and pipelining, such as Airflow, dbt, Spark, and Pandas
  • Cloud experience: proficient in AWS, with expertise in data and analytics services such as Redshift, Kinesis, Glue, Step Functions, SageMaker, RDS, etc.
  • Ability to build processes and infrastructure to manage the lifecycle of datasets: data structures, metadata, dependency and workload management
  • Experience working in an Agile environment, or openness to adopting this culture
  • Excellent English communication skills
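As an illustration of the window-function fluency that "proven SQL skills" usually implies for warehouse work, here is a minimal latest-row-per-key dedup query, using SQLite as a stand-in; Snowflake supports the same `ROW_NUMBER()` pattern (and offers `QUALIFY` as shorthand). Table and column names are invented for the example:

```python
import sqlite3

# In-memory stand-in for a warehouse staging table: keep only the latest
# row per customer, a common window-function dedup in staging models.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL, loaded_at INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("alice", 10.0, 1), ("alice", 12.5, 2), ("bob", 7.0, 1)],
)
latest = conn.execute(
    """
    SELECT customer, amount FROM (
        SELECT customer, amount,
               ROW_NUMBER() OVER (PARTITION BY customer ORDER BY loaded_at DESC) AS rn
        FROM orders
    ) WHERE rn = 1
    ORDER BY customer
    """
).fetchall()
```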

Nice to have:

  • Experience with technologies like Kubeflow, EKS, Docker
  • Experience with stream-processing systems: Kafka, Storm, Spark Streaming, etc.
  • Statistical analysis and modeling experience
  • Experience with machine learning algorithms
  • Data-driven approach to problem solving
  • The ability to visualize and communicate complex concepts

What we offer:
  • The chance to work on innovative projects with leading brands that use the latest technologies fueling transformation
  • The opportunity to be part of an amazing, multicultural community of tech experts
  • The opportunity to grow and develop your career with the company
  • A flexible and remote working environment

Additional Information:

Job Posted:
February 20, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for Mid Level Data Engineer (Snowflake)

Senior Software Engineer - Data Team

We’re looking for Software Engineers to join our Data Department, developers wit...
Location:
Spain, Barcelona; Madrid
Salary:
50000.00 - 70000.00 EUR / Year
Fever
Expiration Date:
Until further notice
Requirements:
  • Python Proficiency: confident working deeply in Python, understand topics like the GIL, concurrency (asyncio), generators, and decorators, care about maintainable typing and thoughtful performance optimization
  • Architecture Patterns: comfortable applying Hexagonal Architecture to keep domain logic clean and decoupled, can leverage patterns like CQRS and the Transactional Outbox to support consistency and reliability in an event-driven environment
  • Database Polyglot: strong SQL fundamentals, know how to design for performance (PostgreSQL internals, indexing strategies), understand when tools like Redis (caching) or Elasticsearch (search/aggregations) are the right fit
  • Communication: communicate clearly in English across audiences
  • Pragmatic mindset: balance quality with impact, able to make thoughtful trade-offs, deliver iteratively, and keep an eye on long-term sustainability while moving at a good pace
Job Responsibility:
  • Architect and Build: Design, implement, and maintain scalable microservices using Python (FastAPI/Django), take ownership of breaking down complex monoliths or building new services from the ground up, applying DDD principles
  • Master the Event Stream: Build robust, event-driven flows with Kafka, ensure that our events are durable, ordered, and processed idempotently, managing eventual consistency with care
  • Integrate at Scale: Design fault-tolerant integrations with third-party ecosystems (Meta Ads, Google Marketing Platform, Salesforce), navigate rate limits, retries, and circuit breakers to maintain platform stability
  • Bridge OLTP and OLAP: Work at the intersection of transactional applications and analytical data, optimize PostgreSQL for operational efficiency while designing ingestion pipelines for Snowflake and Elasticsearch, using Airflow and dbt
  • Productionize Data Capabilities: Partner closely with Data Science, Machine Learning, and Data Engineering teams to ensure seamless integration of data sources and model infrastructure
  • Elevate the Bar: Lead thorough code reviews, write RFCs for key technical decisions, and mentor mid-level engineers, champion testing strategies (unit, integration, contract testing) and advocate for clean, sustainable code architecture
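The "processed idempotently" requirement in the event-stream bullet above can be sketched without any broker: track processed event ids so a redelivered event (normal under Kafka's at-least-once delivery) is applied only once. The names (`handle_payment_event`, `balances`) are illustrative, not Fever's API:

```python
# Idempotent event handling: remember processed event ids so duplicate
# deliveries don't double-apply side effects.
processed_ids = set()
balances = {}

def handle_payment_event(event):
    """Apply the event once; return False if it was already processed."""
    if event["event_id"] in processed_ids:
        return False  # duplicate delivery: skip side effects
    balances[event["user"]] = balances.get(event["user"], 0) + event["amount"]
    processed_ids.add(event["event_id"])
    return True

# The same event delivered twice, as at-least-once brokers allow.
evt = {"event_id": "e1", "user": "alice", "amount": 50}
first = handle_payment_event(evt)
second = handle_payment_event(evt)
```

In a real system the dedup set would live in durable storage (often the same transaction as the side effect, per the transactional-outbox pattern the posting mentions).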
What we offer:
  • Responsibility from day one and professional and personal growth
  • Opportunity to have a real impact in a high-growth global category leader
  • A compensation package consisting of base salary and the potential to earn a significant bonus for top performance
  • Stock options plan
  • 40% discount on all Fever events and experiences
  • Home office friendly
  • Health insurance and other benefits such as Flexible remuneration with a 100% tax exemption through Cobee
  • English / Spanish Lessons
  • Wellhub Membership
  • Possibility to receive in advance part of your salary by Payflow
  • Fulltime

Staff Software Engineer, Data Infrastructure

At Docker, we make app development easier so developers can focus on what matter...
Location:
United States, Seattle
Salary:
195400.00 - 275550.00 USD / Year
Docker
Expiration Date:
Until further notice
Requirements:
  • 8+ years of software engineering experience with 3+ years focused on data engineering and analytics systems
  • Expert-level experience with Snowflake including advanced SQL, performance optimization, and cost management
  • Deep proficiency in DBT for data modeling, transformation, and testing with experience in large-scale implementations
  • Strong expertise with Apache Airflow for complex workflow orchestration and pipeline management
  • Hands-on experience with Sigma or similar modern BI platforms for self-service analytics
  • Extensive AWS experience including data services (S3, Redshift, EMR, Glue, Lambda, Kinesis) and infrastructure management
  • Proficiency in Python, SQL, and other programming languages commonly used in data engineering
  • Experience with infrastructure-as-code, CI/CD practices, and modern DevOps tools
  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • Proven track record designing and implementing large-scale distributed data systems
Job Responsibility:
  • Define and drive the technical strategy for Docker's data platform architecture, establishing long-term vision for scalable data systems
  • Lead design and implementation of highly scalable data infrastructure leveraging Snowflake, AWS, Airflow, DBT, and Sigma
  • Architect end-to-end data pipelines supporting real-time and batch analytics across Docker's product ecosystem
  • Drive technical decision-making around data platform technologies, architectural patterns, and engineering best practices
  • Establish technical standards for data quality, testing, monitoring, and operational excellence
  • Design and build robust, scalable data systems that process petabytes of data and support millions of user interactions
  • Implement complex data transformations and modeling using DBT for analytics and business intelligence use cases
  • Develop and maintain sophisticated data orchestration workflows using Apache Airflow
  • Optimize Snowflake performance and cost efficiency while ensuring reliability and scalability
  • Build data APIs and services that enable self-service analytics and integration with downstream systems
What we offer:
  • Freedom & flexibility: fit your work around your life
  • Designated quarterly Whaleness Days plus end of year Whaleness break
  • Home office setup: we want you comfortable while you work
  • 16 weeks of paid Parental leave
  • Technology stipend equivalent to $100 net/month
  • PTO plan that encourages you to take time to do the things you enjoy
  • Training stipend for conferences, courses and classes
  • Equity
  • Fulltime

Mid Data Engineer

We are looking for a Mid/Senior Data Engineer to join our team and support the d...
Location:
Mexico, Dinastía, Nuevo León
Salary:
Not provided
Enroute
Expiration Date:
Until further notice
Requirements:
  • 3–5+ years (Mid-level) or 5–8+ years (Senior-level) experience in Data Engineering
  • Strong hands-on experience with SQL and data modeling
  • Proven past experience implementing reverse ETL solutions (hands-on ownership, not just exposure)
  • Experience working with modern data warehouses (Snowflake preferred)
  • Experience building ETL/ELT pipelines using Python and/or SQL-based tools
  • Experience with orchestration tools (Airflow, dbt, or similar)
  • Experience working in cloud environments (AWS, GCP, or Azure)
  • Strong understanding of data quality and monitoring practices
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines
  • Implement and manage reverse ETL workflows that sync data from the data warehouse (e.g., Snowflake) into operational systems (CRMs, marketing tools, internal applications, etc.)
  • Optimize data models to support both analytics and activation use cases
  • Ensure data quality, validation, and monitoring across pipelines
  • Collaborate with cross-functional teams to translate business requirements into reliable data solutions
  • Support performance tuning and cost optimization of warehouse workloads
  • Maintain documentation and best practices across data workflows
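A minimal sketch of the reverse ETL workflow described above, assuming an incremental sync keyed on an `updated_at` watermark; the in-memory `crm` dict stands in for a real CRM API, and all names are invented for the example:

```python
# Reverse ETL sketch: push warehouse rows into an operational tool,
# sending only rows that changed since the last sync.
warehouse_rows = [
    {"email": "a@x.com", "tier": "gold", "updated_at": 5},
    {"email": "b@x.com", "tier": "free", "updated_at": 2},
]
crm = {}  # stand-in for a CRM's contact store

def sync_to_crm(rows, last_synced_at):
    """Upsert changed rows into the CRM; return how many were pushed."""
    pushed = 0
    for row in rows:
        if row["updated_at"] > last_synced_at:
            crm[row["email"]] = {"tier": row["tier"]}  # idempotent upsert by key
            pushed += 1
    return pushed

pushed = sync_to_crm(warehouse_rows, last_synced_at=2)
```

Keying upserts on a stable identifier (here the email) is what makes re-running a failed sync safe, which is the "hands-on ownership" the requirement alludes to.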
What we offer:
  • Monetary compensation
  • Year-end Bonus
  • IMSS, AFORE, INFONAVIT
  • Major Medical Expenses Insurance
  • Minor Medical Expenses Insurance
  • Life Insurance
  • Funeral Expenses Insurance
  • Preferential rates for car insurance
  • TDU Membership
  • Holidays and Vacations
  • Fulltime

Mid-Level Data Engineer

We are seeking a Professional Data Engineer to join our dynamic team, where you ...
Location:
United Arab Emirates, Dubai
Salary:
40000.00 / Year
Parser Limited
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience as a data engineer
  • Proficiency in SQL is a must
  • Experience with modern cloud data warehousing, data lake solutions like Snowflake, BigQuery, Redshift, Azure Synapse
  • Experience with ETL/ELT, batch, streaming data processing pipelines
  • Excellent ability to investigate and troubleshoot data issues, providing fixes and proposing both short and long-term solutions
  • Knowledge of AWS services (like S3, DMS, Glue, Athena, etc.)
  • Familiar with DBT or other data transformation tools
  • Familiarity with GenAI, and how to leverage LLMs to resolve engineering challenges
Job Responsibility:
  • Develop and maintain ETL pipelines using SQL and/or Python
  • Use tools like Dagster/Airflow for pipeline orchestration
  • Collaborate with cross-functional teams to understand and deliver data requirements
  • Ensure a consistent flow of high-quality data using stream, batch, and CDC processes
  • Use data transformation tools like DBT to prepare datasets to enable business users to self-service
  • Ensure data quality and consistency in all data stores
  • Monitor and troubleshoot data pipelines for performance and reliability
  • Fulltime
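The CDC processes mentioned above boil down to folding a change stream (insert/update/delete events) into a target snapshot; a minimal sketch, with all names illustrative:

```python
# CDC apply sketch: replay a change stream into a target table snapshot,
# the core of keeping a warehouse copy consistent with a source system.
def apply_cdc(snapshot, changes):
    for change in changes:
        op, key = change["op"], change["key"]
        if op == "delete":
            snapshot.pop(key, None)
        else:  # insert and update both become an upsert
            snapshot[key] = change["row"]
    return snapshot

table = {1: {"name": "alice"}}
events = [
    {"op": "update", "key": 1, "row": {"name": "alicia"}},
    {"op": "insert", "key": 2, "row": {"name": "bob"}},
    {"op": "delete", "key": 1},
]
table = apply_cdc(table, events)
```

Tools like AWS DMS or Snowflake streams produce this kind of change feed; applying it in order, keyed on the primary key, is what keeps batch and streaming copies in agreement.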

Senior Data Engineer

As a Senior Data Engineer (Platform) at Freshbooks, you will help shape the futu...
Location:
Canada, Toronto; Halifax; Calgary; Vancouver; Kitchener; Waterloo; Hamilton; Truro
Salary:
112400.00 - 148000.00 CAD / Year
FreshBooks
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience designing, building, and operating data pipelines and data platforms
  • Strong experience with batch and near real-time data processing and streaming architectures
  • Hands-on expertise with Google Cloud Platform or another major cloud provider (AWS or Azure) and cloud data warehouses (BigQuery, Snowflake, Redshift)
  • Expert SQL skills and strong programming experience in Python or similar languages
  • Experience with orchestration tools (Airflow or equivalent), CDC technologies, and event-driven systems
  • Experience with DevOps and IaC tooling (Docker, Kubernetes, Terraform, Jenkins, Git, CI/CD pipelines)
  • Strong communication skills with the ability to explain complex technical concepts to non-technical stakeholders
Job Responsibility:
  • Design, build, and operate batch and streaming data pipelines on GCP using Airflow (Cloud Composer), dbt, Datastream, Fivetran, Pub/Sub, Dataflow, BigQuery, and Cloud Functions
  • Build event-driven and near real-time ingestion and transformation workflows to support analytics, operations, and ML workloads
  • Develop and operate ML data and serving infrastructure using Vertex AI, Kubeflow, Cloud Run, and Cloud Composer for batch and real-time predictions
  • Implement CI/CD pipelines and infrastructure as code using tools such as GitHub Actions, Azure Pipelines, Terraform, and Terraspace
  • Drive observability, monitoring, alerting, security, and access controls using OpenTelemetry and cloud-native services
  • Partner with Product, Data Analytics, Machine Learning, Engineering, Platform, Infrastructure, and Security teams to design scalable, secure, and cost-efficient data systems
  • Lead design and code reviews, contribute to engineering standards, incident management practices, and mentor junior and mid-level engineers
What we offer:
  • Equity grant
  • Immediate enrollment in FreshBooks' comprehensive benefits program
  • Fulltime

Vp, Software Engineer

This role sits within Preqin, a part of BlackRock. Preqin plays a key role in ho...
Location:
United Kingdom, London
Salary:
Not provided
BlackRock Investments
Expiration Date:
September 14, 2026
Requirements:
  • 7+ years’ experience in software engineering
  • Strong technical ability across the full stack: Python, FastAPI, React and Typescript is a plus
  • Experience with databases like Postgres and Snowflake
  • Experience of working within cloud provider services – Azure or AWS (preferred) and utilization of infrastructure as code
  • A data-driven mindset to make development decisions based on robust analyses
  • Ability to collaborate effectively with designers, engineering and data scientist teams to build our technical solutions
  • You have driven technical solution design, taking the balance of engineering quality, testing, scalability and security into consideration
  • A “let’s do it” and “challenge accepted” attitude when faced with less known or challenging tasks, with a willingness to learn new technologies and ways of working
  • Excellent verbal and written communication and interpersonal skills, with the ability to influence at all organizational levels and bridge technical perspectives
  • Proficiency in English required
Job Responsibility:
  • Develop workflows that seamlessly combine AI/ML with human expertise to accelerate data collection and improve decision-making processes
  • Prioritize work based on data-driven insights and outcome-based goals in collaboration with stakeholders
  • Design and implementation of scalable, reliable data pipelines that ingest, process, and deliver high quality data to downstream applications and analytics platforms
  • Work closely with engineering teams across the business, ensuring the best technical solutions are adopted, and elevate development standards through knowledge sharing and best practices
  • Collaborate across engineering, product, and data scientist teams to translate business requirements into technical solutions and ensuring our data assets are organized and accessible
  • Mentor and guide team members, fostering a culture of continuous improvement, innovation, and open communication
  • Actively participate in technical discussions about new product directions, data modelling, and architectural decisions, ensuring our technology platform remains extensible
  • Lead an engineering pod using strong leadership and influence skills
  • Manage a team of junior and mid-level engineers, supporting their careers and growth
What we offer:
  • retirement investment and tools designed to help you in building a sound financial future
  • access to education reimbursement
  • comprehensive resources to support your physical health and emotional well-being
  • family support programs
  • Flexible Time Off (FTO)
  • Fulltime

Software Backend Engineer

We’re looking for Software Backend Engineers to join our Data Department, develo...
Location:
Spain, Madrid or Barcelona
Salary:
50000.00 - 70000.00 EUR / Year
Fever
Expiration Date:
Until further notice
Requirements:
  • Python Proficiency: confident working deeply in Python, understand topics like the GIL, concurrency (asyncio), generators, and decorators, care about maintainable typing and thoughtful performance optimization
  • Architecture Patterns: comfortable applying Hexagonal Architecture to keep domain logic clean and decoupled, can leverage patterns like CQRS and the Transactional Outbox to support consistency and reliability in an event-driven environment
  • Database Polyglot: strong SQL fundamentals, know how to design for performance (PostgreSQL internals, indexing strategies), understand when tools like Redis (caching) or Elasticsearch (search/aggregations) are the right fit
  • Communication: communicate clearly in English across audiences — from engineers to non-technical stakeholders — and can explain decisions in a structured way, backed by data and sound reasoning
  • Pragmatic mindset: balance quality with impact, able to make thoughtful trade-offs, deliver iteratively, and keep an eye on long-term sustainability while moving at a good pace
Job Responsibility:
  • Architect and Build: Design, implement, and maintain scalable microservices using Python (FastAPI/Django), take ownership of breaking down complex monoliths or building new services from the ground up, applying DDD principles
  • Master the Event Stream: Build robust, event-driven flows with Kafka, ensure that our events are durable, ordered, and processed idempotently, managing eventual consistency with care
  • Integrate at Scale: Design fault-tolerant integrations with third-party ecosystems (Meta Ads, Google Marketing Platform, Salesforce), navigate rate limits, retries, and circuit breakers to maintain platform stability, even when external APIs are unpredictable
  • Bridge OLTP and OLAP: Work at the intersection of transactional applications and analytical data, optimize PostgreSQL for operational efficiency while designing ingestion pipelines for Snowflake and Elasticsearch, using Airflow and dbt
  • Productionize Data Capabilities: Partner closely with Data Science, Machine Learning, and Data Engineering teams to ensure seamless integration of data sources and model infrastructure
  • Elevate the Bar: Lead thorough code reviews, write RFCs for key technical decisions, and mentor mid-level engineers, champion testing strategies (unit, integration, contract testing) and advocate for clean, sustainable code architecture
What we offer:
  • Responsibility from day one and professional and personal growth
  • Opportunity to have a real impact in a high-growth global category leader
  • A compensation package consisting of base salary and the potential to earn a significant bonus for top performance
  • Stock options plan
  • 40% discount on all Fever events and experiences
  • Home office friendly, location in Madrid or Barcelona required
  • Health insurance and other benefits such as Flexible remuneration with a 100% tax exemption through Cobee
  • English / Spanish Lessons
  • Wellhub Membership
  • Possibility to receive in advance part of your salary by Payflow
  • Fulltime

Senior Software Engineer

We’re looking for Software Engineers to join our Data Department, developers wit...
Location:
Spain, Madrid or Barcelona
Salary:
50000.00 - 70000.00 EUR / Year
Fever
Expiration Date:
Until further notice
Requirements:
  • Python Proficiency: confident working deeply in Python; understanding topics like the GIL, concurrency (asyncio), generators, and decorators
  • Architecture Patterns: comfortable applying Hexagonal Architecture; leverage patterns like CQRS and the Transactional Outbox
  • Database Polyglot: strong SQL fundamentals; know how to design for performance (PostgreSQL internals, indexing strategies); understand when tools like Redis or Elasticsearch are the right fit
  • Communication: communicate clearly in English across audiences
  • Pragmatic mindset: balance quality with impact
Job Responsibility:
  • Architect and Build: Design, implement, and maintain scalable microservices using Python (FastAPI/Django); take ownership of breaking down complex monoliths or building new services from the ground up, applying DDD principles
  • Master the Event Stream: Build robust, event-driven flows with Kafka; ensure that our events are durable, ordered, and processed idempotently, managing eventual consistency with care
  • Integrate at Scale: Design fault-tolerant integrations with third-party ecosystems (Meta Ads, Google Marketing Platform, Salesforce); navigate rate limits, retries, and circuit breakers to maintain platform stability
  • Bridge OLTP and OLAP: Work at the intersection of transactional applications and analytical data; optimize PostgreSQL for operational efficiency while designing ingestion pipelines for Snowflake and Elasticsearch, using Airflow and dbt
  • Productionize Data Capabilities: Partner closely with Data Science, Machine Learning, and Data Engineering teams to ensure seamless integration of data sources and model infrastructure
  • Elevate the Bar: Lead thorough code reviews, write RFCs for key technical decisions, and mentor mid-level engineers
What we offer:
  • Responsibility from day one and professional and personal growth
  • Opportunity to have a real impact in a high-growth global category leader
  • A compensation package consisting of base salary and the potential to earn a significant bonus for top performance
  • Stock options plan
  • 40% discount on all Fever events and experiences
  • Health insurance and other benefits such as Flexible remuneration with a 100% tax exemption through Cobee
  • English / Spanish Lessons
  • Wellhub Membership
  • Possibility to receive in advance part of your salary by Payflow
  • Fulltime