CrawlJobs

Senior Data & Automation Engineer


Fluent, Inc


Location:
Canada, Toronto


Contract Type:
Not provided


Salary:

90000.00 - 100000.00 CAD / Year

Job Description:

Fluent is building the next-generation advertising network, Partner Monetize & Advertiser Acquisition. Our vision is an ML/AI-first network of advertisers and publishers working toward a common objective: elevating relevancy in e-commerce for everyday shoppers. As a Senior Data & Automation Engineer, you will apply your Databricks and Spark expertise to build enterprise-grade data products that power Fluent’s business lines. These products serve as the foundation for sophisticated representations of customer journeys and marketplace activity across our ecosystem. You will partner with Data Architects, Data Scientists, and Product Managers to transform enterprise data models into optimized physical data models and real-time pipelines. You will raise standards across the team in code quality, observability, and architecture design while actively contributing as a hands-on engineer. This role is fully remote in Ontario, with occasional travel to the NYC or Toronto offices.

Job Responsibility:

  • Design, build, and support scalable real-time and batch data pipelines using PySpark and Spark Structured Streaming on Databricks
  • Implement process automation and end-to-end workflows following Bronze → Silver → Gold architecture using Delta Lake best practices
  • Handle event-driven ingestion with Kafka and integrate into automated pipelines
  • Orchestrate workflows using Databricks Workflows/Jobs and CI/CD automation
  • Implement strong monitoring, observability, and alerting for reliability and performance (Databricks metrics, dashboards)
  • Collaborate cross-functionally in agile sprints with Product, Analytics, and Data Science teams
  • Translate enterprise logical data models into optimized physical and performance-tuned implementations
  • Write modular, version-controlled code in Git
  • Contribute to code reviews and enforce quality standards
  • Implement robust logging, error handling, and data quality validation across automation layers
  • Utilize relevant AWS services (S3, IAM, Secrets Manager) and DevOps practices
  • Promote best practices through documentation, knowledge sharing, tech talks, and training
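As a rough illustration of the data-quality validation and error handling called out above, here is a minimal sketch in plain Python. The field names (`event_id`, `user_id`, `timestamp`, `price`) and the specific rules are hypothetical, not Fluent’s actual schema; in a Databricks pipeline the equivalent checks would typically run as PySpark column expressions or through a validation framework.

```python
def validate_event(event: dict) -> list[str]:
    """Return data-quality errors for one raw event (empty list = valid)."""
    errors = []
    # Required-field check (hypothetical field names)
    for field in ("event_id", "user_id", "timestamp"):
        if not event.get(field):
            errors.append(f"missing {field}")
    # Range check: a price, if present, must not be negative
    price = event.get("price")
    if price is not None and price < 0:
        errors.append("negative price")
    return errors


def partition_batch(events: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split a batch into valid rows and rejects paired with their reasons."""
    valid, rejects = [], []
    for event in events:
        errs = validate_event(event)
        if errs:
            rejects.append((event, errs))
        else:
            valid.append(event)
    return valid, rejects
```

In a Bronze → Silver promotion step, rows that fail validation would be routed to a quarantine table with their error reasons, rather than silently dropped.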

Requirements:

  • 5+ years of professional experience in data engineering, including Spark (PySpark) and SQL
  • 3+ years of hands-on experience building pipelines on Databricks (Workflows, Notebooks, Delta Lake)
  • Deep understanding of Apache Spark distributed processing concepts and optimization
  • Strong experience with streaming architectures and Kafka
  • Familiarity with Databricks monitoring and observability tooling
  • Understanding of Lakehouse architecture, Unity Catalog, and governance principles
  • Proven proficiency in Git-based CI/CD workflows and automated deployment
  • Strong troubleshooting, optimization, and performance tuning skills
  • Experience designing and building large-scale, automated data pipelines

Nice to have:

  • Experience with schema management (Schema Registry) and data validation frameworks (Great Expectations, Deequ)
  • Exposure to real-time ML systems and feature pipelines
  • Prior experience in startup or small agile teams
  • Familiarity with test-driven development in data engineering contexts

What we offer:

  • Competitive compensation
  • Ample career and professional growth opportunities
  • New Headquarters with an open floor plan to drive collaboration
  • Health, dental, and vision insurance
  • Pre-tax savings plans and transit/parking programs
  • 401K with competitive employer match
  • Volunteer and philanthropic activities throughout the year
  • Educational and social events
  • Fully stocked kitchen
  • Catered lunch
  • Activity-filled events
  • Quarterly outings

Additional Information:

Job Posted:
January 20, 2026

Employment Type:
Full-time
Work Type:
Remote work



Similar Jobs for Senior Data & Automation Engineer

Senior Data Engineering Manager

Data is a big deal at Atlassian. We ingest billions of events each month into ou...
Location:
United States, San Francisco
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • stellar people management skills and experience in leading an agile software team
  • thrive when developing phenomenal people, not just great products
  • worked closely with Data Science, analytics, and platform teams
  • expertise in building and maintaining high-quality components and services
  • able to drive technical excellence, pushing for innovation and quality
  • at least 10 years' experience in a software development role as an individual contributor
  • 4+ years of people management experience
  • deep understanding of data challenges at scale and the surrounding ecosystem
  • experience with solution building and architecting with public cloud offerings such as Amazon Web Services, DynamoDB, ElasticSearch, S3, Databricks, Spark/Spark-Streaming, GraphDatabases
  • experience with Enterprise Data architectural standard methodologies
Job Responsibility:
  • build and lead a team of data engineers through hiring, coaching, mentoring, and hands-on career development
  • provide deep technical guidance in a number of aspects of data engineering in a scalable ecosystem
  • champion cultural and process improvements through engineering excellence, quality and efficiency
  • work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Full-time

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying new cutting-edge methods to improve the performance of data integration, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience to deploy code artifacts into production
Job Responsibility:
  • Contribute to the development of D&A platform and analytical tools, ensuring easy and standardized access and sharing of data
  • Subject matter expert for Azure Databricks, Azure Data Factory and ADLS
  • Help design, build and maintain data pipelines (accelerators)
  • Document and make the relevant know-how & standard available
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility on data quality and data assets on the D&A platform
  • Identify opportunities to improve data assets and the D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
What we offer:
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
  • Full-time


Senior Data Engineer

As a senior data engineer, you will help our clients with building a variety of ...
Location:
Belgium, Brussels
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • At least 5 years of experience as a Data Engineer or in software engineering in a data context
  • Programming experience with one or more languages: Python, Scala, Java, C/C++
  • Knowledge of relational database technologies/concepts and SQL is required
  • Experience building, scheduling and maintaining data pipelines (Spark, Airflow, Data Factory)
  • Practical experience with at least one cloud provider (GCP, AWS or Azure); certifications from any of these are considered a plus
  • Knowledge of Git and CI/CD
  • Able to work independently, prioritize multiple stakeholders and tasks, and manage work time effectively
  • You have a degree in Computer Engineering, Information Technology or related field
  • You are proficient in English, knowledge of Dutch and/or French is a plus.
Job Responsibility:
  • Gather business requirements and translate them to technical specifications
  • Design, implement and orchestrate scalable and efficient data pipelines to collect, process, and serve large datasets
  • Apply DataOps best practices to automate testing, deployment and monitoring
  • Continuously follow & learn the latest trends in the data world.
What we offer:
  • A variety of perks, such as mobility options (including a company car), insurance coverage, meal vouchers, eco-cheques, and more
  • Continuous learning opportunities through the Sopra Steria Academy to support your career development
  • The opportunity to connect with fellow Sopra Steria colleagues at various team events.

Senior Data Engineer

Join Inetum as a Data Engineer! At Inetum, we empower innovation and growth thro...
Location:
Portugal, Lisbon
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Teradata – advanced SQL and data warehousing
  • CONTROL-M – job scheduling and automation
  • UNIX – working in a UNIX environment (directories, scripting, etc.)
  • SQL (Teradata) – strong querying and data manipulation skills
  • Ab Initio – data integration and ETL development
  • DevOps – CI/CD practices and automation
  • Collaborative tools – GIT, Jira, Confluence, MEGA, Zeenea
Job Responsibility:
  • Design, development, and optimization of data solutions that support business intelligence and analytics
  • Full-time

Data Engineer Senior

We are looking for a highly skilled professional to lead the industrialisation o...
Location:
Portugal, Lisbon
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Minimum of 5 years’ experience in MLOps, data engineering, or DevOps with a focus on ML/DL/LLM/AI agents in production environments
  • Strong proficiency in Python
  • Hands-on experience with CI/CD tools such as GitLab, Docker, Kubernetes, Jenkins
  • Solid understanding of ML, DL, and LLM models
  • Experience with ML lifecycle tools such as MLflow or DVC
  • Good understanding of model lifecycle, data traceability, and governance frameworks
  • Experience with on-premise and hybrid infrastructures
  • Excellent communication skills and ability to collaborate with remote teams
  • Proactive mindset, technical rigour, and engineering mentality
  • Willingness to learn, document, and standardise best practices
Job Responsibility:
  • Analyse, monitor, and optimise ML models, tracking their performance
  • Design and implement CI/CD pipelines for ML models and data flows
  • Containerise and deploy models via APIs, batch processes, and streaming
  • Manage model versioning and traceability
  • Ensure continuous improvement and adaptation of AI use cases and ML models
  • Set up monitoring and alerting for model performance
  • Establish incident response protocols in collaboration with IT
  • Maintain dashboards and automated reports on model health
  • Implement validation frameworks for data and models (e.g., Great Expectations, unit tests, stress tests), in collaboration with Group Governance
  • Contribute to documentation and apply technical best practices
What we offer:
  • Work in a constantly evolving environment
  • Contribute to digital impact
  • Opportunity for growth and development
  • Full-time

Senior Data Engineer

Senior Data Engineer role at UpGuard supporting analytics teams to extract insig...
Location:
Australia, Sydney; Melbourne; Brisbane; Hobart
Salary:
Not provided
UpGuard
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience with data sourcing, storage and modelling to effectively deliver business value right through to BI platform
  • AI first mindset and experience scaling an Analytics and BI function at another SaaS business
  • Experience with Looker (Explores, Looks, Dashboards, Developer interface, dimensions and measures, models, raw SQL queries)
  • Experience with CloudSQL (PostgreSQL) and BigQuery (complex queries, indices, materialised views, clustering, partitioning)
  • Experience with Containers, Docker and Kubernetes (GKE)
  • Familiarity with n8n for automation
  • Experience with programming languages (Go for ETL workers)
  • Comfortable interfacing with various APIs (REST+JSON or MCP Server)
  • Experience with version control via GitHub and GitHub Flow
  • Security-first mindset
Job Responsibility:
  • Design, build, and maintain reliable data pipelines to consolidate information from various internal systems and third-party sources
  • Develop and manage a comprehensive semantic layer using technologies like LookML, dbt or SQLMesh
  • Implement and enforce data quality checks, validation rules, and governance processes
  • Ensure AI agents have access to necessary structured and unstructured data
  • Create clear, self-maintaining documentation for data models, pipelines, and semantic layer
What we offer:
  • Great Place to Work certified company
  • Equal Employment Opportunity and Affirmative Action employer
  • Full-time

Senior Data Engineer

We are looking for a highly skilled Senior Data Engineer to join our team on a l...
Location:
United States, Dallas
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related discipline
  • At least 7 years of experience in data engineering
  • Strong background in designing and managing data pipelines
  • Proficiency in tools such as Apache Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and Amazon S3
  • Expertise in programming languages like Python, Scala, or Java for data processing and automation
  • Strong knowledge of both relational and NoSQL databases
  • Experience with Kubernetes-based data engineering and hybrid cloud environments
  • Familiarity with data modeling principles, governance frameworks, and quality assurance processes
  • Excellent problem-solving, analytical, and communication skills
Job Responsibility:
  • Design and implement robust data pipelines and architectures to support data-driven decision-making
  • Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks
  • Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink
  • Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3
  • Establish and enforce data governance, quality, and security protocols
  • Manage complex workflows and processes across hybrid and multi-cloud environments
  • Work with diverse data formats, including Parquet and Avro
  • Troubleshoot and fine-tune distributed data systems
  • Mentor and guide engineers at the beginning of their careers
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Free online training
  • Full-time