Senior Data Operations Engineer

Tripleseat

Location:
United States, Concord

Contract Type:
Not provided

Salary:
135000.00 - 155000.00 USD / Year

Job Description:

The Senior Data Operations Engineer owns and evolves the technical infrastructure powering Tripleseat's data pipelines and analytics platforms. This role is responsible for building reliable data flows from production systems into our Snowflake data warehouse, developing data models that enable business insights, and integrating data from across the organization. The ideal candidate combines strong database fundamentals with modern data engineering practices, bringing a mindset of automation, reliability, and operational excellence to everything they build.

Job Responsibility:

  • Build, operate, and maintain reliable ETL/ELT pipelines moving data from MySQL and other source systems into Snowflake using Dagster, Stitch, and dbt Core (see the illustrative sketch after this list)
  • Create data pipelines that read from external APIs and write to Snowflake, integrating data from business systems across the organization (Salesforce, etc.)
  • Optimize the dbt project build to reduce build times and warehouse costs
  • Automate ingestion, transformation, validation, and monitoring processes to increase reliability
  • Ensure pipelines meet SLAs for data availability, latency, and accuracy
  • Design and build data models in Snowflake that support analytics, reporting, and business intelligence needs
  • Partner with Developers and Product teams to build data products that drive business value
  • Contribute to data platform roadmap and technical strategy
  • Scale and optimize data platforms as data volumes, concurrency, and business demand grow
  • Identify and resolve performance bottlenecks: query optimization, indexing, warehouse sizing, and workload management
  • Implement and maintain data quality frameworks, schema validation, and drift detection
  • Partner with analytics teams to ensure upstream schema changes and downstream models remain aligned
  • Create and maintain documentation, runbooks, and operational procedures for data pipelines and systems
  • Manage data-related infrastructure: cloud resources, Snowflake warehouses, access controls, and pipeline orchestration
  • Work with DevOps/Engineering to integrate data workflows into CI/CD pipelines and infrastructure-as-code environments
  • Support incident response and root-cause analysis for data pipeline or platform issues
  • Drive technical direction and best practices for the data platform
  • Mentor team members on data engineering practices and tooling
  • Proactively identify opportunities to improve data architecture, reliability, and efficiency
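
For illustration, here is a minimal sketch of the pipeline pattern these responsibilities describe, with hypothetical connection details, table names, and dbt selector (none taken from the posting): one Dagster asset copies a MySQL table into Snowflake using pandas and the Snowflake connector, and a second asset rebuilds the downstream dbt models once the load lands.

import subprocess

import pandas as pd
from dagster import Definitions, asset
from snowflake.connector import connect
from snowflake.connector.pandas_tools import write_pandas
from sqlalchemy import create_engine


@asset
def raw_bookings() -> None:
    """Copy a MySQL source table into the Snowflake RAW schema (placeholder names)."""
    mysql = create_engine("mysql+pymysql://etl_user:***@mysql-host/app")  # hypothetical DSN
    df = pd.read_sql("SELECT * FROM bookings WHERE updated_at >= CURRENT_DATE", mysql)

    conn = connect(
        account="example_account",  # hypothetical Snowflake credentials
        user="ETL_USER",
        password="***",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        # Lands the frame as ANALYTICS.RAW.BOOKINGS, creating the table if it does not exist.
        write_pandas(conn, df, table_name="BOOKINGS", auto_create_table=True)
    finally:
        conn.close()


@asset(deps=[raw_bookings])
def dbt_models() -> None:
    """Rebuild downstream dbt models after the raw load completes."""
    subprocess.run(["dbt", "build", "--select", "staging+"], check=True)


defs = Definitions(assets=[raw_bookings, dbt_models])

In a real deployment, Stitch-managed sources would land alongside assets like this, credentials would come from configured resources rather than literals, and the dbt selector would target whichever models depend on the newly loaded raw tables.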

Requirements:

  • 5+ years in data engineering, analytics engineering, or similar data-focused technical roles
  • Strong experience with relational databases, especially MySQL, including schema design and SQL proficiency
  • Experience building and maintaining ETL/ELT pipelines connecting application databases to cloud data warehouses
  • Hands-on experience with Snowflake—warehouse configuration, performance tuning, cost optimization
  • Proficiency with dbt for data transformation and modeling
  • Experience with pipeline orchestration tools, preferably Dagster
  • Comfortable working with Git for version control and collaborative development
  • Experience working in AWS cloud environments
  • Experience leveraging AI-assisted development and analysis tools (e.g., GitHub Copilot, Cursor, Claude) to accelerate workflows and improve productivity
  • Strong communicator who can partner effectively with engineering, product, and analytics teams

Nice to have:

  • Experience with data modeling methodologies and dimensional design
  • Familiarity with infrastructure tools (Terraform, Docker, CI/CD)
  • Experience integrating data from CRM and business systems (Salesforce, HubSpot, etc.)
  • Background in advanced MySQL internals: query optimization, replication, slow query analysis
  • Experience with data cataloging, lineage tracking, or metadata systems
  • Background working in a fast-growing SaaS company or startup environment

What we offer:
  • Competitive Medical, Dental, and Vision Insurance
  • Company Paid Life Insurance, Short- and Long-Term Disability Plans
  • 401(k) with Company Match
  • Parental Leave
  • Flexible Paid Time Off
  • Pet Insurance

Additional Information:

Job Posted:
January 13, 2026

Employment Type:
Fulltime

Work Type:
Remote work

Similar Jobs for Senior Data Operations Engineer

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timeline
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI

Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing, and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Develop ETL/ELT pipelines efficiently using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Communicate proactively with stakeholders and mentor/guide junior team members

Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer role in Data & Analytics, Group Digital to build trusted da...
Location:
Spain, Madrid
Salary:
Not provided
IKEA
Expiration Date
Until further notice
Requirements:
  • 5+ years of hands-on experience building production data systems
  • Experience designing and operating batch and streaming pipelines on cloud platforms (GCP preferred)
  • Proficiency with tools like BigQuery, Dataflow/Beam, Pub/Sub (or Kafka), Cloud Composer/Airflow, and dbt
  • Fluent in SQL and production-grade Python/Scala for data processing and orchestration
  • Understanding of data modeling (star/snowflake, vault), partitioning, clustering, and performance at TB-PB scale
  • Experience turning ambiguous data needs into robust, observable data products with clear SLAs
  • Comfort with messy external data and geospatial datasets
  • Experience partnering with Data Scientists to productionize features, models, and feature stores
  • Ability to automate processes, codify standards, and champion governance and privacy by design (GDPR, PII handling, access controls)
Job Responsibility:
  • Build Expansion360, the expansion data platform
  • Architect and operate data pipelines on GCP to ingest and harmonize internal and external data
  • Define canonical models, shared schemas, and data contracts as single source of truth
  • Enable interactive maps and location analytics through geospatial processing at scale
  • Deliver curated marts and APIs that power scenario planning and product features
  • Implement CI/CD for data, observability, access policies, and cost controls
  • Contribute to shared libraries, templates, and infrastructure-as-code
What we offer:
  • Intellectually stimulating, diverse, and open atmosphere
  • Collaboration with world-class peers across Data & Analytics, Product, and Engineering
  • Opportunity to create measurable, global impact
  • Modern tooling on Google Cloud Platform
  • Hardware and OS of your choice
  • Continuous learning (aim to spend ~20% of time on learning)
  • Flexible, friendly, values-led working environment

Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education
Expiration Date
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program

Employment Type: Fulltime

Senior Manager, Data Engineering

You will build a team of talented engineers that will work cross functionally to...
Location:
United States, San Jose
Salary:
240840.00 - 307600.00 USD / Year
Archer Aviation
Expiration Date
Until further notice
Requirements:
  • 6+ years of experience in a similar role, 2 of which are in a data leadership role
  • B.S. in a quantitative discipline such as Computer Science, Computer Engineering, Electrical Engineering, Mathematics, or a related field
  • Expertise with data engineering disciplines including data warehousing, database management, ETL processes, and ML model deployment
  • Experience with processing and storing telemetry data
  • Demonstrated experience with data governance standards and practices
  • 3+ years leading teams, including building and recruiting data engineering teams supporting diverse stakeholders
  • Experience with cloud-based data platforms such as AWS, GCP, or Azure
Job Responsibility:
  • Lead and continue to build a world-class team of engineers by providing technical guidance and mentorship
  • Design and implement scalable data infrastructure to ingest, process, store, and access multiple data sources supporting flight test, manufacturing and supply chain, and airline operations
  • Take ownership of data infrastructure to enable a highly scalable and cost-effective solution serving the needs of various business units
  • Build and support the development of novel tools to enable insight and decision making with teams across the organization
  • Evolve data engineering and AI strategy to align with the short and long term priorities of the organization
  • Help to establish a strong culture of data that is used throughout the company and industry
  • Lead initiatives to integrate AI capabilities in new and existing tools

Employment Type: Fulltime

Senior Data Engineer

Fospha is dedicated to building the world's most powerful measurement solution f...
Location:
India, Mumbai
Salary:
Not provided
Blenheim Chalcot
Expiration Date
Until further notice
Requirements:
  • Excellent knowledge of PostgreSQL and SQL technologies
  • Fluent in Python
  • Understanding of data architecture, pipelines, and ELT flows, technologies, and methodologies
  • Understanding of agile methodologies and practices
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Job Responsibility:
  • Implement and maintain ELT (Extract, Load, Transform) processes using scalable data pipelines and data architecture
  • Collaborate with cross-functional teams to understand data requirements and deliver effective solutions
  • Ensure data integrity and quality across various data sources
  • Support data-driven decision-making by providing clean, reliable, and timely data
  • Define the standards for high-quality data for Data Science and Analytics use-cases and help shape the data roadmap for the domain
  • Design, develop, and maintain the data models used by ML Engineers, Data Analysts and Data Scientists to access data
  • Conduct exploratory data analysis to uncover data patterns and trends
  • Identify opportunities for process improvement and drive continuous improvement in data operations
  • Stay updated on industry trends, technologies, and best practices in data engineering
What we offer:
  • Competitive salary
  • Be part of a leading global venture builder, Blenheim Chalcot, and learn from the incredible talent in BC
  • Be exposed to the right mix of challenges and learning and development opportunities
  • Flexible Benefits including Private Medical and Dental, Gym Subsidies, Life Assurance, Pension scheme, etc.
  • 25 days of paid holiday + your birthday off
  • Free snacks in the office
  • Quarterly team socials

Employment Type: Fulltime

Senior Data Engineer

As a Senior Data Engineer at Corporate Tools, you will work closely with our Sof...
Location:
United States
Salary:
150000.00 USD / Year
Corporate Tools
Expiration Date
Until further notice
Requirements:
  • Bachelor’s (BA or BS) in computer science, or related field
  • 2+ years in a full stack development role
  • 4+ years of experience working in a data engineer role, or related position
  • 2+ years of experience standing up and maintaining a Redshift warehouse
  • 4+ years of experience with Postgres, specifically with RDS
  • 4+ years of AWS experience, specifically S3, Glue, IAM, EC2, DDB, and other related data solutions
  • Experience working with Redshift, DBT, Snowflake, Apache Airflow, Azure Data Warehouse, or other industry standard big data or ETL related technologies
  • Experience working with both analytical and transactional databases
  • Advanced working SQL (Preferably PostgreSQL) knowledge and experience working with relational databases
  • Experience with Grafana or other monitoring/charting systems
Job Responsibility:
  • Focus on data infrastructure: lead and build out data services/platforms from scratch (using open-source tech)
  • Creating and maintaining transparent, bulletproof ETL (extract, transform, and load) pipelines that clean, transform, and aggregate unorganized and messy data into databases or data sources
  • Consume data from roughly 40 different sources
  • Collaborate closely with our Data Analysts to get them the data they need
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
  • Improve existing data models while implementing new business capabilities and integration points
  • Creating proactive monitoring so we learn about data breakages or inconsistencies right away
  • Maintaining internal documentation of how the data is housed and transformed
  • Improve existing data models, and design new ones to meet the needs of data consumers across Corporate Tools
  • Stay current with latest cloud technologies, patterns, and methodologies
What we offer:
  • 100% employer-paid medical, dental and vision for employees
  • Annual review with raise option
  • 22 days Paid Time Off accrued annually, and 4 holidays
  • After 3 years, PTO increases to 29 days. Employees transition to flexible time off after 5 years with the company—not accrued, not capped, take time off when you want
  • The 4 holidays are: New Year’s Day, Fourth of July, Thanksgiving, and Christmas Day
  • Paid Parental Leave
  • Up to 6% company matching 401(k) with no vesting period
  • Quarterly allowance
  • Use it to make your remote work setup more comfortable, for continuing education classes, a plant for your desk, coffee for your coworker, a massage for yourself... really, whatever
  • Open concept office with friendly coworkers

Employment Type: Fulltime

Senior Data Engineer

Darwin Recruitment are hiring for a Senior Data Engineer for a business in Luxem...
Location:
Luxembourg
Salary:
120000.00 EUR / Year
Darwin Recruitment GmbH
Expiration Date
Until further notice
Requirements:
  • 5+ years of combined experience in Data Engineering, Cloud Engineering or similar roles
  • Proficiency in designing scalable, efficient, and maintainable data pipelines and architecture
  • Proficiency in deploying workloads on Kubernetes clusters
  • Strong experience with Apache Airflow
  • Experience in building and managing ETL workflows for data extraction, transformation, and loading
  • Experience with software development life cycle: design, development, test, deployment, operations
  • Proficiency in Python and Python data stack
  • Working knowledge of some infrastructure-as-code framework, preferably Terraform
  • Highly self-motivated, keen learner able to solve challenging problems with creative solutions
  • Strong team player with demonstrated ability to take ownership and drive execution
Job Responsibility:
  • Design and implement cloud-native data pipelines
  • Optimize performance and scalability of existing data pipelines
  • Deploy and maintain cloud infrastructure (AWS)
  • Take ownership of system components from concept to delivery
  • Mentor team members
  • Collaborate with Data Scientists to bring scientific models to production