The Senior Data Operations Engineer owns and evolves the technical infrastructure powering Tripleseat's data pipelines and analytics platforms. This role is responsible for building reliable data flows from production systems into our Snowflake data warehouse, developing data models that enable business insights, and integrating data from across the organization. The ideal candidate combines strong database fundamentals with modern data engineering practices, bringing a mindset of automation, reliability, and operational excellence to everything they build.
Job Responsibilities:
Build, operate, and maintain reliable ETL/ELT pipelines moving data from MySQL and other source systems into Snowflake using Dagster, Stitch, and dbt Core
Create data pipelines that read from external APIs and write to Snowflake, integrating data from business systems across the organization (Salesforce, etc.)
Optimize the dbt project build to reduce build times and warehouse costs
Automate ingestion, transformation, validation, and monitoring processes to increase reliability
Ensure pipelines meet SLAs for data availability, latency, and accuracy
Design and build data models in Snowflake that support analytics, reporting, and business intelligence needs
Partner with Developers and Product teams to build data products that drive business value
Contribute to data platform roadmap and technical strategy
Scale and optimize data platforms as data volumes, concurrency, and business demand grow
Identify and resolve performance bottlenecks: query optimization, indexing, warehouse sizing, and workload management
Implement and maintain data quality frameworks, schema validation, and drift detection
Partner with analytics teams to keep downstream models aligned with upstream schema changes
Create and maintain documentation, runbooks, and operational procedures for data pipelines and systems
Work with DevOps/Engineering to integrate data workflows into CI/CD pipelines and infrastructure-as-code environments
Support incident response and root-cause analysis for data pipeline or platform issues
Drive technical direction and best practices for the data platform
Mentor team members on data engineering practices and tooling
Proactively identify opportunities to improve data architecture, reliability, and efficiency
Requirements:
5+ years in data engineering, analytics engineering, or similar data-focused technical roles
Strong experience with relational databases, especially MySQL, including schema design and SQL proficiency
Experience building and maintaining ETL/ELT pipelines connecting application databases to cloud data warehouses
Hands-on experience with Snowflake—warehouse configuration, performance tuning, cost optimization
Proficiency with dbt for data transformation and modeling
Experience with pipeline orchestration tools, preferably Dagster
Comfortable working with Git for version control and collaborative development
Experience working in AWS cloud environments
Experience leveraging AI-assisted development and analysis tools (e.g., GitHub Copilot, Cursor, Claude) to accelerate workflows and improve productivity
Strong communicator who can partner effectively with engineering, product, and analytics teams
Nice to have:
Experience with data modeling methodologies and dimensional design
Familiarity with infrastructure tools (Terraform, Docker, CI/CD)
Experience integrating data from CRM and business systems (Salesforce, HubSpot, etc.)
Background in advanced MySQL internals: query optimization, replication, slow query analysis
Experience with data cataloging, lineage tracking, or metadata systems
Background working in a fast-growing SaaS company or startup environment
What we offer:
Competitive Medical, Dental, and Vision Insurance
Company Paid Life Insurance, Short- and Long-Term Disability Plans