Senior Data Integration Engineer

Crusoe

Location:
United States, Sunnyvale

Contract Type:
Not provided

Salary:
165000.00 - 200000.00 USD / Year

Job Description:

Crusoe Cloud is seeking a Data Integration Engineer to help build the foundation of our next-generation data platform. In this role, you’ll design and maintain scalable data pipelines and integrations across critical business systems, enabling reliable data flow, analytics, and forecasting across the organization. You’ll play a key role in connecting systems across construction, engineering, and enterprise platforms, with an initial focus on supporting our datacenter construction business.

Job Responsibilities:

  • Designing, implementing, and maintaining scalable ETL/ELT pipelines using tools such as Fivetran, Workato, and DBT
  • Building integrations between core business systems, including PMIS, ERP, HCM, cost management, procurement, and cloud platforms (GCS/GCP)
  • Leading data integrations supporting our datacenter construction business, connecting DCIS, PMIS, BIM, ERP, and related systems
  • Building and managing ingestion pipelines to consolidate structured and unstructured data into a centralized data lake on Google Cloud Storage (GCS)
  • Ensuring data quality, reliability, and availability to support analytics, reporting, forecasting, and modeling initiatives
  • Developing integrations that enable visualization and reporting through tools such as Sigma and DBT
  • Partnering with business stakeholders to gather requirements and modernize data workflows
  • Collaborating with Operations to define and execute a roadmap of data integration initiatives

Requirements:

  • 5+ years of experience in data integration, data engineering, or ETL development roles
  • 3+ years designing and operating reliable, high-volume ETL/ELT pipelines
  • Strong expertise in cloud-based data platforms, particularly Google Cloud Platform (GCP) and Google Cloud Storage (GCS)
  • Hands-on experience with Fivetran, Workato, and DBT for data ingestion, transformation, and modeling
  • Strong SQL proficiency and solid understanding of data modeling principles (Kimball, Inmon), schema design, and database management
  • Experience integrating data across enterprise systems, ideally including construction-related platforms (DCIS, PMIS, CMMS, ERP)
  • Familiarity with BI/reporting tools such as Sigma
  • Understanding of data governance concepts including RBAC, audit logging, and environmental controls

Nice to have:

  • Workato Integration Developer Certification
  • Experience with Atlassian tools (JIRA, JSM, Confluence)
  • Prior experience supporting construction or infrastructure data ecosystems
  • Experience designing data platforms in high-growth environments

What we offer:
  • Restricted Stock Units in a fast growing, well-funded technology company
  • Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
  • Employer contributions to HSA accounts
  • Paid Parental Leave
  • Paid life insurance, short-term and long-term disability
  • Teladoc
  • 401(k) with a 100% match up to 4% of salary
  • Generous paid time off and holiday schedule
  • Cell phone reimbursement
  • Tuition reimbursement
  • Subscription to the Calm app
  • MetLife Legal
  • Company-paid commuter benefit ($300/month)

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Senior Data Integration Engineer

Senior Microsoft Stack Data Engineer

Hands-On Technical SENIOR Microsoft Stack Data Engineer / On Prem to Cloud Senio...
Location:
United States, West Des Moines
Salary:
155000.00 USD / Year
Robert Half
Expiration Date:
Until further notice
Requirements:
  • 5+ years of data warehouse / data lake experience
  • Advanced SQL Server
  • Strong SQL experience, working with structured and unstructured data
  • Strong in SSIS ETL
  • Proficiency in SQL and SQL Queries
  • Experience with SQL Server
  • Knowledge of Data Warehousing
  • Data Warehouse experience: Star Schema and Fact & Dimension data warehouse structure
  • Experience with Azure Data Lake and data lakes
  • Proficiency in ETL / SSIS and SSAS
Job Responsibilities:
  • Modernize and build out a data warehouse, and lead the build-out of a data lake in the cloud
  • Rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting
  • All aspects of data engineering
  • Technical Leader of the team
What we offer:
  • Bonus
  • 2 1/2 day weekends
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
Employment Type:
Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibilities:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
Employment Type:
Fulltime

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying new cutting-edge data engineering methods to improve the performance of data integration, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience deploying code artifacts into production
Job Responsibilities:
  • Contribute to the development of D&A platform and analytical tools, ensuring easy and standardized access and sharing of data
  • Act as subject matter expert for Azure Databricks, Azure Data Factory, and ADLS
  • Help design, build and maintain data pipelines (accelerators)
  • Document and make the relevant know-how & standard available
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines, and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility into data quality and data assets on the D&A platform
  • Identify opportunities for data assets and D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
What we offer:
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
Employment Type:
Fulltime

Senior Data Engineer

Join Inetum as a Data Engineer! At Inetum, we empower innovation and growth thro...
Location:
Portugal, Lisbon
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Teradata – advanced SQL and data warehousing
  • CONTROL-M – job scheduling and automation
  • UNIX – working in a UNIX environment (directories, scripting, etc.)
  • SQL (Teradata) – strong querying and data manipulation skills
  • Ab Initio – data integration and ETL development
  • DevOps – CI/CD practices and automation
  • Collaborative tools – GIT, Jira, Confluence, MEGA, Zeenea
Job Responsibilities:
  • Design, development, and optimization of data solutions that support business intelligence and analytics
Employment Type:
Fulltime

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Job Responsibilities:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Perform efficient ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Communicate proactively with stakeholders and mentor/guide junior team members
Employment Type:
Fulltime

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibilities:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects
Employment Type:
Fulltime

Senior Data Engineer

Adswerve is looking for a Senior Data Engineer to join our Adobe Services team. ...
Location:
United States
Salary:
130000.00 - 155000.00 USD / Year
Adswerve, Inc.
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience)
  • 5+ years of experience in a data engineering, analytics, or marketing technology role
  • Hands-on expertise in Adobe Experience Platform (AEP), Real-Time CDP, Journey Optimizer, or similar tools is a big plus
  • Strong proficiency in SQL and hands-on experience with data transformation and modeling
  • Understanding of ETL/ELT workflows (e.g., dbt, Fivetran, Airflow, etc.) and cloud data platforms (e.g., GCP, Snowflake, AWS, Azure)
  • Experience with ingress/egress patterns and interacting with APIs to move data
  • Experience with Python or JavaScript in a data or scripting context
  • Experience with customer data platforms (CDPs), event-based tracking, or customer identity management
  • Understanding of Adobe Experience Cloud integrations (e.g., Adobe Analytics, Target, Campaign) is a plus
  • Strong communication skills with the ability to lead technical conversations and present to both technical and non-technical audiences
Job Responsibilities:
  • Lead the end-to-end architecture of data ingestion and transformation in Adobe Experience Platform (AEP) using Adobe Data Collection (Tags), Experience Data Model (XDM), and source connectors
  • Design and optimize data models, identity graphs, and segmentation strategies within Real-Time CDP to enable personalized customer experiences
  • Implement schema mapping, identity resolution, and data governance strategies
  • Collaborate with Data Architects to build scalable, reliable data pipelines across multiple systems
  • Conduct data quality assessments and support QA for new source integrations and activations
  • Write and maintain internal documentation and knowledge bases on AEP best practices and data workflows
  • Simplify complex technical concepts and educate team members and clients in a clear, approachable way
  • Contribute to internal knowledge sharing and mentor junior engineers in best practices around data modeling, pipeline development, and Adobe platform capabilities
  • Stay current on the latest Adobe Experience Platform features and data engineering trends to inform client strategies
What we offer:
  • Medical, dental and vision available for employees
  • Paid time off including vacation, sick leave & company holidays
  • Paid volunteer time
  • Flexible working hours
  • Summer Fridays
  • “Work From Home Light” days between Christmas and New Year’s Day
  • 401(k) Plan with 5% company match and no vesting period
  • Employer Paid Parental Leave
  • Health-care Spending Accounts
  • Dependent-care Spending Accounts
Employment Type:
Fulltime

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibilities:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program
Employment Type:
Fulltime