CrawlJobs

Senior Data Engineer (Data Warehouse)

Plaud

Location:
Singapore, Singapore

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Plaud is building the next-generation intelligence infrastructure and interfaces to capture, extract, and utilize intelligence from what people say, hear, see, and think. Plaud is a bootstrapped, skyrocketing, profitable company with a $250M revenue run rate achieved in just three years.

Job Responsibilities:

  • Design and develop data warehouse systems, including data modeling, ETL process development, and data quality assurance
  • Collaborate with business teams and data analysts to understand business needs and deliver warehouse solutions
  • Continuously optimize data processing performance to improve platform stability and efficiency
  • Research emerging technologies to drive the next-generation upgrade of the data platform

Requirements:

  • Bachelor's or Master's degree in a technical field (Computer Science, Engineering, or a related field)
  • At least 3 years of experience in data warehouse development, including big data development and data warehouse modeling
  • Strong understanding of data warehouse architecture and principles
  • Proficient in at least 2 programming languages (e.g., SQL, Python, Scala, Java, Go)

Nice to have:

  • Experience with big data platforms (Hive, Spark, Airflow, Kafka, Flink, etc.) and relational databases
  • Backend engineering experience is a plus
  • Familiar with AWS data tools (e.g., Snowflake) and data pipeline technologies
  • Familiar with data security and compliance
  • Strong communication skills and team spirit, with curiosity and openness to new technologies

What we offer:

Market-competitive compensation, global exposure, and a vibrant, creativity-fueled work atmosphere

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Full-time

Work Type:
On-site work

Similar Jobs for Senior Data Engineer (Data Warehouse)

Senior Data Engineer

Join a leading global live-entertainment discovery tech platform. As a Senior Da...
Location: Spain, Madrid
Salary: Not provided
Fever
Expiration Date: Until further notice
Requirements:
  • You have a strong background in at least two of: data engineering, business intelligence, software engineering
  • You are an expert in Python 3 and its data ecosystem
  • You have proven experience working with SQL languages
  • You have worked with complex data pipelines
  • You are a collaborative team player with strong communication skills
  • You are proactive, driven, and bring positive energy
  • You possess strong analytical and problem-solving abilities backed by solid software engineering skills
  • You are proficient in business English.
Job Responsibilities:
  • Own critical data pipelines of our data warehouse
  • Ideate and implement tools and processes to exploit data
  • Work closely with other business units to create structured and scalable solutions
  • Contribute to the development of a complex data and software ecosystem
  • Build trusted data assets
  • Build automations to create business opportunities
  • Design, build and support modern data infrastructure.
What we offer:
  • Attractive compensation package with potential bonus
  • Stock options
  • 40% discount on all Fever events and experiences
  • Home office friendly
  • Responsibility from day one
  • Great work environment with a young international team
  • Health insurance
  • Flexible remuneration with 100% tax exemption through Cobee
  • English lessons
  • Gympass membership
  • Full-time

Senior Microsoft Stack Data Engineer

Hands-On Technical SENIOR Microsoft Stack Data Engineer / On Prem to Cloud Senio...
Location: United States, West Des Moines
Salary: 155,000.00 USD / Year
Robert Half
Expiration Date: Until further notice
Requirements:
  • 5+ years of data warehouse / data lake experience
  • Advanced SQL Server
  • Strong SQL experience, working with structured and unstructured data
  • Strong in SSIS ETL
  • Proficiency in SQL and SQL queries
  • Experience with SQL Server
  • Knowledge of data warehousing
  • Data warehouse experience: star schema and fact & dimension structures
  • Experience with Azure Data Lake and data lakes
  • Proficiency in ETL / SSIS and SSAS
Job Responsibilities:
  • Modernize and build out a data warehouse, and lead the build-out of a data lake in the cloud
  • Rebuild an on-prem data warehouse, structuring disparate data for consumable reporting
  • All aspects of data engineering
  • Technical leader of the team
What we offer:
  • Bonus
  • 2 1/2 day weekends
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Full-time

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location: India, Mumbai
Salary: Not provided
Cogoport
Expiration Date: Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibilities:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Full-time

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location: India, Gurugram
Salary: Not provided
Circle K
Expiration Date: Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Job Responsibilities:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Deliver efficient ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Communicate proactively with stakeholders and mentor/guide junior team members
  • Full-time

Senior Data Engineer

We're seeking an experienced Senior Data Engineer to help shape the future of he...
Location: Germany, Berlin
Salary: Not provided
Audibene GmbH
Expiration Date: Until further notice
Requirements:
  • 5+ years of hands-on experience with complex ETL processes, data modeling, and large-scale data systems
  • Production experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift) on AWS, GCP, or Azure
  • Proficiency in building and optimizing data transformations and pipelines in Python
  • Experience with columnar storage, MPP databases, and distributed data processing architectures
  • Ability to translate complex technical concepts for diverse audiences, from engineers to business stakeholders
  • Experience with semantic layers, data catalogs, or metadata management systems
  • Familiarity with modern analytical databases like Snowflake, BigQuery, ClickHouse, DuckDB, or similar systems
  • Experience with streaming technologies like Kafka, Pulsar, Redpanda, or Kinesis
Job Responsibilities:
  • Design and build robust, high performance data pipelines using our modern stack (Airflow, Snowflake, Pulsar, Kubernetes) that feed directly into our semantic layer and data catalog
  • Create data products optimized for consumption by AI agents and LLMs where data quality, context, and semantic richness are crucial
  • Structure and transform data to be inherently machine readable, with rich metadata and clear lineage that powers intelligent applications
  • Take responsibility from raw data ingestion through to semantic modeling, ensuring data is not just accurate but contextually rich and agent ready
  • Champion best practices in building LLM consumable data products, optimize for both human and machine consumers, and help evolve our dbt transformation layer
  • Build data products for AI/LLM consumption, not just analytics dashboards
What we offer:
  • Work 4 days a week from our office (Berlin/Mainz) with a passionate team, and 1 day a week from home
  • Regularly join on- and offline team events, company off-sites, and the annual audibene Wandertag
  • Cost of the Deutschland-Ticket covered
  • Access to over 50,000 gyms and wellness facilities through Urban Sports Club
  • Support for personal development with a wide range of programs, trainings, and coaching opportunities
  • Dog-friendly office
  • Full-time

Senior Data Engineer

As a Senior Data Engineer at Corporate Tools, you will work closely with our Sof...
Location: United States
Salary: 150,000.00 USD / Year
Corporate Tools
Expiration Date: Until further notice
Requirements:
  • Bachelor's (BA or BS) in computer science or a related field
  • 2+ years in a full stack development role
  • 4+ years of experience working in a data engineer role, or related position
  • 2+ years of experience standing up and maintaining a Redshift warehouse
  • 4+ years of experience with Postgres, specifically with RDS
  • 4+ years of AWS experience, specifically S3, Glue, IAM, EC2, DDB, and other related data solutions
  • Experience working with Redshift, DBT, Snowflake, Apache Airflow, Azure Data Warehouse, or other industry standard big data or ETL related technologies
  • Experience working with both analytical and transactional databases
  • Advanced working SQL (Preferably PostgreSQL) knowledge and experience working with relational databases
  • Experience with Grafana or other monitoring/charting systems
Job Responsibilities:
  • Focus on data infrastructure. Lead and build out data services/platforms from scratch (using OpenSource tech)
  • Creating and maintaining transparent, bulletproof ETL (extract, transform, and load) pipelines that clean, transform, and aggregate unorganized and messy data into databases or data sources
  • Consume data from roughly 40 different sources
  • Collaborate closely with our Data Analysts to get them the data they need
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
  • Improve existing data models while implementing new business capabilities and integration points
  • Creating proactive monitoring so we learn about data breakages or inconsistencies right away
  • Maintaining internal documentation of how the data is housed and transformed
  • Improve existing data models, and design new ones to meet the needs of data consumers across Corporate Tools
  • Stay current with latest cloud technologies, patterns, and methodologies
What we offer:
  • 100% employer-paid medical, dental and vision for employees
  • Annual review with raise option
  • 22 days Paid Time Off accrued annually, and 4 holidays
  • After 3 years, PTO increases to 29 days. Employees transition to flexible time off after 5 years with the company—not accrued, not capped, take time off when you want
  • The 4 holidays are: New Year’s Day, Fourth of July, Thanksgiving, and Christmas Day
  • Paid Parental Leave
  • Up to 6% company matching 401(k) with no vesting period
  • Quarterly allowance
  • Use to make your remote work set up more comfortable, for continuing education classes, a plant for your desk, coffee for your coworker, a massage for yourself... really, whatever
  • Open concept office with friendly coworkers
  • Full-time

Senior Data Warehouse Administrator

We are looking for a Senior Data Warehouse Administrator to bolster our expandin...
Location: India, Pune
Salary: Not provided
FloQast
Expiration Date: Until further notice
Requirements:
  • 5–8 years of experience as a Data Warehouse Administrator or Data Platform Engineer
  • Expert-level knowledge of data warehouse administration
  • Proven experience with CDC tools and Fivetran, CData, or similar ELT tools (e.g., Stitch, Airbyte)
  • Strong understanding of SQL tuning, partitioning, and query optimization
  • Deep knowledge of data warehousing concepts and modern data platforms
  • Experience with CI/CD, infrastructure-as-code (e.g., Terraform), and monitoring tools
  • Familiarity with data modeling (star/snowflake schemas) and governance practices
  • Strong scripting skills in Python or Bash for automation
Job Responsibilities:
  • Design, implement, and maintain scalable and secure data warehouse environments
  • Optimize data warehouse performance, including fine-tuning complex SQL queries, managing indexing, and monitoring workloads to ensure peak efficiency
  • Lead all administration tasks, encompassing user access control, Role-Based Access Control (RBAC), schema design, partitioning strategies, and ongoing cost optimization
  • Manage and monitor data ingestion pipelines, ensuring reliable ETL/ELT processes and demonstrating awareness of Change Data Capture (CDC) tools for efficient data flow
  • Collaborate closely with data engineers and data analysts to design and implement efficient data models and robust data transformations
  • Contribute significantly to our modern data lake architecture, specifically leveraging Apache Iceberg for data organization and schema evolution
  • Implement and enforce data governance and compliance policies across the data warehouse and data lake environments
  • Tooling and Automation: Building and maintaining tools to automate common administrative tasks, such as table compaction, data expiration policies, and health checks
  • Lead troubleshooting and Root Cause Analysis (RCA) efforts for critical data issues, ensuring rapid resolution and preventing recurrence
  • Mentor junior data warehouse administrators and actively share best practices across the broader data and engineering teams
  • Full-time

Senior Data Engineer

The Data Engineer is responsible for designing, building, and maintaining robust...
Location: Germany, Berlin
Salary: Not provided
ib vogt GmbH
Expiration Date: Until further notice
Requirements:
  • Degree in Computer Science, Data Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Experience in renewable energy, engineering, or asset-heavy industries is a plus
  • Strong experience with modern data stack (e.g., PowerPlatform, Azure Data Factory, Databricks, Airflow, dbt, Synapse, Snowflake, BigQuery, etc.)
  • Proficiency in Python and SQL for data transformation and automation
  • Experience with APIs, message queues (Kafka, Event Hub), data streaming and knowledge of data lakehouse and data warehouse architectures
  • Familiarity with CI/CD pipelines, DevOps practices, and containerization (Docker, Kubernetes)
  • Understanding of cloud environments (preferably Microsoft Azure, PowerPlatform)
  • Strong analytical mindset and problem-solving attitude paired with a structured, detail-oriented, and documentation-driven work style
  • Team-oriented approach and excellent communication skills in English
Job Responsibilities:
  • Design, implement, and maintain efficient ETL/ELT data pipelines connecting internal systems (M365, Sharepoint, ERP, CRM, SCADA, O&M, etc.) and external data sources
  • Integrate structured and unstructured data from multiple sources into the central data lake / warehouse / Dataverse
  • Build data models and transformation workflows to support analytics, reporting, and AI/ML use cases
  • Implement data quality checks, validation rules, and metadata management according to the company’s data governance framework
  • Automate workflows, optimize performance, and ensure scalability of data pipelines and processing infrastructure
  • Work closely with Data Scientists, Software Engineers, and Domain Experts to deliver reliable datasets for Digital Twin and AI applications
  • Maintain clear documentation of data flows, schemas, and operational processes
What we offer:
  • Competitive remuneration and motivating benefits
  • Opportunity to shape the data foundation of ib vogt’s digital transformation journey
  • Work on cutting-edge data platforms supporting real-world renewable energy assets
  • A truly international working environment with colleagues from all over the world
  • An open-minded, collaborative, dynamic, and highly motivated team
  • Full-time