DBT Junior Engineer

NTT DATA

Location:
India

Contract Type:
Not provided

Salary:
Not provided

Job Description:

The DBT Junior Engineer role involves translating Informatica mappings into dbt models on Snowflake, ensuring data quality through testing and documentation. Candidates should have 3-6 years of experience in Data Engineering, strong SQL skills, and familiarity with dbt concepts. Collaboration with Snowflake developers and functional teams is essential, along with participation in performance tuning and CI/CD integration.

Job Responsibility:

  • Translate Informatica mappings, transformations, and business rules into dbt models (SQL) on Snowflake (a minimal sketch of one such translation follows this list)
  • Design and implement staging, core, and mart layers using standard dbt patterns and folder structures
  • Develop and maintain dbt tests (schema tests, data tests, custom tests) to ensure data quality and integrity
  • Implement snapshots, seeds, macros, and reusable components where appropriate
  • Collaborate with Snowflake developers to ensure physical data structures support dbt models efficiently
  • Work with functional teams to ensure functional equivalence between legacy Informatica outputs and new dbt outputs
  • Participate in performance tuning of dbt models and Snowflake queries
  • Integrate dbt with CI/CD pipelines (e.g., Azure DevOps, GitHub Actions) for automated runs and validations
  • Contribute to documentation of dbt models, data lineage, and business rules
  • Participate in defect analysis, bug fixes, and enhancements during migration and stabilization phases
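
To make the migration pattern concrete, here is a minimal sketch of what one such translation could look like: a hypothetical Informatica expression mapping rewritten as a dbt staging model, together with the schema tests that would guard its output. All model, source, and column names (stg_orders, raw.orders, order_status) are illustrative assumptions, not details from the posting.

    -- models/staging/stg_orders.sql (hypothetical model)
    -- Staging layer: rename, cast, and lightly clean raw columns,
    -- mirroring what an Informatica expression transformation would do.
    select
        order_id,
        customer_id,
        cast(order_date as date)  as order_date,
        upper(trim(order_status)) as order_status,
        amount_usd
    from {{ source('raw', 'orders') }}  -- source declared in a .yml file
    where order_id is not null

    # models/staging/stg_orders.yml (hypothetical)
    # Schema tests asserting the constraints the legacy mapping enforced.
    version: 2
    models:
      - name: stg_orders
        columns:
          - name: order_id
            tests:
              - unique
              - not_null
          - name: order_status
            tests:
              - accepted_values:
                  values: ['NEW', 'SHIPPED', 'CANCELLED']

Functional equivalence with the legacy output can then be spot-checked by comparing row counts and key aggregates between the old Informatica target table and stg_orders.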

Requirements:

  • 3–6 years of experience in Data Engineering / ETL / DW
  • 1–3+ years working with dbt (Core or Cloud)
  • Strong SQL skills, especially on Snowflake or another modern cloud DW
  • Experience with dbt concepts: models, tests, sources, seeds, snapshots, macros, exposures
  • Prior experience with Informatica (developer-level understanding of mappings/workflows) is highly desirable
  • Understanding of CI/CD practices and integrating dbt into automated pipelines (see the workflow sketch after this list)
  • Knowledge of data modeling (dimensional models, SCDs, fact/dimension design)
  • Experience working in offshore delivery with onshore coordination
  • Good communication skills and ability to read/understand existing ETL logic and requirements documentation
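
As a rough illustration of the CI/CD requirement above, the sketch below shows a GitHub Actions workflow that installs dbt Core with the Snowflake adapter and runs dbt build on every pull request. The file path, target name, and secret names are placeholder assumptions; a real project would map them to its own profiles.yml.

    # .github/workflows/dbt-ci.yml (illustrative)
    name: dbt CI
    on:
      pull_request:
        branches: [main]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.11"
          - name: Install dbt Core and the Snowflake adapter
            run: pip install dbt-snowflake
          - name: Build models, tests, seeds, and snapshots in a CI target
            env:
              # Placeholder secrets, read by profiles.yml via env_var()
              SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
              SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
              SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
            run: |
              dbt deps
              dbt build --target ci

Because dbt build runs models, tests, seeds, and snapshots in dependency order, a failing schema test blocks the merge before anything reaches production.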

Additional Information:

Job Posted:
February 14, 2026

Work Type:
Remote work

Similar Jobs for DBT Junior Engineer

Sr. Data Engineer - Snowflake

Data Ideology is seeking a Sr. Snowflake Data Engineer to join our growing team ...
Location:
Not provided
Salary:
Not provided
Data Ideology
Expiration Date
Until further notice
Requirements
  • 7+ years of experience in data engineering, data warehousing, or data architecture
  • 3+ years of hands-on Snowflake experience (performance tuning, data sharing, Snowpark, Snowpipe, etc.)
  • Strong SQL and Python skills, with production experience using dbt
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and modern data tooling (Airflow, Fivetran, Power BI, Looker, Informatica, etc.)
  • Prior experience in a consulting or client-facing delivery role
  • Excellent communication skills, with the ability to collaborate across technical and business stakeholders
  • SnowPro Core Certification required (or willingness to obtain upon hire)
  • Advanced Snowflake certifications preferred
Job Responsibility
  • Design and build scalable, secure, and cost-effective data solutions in Snowflake
  • Develop and optimize data pipelines using tools such as dbt, Python, CloverDX, and cloud-native services
  • Participate in discovery sessions with clients to gather requirements and translate them into solution designs and project plans
  • Collaborate with engagement managers and account teams to help scope work and provide technical input for Statements of Work (SOWs)
  • Serve as a Snowflake subject matter expert, guiding best practices in performance tuning, cost optimization, access control, and workload management
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake
  • Integrate Snowflake with BI tools, governance platforms, and AI/ML frameworks
  • Contribute to internal accelerators, frameworks, and proofs of concept
  • Mentor junior engineers and support knowledge sharing across the team
What we offer
  • Flexible Time Off Policy
  • Eligibility for Health Benefits
  • Retirement Plan with Company Match
  • Training and Certification Reimbursement
  • Utilization Based Incentive Program
  • Commission Incentive Program
  • Referral Bonuses
  • Work from Home
  • Full-time

Lead Analytics Engineer

As an Analytics Engineer in the Data Infrastructure Team at Prolific, you'll be a...
Location:
United Kingdom
Salary:
Not provided
Prolific
Expiration Date
Until further notice
Requirements
  • Expertise in dbt & SQL: Deep experience with dbt and SQL to design, build, and maintain scalable data models
  • Cloud Technology Knowledge: Strong familiarity with cloud platforms like AWS, GCP etc
  • Data Accuracy Focus: Passion for ensuring high data quality through tests/assertions and robust documentation
  • Commercial Acumen: Ability to understand business needs and communicate effectively with non-technical stakeholders
  • Mentorship Ability: Advocate for best practices in logging and data modeling that support robust and effective analysis, reporting, and experimentation
  • Collaboration: Skilled at working cross-functionally and translating complex technical concepts into actionable insights for the business
  • Process-Driven: Proficiency in designing repeatable and scalable workflows for data transformation
Job Responsibility
  • Building Data Models: Create complex dbt models, custom macros, and reusable packages. Optimise transformations and implement robust testing strategies to ensure data integrity and model performance
  • Ownership: Monitoring and maintaining dbt workflow jobs, ensuring smooth data refreshes and up-to-date pipelines. You will also be responsible for data models for BI analytics & company-level reporting
  • Ensuring Data Accuracy: Writing tests and assertions to validate data integrity and consistency across models
  • Documenting and Standardizing: Creating and maintaining thorough documentation of dbt processes to ensure best practices within the BI team
  • Translating Complex Data Concepts: Acting as a key communicator, translating technical data issues into understandable business terms for stakeholders
  • Mentoring Team Members: Supporting junior analysts and data engineers, especially in setting up experimentation platforms and data best practices
  • Collaborating Across Teams: Working closely with the product, engineering, and BI teams to ensure data infrastructure supports evolving business needs

Senior Analytics Engineer II

Articulate is looking for a Sr. Analytics Engineer to join our amazing Data team...
Location:
United States
Salary:
137700.00 - 206500.00 USD / Year
Articulate
Expiration Date
Until further notice
Requirements
  • 8+ years of experience in data or analytics roles
  • At least 5 years in analytics engineering, preferably within a fast-paced tech company or data-driven organization
  • Expert in end-to-end data workflows, from data collection to analysis and presentation, with most expertise in data modeling
  • Confident in translating raw data to reusable, business-friendly data models
  • Proven experience owning and leading the creation of reliable, flexible data models in an enterprise data warehouse
  • Expertise in SQL (writing and analyzing complex queries)
  • Expertise in Data Build Tool (dbt)
  • Expertise in Looker/Tableau (defining data models and measures)
  • Experience building semantic layers using dbt, Looker, or Snowflake’s semantic tables
  • Ability to write complex SQL, run ad-hoc data discovery, and build data models
Job Responsibility
  • Provide flexible, trustworthy data models by transforming data from multiple sources with DBT, testing for quality, deploying to visualization tools such as Looker or Tableau, and publishing documentation
  • Collaborate directly with stakeholders to define problems and determine requirements for a solution
  • Set and maintain best practices for data models and processes, including project architecture and QA
  • Proactively identify gaps or design flaws in our data models, and bring recommendations for how to fix them
  • Own documentation of our tools and data, both for our team and for external users
  • Lead discovery on data tools and infrastructure, including setting goals for tooling, continuous analysis of existing tooling, and exploring new tools and features we may adopt
  • Participate as a technical leader in project planning, helping to create well-defined tasks to address data initiatives
  • Mentor more junior members of the data team
  • Represent the data team in some technical architecture and planning conversations
  • Identify and share data risks and dependencies in these contexts
What we offer
  • Bonus eligible
  • Robust suite of benefits
  • Full-time

Snowflake Junior Engineer

The Snowflake Junior Engineer role involves designing and implementing Snowflake...
Location:
Not provided
Salary:
Not provided
NTT DATA
Expiration Date
Until further notice
Requirements
  • 2-4 years of experience in Data Engineering
  • Experience with Snowflake
  • Experience with SQL
Job Responsibility
  • Designing and implementing Snowflake schemas
  • Optimizing SQL scripts
  • Collaborating with dbt developers

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date
Until further notice
Requirements
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying new cutting-edge data engineering methods to improve the performance of data integration, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience to deploy code artifacts into production
Job Responsibility
  • Contribute to the development of D&A platform and analytical tools, ensuring easy and standardized access and sharing of data
  • Subject matter expert for Azure Databricks, Azure Data Factory and ADLS
  • Help design, build and maintain data pipelines (accelerators)
  • Document and make the relevant know-how & standards available
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility on data quality and data assets on the D&A platform
  • Identify opportunities for data assets and D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
What we offer
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
  • Full-time

Senior Data Engineer

We are seeking a Senior Data Engineer to help evolve and enhance our data platfo...
Location:
South Africa
Salary:
Not provided
Dotdigital
Expiration Date
Until further notice
Requirements
  • Significant experience delivering Python-based projects for data engineering
  • Experience building and tuning Spark pipelines that run at scale across large quantities of data
  • Strong hands-on experience with SQL and NoSQL databases (e.g. SQL Server, MongoDB, Cassandra)
  • Proven experience with modern data warehousing and large-scale processing (e.g. Snowflake, DBT, BigQuery, Clickhouse)
  • Proficient with data orchestration tools such as Airflow, Dagster, or Prefect
  • Experience with cloud platforms (Azure, AWS, or GCP) for data processing and storage
  • Practical experience with Kafka or equivalent event-driven architectures (e.g. AWS SQS, Azure EventHubs, AWS Kinesis)
  • Good understanding of data modelling for OLAP and OLTP workloads
  • Familiar with agile methodologies and CI/CD processes in the context of data solutions
  • Experienced as a senior team member on complex data engineering projects
Job Responsibility
  • Design and build scalable, reliable, and secure data pipelines for streaming, batch, and real-time processing
  • Work in partnership with the Data Science teams to build scalable PySpark workloads that can be leveraged to generate advanced models
  • Implement and optimise data models and storage solutions using Python and SQL with orchestration tools in a cloud environment
  • Leverage AI to automate both data processing and engineering processes
  • Advocate and uphold best practices for data governance, security, and monitoring
  • Collaborate cross-functionally with engineers, analysts, and data scientists to deliver impactful data solutions
  • Mentor and support junior engineers in data engineering principles and practices
  • Evaluate and recommend new tools and technologies to strengthen data services
What we offer
  • Parental leave
  • Medical benefits
  • Paid sick leave
  • Dotdigital day
  • Share reward
  • Wellbeing reward
  • Wellbeing Days
  • Loyalty reward
  • Full-time

Solutions Architect

Lead the design and implementation of scalable, secure, and high-performing data...
Location:
India, Bangalore
Salary:
Not provided
Arrow Electronics
Expiration Date
Until further notice
Requirements
  • At least 10 years of experience in enterprise data architecture, including design and implementation of large-scale data platforms
  • 5+ years of relevant data engineering experience on Databricks
  • Strong expertise in Databricks (Workspace, Clusters, Jobs, Repos, Delta Live Tables)
  • Deep hands-on expertise with Azure Databricks, PySpark, Delta Lake, Unity Catalog, MLflow, DBT, and associated Azure data services (Data Lake, SQL, Synapse, ADF)
  • Proven experience migrating from legacy data warehouse and reporting systems to modern cloud platforms
  • Experience in data modeling, data warehousing (OLTP, OLAP), security, governance, DevOps, and MLOps
  • Hands-on with CI/CD, version control (Git), and DevOps practices for data engineering
  • Excellent communication, collaboration, problem-solving and analytical skills with the ability to collaborate effectively with cross-functional teams and influence decision-making at all levels of the organization
  • Architect-level certifications in Databricks and DBT are preferred
  • Ability to mentor and coach junior architects, engineers, and BI/reporting developers
Job Responsibility
  • Lead the design and implementation of scalable, secure, and high-performing data solutions using Azure Databricks, Delta Lake, and Delta Live Tables
  • Own the complete architecture process, including requirements gathering, solution design, documentation, and technical reviews, and implement advanced data solutions on the Databricks platform
  • Define best practices for cloud data architecture, data modeling, ELT/ETL pipelines, workspace setup, cluster management, repos, and job orchestration
  • Coordinate and communicate with onshore and offshore teams, including end-users, data engineers, reporting specialists, and business analysts
  • Ensure solution compliance with data privacy, security, and governance standards
  • Conduct performance tuning and optimization of Databricks clusters
  • Ensure data quality, lineage, and observability across all pipelines
  • Monitor and troubleshoot data pipelines to ensure data quality and reliability
  • Lead the integration of Databricks with other data platforms and tools
  • Create processes and workflows to support data solution documentation, and lead solution reviews and audits for quality
  • Full-time