
Azure Data Factory Engineer


Robert Half


Location:
United States, Des Moines

Category:
IT - Software Development


Contract Type:
Employment contract


Salary:

120000.00 USD / Year

Job Description:

Join our client's vibrant team as a Data Engineer! Harness your passion for data by unlocking its mysteries and fashioning it into crucial business insights. Work fully remote while you continue to grow your skills and take on interesting projects.

Job Responsibility:

  • Dive into creative collaborations with data scientists, analysts, and stakeholders
  • Bring to life scalable data pipelines using cutting-edge tools: Azure Data Factory, PySpark, and Spark SQL
  • Be the data whisperer integrating diverse data streams into a powerful, unified platform
  • Showcase your governance skills in managing cutting-edge data storage solutions
  • Keep a finger on the pulse of industry trends, injecting fresh tech ideas

Requirements:

  • Bachelor's in IT or related field
  • Adept at Python, SQL, and Microsoft Fabric tools
  • Experience with the Microsoft SQL trinity: SQL Server, Azure SQL, and SSIS

Nice to have:

  • Microsoft Certified: Azure Data Engineer Associate
  • Knowledge of Azure Data Lake Storage Gen2, Azure Blob Storage, Azure Files, Power BI

What we offer:
  • Competitive salary
  • Exceptional benefits
  • Free online training
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan

Additional Information:

Job Posted:
March 23, 2025

Employment Type:
Full-time

Work Type:
Remote work



Similar Jobs for Azure Data Factory Engineer

Senior Data Engineer

We are seeking a highly skilled and motivated Senior Data Engineer/s to architec...
Location:
India, Hyderabad
Salary:
Not provided
Tech Mahindra
Expiration Date:
January 30, 2026
Requirements:
  • 7-10 years of experience in data engineering with a focus on Microsoft Azure and Fabric technologies
  • Strong expertise in: Microsoft Fabric (Lakehouse, Dataflows Gen2, Pipelines, Notebooks)
  • Strong expertise in: Azure Data Factory, Azure SQL, Azure Data Lake Storage Gen2
  • Strong expertise in: Power BI and/or other visualization tools
  • Strong expertise in: Azure Functions, Logic Apps, and orchestration frameworks
  • Strong expertise in: SQL, Python and PySpark/Scala
  • Experience working with structured and semi-structured data (JSON, XML, CSV, Parquet)
  • Proven ability to build metadata-driven architectures and reusable components
  • Strong understanding of data modeling, data governance, and security best practices
Job Responsibility:
  • Design and implement ETL pipelines using Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Warehouse, SQL) and Azure Data Factory
  • Build and maintain a metadata-driven Lakehouse architecture with threaded datasets to support multiple consumption patterns
  • Develop agent-specific data lakes and an orchestration layer for an uber agent that can query across agents to answer customer questions
  • Enable interactive data consumption via Power BI, Azure OpenAI, and other analytics tools
  • Ensure data quality, lineage, and governance across all ingestion and transformation processes
  • Collaborate with product teams to understand data needs and deliver scalable solutions
  • Optimize performance and cost across storage and compute layers

Azure Data Engineer

As an Azure Data Engineer, you will be expected to design, implement, and manage...
Location:
India, Hyderabad / Bangalore
Salary:
Not provided
Quadrant Technologies
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, or a related field
  • Hands-on experience in writing complex T-SQL queries and stored procedures
  • Good experience in data integration and database development
  • Proficiency in T-SQL and Spark SQL/PySpark (Synapse/Databricks)
  • Extensive experience with Azure Data Factory
  • Excellent problem-solving skills and attention to detail
  • 5-8 years' experience
  • Proven track record of writing complex SQL stored procedures and implementing OLTP database solutions (using Microsoft SQL Server)
  • Experience with Azure Synapse / PySpark / Azure Databricks for big data processing
  • Expertise in T-SQL, Dynamic SQL, Spark SQL, and ability to write complex stored procedures
Job Responsibility:
  • Collaborate with cross-functional teams to gather, analyze, and document business requirements for data integration projects
  • Write complex stored procedures to support data transformation and to implement business validation logic
  • Develop and maintain robust data pipelines using Azure Data Factory ensuring seamless data flow between systems
  • Work closely with the team to ensure data quality, integrity, and accuracy across all systems
  • Contribute to the enhancement and optimization of OLTP systems

Azure Data Engineer

At LeverX, we have had the privilege of delivering over 1,500 projects for vario...
Location:
Uzbekistan, Georgia
Salary:
Not provided
LeverX
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience as a Data Engineer with strong expertise in Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Synapse, Microsoft Fabric, and Azure Cosmos DB)
  • Advanced SQL skills, including complex query development, optimization, and troubleshooting
  • Strong knowledge of indexing, partitioning, and query execution plans to ensure scalability and performance
  • Proven expertise in database modeling, schema design, and normalization/denormalization strategies
  • Ability to design and optimize data architectures to support both transactional and analytical workloads
  • Proficiency in at least one programming language such as Python, C#, or Scala
  • Strong background in cloud-based data storage and processing (e.g., Azure Data Lake, Databricks, or equivalent) and data warehouse platforms (e.g., Snowflake)
  • English B2+
Job Responsibility:
  • Design, develop, and maintain efficient and scalable data architectures and workflows
  • Build and optimize SQL-based solutions for data transformation, extraction, and loading (ETL) processes
  • Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions
  • Manage and optimize data storage platforms, including databases, data lakes, and data warehouses
  • Troubleshoot and resolve data-related issues, ensuring accuracy, integrity, and performance across all systems
What we offer:
  • Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay with the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Azure Data Engineer

Job Description: Designs, modifies, and builds new and scalable data processes. ...
Location:
United States, Kennesaw
Salary:
Not provided
CPC Technologies
Expiration Date:
Until further notice
Requirements:
  • 6-8 years of experience developing data solutions in Python using the Spark framework
  • Ability to perform root cause analysis and identify performance bottlenecks in Spark Jobs
  • Expert in Data Engineering and building data pipelines, implementing Algorithms in a distributed environment
  • Ability to design and develop parallel processing data platform in PySpark
  • Performs data analysis to troubleshoot data-related issues and assists in their resolution
  • Strong Proficiency in SQL
  • Cloud knowledge especially Azure
  • Collaborates with stakeholders, IT, database engineers and other scientists
  • Hands-on knowledge in Azure Synapse and Azure Data Factory is a plus
Job Responsibility:
  • Designs, modifies, and builds new and scalable data processes

Azure Data Engineer

As an Azure Data Engineer, you will design and maintain scalable data pipelines ...
Location:
Salary:
Not provided
ACI Infotech
Expiration Date:
Until further notice
Requirements:
  • 3–5 years of experience as a Data Engineer within the Azure ecosystem
  • Strong skills in SQL, Databricks, and Python
  • Hands-on experience with Azure Data Factory (ADF)
  • Power BI experience preferred
  • Familiarity with Delta Lake and/or Azure Synapse is a plus
Job Responsibility:
  • Develop, manage, and optimize ADF pipelines
  • Design and implement Databricks notebooks for ETL processes
  • Write and optimize SQL scripts for large-scale datasets
  • Collaborate with BI teams to support dashboard and reporting solutions
  • Ensure data quality, security, and compliance with governance policies

Data Engineer

We are seeking a skilled and innovative Data Engineer to join our team in Nieuwe...
Location:
Netherlands, Nieuwegein
Salary:
3000.00 - 6000.00 EUR / Month
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • BSc or MSc degree in IT or a related field
  • Minimum of 2 years of relevant work experience in data engineering
  • Proficiency in building data pipelines using tools such as Azure Data Factory, Informatica Cloud, Synapse Pro, Spark, Python, R, Kubernetes, Snowflake, Databricks, or AWS
  • Advanced SQL knowledge and experience with relational databases
  • Hands-on experience in data modelling and data integration (both on-premise and cloud-based)
  • Strong problem-solving skills and analytical mindset
  • Knowledge of data warehousing concepts and big data technologies
  • Experience with version control systems, preferably Git
  • Excellent communication skills and ability to work collaboratively in a team environment
  • Fluency in Dutch language (required)
Job Responsibility:
  • Design, develop, and maintain scalable data pipelines and ETL/ELT processes
  • Collaborate with Information Analysts to provide technical frameworks for business requirements of medium complexity
  • Contribute to architecture discussions and identify potential technical and process bottlenecks
  • Implement data quality checks and ensure data integrity throughout the data lifecycle
  • Optimise data storage and retrieval systems for improved performance
  • Work closely with cross-functional teams to understand data needs and deliver efficient solutions
  • Stay up-to-date with emerging technologies and best practices in data engineering
  • Troubleshoot and resolve data-related issues in a timely manner
  • Document data processes, architectures, and workflows for knowledge sharing and future reference
What we offer:
  • A permanent contract and a gross monthly salary between €3,000 and €6,000 (based on 40 hours per week)
  • 8% holiday allowance
  • A generous mobility budget, including options such as an electric lease car with an NS Business Card, a lease bike, or alternative transportation that best suits your travel needs
  • 8% profit sharing on target (or a fixed OTB amount, depending on the role)
  • 27 paid vacation days
  • A flex benefits budget of €1,800 per year, plus an additional percentage of your salary. This can be used for things like purchasing extra vacation days or contributing more to your pension
  • A home office setup with a laptop, phone, and a monthly internet allowance
  • Hybrid working: from home or at the office, depending on what works best for you
  • Development opportunities through training, knowledge-sharing sessions, and inspiring (networking) events
  • Social activities with colleagues — from casual drinks to sports and content-driven outings

Data Engineer (Azure)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location:
Portugal, Lisboa
Salary:
Not provided
Fyld
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or related
  • Relevant certifications in Azure, such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert
  • Hands-on experience with Azure services, especially those related to data engineering and analytics, such as Azure SQL Database, Azure Data Lake, Azure Synapse Analytics, Azure Databricks, Azure Data Factory, among others
  • Familiarity with Azure storage and compute services, including Azure Blob Storage, Azure SQL Data Warehouse, Azure HDInsight, and Azure Functions
  • Proficiency in programming languages such as Python, SQL, or C# for developing data pipelines, data processing, and automation
  • Knowledge of data manipulation and transformation techniques using tools like Azure Databricks or Apache Spark
  • Experience in data modeling, data cleansing, and data transformation for analytics and reporting purposes
  • Understanding of data architecture principles and best practices, including data lake architectures, data warehousing, and ETL/ELT processes
  • Knowledge of security and compliance features offered by Azure, including data encryption, role-based access control (RBAC), and Azure Security Center
  • Excellent communication skills, both verbal and written, to collaborate effectively with technical and non-technical teams

Azure Data Engineer

Experience: 3-6+ Years Location: Noida/Gurugram/Remote Skills: PYTHON, PYSPARK...
Location:
India, Noida; Gurugram
Salary:
Not provided
NexGen Tech Solutions
Expiration Date:
Until further notice
Requirements:
  • 3-6+ years of experience
  • Python
  • PySpark
  • SQL
  • Azure Data Factory
  • Databricks
  • Data Lake
  • Azure Functions
  • Data pipelines
Job Responsibility:
  • Design and engineer the cloud/big data solutions, develop a modern data analytics lake
  • Develop & maintain data pipelines for batch & stream processing using modern cloud or open source ETL/ELT tools
  • Liaise with business team and technical leads, gather requirements, identify data sources, identify data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing and support UAT
  • Implement continuous integration, continuous deployment, DevOps practice
  • Create, document, and manage data guidelines, governance, and lineage metrics
  • Technically lead, design and develop distributed, high-throughput, low-latency, highly available data processing and data systems
  • Build monitoring tools for server-side components
  • Work cohesively in an India-wide distributed team
  • Identify, design, and implement internal process improvements and tools to automate data processing and ensure data integrity while meeting data security standards
  • Build tools for better discovery and consumption of data for various consumption models in the organization – DataMarts, Warehouses, APIs, Ad Hoc Data explorations
Welcome to CrawlJobs.com
Your Global Job Discovery Platform
At CrawlJobs.com, we simplify finding your next career opportunity by bringing job listings directly to you from all corners of the web. Using cutting-edge AI and web-crawling technologies, we gather and curate job offers from various sources across the globe, ensuring you have access to the most up-to-date job listings in one place.