
Azure Data Engineer


NexGen Tech Solutions

Location:
Noida, India

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Experience: 3–6+ years. Location: Noida / Gurugram / Remote. Skills: Python, PySpark, SQL, Azure Data Factory, Databricks, Data Lake, Azure Functions, data pipelines. The role covers designing and engineering cloud/big data solutions and developing a modern data analytics lake; the full set of responsibilities is listed below.

Job Responsibility:

  • Design and engineer cloud/big data solutions; develop a modern data analytics lake
  • Develop and maintain data pipelines for batch and stream processing using modern cloud or open-source ETL/ELT tools
  • Liaise with business teams and technical leads; gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT
  • Implement continuous integration, continuous deployment, and DevOps practices
  • Create, document, and manage data guidelines, governance, and lineage metrics
  • Technically lead, design and develop distributed, high-throughput, low-latency, highly available data processing and data systems
  • Build monitoring tools for server-side components
  • Work cohesively in an India-wide distributed team
  • Identify, design, and implement internal process improvements and tools to automate data processing and ensure data integrity while meeting data security standards
  • Build tools for better discovery and consumption of data across the organization's consumption models – data marts, warehouses, APIs, and ad hoc data exploration
  • Create data views and data-as-a-service APIs from big data stores to feed analysis engines, visualization engines, etc.
  • Work with data scientists and the business analytics team to assist with data ingestion and data-related technical issues

Requirements:

  • 3–6+ years' experience
  • Python
  • PySpark
  • SQL
  • Azure Data Factory
  • Databricks
  • Data Lake
  • Azure Functions
  • Data pipelines
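The skills above center on batch ETL in Python and SQL. As a minimal, hypothetical sketch of the extract-transform-load pattern such a pipeline implements (the standard library's sqlite3 stands in for an Azure SQL or Databricks backend; all table names and values are invented):

```python
import sqlite3

# Hypothetical source rows, standing in for a Data Lake extract.
raw_orders = [
    ("2025-12-01", "A-100", 250.0),
    ("2025-12-01", "A-101", 99.5),
    ("2025-12-02", "A-100", 125.0),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, customer_id TEXT, amount REAL)")

# Load: insert the extracted rows.
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", raw_orders)

# Transform: aggregate per customer, the kind of step a Databricks
# notebook or an ADF data flow would run at scale.
totals = conn.execute(
    "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id ORDER BY customer_id"
).fetchall()
print(totals)  # [('A-100', 375.0), ('A-101', 99.5)]
```

In a real Azure pipeline the extract and load steps would read from and write to Data Lake storage, with Azure Data Factory or Azure Functions handling orchestration.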

Additional Information:

Job Posted:
December 10, 2025

Employment Type:
Full-time
Work Type:
On-site work


Similar Jobs for Azure Data Engineer


Azure Data Engineer

As an Azure Data Engineer, you will be expected to design, implement, and manage...
Location: Hyderabad / Bangalore, India
Salary: Not provided
Quadrant Technologies
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, or a related field
  • Hands-on experience in writing complex T-SQL queries and stored procedures
  • Good experience in data integration and database development
  • Proficiency in T-SQL and Spark SQL/PySpark (Synapse/Databricks)
  • Extensive experience with Azure Data Factory
  • Excellent problem-solving skills and attention to detail
  • 5–8 years' experience
  • Proven track record of writing complex SQL stored procedures, implementing OLTP database solutions (using Microsoft SQL Server)
  • Experience with Azure Synapse / PySpark / Azure Databricks for big data processing
  • Expertise in T-SQL, Dynamic SQL, Spark SQL, and ability to write complex stored procedures
Job Responsibility:
  • Collaborate with cross-functional teams to gather, analyze, and document business requirements for data integration projects
  • Write complex stored procedures to support data transformation and to implement business validation logic
  • Develop and maintain robust data pipelines using Azure Data Factory ensuring seamless data flow between systems
  • Work closely with the team to ensure data quality, integrity, and accuracy across all systems
  • Contribute to the enhancement and optimization of OLTP systems
Employment Type: Full-time

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location: Chennai, Madurai, Coimbatore, India
Salary: Not provided
OptiSol Business Solutions
Expiration Date: Until further notice

Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
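The "advanced SQL" bullet above names window functions and CTEs specifically. A small, hypothetical illustration of both, using the standard library's sqlite3 (table and values are invented; any modern SQL engine accepts the same query shape):

```python
import sqlite3

# Requires SQLite >= 3.25 (bundled with modern Python builds) for
# window-function support; data is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", 1, 100.0), ("North", 2, 150.0),
     ("South", 1, 80.0), ("South", 2, 60.0)],
)

# A CTE feeding a window function: running revenue per region.
rows = conn.execute("""
    WITH ordered AS (
        SELECT region, month, revenue FROM sales
    )
    SELECT region, month,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM ordered
    ORDER BY region, month
""").fetchall()
print(rows)
# [('North', 1, 100.0), ('North', 2, 250.0), ('South', 1, 80.0), ('South', 2, 140.0)]
```

The `PARTITION BY` clause restarts the running total for each region, which is the kind of per-group computation that would otherwise need a self-join.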
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
Employment Type: Full-time

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location: Poland
Salary: Not provided
Lingaro
Expiration Date: Until further notice

Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience with Azure cloud-based infrastructure, Databricks, and at least one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in programming languages such as SQL, Python, PySpark is essential (R or Scala nice to have)
  • Very good level of communication including ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues
Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Azure Data Engineer

At LeverX, we have had the privilege of delivering over 1,500 projects for vario...
Location: Uzbekistan, Georgia
Salary: Not provided
LeverX
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience as a Data Engineer with strong expertise in Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Synapse, Microsoft Fabric, and Azure Cosmos DB)
  • Advanced SQL skills, including complex query development, optimization, and troubleshooting
  • Strong knowledge of indexing, partitioning, and query execution plans to ensure scalability and performance
  • Proven expertise in database modeling, schema design, and normalization/denormalization strategies
  • Ability to design and optimize data architectures to support both transactional and analytical workloads
  • Proficiency in at least one programming language such as Python, C#, or Scala
  • Strong background in cloud-based data storage and processing (e.g., Azure Data Lake, Databricks, or equivalent) and data warehouse platforms (e.g., Snowflake)
  • English B2+
Job Responsibility:
  • Design, develop, and maintain efficient and scalable data architectures and workflows
  • Build and optimize SQL-based solutions for data transformation, extraction, and loading (ETL) processes
  • Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions
  • Manage and optimize data storage platforms, including databases, data lakes, and data warehouses
  • Troubleshoot and resolve data-related issues, ensuring accuracy, integrity, and performance across all systems
What we offer:
  • Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc
  • Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay with the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Azure Data Engineer

Job Description: Designs, modifies, and builds new and scalable data processes. ...
Location: Kennesaw, United States
Salary: Not provided
CPC Technologies
Expiration Date: Until further notice

Requirements:
  • 6–8 years of experience developing data solutions in Python using the Spark framework
  • Ability to perform root cause analysis and identify performance bottlenecks in Spark Jobs
  • Expert in Data Engineering and building data pipelines, implementing Algorithms in a distributed environment
  • Ability to design and develop parallel processing data platform in PySpark
  • Performs data analysis required to troubleshoot data related issues and assist in the resolution of data issues
  • Strong Proficiency in SQL
  • Cloud knowledge especially Azure
  • Collaborates with stakeholders, IT, database engineers and other scientists
  • Hands-on knowledge in Azure Synapse and Azure Data Factory is a plus
Job Responsibility:
  • Designs, modifies, and builds new and scalable data processes
Employment Type: Full-time

Azure Data Engineer

As an Azure Data Engineer, you will design and maintain scalable data pipelines ...
Location: Not provided
Salary: Not provided
ACI Infotech
Expiration Date: Until further notice

Requirements:
  • 3–5 years of experience as a Data Engineer within the Azure ecosystem
  • Strong skills in SQL, Databricks, and Python
  • Hands-on experience with Azure Data Factory (ADF)
  • Power BI experience preferred
  • Familiarity with Delta Lake and/or Azure Synapse is a plus
Job Responsibility:
  • Develop, manage, and optimize ADF pipelines
  • Design and implement Databricks notebooks for ETL processes
  • Write and optimize SQL scripts for large-scale datasets
  • Collaborate with BI teams to support dashboard and reporting solutions
  • Ensure data quality, security, and compliance with governance policies
Employment Type: Full-time

Data Engineer (Azure)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location: Lisboa, Portugal
Salary: Not provided
Fyld
Expiration Date: Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or related
  • Relevant certifications in Azure, such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert
  • Hands-on experience with Azure services, especially those related to data engineering and analytics, such as Azure SQL Database, Azure Data Lake, Azure Synapse Analytics, Azure Databricks, Azure Data Factory, among others
  • Familiarity with Azure storage and compute services, including Azure Blob Storage, Azure SQL Data Warehouse, Azure HDInsight, and Azure Functions
  • Proficiency in programming languages such as Python, SQL, or C# for developing data pipelines, data processing, and automation
  • Knowledge of data manipulation and transformation techniques using tools like Azure Databricks or Apache Spark
  • Experience in data modeling, data cleansing, and data transformation for analytics and reporting purposes
  • Understanding of data architecture principles and best practices, including data lake architectures, data warehousing, and ETL/ELT processes
  • Knowledge of security and compliance features offered by Azure, including data encryption, role-based access control (RBAC), and Azure Security Center
  • Excellent communication skills, both verbal and written, to collaborate effectively with technical and non-technical teams
Employment Type: Full-time

Senior Data Engineer

We are seeking a highly skilled and motivated Senior Data Engineer/s to architec...
Location: Hyderabad, India
Salary: Not provided
Tech Mahindra
Expiration Date: January 30, 2026

Requirements:
  • 7-10 years of experience in data engineering with a focus on Microsoft Azure and Fabric technologies
  • Strong expertise in: Microsoft Fabric (Lakehouse, Dataflows Gen2, Pipelines, Notebooks); Azure Data Factory, Azure SQL, and Azure Data Lake Storage Gen2; Power BI and/or other visualization tools; Azure Functions, Logic Apps, and orchestration frameworks; SQL, Python, and PySpark/Scala
  • Experience working with structured and semi-structured data (JSON, XML, CSV, Parquet)
  • Proven ability to build metadata driven architectures and reusable components
  • Strong understanding of data modeling, data governance, and security best practices
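Several of the bullets above concern handling structured and semi-structured formats. A small, hypothetical illustration of flattening nested JSON records into tabular CSV rows using only the standard library (all field names are invented; at scale a Fabric Notebook or PySpark job would perform this step, typically landing in Parquet rather than CSV):

```python
import csv
import io
import json

# Hypothetical semi-structured input: nested JSON records.
raw = '''
[{"id": 1, "customer": {"name": "Ada", "country": "IN"}, "total": 40.0},
 {"id": 2, "customer": {"name": "Lin", "country": "US"}, "total": 15.5}]
'''

records = json.loads(raw)

# Flatten the nested customer object into a fixed tabular schema.
flat = [
    {"id": r["id"],
     "customer_name": r["customer"]["name"],
     "country": r["customer"]["country"],
     "total": r["total"]}
    for r in records
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "customer_name", "country", "total"])
writer.writeheader()
writer.writerows(flat)
print(buf.getvalue())
```

Choosing the flat schema up front is the data-modeling step; the transformation itself is then a mechanical projection from the nested records.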
Job Responsibility:
  • Design and implement ETL pipelines using Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Warehouse, SQL) and Azure Data Factory
  • Build and maintain a metadata driven Lakehouse architecture with threaded datasets to support multiple consumption patterns
  • Develop agent-specific data lakes and an orchestration layer for an uber-agent that can query across agents to answer customer questions
  • Enable interactive data consumption via Power BI, Azure OpenAI, and other analytics tools
  • Ensure data quality, lineage, and governance across all ingestion and transformation processes
  • Collaborate with product teams to understand data needs and deliver scalable solutions
  • Optimize performance and cost across storage and compute layers