Pyspark Data Engineer Jobs

9 Job Offers

Data Engineer - Pyspark
Seeking a skilled Data Engineer in Haryana, India, to design and optimize big data systems. This role requires 4+ years of expertise in PySpark, Hadoop, Hive, and cloud technologies, with strong SQL, Python, and Scala skills. You will re-engineer complex data processes and work with large-scale d...
Location: India, Haryana
Salary: Not provided
Company: Citi
Expiration Date: Until further notice
Pyspark Data Engineer
Seeking a Pyspark Data Engineer in Chennai with 4-8 years of experience. The role requires expertise in Python, SQL, PySpark, and big data frameworks to build and optimize large-scale data processing systems. You will apply analytical skills to develop robust solutions and ensure data quality. Th...
Location: India, Chennai
Salary: Not provided
Company: Citi
Expiration Date: Until further notice
Data Engineer - AWS, PySpark
Seeking a Data Engineer in Bengaluru or Pune to build and maintain advanced AWS and PySpark data pipelines. You will design scalable solutions using Snowflake, DBT, and AWS analytics services (Glue, S3, Lambda). The role offers private medical care, pension, and a chance to revolutionize data arc...
Location: India, Bengaluru; Pune
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice
Data Engineer - AWS, PySpark, DevOps
Seeking a Data Engineer in Bengaluru or Pune to build and maintain scalable data pipelines and lakes using AWS and Snowflake. You will leverage PySpark, DBT, and advanced SQL to drive innovation and ensure data accuracy and security. This hybrid role offers a competitive package, private healthca...
Location: India, Bengaluru; Pune
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice
Data Engineer - Pyspark
Seeking a skilled Data Engineer with 5-8 years of experience to design and build scalable data pipelines in Chennai. The role requires deep expertise in PySpark, Spark Java, and Big Data frameworks (Hadoop, Kafka) for processing large-scale financial data. You will develop solutions for risk mode...
Location: India, Chennai
Salary: Not provided
Company: Citi
Expiration Date: Until further notice
Data Engineer - PySpark
Join Barclays in Pune as a Data Engineer - PySpark. You will design scalable data pipelines and lakes using PySpark, AWS (Glue, S3, Lambda), and open-table formats. Collaborate with data scientists to deploy ML models and ensure robust data architecture. We offer private medical care, pension con...
Location: India, Pune
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice
Data Engineer - AWS, Pyspark
Seeking a skilled Data Engineer in Bengaluru to build and maintain advanced AWS and Snowflake data pipelines. You will leverage PySpark, Glue, and DBT to design scalable data solutions and warehouses. This role offers private medical care, pension contributions, and a chance to drive digital inno...
Location: India, Bengaluru
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice
Data Engineer - PySpark
Seeking a Data Engineer in Bengaluru to build and maintain scalable data pipelines using PySpark and AWS. You will design efficient data solutions with Snowflake and DBT, ensuring data accuracy and security. This role offers private medical care, pension contributions, and a competitive holiday a...
Location: India, Bengaluru
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice
Data Engineer - PySpark
Seeking a Data Engineer - PySpark in Pune to build and maintain robust data pipelines, warehouses, and lakes on AWS and Snowflake. You will design scalable solutions using PySpark, Glue, and DBT to ensure data accuracy and security. This role offers private medical care, pension contribution, and...
Location: India, Pune
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice
Master the intersection of big data and advanced analytics by exploring PySpark Data Engineer jobs. A PySpark Data Engineer is a specialized professional who designs, builds, and manages large-scale data processing systems using PySpark, the Python API for Apache Spark. This role is central to the modern data ecosystem, transforming raw, often massive, datasets into clean, structured, and reliable information that powers business intelligence, machine learning models, and data-driven decision-making across an organization.

Professionals in these roles are the architects of data pipelines. Their day-to-day responsibilities typically involve developing and maintaining scalable, high-performance data processing applications. They write complex PySpark code to perform ETL (Extract, Transform, Load) and ELT processes, efficiently handling data from diverse sources such as data lakes, databases, and streaming services. A core part of their work is data modeling, where they design and optimize data structures, such as data warehouses and data marts, to ensure they are efficient for analytical queries and reporting. They are also responsible for the robustness of these pipelines, implementing data quality checks, monitoring performance, and troubleshooting issues to guarantee the timely and accurate delivery of data products.

The skill set for a PySpark Data Engineer is both deep and broad. Mastery of PySpark is fundamental, including a strong understanding of its core concepts such as DataFrames, RDDs, and Spark SQL for distributed computing. Proficiency in Python programming is a given, and a comprehensive grasp of SQL is essential for data querying and manipulation. These roles also demand experience with big data ecosystems, which often includes familiarity with technologies like Hadoop and Hive, and with cloud-based data platforms such as AWS, Azure, or GCP. Knowledge of data warehousing concepts, along with both relational and NoSQL databases, is highly valued.
As the field evolves, modern data engineers are also expected to have DevOps skills, including CI/CD pipelines, version control (such as Git), and infrastructure-as-code tools (such as Terraform) to automate the deployment and management of data infrastructure. A strong grasp of data governance principles, including data security, privacy, and quality, is another common requirement. For those with a problem-solving mindset and a passion for building robust data infrastructure, PySpark Data Engineer jobs offer a challenging and rewarding career path at the forefront of technology. These positions are critical for any organization looking to leverage its data assets, making skilled engineers highly sought after. If you excel at creating order from data chaos and want to enable impactful business insights, a career as a PySpark Data Engineer could be your ideal next step.
