Job Responsibilities:
Working as part of the Big Data Engineering team, which is responsible for transforming data into useful information for the data science and product teams.
Working with Linux systems and Hadoop to extract data from Hadoop databases and ingest it using Apache Sqoop.
Staging real-time data from the gateway into AWS S3 or Azure Blob Storage.
Analyzing data using SQL queries and transforming it through stages such as preprocessed, standardized, and filtered.
Implementing Spark jobs in Scala, using DataFrames and the Spark SQL API for faster data processing.
Using partitioning in the Spark session to improve load-time performance (a batch sketch of these Spark steps follows this list).
Creating pipelines in Azure Data Factory (ADF) using Linked Services, Datasets, and Pipelines to extract, transform, and load data from sources such as Azure SQL Database, Blob Storage, and Azure SQL Data Warehouse, and to write data back.
Importing data and developing Spark streaming pipelines in Java (see the streaming sketch after this list). Work is performed under supervision.
Travel and/or relocation to unanticipated client sites is required.
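The Spark duties above (SQL-based transformation, DataFrames and the Spark SQL API in Scala, and partitioning for load performance) could look roughly like the following. This is a minimal sketch, not the team's actual code: the storage paths, column names, and partition count are illustrative assumptions.

// Minimal Spark batch sketch: staged data -> preprocessed -> standardized -> filtered.
// Paths and column names are hypothetical, not details from the posting.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object StagingPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("staging-pipeline")
      .getOrCreate()

    // Read real-time data previously staged to object storage (S3 shown; Blob is analogous).
    val raw: DataFrame = spark.read.json("s3a://example-bucket/staged/events/") // hypothetical path

    // Preprocess: drop malformed rows. Standardize: normalize a timestamp column.
    val standardized = raw
      .na.drop(Seq("event_id"))                              // hypothetical column
      .withColumn("event_ts", to_timestamp(col("event_ts"))) // hypothetical column

    // Filter stage, expressed through the Spark SQL API.
    standardized.createOrReplaceTempView("events")
    val filtered = spark.sql(
      "SELECT * FROM events WHERE event_ts >= date_sub(current_date(), 7)")

    // Repartition before the write so the load parallelizes,
    // as the partitioning duty above describes.
    filtered
      .repartition(200, col("event_ts")) // illustrative partition count
      .write.mode("overwrite")
      .parquet("s3a://example-bucket/filtered/events/") // hypothetical path

    spark.stop()
  }
}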
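For the streaming duty, a Structured Streaming read-and-stage flow might look like the sketch below. The posting names Java for this pipeline; Scala is used here only for consistency with the batch sketch, and the Kafka source, broker address, topic, schema, and paths are all assumptions (the posting says only that data arrives from a gateway).

// Minimal Structured Streaming sketch for the streaming-ingest duty above.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object StreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("stream-ingest").getOrCreate()

    // Hypothetical event schema; the real gateway payload is not described in the posting.
    val schema = new StructType()
      .add("event_id", StringType)
      .add("payload", StringType)

    val stream = spark.readStream
      .format("kafka")                                  // assumed source
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "gateway-events")            // hypothetical topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Stage the parsed stream into object storage for the batch pipeline to pick up.
    stream.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/staged/events/")     // hypothetical path
      .option("checkpointLocation", "s3a://example-bucket/chk/") // hypothetical path
      .start()
      .awaitTermination()
  }
}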
Requirements:
Master's degree in Computer Science, IT, IS, or Engineering (any), or a closely related field.