At Seamless.AI, we’re seeking a highly skilled and experienced Freelance Principal Data Engineer (Independent Contractor) with expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies. The ideal candidate will have a proven track record in data acquisition and transformation, as well as experience working with large data sets and applying data matching and aggregation methodologies. This independent contractor role requires exceptional organizational skills and the ability to work autonomously while delivering high-quality solutions.
Job Responsibilities:
Design, develop, and maintain scalable ETL pipelines to acquire, transform, and load data from various sources into the data ecosystem
Work with stakeholders to understand data requirements and propose effective data acquisition and integration strategies
Implement data transformation logic using Python and relevant frameworks, ensuring efficiency and reliability
Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs
Optimize ETL processes to improve performance and scalability, particularly for large datasets
Apply data matching, deduplication, and aggregation techniques to enhance data accuracy and quality (see the illustrative sketch after this list)
Ensure compliance with data governance, security, and privacy best practices within the scope of project deliverables
Provide recommendations on emerging technologies and tools that enhance data processing efficiency
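For context, here is a minimal sketch of the kind of deduplication and aggregation work described above, written in PySpark. It is purely illustrative and not taken from the posting: the column names (customer_id, email, revenue, updated_at) and S3 paths are hypothetical placeholders.

# Illustrative only: deduplicate records and aggregate them with PySpark.
# All column names and paths below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dedup-aggregate-sketch").getOrCreate()

# Hypothetical raw input location.
records = spark.read.parquet("s3://example-bucket/raw/customers/")

# Deduplication: keep only the most recent record per customer_id.
latest_first = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
deduped = (
    records
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Aggregation: total revenue and distinct customers per email domain.
by_domain = (
    deduped
    .withColumn("domain", F.lower(F.split("email", "@").getItem(1)))
    .groupBy("domain")
    .agg(
        F.countDistinct("customer_id").alias("customers"),
        F.sum("revenue").alias("total_revenue"),
    )
)

# Hypothetical curated output location.
by_domain.write.mode("overwrite").parquet("s3://example-bucket/curated/revenue_by_domain/")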
Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent years of work experience
7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration
Professional experience with Spark and AWS pipeline development required
Fluency in English and Spanish is required
Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark)
Hands-on experience with AWS Glue or similar ETL tools and technologies
Solid understanding of data modeling, data warehousing, and data architecture principles
Expertise in working with large data sets, data lakes, and distributed computing frameworks
Experience developing and training machine learning models
Strong proficiency in SQL
Familiarity with data matching, deduplication, and aggregation methodologies
Experience with data governance, data security, and privacy practices
Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues
Excellent communication and collaboration skills, with the ability to work effectively while operating independently
Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously