At Cloudera, our Data Services Pillar is the heart of data innovation. We don’t just work with technology; we build it. Our mission is to empower data practitioners by creating seamless, enterprise-grade experiences for data engineering, warehousing, streaming, operational databases, and AI.

This is your opportunity to build cloud-native solutions that are deployable anywhere—whether in massive clusters on any cloud provider or in private data centers. You’ll work with cutting-edge technologies like Trino, Spark, Airflow, and advanced AI inferencing systems to shape the future of analytics. Your code will directly influence how data engineers, analysts, and developers worldwide find value in their data.

We believe in the power of open source. You’ll collaborate with project committers, contributing upstream to keep technologies like Apache Hive and Impala evolving. You’ll harden these engines for rock-solid security, optimize them for peak performance, and make them run effortlessly across all environments. Join us and help build the trusted, cloud-native platform that powers insights for the most data-intensive companies on the planet.
Job Responsibility:
Review, simplify, and rationalize existing test cases and our internal testing framework code
Prepare and implement test plans for newly developed features, and take part in the design process to ensure testability is considered from the start of feature development
Review and work on the different levels of testing within open source projects
Work with our internal teams to integrate different layers of tests into our internal workflows related to development and supporting our customers
Continuously improve the quality of the storage layer within Cloudera's Data Platform
Develop an understanding of popular open source projects in the Apache Hadoop ecosystem, hyperscale cloud platforms like AWS and Azure, and container technologies like Kubernetes and Docker
Requirements:
Strong programming skills in Python and in at least one of Java or JavaScript
Ability to design, build, and maintain automated testing frameworks, tools, and test suites, preferably in Python (pytest) or Java (TestNG/JUnit)
Sound knowledge of test methodologies, including creation of test cases and test plans
Good debugging skills, especially with distributed systems, preferably on Linux
Ability to work closely with Engineering teams to devise test scenarios for new features involving Big Data technologies
Ability to design and maintain CI/CD pipelines for enabling fast-paced, low-touch releases of our product
Ability to work effectively both independently and as part of a team
BS/MS in Computer Science or related field
4+ years of experience in test development and in building automation frameworks and tools
Strong knowledge of back-end testing in any of the following areas: web services, databases, enterprise storage products, or large-scale distributed systems
Strong knowledge of popular test automation frameworks and test automation methodologies
Familiarity with DevOps technologies such as Docker, Kubernetes, Ansible, Jenkins, GitHub, Maven, etc.
Excellent communication and collaboration skills
Comfortable working in fast-paced environments
Nice to have:
Working knowledge of storage systems, along with experience developing and executing comprehensive storage testing strategies that cover functional, performance, scalability, stress, integrity, and security aspects, will be considered a strong asset
Knowledge of Public Clouds (AWS/Azure) and/or Container Technologies (Docker, Kubernetes) is a plus
Working knowledge of Apache Hive, Impala, Hue, and the Big Data ecosystem is an added advantage