As an experienced AI Security Engineer, you will play a critical role in advancing our data and AI security capabilities across the organisation. You will help shape the next generation of our data security platform, driving innovation, resilience, and automation at scale. Your expertise will expand the product roadmap, enhance our security posture, and ensure we proactively identify, mitigate, and respond to risks associated with both traditional data systems and emerging GenAI technologies. In this role, you will combine practical engineering experience with deep knowledge of Data Security, AI Security, and cloud-native security practices. Your work will draw on insights from enterprise scanning across Data Loss Prevention (DLP), Data Privacy and Protection, GenAI-specific, SIEM (Security Information and Event Management), and vulnerability management security controls.
Job Responsibilities:
Drive enhancements to the AI/GenAI security platform and scanning infrastructure, ensuring alignment with strategic goals and approved budgets
Work alongside engineers to design, build, and optimise cloud-native Data & GenAI Security services
Support stakeholders in adopting and leveraging the latest cloud-native Data & GenAI Security application architectures
Ensure all Data Security architecture and cloud infrastructure accommodate the latest security and software lifecycle patterns
Conduct regular threat assessments to identify vulnerabilities in AI/ML systems
Develop benchmarks, tools and scripts to automate vulnerability testing for AI/ML applications
Perform code reviews and penetration testing of AI-related software
Stay updated on emerging trends, technologies, and best practices in AI/ML security
Execute PoCs/PoVs to evaluate emerging trends, technologies, and practices in the industry
Requirements:
Expert knowledge in at least one industry-leading AI/GenAI security platform or framework, including tools for model governance, data privacy, and risk mitigation, such as Azure AI Security, AWS AI Guardrails, Google Vertex AI Security, IBM Watson OpenScale, or equivalent solutions
Experience conducting threat modelling, penetration testing, and vulnerability assessments across AI/ML ecosystems, including data pipelines, model APIs, and supporting infrastructure
Hands-on expertise with core cybersecurity technologies, such as firewalls, SIEM platforms, IDS/IPS, and related security tooling
Proficiency in Python, with the ability to develop automation, security tooling, and data-driven scripts
Strong understanding of secure software development practices, including OWASP principles and DevSecOps methodologies integrated into CI/CD workflows
Experience securing cloud and AI workloads on major cloud platforms such as AWS and Azure
Working knowledge of machine learning concepts, data processing techniques, and common AI/ML frameworks (e.g., PyTorch, scikit-learn, LangChain) is considered a strong advantage