Microsoft Purview is redefining how data protection works in an AI-native world. As Copilot adoption accelerates and collaboration patterns evolve, protection can no longer live as complex policy configuration buried in portals. It must be intelligent, adaptive, and operational by default.

We are looking for a Senior Machine Learning Engineer to build AI-native experiences at the core of our data security platform. This role focuses on applying Generative AI and applied machine learning to solve real customer problems in security, risk reasoning, and automation. You will design, build, and productionize AI systems that power risk scoring, reasoning layers, explainability, AI graders, and intelligent recommendations across multi-tenant SaaS environments.

This is a hands-on role requiring strong depth in LLMs, applied ML systems, evaluation frameworks, and large-scale distributed architectures. You will work closely with Product, Applied Scientists, and Engineering teams to translate ambiguous product problems into robust, reliable, and trustworthy AI capabilities that operate at enterprise scale. This is not a research-only role; it is a build-and-ship role focused on measurable customer impact.
Job Responsibilities:
Architect and productionize AI systems that power risk scoring, classifier-level reasoning, explainability narratives, and intelligent recommendations across multi-tenant SaaS environments
Design and build GenAI pipelines (RAG + agentic orchestration) including indexing, embedding strategies, retrieval optimization, prompt frameworks, tool integration, and memory/control flows tailored for security use cases (see the retrieval sketch after this list)
Own model evaluation and guardrails, defining offline/online metrics (precision, recall, risk calibration, hallucination control), building evaluation harnesses, and implementing responsible AI safeguards (see the evaluation harness sketch after this list)
Optimize model trade-offs (latency, cost, accuracy, safety) and deploy scalable inference systems with monitoring, drift detection, and feedback loops from customer signals
Partner with Product and Engineering to translate ambiguous security requirements into reliable AI-native workflows that are simple, trustworthy, and adoption-ready for SMB/SMC and Enterprise customers
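To make the pipeline bullet above concrete, here is a minimal retrieval sketch in Python. It substitutes a toy bag-of-words embedding for a production embedding model; the names embed and retrieve and the sample documents are illustrative only, not part of any Purview API.

# Minimal retrieval-augmented generation (RAG) sketch: index documents,
# retrieve the top-k matches for a query, and assemble a grounded prompt.
# The bag-of-words embedding is a stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    # Toy embedding: L2-normalized term-frequency vector (dict form).
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {term: c / norm for term, c in counts.items()}

def cosine(a, b):
    return sum(w * b.get(t, 0.0) for t, w in a.items())

def retrieve(query, index, k=2):
    q = embed(query)
    scored = sorted(index, key=lambda d: cosine(q, d["vec"]), reverse=True)
    return scored[:k]

docs = [
    "Sensitivity labels control access to confidential documents.",
    "Risk scores combine signal severity with user context.",
    "Retention policies govern how long records are kept.",
]
index = [{"text": d, "vec": embed(d)} for d in docs]

hits = retrieve("How is a risk score computed?", index)
prompt = "Answer using only these sources:\n" + "\n".join(h["text"] for h in hits)
print(prompt)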
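Likewise, a minimal offline-evaluation sketch for the metrics bullet, assuming binary risk labels and using token overlap with retrieved sources as a crude stand-in for an AI grader; the metric names and example data are illustrative.

# Minimal offline evaluation harness: precision/recall for a binary
# risk classifier plus a simple groundedness check for generated text.
def precision_recall(preds, labels):
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def groundedness(answer, sources):
    # Fraction of answer tokens that appear in the retrieved sources;
    # a low score flags a potential hallucination for human review.
    source_terms = set(" ".join(sources).lower().split())
    tokens = answer.lower().split()
    return sum(t in source_terms for t in tokens) / max(len(tokens), 1)

preds = [True, False, True, True]
labels = [True, False, False, True]
print(precision_recall(preds, labels))  # (0.666..., 1.0)
print(groundedness("risk scores combine severity",
                   ["Risk scores combine signal severity."]))  # 1.0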
Requirements:
8+ years of experience building and deploying machine learning systems in production environments
Strong hands-on experience with LLMs, RAG architectures, and applied Generative AI systems, including prompt design and grounding techniques
Proven expertise in designing end-to-end ML pipelines (data ingestion → feature engineering → training → evaluation → serving)
Experience defining and operationalizing offline and online evaluation frameworks (precision/recall, calibration, drift detection, A/B testing) and implementing guardrails for reliability and safety (a small drift-detection sketch follows this list)
Strong programming skills in Python, with experience in modern ML frameworks (e.g., PyTorch/TensorFlow) and cloud-based ML infrastructure
Experience building multi-tenant SaaS AI systems with attention to scalability, latency, and cost optimization
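As one concrete example of the drift detection mentioned above, here is a small sketch of the population stability index (PSI) over a model score. The bin count, the example windows, and the ~0.2 alert threshold are illustrative assumptions, not a prescribed setup.

# Population stability index (PSI): compares the distribution of a model
# score between a reference window and a live window. Values above ~0.2
# are commonly treated as meaningful drift.
import math

def psi(reference, live, bins=10):
    lo, hi = min(reference), max(reference)
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = max(0, min(int((x - lo) / (hi - lo) * bins), bins - 1))
            counts[i] += 1
        # Laplace-smooth empty bins so the log terms stay finite.
        return [(c + 1) / (len(xs) + bins) for c in counts]
    ref, cur = hist(reference), hist(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

reference = [i / 100 for i in range(100)]              # last month's scores
live = [min(1.0, i / 100 + 0.15) for i in range(100)]  # shifted scores today
print(f"PSI = {psi(reference, live):.3f}")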
Nice to have:
Experience fine-tuning or adapting foundation models (e.g., LoRA/PEFT) and building agentic orchestration systems (a minimal LoRA sketch appears below)
Knowledge of data security, compliance, or risk-scoring domains
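For the LoRA/PEFT item, a minimal sketch using the Hugging Face peft library on a toy two-layer torch model. The layer names, rank, and scaling values are illustrative assumptions; a real setup would target a foundation model's attention projections instead.

# Minimal LoRA sketch with Hugging Face's peft library: wrap a toy
# two-layer model so only the low-rank adapter weights are trainable.
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj_in = nn.Linear(64, 128)
        self.proj_out = nn.Linear(128, 2)
    def forward(self, x):
        return self.proj_out(self.proj_in(x).relu())

config = LoraConfig(
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=16,       # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["proj_in", "proj_out"],  # layers to adapt
)
model = get_peft_model(ToyModel(), config)
model.print_trainable_parameters()  # adapters only; base weights stay frozen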