LogicMonitor® is the AI-first hybrid observability platform powering the next generation of digital infrastructure. LogicMonitor delivers complete visibility and actionable intelligence across on-premises, cloud, and edge environments. By anticipating issues before they strike, optimizing resources in real time, and enabling faster, smarter decisions, LogicMonitor helps IT and business leaders protect margins, accelerate innovation, and deliver exceptional digital experiences without compromise.
Job Responsibilities:
Design, build, and deploy AI-powered solutions that enhance observability, automation, and insights across LogicMonitor’s platform
Develop NLP models, generative AI solutions, and AI-driven analytics that drive automation and efficiency
Work closely with cross-functional teams across product, engineering, and infrastructure to integrate state-of-the-art AI models into LogicMonitor’s observability platform
Build and optimize large language model (LLM)-based solutions that generate human-readable incident reports, root cause explanations, and remediation suggestions
Leverage retrieval-augmented generation (RAG) to surface relevant documentation, logs, and knowledge base insights for IT teams (a minimal sketch of this pattern follows this list)
Develop semantic search and context-aware retrieval solutions to help IT teams quickly find relevant data, logs, and metrics
Optimize AI models for real-time performance in distributed, cloud-native environments (AWS, GCP, Kubernetes, etc.)
Work cross-functionally with platform engineers, product teams, and data scientists to align AI-driven features with customer needs
Stay ahead of AI/ML advancements
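
For context, the RAG-assisted incident summarization described above could be sketched roughly as follows. This is a minimal illustration under assumed interfaces, not LogicMonitor’s implementation; the embed and llm_complete helpers are hypothetical placeholders for whatever embedding model and LLM client a real system would use.

    from typing import Callable, List, Tuple
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve_context(
        alert_text: str,
        documents: List[Tuple[str, np.ndarray]],  # (doc_text, precomputed embedding)
        embed: Callable[[str], np.ndarray],       # hypothetical embedding function
        top_k: int = 3,
    ) -> List[str]:
        # Rank knowledge-base documents by similarity to the alert and keep the top k.
        query_vec = embed(alert_text)
        ranked = sorted(documents, key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]

    def build_incident_prompt(alert_text: str, context_docs: List[str]) -> str:
        # Assemble the retrieved context and the alert into a single prompt for an LLM.
        context = "\n\n".join(f"- {doc}" for doc in context_docs)
        return (
            "You are assisting an IT operations team.\n"
            f"Relevant documentation and logs:\n{context}\n\n"
            f"Alert:\n{alert_text}\n\n"
            "Write a short, human-readable incident summary with a likely root cause "
            "and suggested remediation steps."
        )

    def summarize_incident(alert_text, documents, embed, llm_complete):
        # llm_complete is a hypothetical LLM client call (hosted API, local model, etc.).
        prompt = build_incident_prompt(alert_text, retrieve_context(alert_text, documents, embed))
        return llm_complete(prompt)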
Requirements:
4-5 years of experience in AI/ML engineering, with a focus on generative AI, NLP, and AI-driven automation
Strong proficiency in Python, Java, or similar languages for AI model development and deployment
Hands-on experience with LLMs, RAG (Retrieval-Augmented Generation), and prompt engineering
Expertise in embedding-based search, semantic retrieval, and transformer-based models (e.g., the OpenAI or Hugging Face ecosystems); an illustrative snippet follows this list
Experience with microservices, REST APIs, cloud platforms (AWS, GCP), and container technologies (Docker, Kubernetes, etc.)
Strong understanding of search algorithms, ranking techniques, and text clustering methods
Strong problem-solving skills with the ability to drive innovation in AI-powered observability solutions
Excellent communication and collaboration skills, with experience working in cross-functional teams
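
Purely as an illustration, the hypothetical embed helper in the sketch above could be backed by an off-the-shelf embedding model from the Hugging Face ecosystem; the snippet below uses the open-source sentence-transformers library with an example model name, one possible choice among many.

    from sentence_transformers import SentenceTransformer

    # Example model choice; any sentence-embedding model could be substituted.
    _model = SentenceTransformer("all-MiniLM-L6-v2")

    def embed(text: str):
        # Produces a dense vector suitable for cosine-similarity ranking of logs,
        # documentation, and knowledge-base entries.
        return _model.encode(text)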
Nice to have:
Familiarity with observability, ITOps, or monitoring platforms