LogicMonitor is advancing observability through AI‑driven data intelligence, connecting massive telemetry streams with the reasoning capabilities of large language models. We’re looking for a Senior Software Engineer who sits at the intersection of backend systems and data engineering and can build the scalable data pipelines, APIs, and retrieval frameworks that fuel Edwin AI, Dexda, and other AIOps products. You’ll design, build, and optimize the data infrastructure that makes GenAI‑powered insights reliable, explainable, and real‑time.
Job Responsibilities:
Design and build streaming and batch data pipelines that process metrics, logs, and events for AI workflows
Develop ETL and feature‑extraction pipelines using Python and Java microservices
Integrate data ingestion and enrichment from multiple observability sources into AI‑ready formats
Build resilient data orchestration using Kafka, Airflow, and Redis Streams
Develop data indexing and semantic search for large‑scale observability and operational data
Work with structured and unstructured data lakes and warehouses (Delta Lake, Iceberg, ClickHouse)
Collaborate with the AI Platform team to manage embeddings, metadata, and model context storage
Optimize latency and throughput for retrieval, query expansion, and AI response generation
Build and maintain Java microservices (Spring Boot) that serve AI and analytics data to Edwin and AIOps applications
Develop Python APIs (FastAPI / LangGraph) for LLM orchestration, summarization, and correlation reasoning
Implement schema contracts and streaming protocols (REST, gRPC, SSE, WebSockets) between services
Ensure fault‑tolerant, observable, and performant API infrastructure
Instrument services with OpenTelemetry for unified metrics, tracing, and logging
Implement data validation, schema evolution, and lineage tracking across AI pipelines
Enforce data privacy, RBAC, and compliance for model inputs and stored context
Collaborate with SRE and AI teams to monitor and optimize end‑to‑end AI system performance
Requirements:
Bachelor’s degree in Computer Science, Data Engineering, or a related field
4-5 years of experience in backend or data systems engineering
Experience building streaming data pipelines (Kafka, Spark, or a similar technology)
Strong programming background in Java and Python, including microservice design
Experience with ETL, data modeling, and distributed storage systems
Familiarity with LLM pipelines, embeddings, and vector retrieval
Understanding of Kubernetes, containerization, and CI/CD workflows
Awareness of data governance, validation, and lineage best practices
Strong communication and collaboration across AI, Data, and Platform teams