Graph Database Engineer

Leading Path Consulting

Location:
United States, Chantilly

Contract Type:
Not provided

Salary:

Not provided

Job Description:

We are looking for an experienced Systems Software Engineer to join our team, interested in designing, developing, and maintaining essential software systems. The ideal candidate will have demonstrated experience integrating solutions with graph databases such as JanusGraph, a strong background in systems software development, experience with third-party system integration, expertise in ElasticSearch, and a solid understanding of data modeling concepts. This is a fantastic opportunity to work on a challenging project that calls for innovative solutions and collaborative problem-solving.

Job Responsibility:

  • Design, develop, test, and deploy scalable and efficient software solutions
  • Collaborate with cross-functional teams to identify and prioritize project requirements
  • Participate in code reviews and ensure high-quality, modular, and reusable code
  • Troubleshoot and debug issues in the application, including performance optimization and error handling
  • Stay up-to-date with industry trends and emerging technologies, applying this knowledge to improve our application
  • Design and implement data models that meet the needs of the application, ensuring data consistency and integrity

Requirements:

  • Bachelor’s Degree in Computer Science, Electrical or Computer Engineering, or a related technical discipline, or the equivalent combination of education, technical training, or work/military experience
  • 5+ years of related software development experience
  • Extensive expertise in Python and NodeJS
  • 5+ years of experience in systems software development in NodeJS or Python
  • Strong proficiency working with graph databases (for example, JanusGraph) and graph query languages
  • Proven experience with third-party system integration using APIs, webhooks, and other integration methods
  • Strong understanding of software design patterns, principles and best practices
  • Excellent problem-solving skills, with the ability to work effectively in a team environment
  • Basic understanding of data modeling concepts, including entity-relationship, data normalization, and denormalization
  • Experience with Git workflows, including feature branching, pull requests and code reviews
  • Ability to work effectively in a Linux-based development environment
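To make the graph-database bullets above concrete, here is a minimal, illustrative sketch in pure Python: an in-memory adjacency list and a breadth-first traversal. A real JanusGraph deployment would express the same traversal in Gremlin; all vertex names here are hypothetical, invented for the example.

```python
from collections import deque

# Hypothetical property graph: engineers and the systems they maintain.
edges = {
    "alice": ["auth-service", "graph-api"],
    "bob": ["graph-api"],
    "graph-api": ["janusgraph"],
    "auth-service": ["postgres"],
}

def reachable_within(graph, start, max_hops):
    """Return every vertex reachable from `start` in at most `max_hops` edges."""
    seen = {start}
    out = set()
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand past the hop limit
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                out.add(nbr)
                frontier.append((nbr, depth + 1))
    return out

print(sorted(reachable_within(edges, "alice", 2)))
# ['auth-service', 'graph-api', 'janusgraph', 'postgres']
```

The equivalent Gremlin traversal would be roughly `g.V('alice').repeat(out()).times(2).dedup()`; the data-modeling idea (entities as vertices, relationships as edges) is the same either way.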

Nice to have:

  • Hands-on experience with AWS Lambda, EventBridge and SQS
  • Experience with Node.js, Express, MongoDB and Cassandra
  • In-depth knowledge of ElasticSearch, including indexing, querying and aggregation
  • Knowledge of containerization leveraging Kubernetes
  • Familiarity with CI/CD pipelines and automation tools such as Jenkins or CircleCI
  • Hands-on experience working with message brokers such as RabbitMQ or AWS SQS
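The ElasticSearch indexing/querying/aggregation skills listed above boil down to fluency with the Query DSL. Below is a hedged sketch of a search body combining a full-text query, a filter, and a terms aggregation; the index fields (`title`, `location`, `company`) are hypothetical.

```python
# Hypothetical Elasticsearch Query DSL body for a job-search index.
# Field names are invented for illustration.
search_body = {
    "query": {
        "bool": {
            # Scored full-text match on the job title
            "must": [{"match": {"title": "graph engineer"}}],
            # Unscored exact filter on the keyword sub-field
            "filter": [{"term": {"location.keyword": "Chantilly"}}],
        }
    },
    "aggs": {
        # Bucket the matching jobs by company
        "jobs_per_company": {"terms": {"field": "company.keyword", "size": 10}}
    },
    "size": 20,
}
```

With the official Python client this body would typically be passed to something like `es.search(index="jobs", body=search_body)`, and the buckets read back from `response["aggregations"]["jobs_per_company"]["buckets"]`.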

What we offer:

  • Vacation – 5 weeks of accrued paid vacation per year
  • Holidays - Paid holidays published annually by the Office of Personnel Management, excluding Inauguration Day
  • 100% paid for Health Benefits (United Healthcare, Guardian Dental, VSP Vision, MetLife, Life and Disability Insurance and annual $1500 employer HSA contribution on qualified plans)
  • 6% 401k Contribution
  • Training Reimbursement – Approved training and education expenses will be reimbursed
  • Travel Expenses – Approved travel expenses will be reimbursed

Additional Information:

Job Posted:
January 22, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Graph Database Engineer

Learning Process Engineer

This is a career-defining opportunity to play a crucial role in a hyper-scale AI...
Location
United States, Salt Lake City
Salary:
Not provided
PassiveLogic
Expiration Date
Until further notice
Requirements
  • Technical background in computer science, AI/ML, data engineering, or knowledge systems
  • Experienced with graph databases (Neo4j, TigerGraph, Weaviate, Neptune), Python/C++, graph query languages (Cypher, Gremlin, GraphQL, SPARQL), graph ML/embeddings, and building ETL pipelines, event-driven systems, and real-time feedback loops
  • Understanding of feedback-driven model improvement, reinforcement learning, or adaptive systems
  • Experience working cross-functionally with engineers, designers, and product managers
  • Analytical mindset: ability to define success metrics, run experiments, and interpret results
  • Excellent communication skills and a collaborative, problem-solving approach
  • Background in process engineering, systems design, product operations, or applied AI/ML
  • Strong systems thinking: ability to model complex workflows and simplify them into actionable processes
  • Familiarity with human-in-the-loop learning, adaptive systems, or feedback-driven workflows
  • Proven experience: 5+ years in developing software with an ecosystem nature
Job Responsibility
  • Architect feedback pipelines: Build and maintain data ingestion and labeling processes that transform user interactions into structured learning signals
  • Design graph-based knowledge structures: Model, update, and optimize workflows in a graph database (e.g., Neo4j, ArangoDB, Weaviate, or similar)
  • Implement adaptive logic: Use graph queries and embeddings to inform recommendations, predictions, and workflow adaptation
  • Integrate human-in-the-loop learning: Deploy mechanisms that incorporate user corrections and contextual feedback into graph representations and model updates
  • Collaborate with ML and software engineers: Define retraining strategies, model evaluation criteria, and experiment frameworks that leverage graph-based data
  • Automate performance monitoring: Develop dashboards and metrics for tracking how graph-driven learning impacts system accuracy, adoption, and efficiency
What we offer
  • Competitive compensation
  • Generous equity share package
  • Medical, dental and vision coverage
  • Disability and life Insurance options
  • Flex PTO
  • Team-building events
  • Free catered lunch in the office Monday — Friday
  • Free ski pass (We are at the base of Big Cottonwood Canyon)
  • Free National Park pass
  • Onsite Gym
  • Fulltime

Learning Process Engineer

We’re building the next generation of AI-powered productivity tools for autonomo...
Location
Netherlands, Amsterdam
Salary:
Not provided
PassiveLogic
Expiration Date
Until further notice
Requirements
  • Technical background in computer science, AI/ML, data engineering, or knowledge systems
  • Experienced with graph databases (Neo4j, TigerGraph, Weaviate, Neptune), Python/C++, graph query languages (Cypher, Gremlin, GraphQL, SPARQL), graph ML/embeddings, and building ETL pipelines, event-driven systems, and real-time feedback loops
  • Understanding of feedback-driven model improvement, reinforcement learning, or adaptive systems
  • Experience working cross-functionally with engineers, designers, and product managers
  • Analytical mindset: ability to define success metrics, run experiments, and interpret results
  • Excellent communication skills and a collaborative, problem-solving approach
  • Background in process engineering, systems design, product operations, or applied AI/ML
  • Strong systems thinking: ability to model complex workflows and simplify them into actionable processes
  • Familiarity with human-in-the-loop learning, adaptive systems, or feedback-driven workflows
  • Proven experience: 5+ years in developing software with an ecosystem nature
Job Responsibility
  • Architect feedback pipelines: Build and maintain data ingestion and labeling processes that transform user interactions into structured learning signals
  • Design graph-based knowledge structures: Model, update, and optimize workflows in a graph database (e.g., Neo4j, ArangoDB, Weaviate, or similar)
  • Implement adaptive logic: Use graph queries and embeddings to inform recommendations, predictions, and workflow adaptation
  • Integrate human-in-the-loop learning: Deploy mechanisms that incorporate user corrections and contextual feedback into graph representations and model updates
  • Collaborate with ML and software engineers: Define retraining strategies, model evaluation criteria, and experiment frameworks that leverage graph-based data
  • Automate performance monitoring: Develop dashboards and metrics for tracking how graph-driven learning impacts system accuracy, adoption, and efficiency
What we offer
  • Competitive compensation
  • Generous equity share package
  • Pension plan
  • Paid time off
  • Commute Coverage (NS Business Card or Car allowance)
  • In Office Lunch
  • Fun office-wide activities quarterly
  • Worldwide ski/snowboard pass

Staff Software Engineer, Social Graph

We are seeking a Staff Software Engineer with deep expertise in graph theory, gr...
Location
United States, San Francisco
Salary:
181000.00 - 271000.00 USD / Year
GoFundMe
Expiration Date
Until further notice
Requirements
  • 8+ years of industry experience, including significant experience at senior / staff / principal levels
  • Demonstrated expertise launching and scaling graph-based applications in production
  • Deep understanding of graph theory, graph algorithms (e.g., traversal, clustering, centrality), and modern graph data structures
  • Expert-level experience with graph databases (Neo4j, TigerGraph, JanusGraph, DGL-backed systems, etc.) and efficient graph querying
  • Proven ability to design high-scale pipelines for ingesting and transforming social or behavioral data
  • Experience with distributed streaming frameworks (Kafka, Flink, Spark Streaming)
  • Hands-on experience incorporating graph-derived features into recommendation, ranking, trust, or safety models
  • Familiarity with Graph Neural Networks (GNNs), graph embeddings, or graph-based ranking systems
  • Strong product intuition and ability to articulate how graph systems drive business outcomes
  • Ability to influence architectural direction and mentor teams
Job Responsibility
  • Serve as the technical lead for initiatives related to social graph modeling, storage, retrieval, and computation
  • Architect and scale graph databases and graph query systems capable of supporting billions of nodes and edges with low-latency performance
  • Design and ship pipelines for ingesting, cleaning, and transforming social and behavioral data into graph structures
  • Partner with ML teams to productionize graph-based features, including embeddings, similarity signals, trust metrics, and GNN-powered ranking features
  • Lead the development of graph-informed recommendation, trust, and safety systems, ensuring models reflect real-world connectivity patterns
  • Define and implement feature engineering strategies leveraging graph topology (e.g., mutual connections, influence scoring, community structure)
  • Contribute to architecture decisions related to streaming systems (Kafka, Flink, Spark Streaming) and real-time graph updates
  • Mentor engineers and guide best practices on graph design, distributed systems, feature computation, and ML integration
  • Collaborate with Product to translate graph capabilities into business-impacting features that drive trust, engagement, and discovery
  • Ensure reliability, scalability, observability, and data quality in all graph-related systems
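The graph-topology feature engineering described above (mutual connections, influence scoring) can be sketched in a few lines of pure Python. This is an illustrative toy, not GoFundMe's implementation; the user ids and friend sets are invented.

```python
# Hypothetical friend lists keyed by user id.
friends = {
    "u1": {"u2", "u3", "u4"},
    "u2": {"u1", "u3"},
    "u5": {"u4"},
}

def mutual_connections(a, b):
    """Users connected to both `a` and `b` — a classic trust/closeness signal."""
    return friends.get(a, set()) & friends.get(b, set())

def jaccard_similarity(a, b):
    """Overlap of two users' neighborhoods, normalized to [0, 1]."""
    fa, fb = friends.get(a, set()), friends.get(b, set())
    union = fa | fb
    return len(fa & fb) / len(union) if union else 0.0

print(mutual_connections("u1", "u2"))   # {'u3'}
print(jaccard_similarity("u1", "u2"))   # 0.25
```

At billions of nodes these set intersections would be computed inside the graph store or a streaming pipeline rather than in application memory, but the features themselves are this simple.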
What we offer
  • Competitive pay and comprehensive healthcare benefits
  • Financial assistance for things like hybrid work, family planning
  • Generous parental leave
  • Flexible time-off policies
  • Mental health and wellness resources
  • Learning, development, and recognition programs
  • Volunteering program
  • Equity
  • Fulltime

Senior Data Engineer (Graph)

As a Senior Data Engineer, you will play a pivotal role in transforming data int...
Location
United States, San Francisco
Salary:
90.00 - 93.00 USD / Hour
Software Resources
Expiration Date
Until further notice
Requirements
  • 5+ years of data engineering experience developing data pipelines
  • Understanding of core graph database concepts and their advantages over a traditional RDBMS for modeling data, including common use cases
  • Proficiency in at least one major programming language (e.g., Python)
  • ETL development for graph databases (extracting from or loading into a graph database)
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with Neo4j and Snowflake
  • Strong algorithmic problem-solving expertise
  • Comfortable working in a fast-paced and highly collaborative environment.
  • Excellent written and verbal communication
  • Willingness and ability to learn and pick up new skill sets
Job Responsibility
  • Create and maintain Data Platform pipelines supporting structured, graph, and unstructured datasets
  • Architect and implement graph database models, schema design, and build robust, scalable solutions
  • Fluency with data engineering concepts and platforms (AWS: S3, Lambda, SNS, SQS… Iceberg), data platforms (Snowflake), configuration (data contracts), transformation, orchestration (dbt, Airflow), data quality (Great Expectations, Anomalo, Soda, Collibra)
  • Be an active participant and advocate of agile/scrum ceremonies to collaborate and improve processes for our team
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data Platform
  • Document standards and best practices for pipeline configurations, naming conventions, etc.
  • Ensure high operational efficiency and quality of the Core Data Platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
  • Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental technology improvements
What we offer
  • medical, dental, and vision coverage
  • a 401(k) with company match
  • short-term disability
  • life insurance with AD&D
  • Fulltime

Staff Software Engineer

GEICO is seeking an experienced Staff Software Engineer to join our Knowledge Gr...
Location
United States, Chevy Chase; Seattle
Salary:
105000.00 - 230000.00 USD / Year
Geico
Expiration Date
Until further notice
Requirements
  • Proven experience designing and implementing knowledge management platforms, semantic data systems, content generation tools, or AI-driven developer platforms
  • Full-stack developer with extensive experience in modern front-end frameworks (React, TypeScript), web technologies (JavaScript, HTML, CSS/SASS), backend languages (Node.js, Python, Java), and cloud platforms (Azure, AWS, GCP)
  • Strong ability to architect distributed semantic systems and graph-based microservice architectures that handle complex data relationships and scale reliably
  • Experience with knowledge graphs, semantic technologies, and AI/ML platforms such as Neo4j, Apache Jena, TigerGraph, or similar graph databases, along with NLP frameworks and content generation models
  • Familiarity with semantic web standards (RDF, OWL, SPARQL), ontology design, knowledge representation, and automated reasoning systems
  • Deep understanding of content management ecosystems, headless CMS architectures, API-driven publishing workflows, and content delivery optimization
  • Experience with AI/ML frameworks for natural language processing, content generation (GPT, BERT, T5), recommendation systems, and knowledge extraction from unstructured data
  • Product mindset and passion for building intelligent tools that solve complex content challenges and enhance user experiences through semantic understanding
  • Excellent collaboration and communication skills with ability to explain complex semantic concepts to technical and non-technical stakeholders
  • In-depth knowledge of CS data structures, algorithms, particularly graph algorithms, semantic matching, and distributed system design patterns
Job Responsibility
  • Architect and design enterprise-scale knowledge graph platforms that capture and model GEICO's comprehensive insurance domain expertise, customer insights, product relationships, and market intelligence
  • Build automated semantic content generation systems that leverage knowledge graphs to create personalized insurance content, product descriptions, educational materials, and customer communications at scale
  • Develop intelligent content workflows and APIs that use graph traversal algorithms, natural language processing, and machine learning to automate content production, template generation, and multi-channel publishing
  • Design real-time content personalization engines that query knowledge graphs to deliver contextually relevant messaging based on customer profiles, policy information, and behavioral patterns
  • Create sophisticated data ingestion and enrichment pipelines that continuously build and maintain knowledge graphs from structured and unstructured data sources across the enterprise
  • Implement semantic search and content discovery platforms that understand customer intent and context through graph-based query processing and recommendation algorithms
  • Build internal dashboards and tooling for content performance monitoring, knowledge graph visualization, semantic relationship analysis, and content optimization insights
  • Lead cross-functional collaboration with product managers, data scientists, and content strategists to translate business objectives into scalable knowledge-driven technical solutions
  • Champion engineering excellence in semantic modeling, ontology design, graph database optimization, and AI/ML integration best practices
  • Mentor engineering teams on knowledge graph technologies, content automation frameworks, and distributed system design patterns for semantic platforms
What we offer
  • Comprehensive Total Rewards program that offers personalized coverage tailor-made for you and your family’s overall well-being
  • Financial benefits including market-competitive compensation, a 401K savings plan vested from day one that offers a 6% match, performance and recognition-based incentives, and tuition assistance
  • Access to additional benefits like mental healthcare as well as fertility and adoption assistance
  • Supports flexibility – we provide workplace flexibility as well as our GEICO Flex program, which offers the ability to work from anywhere in the US for up to four weeks per year
  • Fulltime

Staff AI Context Engineer

MagicSchool is seeking a Staff AI Context Engineer to architect and enhance the ...
Location
United States
Salary:
205000.00 - 240000.00 USD / Year
EdTech Jobs
Expiration Date
Until further notice
Requirements
  • Deep Knowledge Systems Experience: 5+ years building large-scale information systems with at least 2+ years in staff/senior roles. Extensive hands-on experience with RAG systems, knowledge graphs, or semantic search platforms in production environments.
  • Graph Database Expertise: Deep experience with graph databases (Neo4j, Neptune, or similar), including schema design, query optimization (Cypher, Gremlin), and building graph-based applications.
  • RAG & Retrieval Mastery: Demonstrated expertise building production RAG systems including embedding selection, chunking strategies, hybrid search, reranking, and retrieval evaluation. Familiarity with vector databases (pgvector, Pinecone, Weaviate, Qdrant).
  • Embedding & NLP Background: Strong understanding of embedding models (sentence transformers, domain-specific embeddings), fine-tuning approaches, and semantic similarity. Experience with document processing, entity extraction, and text chunking for optimal retrieval.
  • Technical Stack: Strong coding skills in Python and/or TypeScript/Node.js. Experience with our stack (TypeScript, Node.js, PostgreSQL, NextJS, Supabase) plus graph databases and vector stores. Familiarity with LLM APIs and context management patterns.
  • Information Architecture: Deep understanding of information retrieval theory, semantic search, knowledge representation, and strategies for organizing complex domain knowledge for both human and AI consumption.
  • Leadership & Impact: Track record of architecting complex knowledge systems, making high-leverage technical decisions about information architecture, and mentoring engineers on sophisticated retrieval and graph concepts.
Job Responsibility
  • Knowledge Graph & Semantic Architecture: Architect and implement graph-based knowledge systems (Neo4j, Neptune, etc) that represent educational content relationships, standards alignments, prerequisite chains, curriculum coherence, learning progressions, and pedagogical connections.
  • Graph Schema & Ontology Development: Design and evolve ontologies and schemas for educational content, defining entity types (standards, concepts, skills, assessments), relationship semantics, and property models.
  • GraphRAG Implementation: Build GraphRAG systems that combine knowledge graph traversal with vector similarity, enabling agents to retrieve contextually connected educational materials.
  • Retrieval Pipeline Architecture: Architect and implement sophisticated retrieval-augmented generation pipelines including hybrid search (dense + sparse), multi-stage retrieval, reranking strategies, and query understanding.
  • Embedding & Vectorization Strategy: Design and operationalize embedding pipelines for educational content, selecting and fine-tuning embedding models, implementing chunking strategies, and managing vector stores at scale.
  • Retrieval Evaluation & Optimization: Design evaluation pipelines that measure retrieval precision, recall, MRR, and NDCG across educational content types. Continuously optimize retrieval quality.
  • Document Ingestion & Processing: Build robust ingestion systems that process structured and unstructured educational content, extracting entities, relationships, and metadata for knowledge base population.
  • Semantic Parsing & Extraction: Implement NLP pipelines for educational content that extract key concepts, prerequisite relationships, learning objectives, and pedagogical metadata.
  • Memory & Context Management: Invent and operationalize memory compaction mechanisms, session state management, and cross-conversation memory patterns that allow agents to maintain coherence across extended teaching workflows.
  • Context Evaluation & Monitoring: Design evaluation frameworks that measure retrieval precision, token relevance, attention allocation, and reasoning coherence as context evolves across sessions.
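The retrieval-evaluation metrics the posting names (MRR in particular) are simple to compute. This is a minimal sketch under invented data: `runs` pairs each query's ranked result ids with its known-relevant ids.

```python
def reciprocal_rank(ranked_ids, relevant):
    """1/rank of the first relevant result, or 0.0 if none was retrieved."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(runs):
    """Average reciprocal rank across (ranked_ids, relevant_set) pairs."""
    return sum(reciprocal_rank(r, rel) for r, rel in runs) / len(runs)

# Two hypothetical queries: ranked retrieval output vs. known-relevant doc ids.
runs = [
    (["d3", "d1", "d7"], {"d1"}),  # first hit at rank 2 -> RR = 0.5
    (["d2", "d9"], {"d2"}),        # first hit at rank 1 -> RR = 1.0
]
print(mean_reciprocal_rank(runs))  # 0.75
```

In a production RAG evaluation pipeline the same loop runs over a held-out query set after every change to chunking, embeddings, or reranking, so regressions in retrieval quality show up as a drop in this one number.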
What we offer
  • Flexibility of working from home.
  • Unlimited time off.
  • Choice of employer-paid health insurance plans. Dental and vision are also offered at very low premiums.
  • Generous stock options, vested over 4 years.
  • 401k match.
  • Monthly wellness stipend.
  • Fulltime

Senior Security Graph Engineer

The Defender Experts (DEX) Research team is at the forefront of Microsoft’s thre...
Location
India, Hyderabad
Salary:
Not provided
Microsoft Corporation
Expiration Date
Until further notice
Requirements
  • 4+ years of experience in security research, detection engineering, threat lifecycle, and cloud security in large-scale, complex cloud environments
  • Strong understanding of graph theory, graph databases (e.g., Neo4j, TigerGraph), and graph analytics with proficiency in Python or similar languages for data analysis and prototyping
  • Experience working with large-scale datasets, distributed systems and graph analytics projects
  • Ability to translate complex threat data into graphs and actionable insights
  • Experience with machine learning or statistical modelling applied to graph data
  • Proven ability to lead and execute advanced research on emerging cloud-based threats affecting both Microsoft and third-party security products across heterogeneous cloud environments
  • Knowledge of adversary infrastructure tracking, malware analysis, or campaign clustering
  • Extensive hands-on experience with cloud platforms—including, but not limited to, Azure—as well as a deep understanding of multi-cloud security challenges and solutions
  • B.Tech or equivalent
Job Responsibility
  • Design and maintain scalable threat graphs that model entities such as devices, identity, threat actors, TTPs, infrastructure, and campaigns
  • Lead and execute advanced research to develop algorithms and heuristics to detect malicious patterns and relationships within graph data on emerging cloud-based threats impacting Microsoft and third-party security products across heterogeneous cloud environments
  • Collaborate with threat protection researchers, data scientists, and detection engineers to enrich graph models with contextual insights and refine detection and response strategies, to provide comprehensive threat coverage and response capabilities
  • Research and prototype novel graph-based techniques for threat detection, attribution, and prioritization in collaboration with internal and external security teams
  • Translate complex raw security data into actionable graph intelligence that enhances the effectiveness of security operations for a global customer base
  • Mentor, guide, and drive best practices among researchers and detection engineers on advanced graph-based threat hunting and incident response across diverse ecosystems
  • Contribute to industry knowledge and Microsoft’s security posture by publishing research, developing threat graph models, and proactively identifying threats and attack trends in the cloud
  • Fulltime

Data Scientist: Graph Database & Ontology Specialist

We are seeking a Data Scientist with deep expertise in Knowledge Graphs and Onto...
Location
United Kingdom, Bristol
Salary:
Not provided
Tekever
Expiration Date
Until further notice
Requirements
  • Graph Databases: Advanced Neo4j expertise, including architecture, drivers, administration, and Cypher
  • Ontology & Semantics: Strong experience with data modeling, ontologies, and semantic technologies (RDF, OWL, SPARQL)
  • Programming: High proficiency in Python (pandas, networkx, py2neo, neo4j-driver)
  • Graph ML: Experience with Neo4j GDS or frameworks such as PyTorch Geometric or DGL
  • Production Engineering: Hands-on experience with Docker, REST APIs (FastAPI/Flask), and CI/CD pipelines
  • Core Data Science Profile: 3+ years of experience in Data Science or Data Engineering
  • Experience with NLP for entity and relationship extraction is a plus
  • Strongly skilled in standard ML workflows (Scikit-Learn, XGBoost)
  • Experience with geospatial data (GIS, GeoPandas) is valued
  • Education: MSc in Computer Science, Data Science, or a related engineering field (PhD welcome, but practical delivery is prioritized)
Job Responsibility
  • Ontology Design & Management: Design and maintain scalable ontologies to unify mission data, sensor outputs, flight logs, and operational parameters
  • Graph Engineering (Neo4j): Implement, optimize, and operate Neo4j schemas; write high-performance Cypher queries and ensure production scalability
  • Graph Data Science: Apply graph algorithms (e.g., centrality, pathfinding, community detection) and graph ML to derive actionable insights
  • Production Deployment: Move solutions from research to production (TRL > 6); integrate graph models into APIs and pipelines with reliability and latency constraints
  • Data Integration: Build ingestion pipelines for structured and unstructured data into the Knowledge Graph
  • Cross-Functional Collaboration: Translate operational and domain requirements into robust data and graph models
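One of the graph algorithms this role applies, degree centrality, is worth seeing concretely. The sketch below computes it over a hypothetical adjacency dict in pure Python; `networkx.degree_centrality` returns the same normalized value, degree / (n - 1). The asset names are invented for illustration.

```python
# Hypothetical undirected graph: mission assets linked by data flows.
adj = {
    "uav-1": {"gcs", "relay"},
    "uav-2": {"gcs"},
    "gcs": {"uav-1", "uav-2", "relay"},
    "relay": {"uav-1", "gcs"},
}

def degree_centrality(graph):
    """Fraction of all other nodes each node is directly connected to."""
    n = len(graph)
    return {node: len(nbrs) / (n - 1) for node, nbrs in graph.items()}

print(degree_centrality(adj)["gcs"])  # 1.0 — connected to every other node
```

In Neo4j the same question is a Cypher aggregation along the lines of `MATCH (a)--(b) RETURN a, count(b)`, or a call into the Graph Data Science (GDS) centrality procedures for large graphs.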
What we offer
  • An excellent work environment and an opportunity to make a difference
  • Salary Compatible with the level of proven experience
  • Fulltime