Infrastructure Software Engineer, Metadata

Dropbox

Location:
United States

Contract Type:
Not provided

Salary:
157800.00 - 240100.00 USD / Year

Job Description:

As a Software Engineer on the Metadata team, you’ll build and operate the large-scale distributed databases that every Dropbox service depends on. Metadata systems are mission-critical: they sit in the live path of all user operations and must meet stringent requirements for latency, durability, and transactional consistency. You’ll design and evolve the core infrastructure that manages Dropbox’s databases at scale, enabling fast, reliable access to data for millions of users and hundreds of internal services. This work spans distributed systems, replication, caching, and transactional database systems. You’ll collaborate closely with engineers across Infrastructure and Product teams to ensure the metadata layer meets business needs and continues to scale with Dropbox’s growth. This is an opportunity to leverage your expertise in distributed systems and grow into broader technical leadership.

Job Responsibility:

  • Design and maintain distributed database systems providing low-latency, strongly consistent data access
  • Implement and optimize replication, consensus, and caching mechanisms to meet availability and performance goals
  • Operate production systems, including participating in the on-call rotation, ensuring high availability and data durability
  • Collaborate with infrastructure and product teams to assess current and future use cases and requirements, supporting the development of a mid- to long-term roadmap that reflects these needs
  • Contribute to system design reviews, postmortems, and reliability improvements
  • Write high-quality, efficient code in Go and Rust for performance-critical systems

Requirements:

  • 5+ years of experience designing and implementing software using distributed systems fundamentals: replication, consistency, partitioning, and fault tolerance
  • Experience building databases, storage systems, or large-scale data infrastructure
  • Proficiency in Go, Rust, C++ or similar systems languages
  • Familiarity with consensus and coordination systems (e.g. Raft, Paxos, ZooKeeper, etcd)
  • Experience operating production services and participating in on-call rotations
  • Strong debugging and performance analysis skills
  • Excellent collaboration and communication abilities across teams

Nice to have:

  • Experience building distributed databases or storage systems
  • Practical experience with and deep understanding of data structures used in storage systems (e.g. LSM trees, B-trees, Hash Indexes)
  • Experience operating database systems (e.g. MySQL, Postgres, Cassandra)
  • Experience with distributed caching, either custom built or operating open source options such as Memcached or Redis
  • Experience improving reliability and performance in high-scale data systems
  • Experience working with cross-functional teams to understand their current use cases, identify future needs and requirements, and incorporate them into the team’s roadmap
  • Interest in deepening distributed systems expertise and expanding technical leadership

What we offer:
  • Competitive medical, dental, and vision coverage
  • 401(k) plan with a generous company match and immediate vesting
  • Flexible PTO/Paid Time Off, paid holidays, Volunteer Time Off, and more, allowing you time to unplug, unwind, and refresh
  • Income Protection Plans: Life and disability insurance
  • Business Travel Protection: Travel medical and accident insurance
  • Perks Allowance to be used on what matters most to you, whether that’s wellness, learning and development, food and groceries, and much more
  • Parental benefits including: Parental Leave, Child and Adult Care, Day Care FSA, Fertility Benefits, Adoption and Surrogacy Support, and Lactation Support
  • Access to over 10,000 global co-working spaces through Gable.to, making it easy to book flexible workspaces for collaboration or individual work
  • Quarterly cell phone and internet allowance
  • Mental health and wellness benefits
  • Disability and neurodivergence support benefits

Additional Information:

Job Posted:
December 31, 2025

Employment Type:
Fulltime

Work Type:
Remote work


Similar Jobs for Infrastructure Software Engineer, Metadata

Senior Software Engineer - AI

Senior Software Engineer role focused on AI and data-driven systems to transform...
Location:
Sweden, Malmö
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Software development principles
  • Programming language skills
  • Experience with Python (object-oriented)
  • Experience with REST-based frameworks like FastAPI
  • Frontend development skills
  • Cloud platform experience (Azure preferred)
  • Infrastructure-as-code experience (Terraform)
  • GitHub Actions for automation
  • Testing and quality focus
  • Experience with SSO, permissions, and access control
Job Responsibility:
  • Design and develop cloud-based products
  • Build and evolve global application using AI and data
  • Enrich content with meaningful metadata
  • Create solutions for presenting and managing product information
  • Collaborate with cross-functional Agile team
  • Implement digital solutions for omnichannel content
Employment Type: Fulltime

Intermediate Software Engineer SRE – AI

At PointClickCare our mission is simple: to help providers deliver exceptional c...
Location:
Canada, Mississauga
Salary:
115000.00 - 128000.00 CAD / Year
PointClickCare
Expiration Date:
Until further notice
Requirements:
  • 5+ years' experience in software engineering
  • Experience with SRE principles
  • Experience with AI/ML in production environments
  • A passion for automation, intelligent systems, and operational excellence
  • Strong debugging, problem-solving, and system design skills
  • Languages: Python, Java, Bash, Terraform
  • Platforms: Azure, Kubernetes, Docker
  • Tools: Datadog, Prometheus, AppDynamics, ELK, GitHub Actions
  • ML/AI: MCP framework, AI agents, Vector store, Agent orchestration (LangChain), RAG
  • CI/CD: Jenkins, ArgoCD, Spinnaker
Job Responsibility:
  • Build ML-based anomaly detection and pattern recognition systems
  • Enhance telemetry with smart tagging and metadata for better AI insights
  • Develop event-driven workflows and self-healing systems using AI triggers
  • Automate incident response with generative AI and custom AI agent orchestration
  • Use time-series forecasting and predictive modelling to anticipate failures
  • Optimise infrastructure with AI-powered autoscaling and cost-aware resource allocation
  • Build scalable, fault-tolerant systems in a cloud-native environment
  • Participate in on-call rotations and lead incident response for critical systems
  • Skilled in API integration for streamlined data exchange and system connectivity
  • Run internal AIOps workshops and help teams adopt AI maturity models
What we offer:
  • Benefits starting from Day 1
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition … and more
Employment Type: Fulltime

Senior Software Engineer – Systems

lakeFS is an open source project that provides the object storage a manageabilit...
Location:
Salary:
Not provided
LakeFS
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in backend development with an emphasis on virtualization, cloud, networking, or storage technologies such as NFS / FUSE / SMB / CFAPI or CSI drivers
  • Proficiency in Go (preferred) or similar backend languages (C++, Java, Rust etc.)
  • Strong grasp of data structures, system design and software architecture principles
  • Experience working with remote teams and excellent written and verbal communication skills
  • Passion for open source, data infrastructure, and empowering engineers
  • Experience with at least one of Windows, Mac or Linux & shell scripting (Python, Bash)
Job Responsibility:
  • Architect and implement lakeFS Mount – our storage abstraction that empowers large-scale AI/ML workflows by exposing petabytes of lakeFS data as a local file system across Linux, macOS, Windows, and Kubernetes via CSI
  • Optimize file system level performance using novel caching techniques, efficient metadata handling and prefetching while enabling write-mode consistency for millions of large files
  • Design and develop robust distributed backend services that power lakeFS, written primarily in Go
  • Work across our stack: cloud-native infrastructure (Kubernetes, Terraform, ArgoCD), data engineering SDKs (Iceberg, Spark), and performance-critical components
  • Ensure our product remains reliable, scalable, and secure in production – handling billions of daily API calls across multiple clouds
  • Collaborate closely with teammates on-site and in a distributed, remote environment
  • Contribute ideas and feedback to shape product direction based on customer and community input
  • Help foster a culture of trust, ownership, and continuous learning

Infrastructure Software Engineer, Metadata

As a Software Engineer on the Metadata team, you’ll build and operate the large-...
Location:
Canada
Salary:
168300.00 - 227700.00 CAD / Year
Dropbox
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience designing and implementing software using distributed systems fundamentals: replication, consistency, partitioning, and fault tolerance
  • Experience building databases, storage systems, or large-scale data infrastructure
  • Proficiency in Go, Rust, C++ or similar systems languages
  • Familiarity with consensus and coordination systems (e.g. Raft, Paxos, ZooKeeper, etcd)
  • Experience operating production services and participating in on-call rotations
  • Strong debugging and performance analysis skills
  • Excellent collaboration and communication abilities across teams
Job Responsibility:
  • Design and maintain distributed database systems providing low-latency, strongly consistent data access
  • Implement and optimize replication, consensus, and caching mechanisms to meet availability and performance goals
  • Operate production systems, including participating in the on-call rotation, ensuring high availability and data durability
  • Collaborate with infrastructure and product teams to assess current and future use cases and requirements, supporting the development of a mid- to long-term roadmap that reflects these needs
  • Contribute to system design reviews, postmortems, and reliability improvements
  • Write high-quality, efficient code in Go and Rust for performance-critical systems
What we offer:
  • Competitive medical, dental and vision coverage
  • Retirement savings through a defined contribution pension or savings plan
  • Flexible PTO/Paid Time Off, paid holidays, Volunteer Time Off, and more
  • Income Protection Plans: Life and disability insurance
  • Business Travel Protection: Travel medical and accident insurance
  • Perks Allowance to be used on what matters most to you
  • Parental benefits including: Parental Leave, Fertility Benefits, Adoptions and Surrogacy support, and Lactation support
  • Mental health and wellness benefits
Employment Type: Fulltime

Infrastructure Software Engineer, API Platform

As an Infrastructure Engineer on the API Platform team, your role will be crucia...
Location:
Mexico
Salary:
Not provided
Dropbox
Expiration Date:
Until further notice
Requirements:
  • BS, MS, or PhD in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience
  • 5+ years of professional software development experience
  • Proven track record constructing and managing expansive, multi-threaded, geographically dispersed backend systems
  • Proficient in programming and debugging across a range of languages such as Python, Go, C/C++, or Java
  • Proficiency with operating system internals, filesystems, databases, networks, and compilers
  • Proven track record of defining & delivering well-scoped milestones/projects
  • Ability to independently define right solutions for ambiguous, open-ended problems
Job Responsibility:
  • Build infrastructure capable of managing metadata for hundreds of billions of files, handling hundreds of petabytes of user data, and facilitating millions of concurrent connections.
  • Lead the expansion of Dropbox's function as the data-fabric, connecting hundreds of millions of applications, devices, and services globally, while also driving initiatives to enhance interoperability and adaptability across diverse ecosystems.
  • Measure and optimize Dropbox's analytics platform to maintain its status as one of the most advanced in the industry for extracting meaningful insights from vast data volumes.
  • Collaborate with cross-functional teams to innovate and implement solutions that enhance the performance, reliability, and security of Dropbox's infrastructure, ensuring a seamless experience for users worldwide.
  • Mentor and guide junior team members, sharing knowledge and best practices to cultivate a culture of continuous learning and professional growth within the infrastructure engineering team.
  • Stay current with emerging technologies and industry trends to continuously enhance Dropbox's infrastructure and maintain a competitive edge in the market.
  • On-call work may be necessary occasionally to help address bugs, outages, or other operational issues, with the goal of maintaining a stable and high-quality experience for our customers.
What we offer:
  • Medical, Dental & Vision allowance
  • Retirement, Critical Illness, Life & Income Protection allowance
  • Business Travel Protection: Travel medical and accident insurance
  • Flexible PTO/Paid Time Off policy in addition to statutory holidays, allowing you time to unplug, unwind, and refresh
  • Perks Allowance to be used on what matters most to you, whether that’s wellness, learning and development, food & groceries, and much more
  • Parental benefits including: Parental Leave, Fertility Benefits, Adoptions and Surrogacy support, and Lactation support
  • Mental health and wellness benefits
Employment Type: Fulltime

Principal Engineer

The Principal AI/ML Operations Engineer leads the architecture, automation, and ...
Location:
United States, Pleasanton, California
Salary:
251000.00 - 314500.00 USD / Year
BlackLine
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related field
  • 10+ years in ML infrastructure, DevOps, and software system architecture
  • 4+ years in leading MLOps or AI Ops platforms
  • Strong programming skills in languages such as Python, Java, or Scala
  • Expertise in ML frameworks (TensorFlow, PyTorch, scikit-learn) and orchestration tools (Airflow, Kubeflow, Vertex AI, MLflow)
  • Proven experience operating production pipelines for ML and LLM-based systems across cloud ecosystems (GCP, AWS, Azure)
  • Deep familiarity with LangChain, LangGraph, ADK or similar agentic system runtime management
  • Strong competencies in CI/CD, IaC, and DevSecOps pipelines integrating testing, compliance, and deployment automation
  • Hands-on with observability stacks (Prometheus, Grafana, New Relic) for model and agent performance tracking
  • Understanding of governance frameworks for Responsible AI, auditability, and cost metering across training and inference workloads
Job Responsibility:
  • Define enterprise-level standards and reference architectures for ML-Ops and AIOps systems
  • Partner with data science, security, and product teams to set evaluation and governance standards (Guardrails, Bias, Drift, Latency SLAs)
  • Mentor senior engineers and drive design reviews for ML pipelines, model registries, and agentic runtime environments
  • Lead incident response and reliability strategies for ML/AI systems
  • Lead the deployment of AI models and systems in various environments
  • Collaborate with development teams to integrate AI solutions into existing workflows and applications
  • Ensure seamless integration with different platforms and technologies
  • Define and manage MCP Registry for agentic component onboarding, lifecycle versioning, and dependency governance
  • Build CI/CD pipelines automating LLM agent deployment, policy validation, and prompt evaluation of workflows
  • Develop and operationalize experimentation frameworks for agent evaluations, scenario regression, and performance analytics
What we offer:
  • Short-term and long-term incentive programs
  • Robust offering of benefit and wellness plans
Employment Type: Fulltime

Senior Data Engineer

The Data Engineer is responsible for designing, building, and maintaining robust...
Location:
Germany, Berlin
Salary:
Not provided
ib vogt GmbH
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Experience in renewable energy, engineering, or asset-heavy industries is a plus
  • Strong experience with modern data stack (e.g., PowerPlatform, Azure Data Factory, Databricks, Airflow, dbt, Synapse, Snowflake, BigQuery, etc.)
  • Proficiency in Python and SQL for data transformation and automation
  • Experience with APIs, message queues (Kafka, Event Hub), data streaming and knowledge of data lakehouse and data warehouse architectures
  • Familiarity with CI/CD pipelines, DevOps practices, and containerization (Docker, Kubernetes)
  • Understanding of cloud environments (preferably Microsoft Azure, PowerPlatform)
  • Strong analytical mindset and problem-solving attitude paired with a structured, detail-oriented, and documentation-driven work style
  • Team-oriented approach and excellent communication skills in English
Job Responsibility:
  • Design, implement, and maintain efficient ETL/ELT data pipelines connecting internal systems (M365, Sharepoint, ERP, CRM, SCADA, O&M, etc.) and external data sources
  • Integrate structured and unstructured data from multiple sources into the central data lake / warehouse / Dataverse
  • Build data models and transformation workflows to support analytics, reporting, and AI/ML use cases
  • Implement data quality checks, validation rules, and metadata management according to the company’s data governance framework
  • Automate workflows, optimize performance, and ensure scalability of data pipelines and processing infrastructure
  • Work closely with Data Scientists, Software Engineers, and Domain Experts to deliver reliable datasets for Digital Twin and AI applications
  • Maintain clear documentation of data flows, schemas, and operational processes
What we offer:
  • Competitive remuneration and motivating benefits
  • Opportunity to shape the data foundation of ib vogt’s digital transformation journey
  • Work on cutting-edge data platforms supporting real-world renewable energy assets
  • A truly international working environment with colleagues from all over the world
  • An open-minded, collaborative, dynamic, and highly motivated team
Employment Type: Fulltime

Data Science Intern

Designs, develops, and applies programs, methodologies, and systems based on adv...
Location:
United States, Ft. Collins
Salary:
35.00 - 46.00 USD / Hour
Hewlett Packard Enterprise
Expiration Date:
May 26, 2026
Requirements:
  • Working towards a Bachelor's and/or Master's degree with a focus in Data Science, Computer Science, Computer Engineering, Software development, or other IT related field
  • Basic knowledge of data science methodologies
  • Basic understanding of business requirements and data science objectives
  • Basic data mapping, data transfer and data migration skills
  • Basic understanding of analytics software (e.g., R, SAS, SPSS, Python)
  • Basic knowledge of machine learning, data integration, and modeling skills and ETL tools (e.g., Informatica, Ab Initio, Talend)
  • Basic communication and presentation skills
  • Basic data knowledge of relevant data programming languages
  • Basic knowledge of data visualization techniques
Job Responsibility:
  • Participates in the analysis and validation of data sets/solutions/user experience
  • Aids in the development, enhancement and maintenance of a client's metadata based on analytic objectives
  • May load data into the infrastructure and contributes to the creation of the hypothesis matrix
  • Prepares a portion of the data for the Exploratory Data Analysis (EDA) / hypotheses
  • Contributes to building models for the overall solution, validates results and performance
  • Contributes to the selection of the model that supports the overall solution
  • Supports the research, identification and delivery of data science solutions to problems
  • Supports visualization of the model's insights, user experience and configuration tools for the analytics model
What we offer:
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Specific programs catered to helping reach career goals
  • Unconditional inclusion and flexibility to manage work and personal needs
Employment Type: Fulltime