DataOps Engineer

Paymentology

Location: Not provided
Contract Type: Not provided
Salary: Not provided

Job Description:

At Paymentology, we’re redefining what’s possible in the payments space. As the first truly global issuer-processor, we give banks and fintechs the technology and talent to launch and manage Mastercard and Visa cards at scale across more than 60 countries. Our advanced, multi-cloud platform delivers real-time data, unmatched scalability, and the flexibility of shared or dedicated processing instances. It’s this global reach and innovation that sets us apart.

We’re looking for a DataOps Engineer to join our Data Engineering team and help build a modern data platform from the ground up. This is a greenfield opportunity focused on infrastructure, automation, and observability, playing a critical role in enabling reliable, scalable, and secure data systems. You’ll work closely with data engineers and senior technical stakeholders to design, implement, and operate the foundations of our data stack.

This role is ideal for a mid-level engineer with strong DevOps fundamentals who is eager to deepen their expertise in data platforms, cloud infrastructure, and observability within a high-impact, global fintech environment.

Job Responsibility:

  • Design and implement cloud infrastructure for a modern data platform using Infrastructure as Code, with a strong focus on scalability, security, and reliability
  • Build and maintain CI/CD pipelines that support data engineering workflows and infrastructure deployments
  • Implement and operate observability solutions including monitoring, logging, metrics, and alerting to ensure platform reliability and fast incident response
  • Collaborate closely with data engineers to translate platform and workflow requirements into robust infrastructure solutions
  • Apply best practices for availability, disaster recovery, and cost efficiency, while documenting infrastructure patterns and operational procedures

Requirements:

  • 3-5 years of hands-on experience in DevOps, Platform Engineering, or DataOps roles
  • Experience supporting or contributing to data platforms or data infrastructure projects
  • Hands-on proficiency with Infrastructure as Code, particularly Terraform
  • Experience working with AWS or GCP and common cloud architecture patterns
  • Practical experience or strong understanding of Kubernetes and containerised workloads
  • Familiarity with observability tooling across monitoring, logging, metrics, and alerting
  • Strong scripting skills in Python, Bash, or Go to automate operational processes (a minimal sketch of such a script follows this list)
  • Excellent problem-solving skills and the ability to work effectively in a collaborative, fully remote environment
  • A strong inclination to develop DataOps and MLOps knowledge and capabilities
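
To make the scripting requirement above concrete, here is a minimal sketch of the kind of operational automation the role describes: a small Python script that polls a pipeline health endpoint and flags failures. The endpoint URL, the response shape, and the alerting behaviour are illustrative assumptions, not details from the posting.

```python
#!/usr/bin/env python3
"""Illustrative only: a tiny operational-automation script of the sort the
scripting requirement describes. The health URL and payload are hypothetical."""

import json
import sys
import urllib.request

# Hypothetical health endpoint exposed by a data-pipeline scheduler.
HEALTH_URL = "https://pipelines.example.internal/health"


def check_health(url: str, timeout: float = 5.0) -> dict:
    """Fetch the health endpoint and return its JSON payload."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)


def main() -> int:
    try:
        status = check_health(HEALTH_URL)
    except Exception as exc:  # network error, non-2xx response, bad JSON
        print(f"ALERT: health check failed: {exc}", file=sys.stderr)
        return 1
    # Assumed payload shape: {"pipelines": {"name": "ok" | "failed", ...}}
    failed = [name for name, state in status.get("pipelines", {}).items()
              if state != "ok"]
    if failed:
        print(f"ALERT: unhealthy pipelines: {', '.join(failed)}", file=sys.stderr)
        return 1
    print("all pipelines healthy")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```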

Nice to have:

Exposure to modern data engineering tools such as dbt, Airflow, Apache Spark, or similar technologies is an advantage

Additional Information:

Job Posted:
February 16, 2026

Employment Type:
Full-time
Work Type:
Remote work

Looking for more opportunities? Search for other job offers that match your skills and interests.

Similar Jobs for DataOps Engineer

DataOps Engineer

Looking for DataOps Engineer to lead database performance management for SaaS he...
Location: Not provided
Salary: Not provided
Hivex
Expiration Date: Until further notice

Requirements:
  • 3.5+ years of professional database management, development, and/or DataOps experience in a SaaS product environment
  • Experience in database performance engineering for large-scale systems through high growth
  • Experience leading data quality management activities
  • Ability to collaborate with Java and Python developers on best practices for database performance and data quality
  • Deep knowledge of database internals and best practices for transactional and analytical processing
  • Ability to problem-solve collaboratively and independently

Job Responsibility:
  • Define and build an automated database performance engineering process and framework
  • Collect and manage deterministic, well-known, and representative test sets
  • Optimize database performance using configuration, best practices, and effective models
  • Engage with developers to collaborate on requirements and performance engineering
  • Create and manage ETL processes
  • Detect and respond to operational and customer problems

Senior Data Engineer

Our Senior Data Engineers enable public sector organisations to embrace a data-d...
Location: United Kingdom, Bristol; London; Manchester; Swansea
Salary: 60,000.00 - 80,000.00 GBP / Year
Made Tech
Expiration Date: Until further notice

Requirements:
  • Enthusiasm for learning and self-development
  • Proficiency in Git (incl. GitHub Actions) and able to explain the benefits of different branch strategies
  • Gathering and meeting the requirements of both clients and users on a data project
  • Strong experience in IaC and able to guide how one could deploy infrastructure into different environments
  • Owning the cloud infrastructure underpinning data systems through a DevOps approach
  • Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop
  • Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouse, Data Lakes and Data Meshes) and the different use cases for them
  • Ability to create data pipelines in a cloud environment and integrate error handling within these pipelines, with an understanding of how to create reusable libraries that encourage uniformity of approach across multiple data pipelines
  • Able to document and present an end-to-end diagram explaining a data processing system in a cloud environment, with some knowledge of how you would present diagrams (C4, UML, etc.)
  • Able to provide guidance on how one would implement a robust DevOps approach in a data project, and to discuss the tools needed for DataOps in areas such as orchestration, data integration and data analytics

Job Responsibility:
  • Enable public sector organisations to embrace a data-driven approach by providing data platforms and services that are high-quality, cost-efficient, and tailored to clients’ needs
  • Develop, operate, and maintain these services
  • Provide maximum value to data consumers, including analysts, scientists, and business stakeholders
  • Play one or more roles according to our clients' needs
  • Support as a senior contributor for a project, focusing on both delivering engineering work as well as upskilling members of the client team
  • Play more of a technical architect role and work with the larger Made Tech team to identify growth opportunities within the account
  • Have a drive to deliver outcomes for users
  • Make sure that the wider context of a delivery is considered and maintain alignment between the operational and analytical aspects of the engineering solution

What we offer:
  • 30 days of paid annual leave + bank holidays
  • Flexible Parental Leave
  • Part-time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events

Junior Data Infrastructure Engineer

As part of the Data Infrastructure team you will be supporting mission critical ...
Location: United Kingdom, Brighton
Salary: Not provided
Brandwatch
Expiration Date: Until further notice

Requirements:
  • An interest in how computer infrastructure actually works, and a passion for learning
  • Interest in, and ideally production experience with, running storage systems, e.g. as part of a self-hosted service, a home lab, or academic studies
  • Experience with Linux systems administration, including experience of troubleshooting
  • Fluency with one or more scripting languages, ideally Bash or Python
  • Experience helping your peers
  • Pride in the quality of your work

Job Responsibility:
  • Supporting mission-critical big data platforms to ensure they are fully performant, reliable, available and secure
  • Development of tooling and operational support for our platforms
  • Help with staging support
  • Join the team supporting the production systems
  • Take a full part in the life of the team
  • Start designing the infrastructure we run

Azure DataOps Data Engineer – II

We are seeking an Azure DataOps Data Engineer – II with strong hands-on experien...
Location: India, Gurgaon
Salary: Not provided
Rackspace
Expiration Date: Until further notice

Requirements:
  • 3–5 years of experience in Data Engineering / DataOps roles
  • Strong hands-on experience with: Azure Databricks (PySpark, Spark SQL, Delta Lake); Azure Data Factory (ADF) – pipelines, triggers, parameters, monitoring; Azure Data Lake Storage (ADLS Gen2)
  • Good understanding of ETL/ELT frameworks, batch and incremental processing
  • Strong SQL skills for data analysis and troubleshooting
  • Experience with production support, incident management, and SLA-driven environments
  • Familiarity with monitoring tools (Azure Monitor, Log Analytics, alerts)
  • Understanding of Azure security concepts (RBAC, Managed Identity, Key Vault)
  • Willingness to work in a rotational shift / on-call support model as part of a global operations team

Job Responsibility:
  • Support production data platforms, ensuring high availability, reliability, and performance
  • Monitor data pipelines and jobs, proactively identifying and resolving failures, performance issues, and data discrepancies
  • Perform root cause analysis (RCA) for incidents and implement preventive measures
  • Implement DataOps best practices including automation, monitoring, alerting, and operational dashboards
  • Collaborate with cross-functional teams to support reporting, analytics, and downstream consumption
  • Maintain documentation for pipelines, operational runbooks, and support procedures
  • Participate in on-call and rotational shift support, including weekends or night shifts as required

Senior DataOps Engineer

Drive optimisations, upgrades and maintenance of a Kubernetes based data and mod...
Location: Not provided
Salary: Not provided
SNI sp. z o.o.
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience as a DataOps Engineer or in a similar role, covering most of the required skills
  • Expertise in Cloud architecture and key technologies (Kubernetes, Airflow, Managed Airflow)
  • Expertise in modern development tools and practices (e.g. CI/CD, DevOps, Observability, Pair Programming, TDD)
  • Knowledge of infrastructure-as-code tools (CloudFormation)
  • Experience with databases (Redshift)
  • Proficiency in a programming language (Python)
  • Expertise in choosing and applying design patterns
  • Experience developing software with scale, security and reliability in mind
  • Knowledge of software development principles, design patterns and best practices
  • Test Driven Development and testing practices

Job Responsibility:
  • Drive optimisations, upgrades and maintenance of a Kubernetes based data and modelling platform
  • Support access management, fielding questions around Airflow and minor feature enhancements
  • Assist with migration of data pipelines

Graduate Data Engineer

As a Graduate Data Engineer, you will build and maintain scalable data pipelines...
Location: United Kingdom, Marlow
Salary: Not provided
SRG
Expiration Date: Until further notice

Requirements:
  • Degree in Computer Science, Engineering, Mathematics, or a similar field, or equivalent work experience
  • Up to 2 years of experience building data pipelines at work or through internships
  • Can write clear and reliable Python/PySpark code
  • Familiar with popular analytics tools (like pandas, numpy, matplotlib), big data frameworks (like Spark), and cloud services (like Palantir, AWS, Azure, or Google Cloud)
  • Deep understanding of data models, relational and non-relational databases, and how they are used to organize, store, and retrieve data efficiently for analytics and machine learning
  • Knowledge about software engineering methods, including DevOps, DataOps, or MLOps is a plus
  • Master's degree in engineering (such as AI/ML, Data Systems, Computer Science, Mathematics, Biotechnology, Physics), or minimum 2 years of relevant technology experience
  • Experience with Generative AI (GenAI) and agentic systems will be considered a strong plus
  • Have a proactive and adaptable mindset: willing to take initiative, learn new skills, and contribute to different aspects of a project as needed to drive solutions from start to finish, even beyond the formal job description
  • Show a strong ability to thrive in situations of ambiguity, taking initiative to create clarity for yourself and the team, and proactively driving progress even when details are uncertain or evolving

Job Responsibility:
  • Build and maintain data pipelines, leveraging PySpark and/or Typescript within Foundry, to transform raw data into reliable, usable datasets
  • Assist in preparing and optimizing data pipelines to support machine learning and AI model development, ensuring datasets are clean, well-structured, and readily usable by Data Science teams
  • Support the integration and management of feature engineering processes and model outputs into Foundry's data ecosystem, helping enable scalable deployment and monitoring of AI/ML solutions
  • Engage in gathering and translating stakeholder requirements for key data models and reporting, with a focus on Palantir Foundry workflows and tools
  • Participate in developing and refining dashboards and reports in Foundry to visualize key metrics and insights
  • Collaborate with Product, Engineering, and GTM teams to align data architecture and solutions, learning to support scalable, self-serve analytics across the organization
  • Have some prompt engineering experience with large language models, including writing and evaluating complex multi-step prompts
  • Continuously develop your understanding of the company's data landscape, including Palantir Foundry's ontology-driven approach and best practices for data management

Data Analytics Engineer

SDG Group is expanding its global Data & Analytics practice and is seeking a mot...
Location: Egypt, Cairo
Salary: Not provided
SDG
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field
  • Hands-on experience in DataOps / Data Engineering
  • Strong knowledge in Databricks OR Snowflake (one of them is mandatory)
  • Proficiency in Python and SQL
  • Experience with Azure data ecosystem (ADF, ADLS, Synapse, etc.)
  • Understanding of CI/CD practices and DevOps for data
  • Knowledge of data modeling, orchestration frameworks, and monitoring tools
  • Strong analytical and troubleshooting skills
  • Eagerness to learn and grow in a global consulting environment

Job Responsibility:
  • Design, build, and maintain scalable and reliable data pipelines following DataOps best practices
  • Work with modern cloud data stacks using Databricks (Spark, Delta Lake) or Snowflake (Snowpipe, tasks, streams)
  • Develop and optimize ETL/ELT workflows using Python, SQL, and orchestration tools
  • Work with Azure data services (ADF, ADLS, Azure SQL, Azure Functions)
  • Implement CI/CD practices using Azure DevOps or Git-based workflows
  • Ensure data quality, consistency, and governance across all delivered data solutions
  • Monitor and troubleshoot pipelines for performance and operational excellence
  • Collaborate with international teams, architects, and analytics consultants
  • Contribute to technical documentation and solution design assets

What we offer:
  • Remote working model aligned with international project needs
  • Opportunity to work on European and global engagements
  • Mentorship and growth paths within SDG Group
  • A dynamic, innovative, and collaborative environment
  • Access to world-class training and learning platforms

Azure DataOps Lead

The Azure DataOps Lead will be responsible for leading the operational delivery,...
Location: India
Salary: Not provided
Rackspace
Expiration Date: Until further notice

Requirements:
  • 8–12 years of total IT experience with at least 3–5 years in Azure DataOps or Data Engineering leadership
  • Hands-on expertise with key Azure Data Services, including: Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, Azure SQL Database / SQL Managed Instance, and Azure Data Lake Storage Gen2 (ADLS)
  • Strong understanding of DataOps concepts
  • Experience in monitoring and alerting using Log Analytics, Application Insights, and Azure Monitor
  • Working knowledge of incident management, RCA documentation, and operational reporting
  • Strong analytical skills for troubleshooting performance issues and identifying optimization opportunities

Job Responsibility:
  • Lead and manage the Azure DataOps function, ensuring smooth daily operations, incident resolution, and performance stability across production data platforms
  • Oversee data pipeline orchestration and automation using Azure Data Factory (ADF), Synapse Analytics, Databricks, and Logic Apps
  • Implement CI/CD pipelines for data workflows using Azure DevOps or equivalent automation tools
  • Drive incident, problem, change, and request management processes aligned with ITIL best practices
  • Coordinate with L1/L2 support teams for escalations, RCA preparation, and client communication
  • Maintain governance for data quality, access control, and compliance using Azure Purview, Key Vault, and RBAC
  • Collaborate with Data Architects and Cloud Engineers to design scalable, resilient, and cost-efficient Azure data solutions
  • Ensure 24/7 operational readiness through proactive alert monitoring, performance tuning, and preventive maintenance
  • Contribute to automation initiatives using PowerShell, Python, or ARM templates to reduce manual efforts and improve system reliability
  • Partner with customer stakeholders to report on SLAs, KPIs, RCA summaries, and provide technical recommendations for improvement