Senior Data Engineer

N-iX

Location:
Ukraine

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are seeking a proactive Senior Data Engineer to join our vibrant team. As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies and data architecture, along with a passion for solving complex data challenges. Technical stack: Palantir Foundry, Python, PySpark, SQL, TypeScript.

Job Responsibility:

  • Collaborate with cross-functional teams to understand data requirements, and design, implement, and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows
  • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide the development process
  • Develop, implement, optimize, and maintain efficient, reliable data pipelines and ETL/ELT processes that collect, process, and integrate data, ensuring timely and accurate delivery to business applications, while applying data governance and security best practices to safeguard sensitive information
  • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency
  • Assist in optimizing data pipelines to improve machine learning workflows
  • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes
  • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes
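
The responsibilities above center on Foundry and PySpark, but the underlying ETL pattern (extract, validate/transform, load) can be sketched with nothing but the Python standard library. The feed, table, and field names below are invented for illustration; this is a minimal sketch, not N-iX's actual pipeline code:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: in a real pipeline this would come from a source
# system; here it is an in-memory CSV for illustration only.
RAW_CSV = """order_id,amount,currency
1001,250.00,USD
1002,,USD
1003,99.50,EUR
"""

def extract(text):
    """Extract: parse the raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop incomplete records and cast amounts to float."""
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "currency": r["currency"]}
        for r in rows
        if r["amount"]  # basic data-quality check: skip missing amounts
    ]

def load(rows, conn):
    """Load: write the cleaned records into a SQLite table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, currency TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount, :currency)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # the record with a missing amount was filtered out
```

In a Foundry/PySpark setting the same three stages would map to source connectors, transform functions, and output datasets, with the data-quality check expressed as an expectation on the dataset.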

Requirements:

  • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry
  • Strong proficiency in Python and PySpark
  • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.)
  • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow)
  • Expertise in data modeling, data warehousing, and ETL/ELT concepts
  • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.)
  • Hands-on experience in containerization technologies (e.g., Docker, Kubernetes)
  • Experience working with feature engineering and data preparation for machine learning models
  • Effective problem-solving and analytical skills, coupled with excellent communication, collaboration, and teamwork abilities
  • Understanding of data security and privacy best practices
  • Strong mathematical, statistical, and algorithmic skills

Nice to have:

  • Familiarity with ML Ops concepts, including model deployment and monitoring
  • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch
  • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI)
  • Certification in cloud platforms or related areas
  • Experience with the Apache Lucene search engine and RESTful web service APIs
  • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry
  • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous
  • Previous experience working with JavaScript and TypeScript

What we offer:
  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

Additional Information:

Job Posted:
January 11, 2026

Work Type:
On-site work

Similar Jobs for Senior Data Engineer

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing, and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Efficient in ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Proactive in stakeholder communication, mentor/guide junior resources

Work Type: Fulltime

Senior Data Engineer

Senior Data Engineer position at Checkr, building the data platform to power saf...
Location:
United States, San Francisco
Salary:
162000.00 - 190000.00 USD / Year
Checkr
Expiration Date:
Until further notice
Requirements:
  • 7+ years of development experience in the field of data engineering
  • 5+ years writing PySpark
  • Experience building large-scale (100s of Terabytes and Petabytes) data processing pipelines - batch and stream
  • Experience with ETL/ELT, stream and batch processing of data at scale
  • Strong proficiency in PySpark and Python
  • Expertise in database systems and data modeling, including relational databases and NoSQL (such as MongoDB)
  • Experience with big data technologies such as Kafka, Spark, Iceberg, data lakes, and the AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
  • Knowledge of security best practices and data privacy concerns
  • Strong problem-solving skills and attention to detail
Job Responsibility:
  • Create and maintain data pipelines and foundational datasets to support product/business needs
  • Design and build database architectures with massive and complex data, balancing with computational load and cost
  • Develop audits for data quality at scale, implementing alerting as necessary
  • Create scalable dashboards and reports to support business objectives and enable data-driven decision-making
  • Troubleshoot and resolve complex issues in production environments
  • Work closely with product managers and other stakeholders to define and implement new features
What we offer:
  • Learning and development reimbursement allowance
  • Competitive compensation and opportunity for professional and personal advancement
  • 100% medical, dental, and vision coverage for employees and dependents
  • Additional vacation benefits of 5 extra days and flexibility to take time off
  • Reimbursement for work from home equipment
  • Lunch four times a week
  • Commuter stipend
  • Abundance of snacks and beverages

Work Type: Fulltime

Senior Data Engineer

Senior Data Engineer role at UpGuard supporting analytics teams to extract insig...
Location:
Australia, Sydney; Melbourne; Brisbane; Hobart
Salary:
Not provided
UpGuard
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience with data sourcing, storage, and modelling to effectively deliver business value right through to the BI platform
  • AI first mindset and experience scaling an Analytics and BI function at another SaaS business
  • Experience with Looker (Explores, Looks, Dashboards, Developer interface, dimensions and measures, models, raw SQL queries)
  • Experience with CloudSQL (PostgreSQL) and BigQuery (complex queries, indices, materialised views, clustering, partitioning)
  • Experience with Containers, Docker and Kubernetes (GKE)
  • Familiarity with n8n for automation
  • Experience with programming languages (Go for ETL workers)
  • Comfortable interfacing with various APIs (REST+JSON or MCP Server)
  • Experience with version control via GitHub and GitHub Flow
  • Security-first mindset
Job Responsibility:
  • Design, build, and maintain reliable data pipelines to consolidate information from various internal systems and third-party sources
  • Develop and manage a comprehensive semantic layer using technologies like LookML, dbt, or SQLMesh
  • Implement and enforce data quality checks, validation rules, and governance processes
  • Ensure AI agents have access to necessary structured and unstructured data
  • Create clear, self-maintaining documentation for data models, pipelines, and semantic layer
What we offer:
  • Great Place to Work certified company
  • Equal Employment Opportunity and Affirmative Action employer

Work Type: Fulltime

Senior Data Engineer

We are looking for a highly skilled Senior Data Engineer to join our team on a l...
Location:
United States, Dallas
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related discipline
  • At least 7 years of experience in data engineering
  • Strong background in designing and managing data pipelines
  • Proficiency in tools such as Apache Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and Amazon S3
  • Expertise in programming languages like Python, Scala, or Java for data processing and automation
  • Strong knowledge of both relational and NoSQL databases
  • Experience with Kubernetes-based data engineering and hybrid cloud environments
  • Familiarity with data modeling principles, governance frameworks, and quality assurance processes
  • Excellent problem-solving, analytical, and communication skills
Job Responsibility:
  • Design and implement robust data pipelines and architectures to support data-driven decision-making
  • Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks
  • Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink
  • Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3
  • Establish and enforce data governance, quality, and security protocols
  • Manage complex workflows and processes across hybrid and multi-cloud environments
  • Work with diverse data formats, including Parquet and Avro
  • Troubleshoot and fine-tune distributed data systems
  • Mentor and guide engineers at the beginning of their careers
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Free online training

Work Type: Fulltime

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects

Work Type: Fulltime

Senior Data Engineer

Senior Data Engineer role in Data & Analytics, Group Digital to build trusted da...
Location:
Spain, Madrid
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • 5+ years of hands-on building production data systems
  • Experience designing and operating batch and streaming pipelines on cloud platforms (GCP preferred)
  • Proficiency with tools like BigQuery, Dataflow/Beam, Pub/Sub (or Kafka), Cloud Composer/Airflow, and dbt
  • Fluent in SQL and production-grade Python/Scala for data processing and orchestration
  • Understanding of data modeling (star/snowflake, vault), partitioning, clustering, and performance at TB-PB scale
  • Experience turning ambiguous data needs into robust, observable data products with clear SLAs
  • Comfort with messy external data and geospatial datasets
  • Experience partnering with Data Scientists to productionize features, models, and feature stores
  • Ability to automate processes, codify standards, and champion governance and privacy by design (GDPR, PII handling, access controls)
Job Responsibility:
  • Build Expansion360, the expansion data platform
  • Architect and operate data pipelines on GCP to ingest and harmonize internal and external data
  • Define canonical models, shared schemas, and data contracts as single source of truth
  • Enable interactive maps and location analytics through geospatial processing at scale
  • Deliver curated marts and APIs that power scenario planning and product features
  • Implement CI/CD for data, observability, access policies, and cost controls
  • Contribute to shared libraries, templates, and infrastructure-as-code
What we offer:
  • Intellectually stimulating, diverse, and open atmosphere
  • Collaboration with world-class peers across Data & Analytics, Product, and Engineering
  • Opportunity to create measurable, global impact
  • Modern tooling on Google Cloud Platform
  • Hardware and OS of your choice
  • Continuous learning (aim to spend ~20% of time on learning)
  • Flexible, friendly, values-led working environment

Work Type: Fulltime

Senior Data Engineer

Adswerve is looking for a Senior Data Engineer to join our Adobe Services team. ...
Location:
United States
Salary:
130000.00 - 155000.00 USD / Year
Adswerve, Inc.
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience)
  • 5+ years of experience in a data engineering, analytics, or marketing technology role
  • Hands-on expertise in Adobe Experience Platform (AEP), Real-Time CDP, Journey Optimizer, or similar tools is a big plus
  • Strong proficiency in SQL and hands-on experience with data transformation and modeling
  • Understanding of ETL/ELT workflows (e.g., dbt, Fivetran, Airflow, etc.) and cloud data platforms (e.g., GCP, Snowflake, AWS, Azure)
  • Experience with ingress/egress patterns and interacting with APIs to move data
  • Experience with Python or JavaScript in a data or scripting context
  • Experience with customer data platforms (CDPs), event-based tracking, or customer identity management
  • Understanding of Adobe Experience Cloud integrations (e.g., Adobe Analytics, Target, Campaign) is a plus
  • Strong communication skills with the ability to lead technical conversations and present to both technical and non-technical audiences
Job Responsibility:
  • Lead the end-to-end architecture of data ingestion and transformation in Adobe Experience Platform (AEP) using Adobe Data Collection (Tags), Experience Data Model (XDM), and source connectors
  • Design and optimize data models, identity graphs, and segmentation strategies within Real-Time CDP to enable personalized customer experiences
  • Implement schema mapping, identity resolution, and data governance strategies
  • Collaborate with Data Architects to build scalable, reliable data pipelines across multiple systems
  • Conduct data quality assessments and support QA for new source integrations and activations
  • Write and maintain internal documentation and knowledge bases on AEP best practices and data workflows
  • Simplify complex technical concepts and educate team members and clients in a clear, approachable way
  • Contribute to internal knowledge sharing and mentor junior engineers in best practices around data modeling, pipeline development, and Adobe platform capabilities
  • Stay current on the latest Adobe Experience Platform features and data engineering trends to inform client strategies
What we offer:
  • Medical, dental and vision available for employees
  • Paid time off including vacation, sick leave & company holidays
  • Paid volunteer time
  • Flexible working hours
  • Summer Fridays
  • “Work From Home Light” days between Christmas and New Year’s Day
  • 401(k) Plan with 5% company match and no vesting period
  • Employer Paid Parental Leave
  • Health-care Spending Accounts
  • Dependent-care Spending Accounts

Work Type: Fulltime

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program

Work Type: Fulltime