Senior Big Data Engineer

PhasorSoft Group

Location:
United States, Flowood

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Responsibilities:

  • Design and develop scalable data pipelines and solutions using Python and PySpark (a minimal sketch follows this list)
  • Utilize big data technologies such as Hadoop, Spark, Kafka, or similar tools for processing and analyzing large datasets
  • Develop and maintain ETL processes to extract, transform, and load data into data lakes or warehouses
  • Collaborate with data engineers and scientists to implement machine learning models and algorithms
  • Optimize and tune data processing workflows for performance and efficiency
  • Implement data governance and security measures to ensure data integrity and privacy
  • Create and maintain documentation for data pipelines, workflows, and processes
  • Provide technical leadership and mentorship to junior team members
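
To ground the first bullet, here is a minimal, illustrative PySpark pipeline of the kind the role describes: extract raw CSV, cleanse and aggregate it, and load partitioned Parquet into a data lake. The paths, schema, and column names are hypothetical assumptions, not details from this posting.

    # Minimal PySpark ETL sketch; paths and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

    # Extract: read raw CSV events from a (hypothetical) landing zone.
    raw = spark.read.option("header", True).csv("s3://example-bucket/raw/events/")

    # Transform: deduplicate, drop rows without a timestamp, derive a date column.
    cleaned = (
        raw.dropDuplicates(["event_id"])
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )
    daily = cleaned.groupBy("event_date", "event_type").agg(F.count("*").alias("events"))

    # Load: write partitioned Parquet into the curated layer of the lake.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/daily_event_counts/"
    )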

Requirements:

  • Proficiency in Python programming for data manipulation and analysis
  • Experience with PySpark for processing large-scale data
  • Strong understanding and practical experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • Knowledge of designing and implementing ETL processes for data integration
  • Ability to work with large datasets, perform data cleansing, transformations, and aggregations
  • Familiarity with machine learning concepts and experience implementing ML models
  • Understanding of data governance principles and experience implementing data security measures
  • Ability to create clear and concise documentation for data pipelines and processes
  • Strong teamwork and collaboration skills to work with cross-functional teams
  • Analytical and problem-solving skills to optimize data workflows and processes
  • For senior roles, the ability to provide technical leadership, mentorship, and guidance to junior team members
  • Knowledge of SQL for querying and manipulating data in databases

Nice to have:

  • Experience with cloud platforms such as AWS, Azure, or Google Cloud

Additional Information:

Job Posted:
December 11, 2025

Employment Type:
Full-time

Similar Jobs for Senior Big Data Engineer

Senior Data Engineer

Adtalem is a data-driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
84835.61 - 149076.17 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field.
  • Master's Degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field.
  • Two (2) plus years of experience in Google Cloud with services like BigQuery, Composer, GCS, Datastream, Dataflow, BQML, and Vertex AI.
  • Six (6) plus years of experience with data engineering solutions such as data platforms, ingestion, data management, or publication/analytics.
  • Hands-on experience working with real-time, unstructured, and synthetic data.
  • Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar.
  • Expert knowledge of Python programming and SQL.
  • Experience with cloud platforms (AWS, GCP, Azure) and their data services
  • Experience working with Airflow as a workflow management tool and building operators to connect, extract, and ingest data as needed (a minimal operator sketch follows this list).
  • Familiarity with synthetic data generation and unstructured data processing
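
As a concrete illustration of the Airflow bullet above, a custom operator that pulls JSON from an API and lands it in GCS might look like the sketch below; the endpoint, bucket, and class name are hypothetical assumptions, and GCSHook comes from the Airflow Google provider package.

    # Hypothetical custom Airflow operator: fetch JSON from an API, write to GCS.
    import json
    import requests
    from airflow.models import BaseOperator
    from airflow.providers.google.cloud.hooks.gcs import GCSHook

    class ApiToGcsOperator(BaseOperator):
        """Fetch a JSON payload from an HTTP endpoint and upload it to GCS."""

        def __init__(self, endpoint, bucket, object_name, **kwargs):
            super().__init__(**kwargs)
            self.endpoint = endpoint
            self.bucket = bucket
            self.object_name = object_name

        def execute(self, context):
            payload = requests.get(self.endpoint, timeout=30).json()
            GCSHook().upload(
                bucket_name=self.bucket,
                object_name=self.object_name,
                data=json.dumps(payload),
            )
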
Job Responsibilities:
  • Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets
  • Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound.
  • Model data platform by applying the business logic and building objects in the semantic layer of the data platform.
  • Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root.
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Eligible to participate in an annual incentive program

Employment Type: Full-time

Senior Crypto Data Engineer

Token Metrics is seeking a multi-talented Senior Big Data Engineer to facilitate...
Location:
Vietnam, Hanoi
Salary:
Not provided
Token Metrics
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field
  • A Master's degree in a relevant field is an added advantage
  • 3+ years of Python, Java or any programming language development experience
  • 3+ years of SQL & NoSQL experience (Snowflake Cloud DW & MongoDB experience is a plus)
  • 3+ years of experience with schema design and dimensional data modeling
  • Expert proficiency in SQL, NoSQL, Python, C++, Java, R
  • Expert with building Data Lake, Data Warehouse or suitable equivalent
  • Expert in AWS Cloud
  • Excellent analytical and problem-solving skills
  • A knack for independence and group work
Job Responsibilities:
  • Liaising with coworkers and clients to elucidate the requirements for each task
  • Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
  • Reformulating existing frameworks to optimize their functioning
  • Testing such structures to ensure that they are fit for use
  • Building data pipelines from different data sources and formats such as APIs, CSV, and JSON (an illustrative ingestion sketch follows this list)
  • Preparing raw data for manipulation by Data Scientists
  • Implementing proper data validation and data reconciliation methodologies
  • Ensuring that your work remains backed up and readily accessible to relevant coworkers
  • Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs

Employment Type: Full-time
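
As a small illustration of the multi-source pipeline bullet above, the sketch below normalizes a CSV file, a JSON file, and a REST API response into one pandas DataFrame; the file names, URL, and shared columns are hypothetical assumptions.

    # Illustrative multi-source ingestion; paths, URL, and columns are assumed.
    import pandas as pd
    import requests

    csv_df = pd.read_csv("prices.csv")          # CSV extract
    json_df = pd.read_json("listings.json")     # JSON extract
    api_rows = requests.get("https://api.example.com/tokens", timeout=30).json()
    api_df = pd.DataFrame(api_rows)             # API extract (list of records)

    # Align each source on a shared (assumed) column subset, then stack them.
    common = ["symbol", "price", "ts"]
    combined = pd.concat(
        [df[common] for df in (csv_df, json_df, api_df)], ignore_index=True
    )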

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location:
Ireland, Dublin
Salary:
Not provided
Solas IT Recruitment
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibilities:
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable (a minimal unit-test sketch follows this list)
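
To make the testing bullet concrete, here is a minimal pytest sketch for a PySpark transformation; dedupe_events is a hypothetical stand-in for a real pipeline step.

    # Minimal PySpark unit test with pytest; the function under test is assumed.
    import pytest
    from pyspark.sql import SparkSession

    def dedupe_events(df):
        # Pipeline step under test: drop duplicate events by id.
        return df.dropDuplicates(["event_id"])

    @pytest.fixture(scope="module")
    def spark():
        return SparkSession.builder.master("local[1]").appName("etl_tests").getOrCreate()

    def test_dedupe_events_removes_duplicates(spark):
        df = spark.createDataFrame(
            [(1, "a"), (1, "a"), (2, "b")], ["event_id", "payload"]
        )
        assert dedupe_events(df).count() == 2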

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning (a short window-function and CTE example follows this list)
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
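
To illustrate the window-function and CTE bullet above, here is a short Spark SQL example run from Python that picks each customer's latest order; the table and column names are invented for the sketch.

    # CTE + window function via Spark SQL; data and names are illustrative.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql_sketch").getOrCreate()
    spark.createDataFrame(
        [("c1", "2024-01-01", 10.0), ("c1", "2024-01-02", 20.0), ("c2", "2024-01-01", 5.0)],
        ["customer_id", "order_date", "amount"],
    ).createOrReplaceTempView("orders")

    spark.sql("""
        WITH ranked AS (                              -- CTE
            SELECT customer_id, order_date, amount,
                   ROW_NUMBER() OVER (                -- window function
                       PARTITION BY customer_id ORDER BY order_date DESC
                   ) AS rn
            FROM orders
        )
        SELECT customer_id, order_date, amount
        FROM ranked
        WHERE rn = 1                                  -- latest order per customer
    """).show()
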
Job Responsibilities:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility

Employment Type: Full-time

Senior Data Engineer

Madbox is a fast-growing mobile gaming company. We are looking for a Senior Data...
Location:
Spain, Barcelona
Salary:
Not provided
Madbox
Expiration Date:
Until further notice
Requirements:
  • Master’s degree in Engineering, Computer Science, or equivalent
  • 4+ years of proven experience in Big Data systems architecture and data pipelines
  • Experience with both SQL and NoSQL databases is required
  • Strong proficiency in Python
  • Proven experience with cloud platforms (preferably GCP) and their data services
  • Hands-on experience setting up and monitoring CI/CD pipelines
  • Proficient with orchestration tools (ideally Airflow)
  • Strong communication skills
  • Analytical and problem-solving mindset with attention to detail
  • Autonomous, proactive, and always looking for opportunities to improve solutions and processes
Job Responsibilities:
  • Start by onboarding on our full Data Stack, both technically (GCP, streaming, big data) and from a business logic perspective
  • Take ownership of end-to-end data pipelines, ensuring scalability, reliability, and maintainability
  • Continuously improve our data stack, proposing technical evolutions, monitoring infrastructure performance, and optimizing costs
  • Contribute to the growth and mentoring of junior profiles, providing guidance, sharing best practices, and supporting their development within the team
  • Design and optimize data models tailored to solve complex analytical and business requirements
  • Ensure data quality through cleansing, unit testing, and embedded DQA processes (a minimal data-quality check sketch follows this list)
  • Communicate effectively with technical and non-technical stakeholders, explaining solutions clearly and constructively
  • Demonstrate autonomy and decision-making skills, moving projects forward while keeping stakeholders informed
  • Take a key part in building and maintaining the Data Engineering roadmap, covering infrastructure, robustness, and business-driven initiatives
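
As a minimal sketch of the embedded DQA idea above, a pipeline step might assert a few invariants before publishing data; the columns and thresholds here are hypothetical assumptions.

    # Illustrative embedded data-quality checks; columns/thresholds are assumed.
    import pandas as pd

    def run_dqa(df: pd.DataFrame) -> None:
        assert not df["user_id"].isna().any(), "user_id must never be null"
        assert df["revenue"].ge(0).all(), "revenue must be non-negative"
        dup_rate = df.duplicated(subset=["event_id"]).mean()
        assert dup_rate < 0.01, f"duplicate rate {dup_rate:.2%} exceeds threshold"

    run_dqa(pd.DataFrame({
        "user_id": [1, 2], "revenue": [0.99, 4.99], "event_id": ["a", "b"],
    }))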

Senior Data Engineering Architect

Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice
Requirements:
  • Proven work experience as a Data Engineering Architect or a similar role and strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
  • Proficiency in Python, PySpark, SQL
  • Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment
  • Knowledge of data governance principles and best practices, including data privacy and security regulations
Job Responsibilities:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables. Support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications
What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly
  • Grow as we grow as a company. 76% of our managers are internal promotions

Senior Data Engineer

Location:
Not provided
Salary:
Not provided
Kloud9
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in developing scalable Big Data applications or solutions on distributed platforms
  • 4+ years of experience working with distributed technology tools, including Spark, Python, Scala
  • Working knowledge of Data warehousing, Data modelling, Governance and Data Architecture
  • Proficient in working on Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience working in Agile and Scrum development process
  • 3+ years of experience in Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience architecting data product in Streaming, Serverless and Microservices Architecture and platform
  • 3+ years of experience working with Data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta)
  • Experience with creating/configuring Jenkins pipeline for smooth CI/CD process for Managed Spark jobs, build Docker images, etc.
  • Working knowledge of reporting and analytical tools such as Tableau, QuickSight, etc.
Job Responsibilities:
  • Design and develop scalable Big Data applications on distributed platforms to support large-scale data processing and analytics needs
  • Partner with others in solving complex problems by taking a broad perspective to identify innovative solutions
  • Build positive relationships across Product and Engineering
  • Influence and communicate effectively, both verbally and written, with team members and business stakeholders
  • Quickly pick up new programming languages, technologies, and frameworks
  • Collaborate effectively in a high-speed, results-driven work environment to meet project deadlines and business goals
  • Utilize Data Warehousing tools such as SQL databases, Presto, and Snowflake for efficient data storage, querying, and analysis
  • Demonstrate experience in learning new technologies and skills.
What we offer:
  • Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields.

Senior Data Engineer

We are currently looking for a Data Engineer to join our client’s forward-thinki...
Location:
United Kingdom, London
Salary:
70000.00 - 75000.00 GBP / Year
Data Idols
Expiration Date:
Until further notice
Requirements:
  • Proven experience leading data engineering initiatives in a cloud environment
  • Expertise in Azure Synapse Analytics (SQL pools, Spark, pipelines, serverless SQL)
  • Hands-on experience with Microsoft Fabric (OneLake, Lakehouse, Data Engineering/Data Science workloads)
  • Strong SQL skills (T-SQL, Synapse SQL, Lakehouse SQL)
  • Experience with Python and/or Spark for big data processing
  • Knowledge of cloud data architecture (ADLS Gen2, Delta Lake, Parquet)
  • Understanding of data governance, security, and compliance standards
Job Responsibilities:
  • Lead and mentor internal and external data engineering teams to deliver best-in-class cloud solutions
  • Design, build, and optimise enterprise data platforms using Azure Synapse, Microsoft Fabric, OneLake, and lakehouse architectures
  • Modernise traditional ETL/ELT processes and transition workloads to cloud-native tools
  • Develop scalable data pipelines, Spark-based processing, and dataflows for unified analytics
  • Implement governance, security, and compliance standards across all cloud data environments
  • Collaborate with stakeholders to support high-value analytics and Power BI initiatives
  • Produce documentation, technical designs, and support materials to enable continuous improvement
What we offer:
  • Competitive salary
  • Strong benefits package
  • Hybrid working model
  • Opportunity to own and shape a modern cloud data ecosystem
  • Continuous training and professional development in Microsoft Fabric, Azure, and modern data engineering
  • Clear path for career progression within a growing data organisation
  • Supportive, innovative, and people-focused working environment

Employment Type: Full-time