CrawlJobs

Senior Engineering Manager, Big Data

Checkr (https://checkr.com)

Location: United States, Denver

Contract Type: Employment contract

Salary: 201,000.00 - 237,000.00 USD / Year

Job Description:

Checkr is looking for a Senior Engineering Manager to lead the Criminal Data team. This team is responsible for big data at Checkr: ingesting, storing, and processing records in the billions. As the Senior Engineering Manager for this team, you will be in charge of shaping the technical strategy and overseeing project delivery. The projects you’ll work on are high in impact and scale: you’ll be advancing our architecture and systems to lead Checkr into the next generation of product offerings. The decisions you make will impact millions of people every year, and help businesses make fast, informed, and safe decisions.

Job Responsibility:

  • Drive a motivating technical vision for the team
  • Partner closely with product management to solve business problems
  • Work with the team to build a world-class architecture that can scale into the next phase of Checkr’s growth
  • Hire the best talent and continue to raise the bar for the team
  • Represent the team in planning and product meetings
  • Optimize engineering processes and policies to drive velocity and quality

Requirements:

  • 6+ years as an engineering manager
  • 8+ years as an engineer
  • Exceptional verbal and written communication skills
  • Unparalleled bar for quality (data quality metrics, QC gates, data governance, automated regression test suites, data validations, etc.)
  • Experience working on data products at scale and understanding the legal, human-impact, and technical nuances of supporting a highly regulated product
  • Experience designing and maintaining (see the sketch after this list):
      • Real-time and batch data pipelines serving billions of data points
      • Normalizing and cleansing data across a medallion lakehouse architecture
      • Systems that rely on high-volume, low-latency messaging infrastructure (e.g., Kafka or similar)
      • Highly fault-tolerant production systems with streamlined operations (data lineage, logging, telemetry, alerting, etc.)
  • Familiarity with AWS Glue, OpenSearch, EMR, etc.
  • Familiarity with DevOps (including Infrastructure as Code, CI/CD, containerization, etc.)
  • Familiarity with developing APIs and backend microservices
  • Exposure to machine learning / AI to solve complex data challenges, such as transformation, deduplication, and enrichment
  • Exposure to working in the identity space (entity resolution)
  • Exposure to managing a globally distributed team
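To make the pipeline requirements above concrete, here is a minimal sketch of the kind of system described: a PySpark Structured Streaming job reading raw records from Kafka into a "bronze" lakehouse table with a basic validation flag. The broker address, topic, schema, and storage paths are hypothetical illustrations, not Checkr's actual stack.

```python
# Minimal sketch: stream raw records from Kafka into a bronze lakehouse
# table, flagging (not dropping) invalid rows so audits can measure quality.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("bronze-ingest").getOrCreate()

record_schema = StructType([
    StructField("record_id", StringType()),
    StructField("jurisdiction", StringType()),
    StructField("payload", StringType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "records.raw")                # hypothetical topic
       .load())

bronze = (raw
          .select(F.from_json(F.col("value").cast("string"), record_schema).alias("r"),
                  F.col("timestamp").alias("ingested_at"))
          .select("r.*", "ingested_at")
          # Flag records missing a primary key instead of dropping them.
          .withColumn("is_valid", F.col("record_id").isNotNull()))

(bronze.writeStream
 .format("parquet")
 .option("path", "s3://lake/bronze/records")            # hypothetical path
 .option("checkpointLocation", "s3://lake/_chk/records")
 .start())
```

A production version would add schema-evolution handling, dead-letter queues, and the lineage/telemetry/alerting hooks the requirements call out; the sketch shows only the ingest-and-flag shape.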
What we offer:
  • A fast-paced and collaborative environment
  • Learning and development allowance
  • Competitive compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • Up to 25K reimbursement for fertility, adoption, and parental planning services
  • Flexible PTO policy
  • Monthly wellness stipend, home office stipend

Additional Information:

Job Posted: April 24, 2025
Employment Type: Fulltime
Work Type: Hybrid work

Similar Jobs for Senior Engineering Manager, Big Data

Big Data Engineer

The Applications Development Intermediate Programmer Analyst is an intermediate ...
Location: India, Pune
Salary: Not provided
Citi (https://www.citi.com/)
Expiration Date: Until further notice
Requirements:
  • Understanding of the Big Data architecture and the ability to troubleshoot performance and/or development issues on Hadoop (Cloudera preferably)
  • Experience working with Hive, Impala, Kudu, HBase, Spark for data curation/conformance related work
  • Proficiency in Spark for development work related to curation/conformance (see the sketch after this list)
  • Experience with Spark/Storm/Kafka or equivalent streaming/batch processing and event-based messaging
  • Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
  • Experience working in an agile environment with a fast-paced changing requirement
  • Excellent planning and organizational skills and strong communication skills
  • Relational SQL and NoSQL database integration and data distribution principles experience
  • Experience with API development and use of JSON/XML/Hypermedia data formats
  • Align to Engineering Excellence Development principles and standards
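As an illustration of the curation/conformance work these requirements describe, here is a minimal PySpark sketch that conforms a raw Hive table and deduplicates it by recency. The table and column names are hypothetical.

```python
# Minimal sketch of Spark curation/conformance: normalize values in a raw
# Hive table, then keep only the latest version of each record.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("curation")
         .enableHiveSupport()
         .getOrCreate())

raw = spark.table("raw_db.trades")  # hypothetical source table

conformed = (raw
             # Conform values to house standards before comparing rows.
             .withColumn("counterparty", F.upper(F.trim("counterparty")))
             .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd")))

# Deduplicate: keep the most recent record per trade_id.
latest = Window.partitionBy("trade_id").orderBy(F.col("updated_at").desc())
curated = (conformed
           .withColumn("rn", F.row_number().over(latest))
           .filter("rn = 1")
           .drop("rn"))

curated.write.mode("overwrite").saveAsTable("curated_db.trades")
```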
Job Responsibility:
  • Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code
  • Consult with users, clients, and other technology groups on issues, and recommend programming solutions, install, and support customer exposure systems
  • Apply fundamental knowledge of programming languages for design specifications
  • Analyze applications to identify vulnerabilities and security issues, as well as conduct testing and debugging
  • Serve as advisor or coach to new or lower level analysts
  • Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions
  • Resolve issues by identifying and selecting solutions through the applications of acquired technical experience and guided by precedents
  • Has the ability to operate with a limited level of direct supervision
  • Can exercise independence of judgement and autonomy
  • Acts as SME to senior stakeholders and/or other team members
Employment Type: Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location: India, Mumbai
Salary: Not provided
Cogoport (https://cogoport.com/)
Expiration Date: Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration (see the sketch after this list)
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
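For the orchestration requirement above, here is a minimal sketch using Airflow's TaskFlow API. The task bodies and the freight-rates data are hypothetical stand-ins for real carrier API calls and a warehouse loader.

```python
# Minimal Airflow (TaskFlow API) sketch of an hourly extract-transform-load
# pipeline. Task bodies and data are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@hourly", start_date=datetime(2025, 1, 1), catchup=False)
def freight_rates_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder for pulling freight rates from a carrier API.
        return [{"lane": "INMUM-USNYC", "rate_usd": 1450.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Normalize values before loading.
        return [{**r, "rate_usd": round(r["rate_usd"], 2)} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a warehouse write (e.g. Snowflake or BigQuery).
        print(f"loading {len(rows)} rows")

    load(transform(extract()))

freight_rates_pipeline()
```

Prefect or Dagster would express the same extract-transform-load chain with their own flow/asset decorators; the dependency structure is the point, not the framework.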
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
Employment Type: Fulltime

Senior Engineering Manager, Big Data

Checkr is looking for a Senior Engineering Manager to lead the Criminal Data tea...
Location: United States, San Francisco
Salary: 238,000.00 - 280,000.00 USD / Year
Checkr (https://checkr.com)
Expiration Date: Until further notice
Requirements:
  • 6+ years as an engineering manager
  • 8+ years as an engineer
  • Exceptional verbal and written communication skills
  • Unparalleled bar for quality (data quality metrics, QC gates, data governance, automated regression test suites, data validations, etc.)
  • Experience working on data products at scale and understanding the legal, human-impact, and technical nuances of supporting a highly regulated product
  • Experience designing and maintaining:
      • Real-time and batch data pipelines serving billions of data points
      • Normalizing and cleansing data across a medallion lakehouse architecture
      • Systems that rely on high-volume, low-latency messaging infrastructure (e.g., Kafka or similar)
      • Highly fault-tolerant production systems with streamlined operations (data lineage, logging, telemetry, alerting, etc.)
  • Familiarity with AWS Glue, OpenSearch, EMR, etc.
Job Responsibility:
  • Drive a motivating technical vision for the team
  • Partner closely with product management to solve business problems
  • Work with the team to build a world-class architecture that can scale into the next phase of Checkr’s growth
  • Hire the best talent and continue to raise the bar for the team
  • Represent the team in planning and product meetings
  • Optimize engineering processes and policies to drive velocity and quality
What we offer:
  • A fast-paced and collaborative environment
  • Learning and development allowance
  • Competitive compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • Up to 25K reimbursement for fertility, adoption, and parental planning services
  • Flexible PTO policy
  • Monthly wellness stipend, home office stipend
Employment Type: Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer with a collaborative, “can-do” attitud...
Location: India, Gurugram
Salary: Not provided
Circle K (https://www.circlek.com)
Expiration Date: Until further notice
Requirements:
  • Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization (see the sketch after this list)
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data-heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouses, and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
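As a sketch of the Snowflake work called out above, here is a minimal incremental load in Python using the snowflake-connector-python client: merge a (hypothetical) staging table into a target table. Connection parameters and table names are illustrative; real credentials would come from a secrets manager.

```python
# Minimal sketch of an incremental Snowflake load: MERGE staged rows into a
# target table. Account, credentials, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount",   # hypothetical account identifier
    user="etl_user",
    password="***",        # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="SALES",
)

MERGE_SQL = """
MERGE INTO sales AS t
USING sales_staging AS s
  ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
  VALUES (s.order_id, s.amount, s.updated_at)
"""

with conn.cursor() as cur:
    cur.execute(MERGE_SQL)
    print(f"rows affected: {cur.rowcount}")
conn.close()
```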
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Develop efficient ETL/ELT pipelines using Azure cloud services and Snowflake, including testing and operations support (RCA of production issues, code/data fix strategy, monitoring and maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
Employment Type: Fulltime

Big Data Engineering Developer

The Applications Development Senior Programmer/Lead Analyst is a senior-level p...
Location: India, Pune; Chennai
Salary: Not provided
Citi (https://www.citi.com/)
Expiration Date: Until further notice
Requirements:
  • Strong understanding of the Big Data architecture and the ability to troubleshoot performance and/or development issues on Hadoop (Cloudera preferably)
  • 9+ years of experience working with Hive, Impala, HBase, Kudu, and Spark for data curation/conformance related work
  • Strong proficiency in Spark for development work related to curation/conformance; strong Scala developer (with previous Java background) preferred
  • Experience with Spark/Storm/Kafka or equivalent streaming/batch processing and event-based messaging
  • Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
  • Experience working in an agile environment with fast-paced, changing requirements
  • Excellent planning and organizational skills
  • Strong communication skills
Job Responsibility:
  • Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
  • Monitor and control all phases of development process and analysis, design, construction, testing, and implementation as well as provide user and operational support on applications to business users
  • Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgements
  • Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality
  • Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
  • Ensure essential procedures are followed and help define operating standards and processes
  • Serve as advisor or coach to new or lower level analysts
  • Has the ability to operate with a limited level of direct supervision
  • Can exercise independence of judgement and autonomy
  • Acts as SME to senior stakeholders and/or other team members
Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location: India, Gurugram
Salary: Not provided
Circle K (https://www.circlek.com)
Expiration Date: Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data-heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouses, and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Efficient in ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Proactive in stakeholder communication, mentor/guide junior resources
Employment Type: Fulltime

Senior Data Engineer

Senior Data Engineer position at Checkr, building the data platform to power saf...
Location: United States, San Francisco
Salary: 162,000.00 - 190,000.00 USD / Year
Checkr (https://checkr.com)
Expiration Date: Until further notice
Requirements:
  • 7+ years of development experience in the field of data engineering
  • 5+ years writing PySpark
  • Experience building large-scale (hundreds of terabytes to petabytes) data processing pipelines, both batch and stream
  • Experience with ETL/ELT, stream and batch processing of data at scale
  • Strong proficiency in PySpark and Python
  • Expertise in database systems, data modeling, relational databases, and NoSQL (such as MongoDB)
  • Experience with big data technologies such as Kafka, Spark, Iceberg, data lakes, and the AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
  • Knowledge of security best practices and data privacy concerns
  • Strong problem-solving skills and attention to detail
Job Responsibility:
  • Create and maintain data pipelines and foundational datasets to support product/business needs
  • Design and build database architectures with massive and complex data, balancing computational load and cost
  • Develop audits for data quality at scale, implementing alerting as necessary (see the sketch after this list)
  • Create scalable dashboards and reports to support business objectives and enable data-driven decision-making
  • Troubleshoot and resolve complex issues in production environments
  • Work closely with product managers and other stakeholders to define and implement new features
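For the audit-and-alerting responsibility above, here is a minimal sketch: compute a few quality rates over a table and fire an alert when a threshold is breached. The table, columns, and thresholds are hypothetical, and the alert function is a stub where a PagerDuty or Slack hook would go.

```python
# Minimal data-quality audit sketch: measure null and ordering violations,
# alert when a threshold is exceeded. Names and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-audit").getOrCreate()

df = spark.table("lake.reports")  # hypothetical dataset

metrics = df.agg(
    F.count("*").alias("rows"),
    F.avg(F.col("report_id").isNull().cast("int")).alias("null_id_rate"),
    F.avg((F.col("completed_at") < F.col("created_at")).cast("int"))
        .alias("bad_ordering_rate"),
).first()

THRESHOLDS = {"null_id_rate": 0.001, "bad_ordering_rate": 0.0}

def alert(name: str, value: float) -> None:
    # Stub: in production this would page on-call or post to Slack.
    print(f"DQ ALERT: {name}={value:.4%} exceeds threshold")

for name, limit in THRESHOLDS.items():
    if metrics[name] is not None and metrics[name] > limit:
        alert(name, metrics[name])
```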
What we offer:
  • Learning and development reimbursement allowance
  • Competitive compensation and opportunity for professional and personal advancement
  • 100% medical, dental, and vision coverage for employees and dependents
  • Additional vacation benefits of 5 extra days and flexibility to take time off
  • Reimbursement for work from home equipment
  • Lunch four times a week
  • Commuter stipend
  • Abundance of snacks and beverages
Employment Type: Fulltime

Senior Engineering Manager - AI/ML

Hewlett Packard Enterprise is looking for a Senior Engineering Manager - AI/ML t...
Location: India, Bangalore
Salary: Not provided
Hewlett Packard Enterprise (https://www.hpe.com/)
Expiration Date: Until further notice
Requirements:
  • Bachelor’s degree in computer science, engineering, data science, artificial intelligence, machine learning, or closely related quantitative discipline
  • 7-15 years’ experience including 5 or more years of people management experience
  • Advanced Degree (Master’s or Ph.D.) strongly preferred
  • Strong problem-solving and analytical skills, with the ability to identify business opportunities, formulate strategies, and execute projects effectively
  • Excellent communication and presentation skills, with the ability to convey complex technical concepts to technical and non-technical stakeholders
  • Proven ability to manage multiple projects and priorities in a fast-paced environment, ensuring timely delivery and high-quality results
  • Experience with cloud platforms, big data technologies, and distributed computing frameworks is a plus
  • Strong understanding of data privacy, security, and ethical considerations in AI and machine learning
  • Strong technical expertise in AI and machine learning algorithms, models, and tools, with proficiency in programming languages such as Python or R
  • Demonstrated leadership and management skills, with experience in leading and mentoring teams of AI and machine learning professionals
Job Responsibility:
  • Develop software algorithms to structure, analyze and leverage structured and unstructured data
  • Use machine learning and statistical modeling techniques to improve product/system performance, data management, quality, and accuracy (see the sketch after this list)
  • Apply, optimize, and scale deep learning technologies and algorithms
  • Document procedures for installation and maintenance
  • Perform testing and debugging
  • Define and monitor performance metrics
  • Translate customer requirements and industry trends into AI/ML products and systems improvements
  • Develop and drive the organization’s AI and machine learning strategy
  • Identify new opportunities for AI and machine learning applications
  • Oversee complex AI and machine learning projects from conception to deployment
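To ground the model-quality responsibilities above, here is a minimal scikit-learn sketch of the train-evaluate-gate loop: fit a classifier, measure holdout AUC, and block deployment below a quality floor. The synthetic dataset and the 0.80 threshold are hypothetical stand-ins for a real product dataset and quality bar.

```python
# Minimal sketch: train, evaluate, and gate a model on a performance metric.
# Synthetic data and the AUC floor are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Define and monitor a performance metric (here, holdout AUC) against a floor.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")
assert auc >= 0.80, "model below quality bar; block deployment"
```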
What we offer:
  • Comprehensive suite of benefits supporting physical, financial, and emotional wellbeing
  • Personal and professional development programs
  • Career growth opportunities
  • Inclusive work environment
Employment Type: Fulltime