
Data Engineer


Circle K

Location:
India, Gurugram

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:
Not provided

Job Description:

We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed to making their team successful, and who has experience implementing technical solutions as part of a broader data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse, and will help drive the next phase of Circle K’s digital journey by transforming data into actionable business outcomes.

Job Responsibility:

  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Develop ELT/ETL solutions efficiently using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
  • Communicate proactively with stakeholders; mentor and guide junior team members through regular knowledge transfer (KT) and reverse-KT sessions, and help them identify production bugs/issues and recommend resolutions
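As a rough illustration of the pipeline work described above, here is a minimal extract-transform-load sketch in Python. The CSV feed, table name, and in-memory SQLite "warehouse" are all hypothetical stand-ins for the enterprise sources and cloud warehouse named in the posting:

```python
import csv
import io
import sqlite3

# Hypothetical raw export from a source business system (invented sample data).
RAW_CSV = """store_id,sale_date,amount
101,2025-06-01,19.99
101,2025-06-01,5.50
102,2025-06-02,12.00
"""

def extract(raw: str) -> list:
    """Extract: parse the source export into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list) -> list:
    """Transform: cast types and aggregate daily sales per store."""
    totals = {}
    for r in rows:
        key = (int(r["store_id"]), r["sale_date"])
        totals[key] = totals.get(key, 0.0) + float(r["amount"])
    return [(store, day, round(total, 2)) for (store, day), total in sorted(totals.items())]

def load(rows: list, conn: sqlite3.Connection) -> None:
    """Load: write the curated rows into a warehouse table (SQLite as a stand-in)."""
    conn.execute("CREATE TABLE IF NOT EXISTS daily_sales (store_id INT, sale_date TEXT, total REAL)")
    conn.executemany("INSERT INTO daily_sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT * FROM daily_sales").fetchall())
# → [(101, '2025-06-01', 25.49), (102, '2025-06-02', 12.0)]
```

In a production setting each stage would be a separate, monitored step in an orchestrated pipeline (e.g. ADF or Databricks jobs) rather than in-process function calls, but the extract/transform/load separation is the same.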

Requirements:

  • Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 3+ years of experience with setting up and operating data pipelines using Python or SQL
  • 3+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 3+ years of extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouse, and Big Data
  • 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 3+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
  • In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts
  • Understanding of REST and good API design
  • Experience working with Apache Iceberg, Delta tables and distributed computing frameworks
  • Strong collaboration, teamwork skills, excellent written and verbal communications skills
  • Self-starter and motivated with ability to work in a fast-paced development environment
  • Agile experience highly desirable
  • Proficiency in the development environment, including IDE, database server, GIT, Continuous Integration, unit-testing tool, and defect management tools
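The dimensional-modeling concepts listed in the requirements can be sketched with a toy star schema; the tables, rows, and in-memory SQLite database below are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension table: one row per store, holding descriptive attributes.
CREATE TABLE dim_store (store_key INT PRIMARY KEY, store_name TEXT, country TEXT);
-- Fact table: one row per sale, referencing the dimension by surrogate key.
CREATE TABLE fact_sales (store_key INT, sale_date TEXT, amount REAL);

INSERT INTO dim_store VALUES (1, 'Gurugram Central', 'India'), (2, 'Oslo East', 'Norway');
INSERT INTO fact_sales VALUES (1, '2025-06-01', 20.0), (1, '2025-06-02', 30.0), (2, '2025-06-01', 15.0);
""")

# A typical dimensional query: aggregate the facts, sliced by a dimension attribute.
rows = conn.execute("""
    SELECT d.country, SUM(f.amount) AS total_sales
    FROM fact_sales AS f
    JOIN dim_store AS d USING (store_key)
    GROUP BY d.country
    ORDER BY d.country
""").fetchall()
print(rows)  # → [('India', 50.0), ('Norway', 15.0)]
```

The same fact/dimension split underlies the multi-dimensional data warehouse work (SSIS/ADF, Synapse, Snowflake) the role calls for, just at much larger scale.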

Nice to have:

  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
  • Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
  • Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance)
  • Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file storage (Blob Storage), and Python/Unix shell scripting
  • ADF, Databricks, and Azure certifications are a plus

Additional Information:

Job Posted:
June 18, 2025

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Data Engineer

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
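The CTE and window-function SQL called out in the requirements above can be demonstrated with Python's built-in sqlite3 module (SQLite has supported window functions since version 3.25); the table and data are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (region TEXT, order_id INT, amount REAL);
INSERT INTO orders VALUES
  ('east', 1, 100.0), ('east', 2, 300.0), ('east', 3, 200.0),
  ('west', 4, 150.0), ('west', 5, 50.0);
""")

# The CTE computes per-region totals; the window function then ranks
# each order within its region by amount.
query = """
WITH region_totals AS (
    SELECT region, SUM(amount) AS region_total
    FROM orders
    GROUP BY region
)
SELECT o.region, o.order_id, o.amount, t.region_total,
       RANK() OVER (PARTITION BY o.region ORDER BY o.amount DESC) AS rank_in_region
FROM orders AS o
JOIN region_totals AS t USING (region)
ORDER BY o.region, rank_in_region;
"""
results = list(conn.execute(query))
for row in results:
    print(row)
# First row: ('east', 2, 300.0, 600.0, 1)
```

Performance tuning in interviews for roles like this often turns on exactly this distinction: the CTE aggregates once per region, while the window function operates per row without collapsing the result set.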
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
Employment Type:
Fulltime

Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
Canada, Toronto
Salary:
124000.00 - 145000.00 CAD / Year
Robinhood
Expiration Date:
Until further notice
Requirements:
  • 3+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • bonus opportunities
  • equity
  • benefits
Employment Type:
Fulltime

Senior Software Engineer, Data Engineering

Join us in building the future of finance. Our mission is to democratize finance...
Location:
United States, Menlo Park
Salary:
146000.00 - 198000.00 USD / Year
Robinhood
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc)
  • Strong SQL (Presto, Spark SQL, etc) skills
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions
Job Responsibility:
  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
What we offer:
  • Market competitive and pay equity-focused compensation structure
  • 100% paid health insurance for employees with 90% coverage for dependents
  • Annual lifestyle wallet for personal wellness, learning and development, and more
  • Lifetime maximum benefit for family forming and fertility benefits
  • Dedicated mental health support for employees and eligible dependents
  • Generous time away including company holidays, paid time off, sick time, parental leave, and more
  • Lively office environment with catered meals, fully stocked kitchens, and geo-specific commuter benefits
  • Bonus opportunities
  • Equity
Employment Type:
Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer to join our product DE team and report...
Location:
United States, Seattle; San Francisco; Mountain View
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Partner across engineering teams to tackle company-wide initiatives
  • Mentor junior members of the team
  • Partner with leadership, engineers, program managers and data scientists to understand data needs
  • Apply expertise and build scalable data solution
  • Develop and launch efficient & reliable data pipelines to move and transform data (both large and small amounts)
  • Intelligently design data models for storage and retrieval
  • Deploy data quality checks to ensure high quality of data
  • Ownership of the end-to-end data engineering component of the solution
  • Support on-call shift as needed to support the team
  • Design and develop new systems in partnership with software engineers to enable quick and easy consumption of data
Job Responsibility:
  • Build top-notch data solutions and data architecture to inform our most critical strategic and real-time decisions
  • Help translate business needs into data requirements and identify efficiency opportunities
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type:
Fulltime


Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location:
United States, Chicago
Salary:
130000.00 USD / Year
AKUNA CAPITAL
Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including the ability to quickly and accurately solve math and coding problems, is an essential function of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure that data is processed and published in accordance with well-defined Service Level Agreements (SLA’s) pertaining to data quality, data availability and data correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
Employment Type:
Fulltime

Sr. Data Engineer I, Data Infrastructure

Khan Academy is partnering with SafeInsights to provide researchers access to st...
Location:
United States, Mountain View
Salary:
137871.00 - 172339.00 USD / Year
Khan Academy
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in data engineering within a production environment
  • Experience with Google infrastructure services and data services particularly GCS & Big Query
  • Advanced knowledge of Python
  • Strong SQL skills
  • Demonstrated proficiency with Docker and Kubernetes
  • Experience with Airflow
  • Background in applied math, statistics, economics, or a related technical field
  • Motivated by the Khan Academy mission “to provide a free world-class education for anyone, anywhere”
  • Proven cross-cultural competency skills demonstrating self-awareness, awareness of others, and the ability to adopt inclusive perspectives, attitudes, and behaviors to drive inclusion and belonging throughout the organization
Job Responsibility:
  • Collaborate with our Data Insights Group to understand the desired end data product and the overall project goals
  • Collaborate with our data architect and Data Infrastructure team to understand available data sources and tooling
  • Propose tooling and processes that will work with extant Khan Academy systems to meet the project goals
  • Build, test, and refine these tools and processes
  • Deliver the SafeInsights container and associated project deliverables
  • Provide documentation and training required to maintain and enhance the above
What we offer:
  • Competitive salaries
  • Ample paid time off as needed
  • Remote-first culture
  • Generous parental leave
  • An exceptional team
  • The chance to put your talents towards a deeply meaningful mission
  • Opportunities to connect through affinity, ally, and social groups
  • 401(k) + 4% matching
  • Comprehensive insurance, including medical, dental, vision, and life
Employment Type:
Fulltime

Senior Data Engineering Architect

Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice
Requirements:
  • Proven work experience as a Data Engineering Architect or a similar role, and strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
  • Proficiency in Python, PySpark, SQL
  • Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment
  • Knowledge of data governance principles and best practices, including data privacy and security regulations
Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables. Support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications
What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly
  • Grow as we grow as a company. 76% of our managers are internal promotions