Senior Data Engineer

Atlassian (https://www.atlassian.com)

Location:
India, Bengaluru

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data Engineering (GTM-DE) team responsible for building our data lake, maintaining big data pipelines/services, and facilitating the movement of billions of messages each day. You'll work directly with business stakeholders and engineering teams to enable growth and retention strategies. You'll help ingest data faster, make data pipelines more efficient, and build micro-services to enable self-serve capabilities at scale.

Job Responsibility:

  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Architect, design, and build microservices that enable self-serve capabilities at scale
  • Work on an AWS-based data lake backed by open source projects such as Spark and Airflow
  • Identify ways to make our platform better and improve user experience
  • Apply strong technical experience in building highly reliable services that manage and orchestrate a multi-petabyte-scale data lake
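
At Atlassian's scale this work is done with Spark and Airflow on AWS, per the posting; purely as an illustration of one core idea in the responsibilities above (writing ingested records into a date-partitioned, lake-style layout), here is a minimal stdlib-only sketch. The `ingest` function, the `event_date` field, and the path layout are all hypothetical, not from the posting:

```python
import json
from pathlib import Path

def ingest(records, lake_root):
    """Write JSON records into a date-partitioned, lake-style layout.

    Partitioning by event date is a common data-lake convention: it lets
    downstream engines such as Spark prune partitions when querying.
    """
    lake = Path(lake_root)
    buckets = {}
    for rec in records:
        # Hypothetical field: assume each record carries an ISO date.
        buckets.setdefault(rec["event_date"], []).append(rec)
    written = []
    for date, recs in buckets.items():
        part = lake / f"event_date={date}"
        part.mkdir(parents=True, exist_ok=True)
        out = part / "part-0000.json"
        # One JSON document per line (JSONL), a common ingestion format.
        out.write_text("\n".join(json.dumps(r) for r in recs))
        written.append(str(out))
    return sorted(written)
```

A production pipeline would instead be a Spark job writing Parquet, scheduled by Airflow; the sketch only shows the partition-layout idea.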

Requirements:

  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Senior Software Engineer or Senior Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, microservices, and REST APIs
  • Experience with Spark, Hive, Airflow, and other technologies for processing very large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies
  • A willingness to accept failure, learn and try again
  • An open mind to try solutions that may seem crazy at first
  • Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like)

Nice to have:

  • Experience building self-service tooling and platforms
  • Built and designed Kappa architecture platforms
  • Built pipelines using Databricks and well versed in their APIs
  • Contributed to open source projects (e.g., operators in Airflow)

What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources

Additional Information:

Job Posted:
March 19, 2025

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Senior Data Engineer


Senior Data Engineer

As a Senior Data Engineer, you will be pivotal in designing, building, and optim...
Location: United States
Salary: 102000.00 - 125000.00 USD / Year
Company: Wpromote (wpromote.com)
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent practical experience
  • 4+ years of experience in data engineering or a related field
  • Intermediate to advanced programming skills in Python
  • Proficiency in SQL and experience with relational databases
  • Strong knowledge of database and data warehousing design and management
  • Strong experience with DBT (data build tool) and test-driven development practices
  • Proficiency with at least 1 cloud database (e.g. BigQuery, Snowflake, Redshift, etc.)
  • Excellent problem-solving skills, project management habits, and attention to detail
  • Advanced level Excel and Google Sheets experience
  • Familiarity with data orchestration tools (e.g. Airflow, Dagster, AWS Glue, Azure data factory, etc.)
Job Responsibility:
  • Developing data pipelines leveraging a variety of technologies including dbt and BigQuery
  • Gathering requirements from non-technical stakeholders and building effective solutions
  • Identifying areas of innovation that align with existing company and team objectives
  • Managing multiple pipelines across Wpromote’s client portfolio
What we offer:
  • Half-day Fridays year round
  • Unlimited PTO
  • Extended Holiday break (Winter)
  • Flexible schedules
  • Work from anywhere options*
  • 100% paid parental leave
  • 401(k) matching
  • Medical, Dental, Vision, Life, Pet Insurance
  • Sponsored life insurance
  • Short Term Disability insurance and additional voluntary insurance
Employment Type: Full-time

Senior Data Engineer

As a senior member of our engineering team, you will take ownership of critical ...
Location: Poland
Salary: Not provided
Company: Userlane GmbH (userlane.com)
Expiration Date: Until further notice

Requirements:
  • Minimum of 5 years of hands-on experience in designing and developing data processing systems
  • Experience being part of a team of software engineers and helping establish processes from scratch
  • Familiarity with DBMS like ClickHouse or a different SQL-based OLAP database
  • Experience with various data engineering tools like Airflow, Kafka, dbt
  • Experience building and maintaining applications with the following languages: Python, Golang, Typescript
  • Knowledge of container technologies like Docker and Kubernetes
  • Experience with CI/CD pipelines and automated testing
  • Ability to solve problems and balance structure with creativity
  • Ability to operate independently and apply strategic thinking with technical depth
  • Willingness to share information and skills with the team
Job Responsibility:
  • Shape and maintain our various data and backend components - DBs, APIs and services
  • Understand business requirements and analyze their impact on the design of our software services and tools
  • Identify architectural changes needed in our infrastructure to support a smooth process of adding new features
  • Research, propose, and deliver changes to our software architecture to address our engineering and product requirements
  • Design, develop, and maintain a solid and stable RESTful API based on industry standards and best practices
  • Collaborate with internal and external teams to deliver software that fits the overall ecosystem of our products
  • Stay up to date with the new trends and technologies that enable us to work smarter, not harder
What we offer:
  • Team & Culture: A high-performance culture with great leadership and a fun, engaged, motivated, and diverse team with people from over 20 countries
  • Market: Userlane is among the global leaders in the rapidly growing Digital Adoption industry
  • Growth: We take you and your development seriously. You can expect weekly 1:1s, a personalised skills assessment and development plan, on-the-job coaching, and a budget for events and training
  • Compensation: Significant financial upside with an attractive and incentivising package on a B2B basis
Employment Type: Full-time

Senior Crypto Data Engineer

Token Metrics is seeking a multi-talented Senior Big Data Engineer to facilitate...
Location: Vietnam, Hanoi
Salary: Not provided
Company: Token Metrics
Expiration Date: Until further notice

Requirements:
  • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field
  • A Master's degree in a relevant field is an added advantage
  • 3+ years of Python, Java or any programming language development experience
  • 3+ years of SQL & NoSQL experience (Snowflake Cloud DW & MongoDB experience is a plus)
  • 3+ years of experience with schema design and dimensional data modeling
  • Expert proficiency in SQL, NoSQL, Python, C++, Java, R
  • Expert with building Data Lake, Data Warehouse or suitable equivalent
  • Expert in AWS Cloud
  • Excellent analytical and problem-solving skills
  • A knack for independence and group work
Job Responsibility:
  • Liaising with coworkers and clients to elucidate the requirements for each task
  • Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
  • Reformulating existing frameworks to optimize their functioning
  • Testing such structures to ensure that they are fit for use
  • Building a data pipeline from different data sources using different data types like API, CSV, JSON, etc
  • Preparing raw data for manipulation by Data Scientists
  • Implementing proper data validation and data reconciliation methodologies
  • Ensuring that your work remains backed up and readily accessible to relevant coworkers
  • Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs
Employment Type: Full-time

Senior Data Engineer

We are seeking a highly skilled and motivated Senior Data Engineer/s to architec...
Location: India, Hyderabad
Salary: Not provided
Company: Tech Mahindra (techmahindra.com)
Expiration Date: January 30, 2026

Requirements:
  • 7-10 years of experience in data engineering with a focus on Microsoft Azure and Fabric technologies
  • Strong expertise in: Microsoft Fabric (Lakehouse, Dataflows Gen2, Pipelines, Notebooks)
  • Strong expertise in: Azure Data Factory, Azure SQL, Azure Data Lake Storage Gen2
  • Strong expertise in: Power BI and/or other visualization tools
  • Strong expertise in: Azure Functions, Logic Apps, and orchestration frameworks
  • Strong expertise in: SQL, Python and PySpark/Scala
  • Experience working with structured and semi-structured data (JSON, XML, CSV, Parquet)
  • Proven ability to build metadata driven architectures and reusable components
  • Strong understanding of data modeling, data governance, and security best practices
Job Responsibility:
  • Design and implement ETL pipelines using Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Warehouse, SQL) and Azure Data Factory
  • Build and maintain a metadata driven Lakehouse architecture with threaded datasets to support multiple consumption patterns
  • Develop agent specific data lakes and an orchestration layer for an uber agent that can query across agents to answer customer questions
  • Enable interactive data consumption via Power BI, Azure OpenAI, and other analytics tools
  • Ensure data quality, lineage, and governance across all ingestion and transformation processes
  • Collaborate with product teams to understand data needs and deliver scalable solutions
  • Optimize performance and cost across storage and compute layers

Senior Data Engineer

As a Data Software Engineer, you will leverage your skills in data science, mach...
Location: United States, Arlington; Woburn
Salary: 134000.00 - 184000.00 USD / Year
Company: STR (str.us)
Expiration Date: Until further notice

Requirements:
  • Ability to obtain a Top Secret (TS) security clearance, for which U.S. citizenship is required by the U.S. government
  • 5+ years of experience in one or more high level programming languages, like Python
  • Experience working with large datasets for machine learning applications
  • Experience in navigating and contributing to complex, large code bases
  • Experience with containerization practices and CI/CD workflows
  • BS, MS, or PhD in a related field or equivalent experience
Job Responsibility:
  • Explore a variety of data sources to build and integrate machine learning models into software
  • Develop machine learning and deep learning algorithms for large-scale multi-modal problems
  • Work with large data sets and develop software solutions for scalable analysis
  • Collaborate to create and maintain software for data pipelines, algorithms, storage, and access
  • Monitor software deployments, create logging frameworks, and design APIs
  • Build analytic tools that utilize data pipelines to provide actionable insights into customer requests
  • Develop and execute plans to improve software robustness and ensure system performance
Employment Type: Full-time

Senior Data Engineer - Platform Enablement

SoundCloud empowers artists and fans to connect and share through music. Founded...
Location: United States, New York; Atlanta; East Coast
Salary: 160000.00 - 210000.00 USD / Year
Company: SoundCloud (soundcloud.com)
Expiration Date: Until further notice

Requirements:
  • 7+ years of experience in data engineering, analytics engineering, or similar roles
  • Expert-level SQL skills, including performance tuning, advanced joins, CTEs, window functions, and analytical query design
  • Proven experience with Apache Airflow (designing DAGs, scheduling, task dependencies, monitoring, Python)
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Excellent communication skills and a collaborative mindset
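
The "expert-level SQL" item above names CTEs and window functions; as a small self-contained illustration of that combination (the `plays` table, its columns, and the data are invented for the example, not from the posting), run against Python's bundled SQLite:

```python
import sqlite3

# Toy "plays per track" table; a CTE plus the RANK() window function
# picks the most-played track within each genre.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plays (track TEXT, genre TEXT, play_count INTEGER);
    INSERT INTO plays VALUES
        ('a', 'rock', 120), ('b', 'rock', 300),
        ('c', 'jazz', 80),  ('d', 'jazz', 45);
""")
rows = conn.execute("""
    WITH ranked AS (
        SELECT track, genre, play_count,
               RANK() OVER (PARTITION BY genre
                            ORDER BY play_count DESC) AS rnk
        FROM plays
    )
    SELECT track, genre FROM ranked WHERE rnk = 1 ORDER BY genre
""").fetchall()
print(rows)  # → [('c', 'jazz'), ('b', 'rock')]
```

The same pattern (rank within a partition, then filter on the rank) is a standard building block for the "analytical query design" the role asks for.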
Job Responsibility:
  • Develop and optimize SQL data models and queries for analytics, reporting, and operational use cases
  • Design and maintain ETL/ELT workflows using Apache Airflow, ensuring reliability, scalability, and data integrity
  • Collaborate with analysts and business teams to translate data needs into efficient, automated data pipelines and datasets
  • Own and enhance data quality and validation processes, ensuring accuracy and completeness of business-critical metrics
  • Build and maintain reporting layers, supporting dashboards and analytics tools (e.g., Looker or similar)
  • Troubleshoot and tune SQL performance, optimizing queries and data structures for speed and scalability
  • Contribute to data architecture decisions, including schema design, partitioning strategies, and workflow scheduling
  • Mentor junior engineers, advocate for best practices and promote a positive team culture
What we offer:
  • Comprehensive health benefits including medical, dental, and vision plans, as well as mental health resources
  • Robust 401k program
  • Employee Equity Plan
  • Generous professional development allowance
  • Creativity and Wellness benefit
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Various snacks, goodies, and 2 free lunches weekly when at the office
Employment Type: Full-time

Senior Data Engineer

SoundCloud is looking for a Senior Data Engineer to join our growing Content Pla...
Location: Germany; United Kingdom, Berlin; London
Salary: Not provided
Company: SoundCloud (soundcloud.com)
Expiration Date: Until further notice

Requirements:
  • Proven experience in backend engineering (Scala/Go/Python) with strong design and data modeling skills
  • Hands-on experience building ETL/ELT pipelines and streaming solutions on cloud platforms (GCP preferred)
  • Proficient in SQL and experienced with relational and NoSQL databases
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience with containerization and orchestration (Docker, Kubernetes)
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Strong collaboration skills and ability to work across backend, data, and product teams
Job Responsibility:
  • Design, build, and maintain high-performance services for content modeling, serving, and integration
  • Develop data pipelines (batch and streaming) with cloud-native tools
  • Collaborate on rearchitecting the content model to support rich metadata
  • Implement APIs and data services that power internal products, external integrations, and real-time features
  • Ensure data quality, governance, and validation across ingestion, storage, and serving layers
  • Optimize system performance, scalability, and cost efficiency for both backend services and data workflows
  • Work with infrastructure-as-code (Terraform) and CI/CD pipelines for deployment and automation
  • Monitor, debug, and improve reliability using various observability tools (logging, tracing, metrics)
  • Collaborate with product leadership, music industry experts, and engineering teams across SoundCloud
What we offer:
  • Extensive relocation support including allowances, one-way flights, temporary accommodation, and on-the-ground support on arrival
  • Creativity and Wellness benefit
  • Employee Equity Plan
  • Generous professional development allowance
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Free German courses at beginner, intermediate, and advanced levels
  • Various snacks, goodies, and 2 free lunches weekly when at the office
Employment Type: Full-time

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location: Ireland, Dublin
Salary: Not provided
Company: Solas IT Recruitment (solasit.ie)
Expiration Date: Until further notice

Requirements:
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibility:
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable
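
The automated-validation responsibility above usually reduces to rule checks run over each batch before it is published. A minimal sketch of that idea follows; the `validate_batch` function, its rule names, and thresholds are illustrative only, and real frameworks (dbt tests, for example, which the posting mentions) express the same checks declaratively:

```python
def validate_batch(rows, required_fields, min_rows=1):
    """Run simple completeness checks over a batch of dict records.

    Returns a list of human-readable failures; an empty list means the
    batch passed and can be promoted downstream.
    """
    failures = []
    # Completeness at the batch level: did we receive enough rows?
    if len(rows) < min_rows:
        failures.append(f"expected at least {min_rows} rows, got {len(rows)}")
    # Completeness at the row level: are mandatory fields populated?
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append(f"row {i}: missing required field '{field}'")
    return failures
```

In a pipeline, a check like this would run as a gating task after each load, failing the orchestrated job (and alerting) instead of silently shipping bad data.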