Glean is building a world-class Data Organization composed of data science, applied science, data engineering, and business intelligence groups. Our data engineering group is entirely based in our Bangalore, India office. Glean data engineering has two mandates: 1) building our customer-facing data platform, operating much like a data platform software ENG team, and 2) analytics engineering that enables our internal SQL, pipeline, and BI tool developers across the organization. In this role, you'll serve as a tech lead manager for Glean's one and only data engineering team.
Job Responsibilities:
Support senior-level data engineers and help them grow both technically and in their careers (~25% of time)
Hands-on work (~75% of time) on Glean employee-facing analytics initiatives, including:
Help improve the availability of high-value upstream data in third-party apps by channeling input from business intelligence to identify the biggest gaps in data foundations
Partner with Go-to-Market and Finance operations groups to create streamlined data management processes in enterprise apps such as Salesforce, Marketo, and accounting software
Architect and implement key tables that transform structured and unstructured data into models usable by the data, operations, and engineering orgs for applications such as session replays and A/B experimentation
Maintain the quality and availability of internally used tables within reasonable SLAs
Own and improve the reliability, efficiency, and scalability of ETL tooling, including but not limited to dbt, BigQuery, and Sigma; this includes identifying, implementing, and disseminating best practices
Requirements:
1+ years of tech lead management experience
12+ years of work experience in software or data engineering (the former is strongly preferred), along with a bachelor's degree
Experience in architecting, implementing and maintaining robust data platform solutions for external-facing data products
Experience with implementing and maintaining large-scale data processing tools like Beam and Spark
Experience working with stakeholders and peers from different time zones and roles, e.g. ENG, PM, data science, GTM, often as the main data engineering point of contact
Experience in full-cycle data warehousing projects, including requirements analysis, proof-of-concepts, design, development, testing, and implementation
Experience in database design, architecture, and cost-efficient scaling
Experience with cloud-based data tools like BigQuery and dbt
Experience with data pipelining tools like Airbyte, Apache, Stitch, Hevo Data, and Fivetran
High degree of proficiency with SQL, with the ability to set best practices and up-level our growing SQL user base within the organization
Proficient in at least one of Python, Java, or Golang
Familiar with cloud computing services like GCP and/or AWS
Concise and precise in written and verbal communication. Technical documentation is your strong suit
Nice to have:
Experience working with customers directly in a B2B setting
Experience with Salesforce, Marketo, and Google Analytics
Experience in distributed data processing & storage, e.g. HDFS
Experience with cross-functional collaboration with US-based partners
Comfortable reporting to org leaders who themselves are not data engineers