The Lead – Data Platform Engineering will architect and build the data infrastructure that underpins our markets & investment analytics. This role is responsible for designing scalable data frameworks, integrating firm-wide data assets, and enabling teams to deliver data solutions efficiently and safely. Working within BlackRock’s Global Markets & Index Investments (BGM) Data & Analytics team, the Lead will spearhead the creation of our next-generation data ecosystem using DevOps practices, helping shape the analytics that power BGM’s critical market and investment systems. This leadership role demands a balance of hands-on execution, strategic direction, and team leadership to ensure a reliable, scalable, and secure data platform.
Job Responsibilities:
Architect and build the data infrastructure that underpins our markets & investment analytics
Design scalable data frameworks, integrate firm-wide data assets, and enable teams to deliver data solutions efficiently and safely
Spearhead the creation of our next-generation data ecosystem using DevOps practices, helping shape the analytics that power BGM’s critical market and investment systems
Ensure a reliable, scalable, and secure data platform
Requirements:
10+ years in backend, data, or platform engineering (with at least 3–5 years dedicated to data infrastructure and cloud-native architecture)
Proven track record of building internal data platforms or developer tools at scale is highly desirable
A Bachelor’s degree or equivalent experience in Computer Science, Engineering, or a related discipline is a must
Proficiency in data architecture principles and previous involvement with financial or investment data systems are beneficial
Distributed systems expertise: Profound understanding of distributed data systems and advanced data technologies (e.g. Spark, Kafka, Airflow) for large-scale data processing
DevOps & Cloud proficiency: Expertise in implementing CI/CD pipelines, cloud orchestration, and infrastructure-as-code (Terraform, CDK) on modern cloud platforms (Azure preferred, also AWS/GCP)
Hands-on experience with containerization (Docker, Kubernetes) and cloud infrastructure automation is essential
Data pipeline development: Proficiency in designing and optimizing scalable data pipelines for data ingestion, transformation, and delivery
Experience with modern ETL/ELT frameworks (e.g. Apache Airflow, dbt, Azure Data Factory) for workflow orchestration is preferred
Programming & data modelling: Strong programming skills in Python, SQL, and Scala, with solid data modelling and database design experience
Ability to ensure data quality, integrity, and lineage throughout data pipelines is required
Data migration experience: Proven experience leading large-scale data migration initiatives from on-premises databases (e.g. Oracle, Teradata, SQL Server) to cloud-based data platforms like Snowflake, with minimal downtime and no compromise on data integrity or compliance
Modern data architecture understanding: Familiarity with modern data architecture patterns (streaming data, data lakes, etc.) and technologies
Ability to build resilient data services supporting advanced analytics, AI/ML models, and event-driven use cases
Demonstrated ability to lead and mentor engineering teams, establish high coding standards, and drive continuous improvement in platform reliability and developer efficiency
Excellent collaboration and communication skills, with comfort in bridging technical and business needs to deliver results
Establish standard methodologies such as CI/CD pipelines, observability, and version control for data workflows
Partner with Data Quality, DevOps, and Infrastructure teams to ensure seamless data flow and governance across the platform