Join us as a "Data Engineer" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.
Job Responsibilities:
Building and maintenance of data architecture pipelines that enable the transfer and processing of durable, complete and consistent data (a brief pipeline sketch follows this list)
Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures
Development of processing and analysis algorithms fit for the intended data complexity and volumes
Collaboration with data scientists to build and deploy machine learning models
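For illustration only (not part of the role description): a minimal PySpark sketch of the kind of batch pipeline referred to above; the S3 paths and column names (trade_id, trade_date) are hypothetical.

from pyspark.sql import SparkSession, functions as F

# Hypothetical example: read a raw CSV feed, standardise it and publish it as Parquet.
spark = SparkSession.builder.appName("trades-etl-sketch").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/trades/")           # hypothetical input path
)

curated = (
    raw
    .dropna(subset=["trade_id", "trade_date"])        # keep only complete records
    .withColumn("trade_date", F.to_date("trade_date"))
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-bucket/curated/trades/")   # hypothetical output path
)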
Requirements:
Must be a graduate / must have a bachelor's degree
Solid experience in data engineering, ETL and building and maintaining pipelines for structured, semi-structured and unstructured data from multiple upstream systems
Work with various formats and protocols (CSV, JSON, XML, Avro, Parquet, APIs, streaming feeds and messaging queues)
Develop scalable ETL/ELT workflows using AWS and big data frameworks
Strong experience in AWS (Redshift, Glue, Lambda, Step Functions, CloudFormation templates, CloudWatch, API Gateway)
Excellent programming skills in Python or PySpark
Experience with ETL orchestration tools, workflow schedulers and CI/CD pipelines (see the orchestration sketch after this list)
Excellent knowledge of Data Storage and Warehousing concepts
Model and maintain datasets within warehouses and data lakes (S3, Redshift, Hive/Glue Catalog)
Experience with database systems (relational: Oracle, SQL Server, PostgreSQL, MySQL; columnar: Redshift, Snowflake; NoSQL: MongoDB, Cassandra)
Advanced SQL skills (DDL, DML, performance tuning) and scripting experience (PL/SQL, T-SQL, Python or Shell)
Knowledge of data warehousing concepts (Inmon/Kimball) and ETL tools (e.g., Informatica)
Cloud platform experience, ideally AWS (S3, Redshift, Glue) and Data Lake implementation
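Again for illustration only: a minimal orchestration sketch that starts an AWS Glue job with boto3 and polls it to completion; the job name is hypothetical and error handling is omitted.

import time
import boto3

glue = boto3.client("glue")

# Hypothetical job name; in practice this would be defined via CloudFormation or in Glue itself.
JOB_NAME = "curate-trades-nightly"

run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job {JOB_NAME} finished with state {state}")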
Nice to have:
Experience with big data ecosystems (Hadoop, Databricks, Snowflake etc.)
Knowledge of Kafka and real-time flows with MSK (see the consumer sketch after this list)
Knowledge of Trino/Presto, Delta/Iceberg/Hudi
Experience with Data Quality frameworks and metadata management
Exposure to post-trade settlement, clearing, reconciliations or financial markets preferred
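For illustration only: a small consumer sketch for a Kafka/MSK topic using the kafka-python package; the topic name and broker address are hypothetical.

import json
from kafka import KafkaConsumer  # kafka-python package

# Hypothetical topic and broker; MSK exposes the standard Kafka protocol.
consumer = KafkaConsumer(
    "settlement-events",
    bootstrap_servers="broker.example.com:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # Each message value is a JSON payload decoded by the deserializer above.
    print(message.value)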