This is a Data Engineer position: a programmer responsible for the design, development, implementation, and maintenance of data pipelines and data processing systems that support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner, in coordination with the Data & Analytics team.
Job Responsibilities:
Ensure high-quality software development, with complete documentation and traceability
Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large-scale financial data (a sketch of such a pipeline follows this list)
Design and implement distributed computing solutions for risk modeling, pricing, and regulatory compliance
Ensure efficient data storage and retrieval using big data technologies
Implement best practices for Spark performance tuning, including partitioning, caching, and memory management
Maintain high code quality through testing, CI/CD pipelines, and version control (Git, Jenkins)
Work on batch processing frameworks for market risk analytics
Promote unit/functional testing and code inspection processes
Work with business stakeholders and Business Analysts to understand the requirements
Work with data scientists to understand and interpret complex datasets
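To make the pipeline and tuning responsibilities above concrete, here is a minimal sketch of a Spark Java batch job that applies the partitioning and caching practices mentioned. The input/output paths, column names, and partition count are hypothetical placeholders, not details from this posting.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.storage.StorageLevel;

    public class TradeRiskPipeline {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("trade-risk-batch")
                    .getOrCreate();

            // Read raw trade data (hypothetical path and schema).
            Dataset<Row> trades = spark.read().parquet("/data/raw/trades");

            // Repartition on the grouping key to reduce shuffle skew, then
            // cache because the dataset is reused by downstream aggregations.
            Dataset<Row> byBook = trades
                    .repartition(200, trades.col("book_id"))
                    .persist(StorageLevel.MEMORY_AND_DISK());

            // Example aggregation: total notional exposure per book.
            byBook.groupBy("book_id")
                  .sum("notional")
                  .write()
                  .mode("overwrite")
                  .parquet("/data/curated/exposure_by_book");

            byBook.unpersist();
            spark.stop();
        }
    }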
Requirements:
5-8 years of experience working in data ecosystems
4-5 years of hands-on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks
3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), as well as Scala and SQL (see the streaming sketch after this list)
Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio, etc.): ETL design and build, handling, reconciliation, and normalization
Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning)
Experience working with multiple large datasets and data warehouses
Experience building and optimizing big data pipelines, architectures, and datasets
Strong analytic skills and experience working with unstructured datasets
Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain (Git, Bitbucket, Jira)
Experience with cloud platforms such as OpenShift, AWS, and GCP
Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
Experience integrating search solutions with middleware and distributed messaging (Kafka)
Highly effective interpersonal and communication skills with technical and non-technical stakeholders
Experience across the full software development life cycle
Excellent problem-solving skills and a strong mathematical and analytical mindset
Ability to work in a fast-paced financial environment
Bachelor's/University degree, or equivalent experience, in computer science, engineering, or a similar domain
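As a companion to the Spark Streaming and Kafka requirements above, below is a minimal sketch of a Structured Streaming job in Java that consumes a Kafka topic. The broker address, topic name, and checkpoint path are hypothetical placeholders, and the job assumes the spark-sql-kafka connector dependency is on the classpath.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class MarketDataStream {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("market-data-stream")
                    .getOrCreate();

            // Subscribe to a Kafka topic; each record arrives as key/value bytes.
            Dataset<Row> ticks = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker:9092")
                    .option("subscribe", "market-ticks")
                    .load();

            // Cast the raw payload to strings for downstream parsing.
            Dataset<Row> payload = ticks.selectExpr(
                    "CAST(key AS STRING)", "CAST(value AS STRING)");

            // Write to the console for demonstration; a production job would
            // sink to Parquet/Hive with a durable checkpoint location.
            StreamingQuery query = payload.writeStream()
                    .format("console")
                    .option("checkpointLocation", "/tmp/checkpoints/market-ticks")
                    .start();

            query.awaitTermination();
        }
    }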