Wells Fargo is seeking a Senior Software Engineer - Data Engineering to join the CALM (Corporate Asset and Liability Management) Data Engineering team within the Enterprise Functions Technology (EFT) organization. In this role, you will be responsible for designing, developing, optimizing, and maintaining metadata‑driven, scalable, high‑performance data engineering frameworks that power critical financial risk processes across Corporate Treasury. You will work independently to build resilient data pipelines, APIs, wrappers, and supporting components to enable reliable data ingestion, transformation, validation, and delivery across cloud and on‑prem ecosystems. This position plays a key role in Data Center exit migrations, DPC onboarding, and enterprise-wide modernization initiatives. The role requires deep technical expertise, hands‑on problem‑solving, and technical leadership in distributed data engineering, cloud platforms, data quality, and performance engineering.
Job Responsibilities:
Lead moderately complex initiatives and deliverables within technical domain environments
Contribute to large-scale strategic planning
Design, code, test, debug, and document for projects and programs associated with technology domain, including upgrades and deployments
Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
Resolve moderately complex issues and lead a team to meet existing client needs or potential new client needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
Lead projects, act as an escalation point, and provide guidance and direction to less experienced staff
Deliver high-quality engineering outcomes during Data Center exit migrations and DPC onboarding, ensuring validations, automation, and production readiness
Collaborate with cross-functional teams to build scalable, high‑performance data solutions using Python, SQL, Spark, Iceberg, Dremio, and Autosys
Design, build, test, deploy, and maintain large-scale structured and unstructured data pipelines using Python, SQL, Apache Spark, and modern data lake/lakehouse technologies
Develop and optimize metadata-driven pipelines, wrappers, ingestion frameworks, and validation layers to support CALM data workflows (see the pipeline sketch after this list)
Build and maintain high-quality ELT/ETL pipelines following best practices in reliability, performance, observability, and reusability
Engineer and optimize Spark pipelines for large‑scale batch and streaming workloads (partitioning, caching, Catalyst optimization, AQE, Tungsten)
Work with open table formats such as Iceberg, Delta, or Hudi for versioned data, time-travel, compaction, and schema evolution
Implement Medallion (Bronze/Silver/Gold) architecture patterns for modern lakehouse systems
Implement automated data quality frameworks using tools such as Great Expectations / Deequ or custom validators
Build data health monitoring frameworks with SLAs/SLOs, anomaly detection, and lineage capture
Ensure strong validation layers during Data Center exits, migration programs, and DPC onboarding
Build RESTful and metadata APIs using Python frameworks (FastAPI/Flask) to enable secure, governed data access (see the API sketch after this list)
Collaborate with application teams to integrate data access patterns and platform services
Design and deploy data pipelines in cloud platforms (AWS, Azure, GCP) leveraging managed compute, orchestration, and storage
Build CI/CD workflows and infrastructure automation using Jenkins, GitHub Actions, Azure DevOps, Terraform, or Helm
Apply secure engineering principles including IAM, secrets management, encryption standards, and network controls
Build resilient orchestration flows using Autosys or equivalent tools
Apply modular design with retries, alerts, SLAs, and workflow dependency management
Work with cross-functional Agile teams (Product, Architecture, QA, Treasury SMEs)
Analyze technical requirements, evaluate design alternatives, and provide recommendations aligned with enterprise standards
Independently deliver complex engineering tasks and contribute to architecture/roadmap discussions
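To make the metadata-driven pipeline and validation responsibilities above concrete, here is a minimal PySpark sketch. The pipeline_meta dictionary, catalog name, warehouse location, and table/column names are hypothetical assumptions for illustration only; a real framework would also require the Iceberg Spark runtime on the classpath and enterprise-governed configuration.

from pyspark.sql import SparkSession, functions as F

# Hypothetical metadata record; in practice this would come from a metadata store.
pipeline_meta = {
    "source_path": "s3a://landing-zone/calm/positions/",    # bronze input (illustrative)
    "target_table": "lakehouse.silver.positions",           # Iceberg table (illustrative)
    "required_columns": ["account_id", "as_of_date", "balance"],
}

spark = (
    SparkSession.builder
    .appName("calm-metadata-driven-ingest")
    # Iceberg catalog wiring; catalog name and warehouse path are assumptions.
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "hadoop")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3a://warehouse/")
    .getOrCreate()
)

# Bronze -> Silver: read raw files, validate, and append to a versioned Iceberg table.
df = spark.read.parquet(pipeline_meta["source_path"])

# Lightweight validation layer: fail fast on missing or null mandatory columns.
missing = [c for c in pipeline_meta["required_columns"] if c not in df.columns]
if missing:
    raise ValueError(f"Missing required columns: {missing}")

null_counts = (
    df.select([F.sum(F.col(c).isNull().cast("int")).alias(c)
               for c in pipeline_meta["required_columns"]])
    .first()
    .asDict()
)
if any(v > 0 for v in null_counts.values()):
    raise ValueError(f"Null values in required columns: {null_counts}")

# Assumes the target Iceberg table already exists; append keeps prior snapshots for time-travel.
df.writeTo(pipeline_meta["target_table"]).append()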
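Similarly, a minimal FastAPI sketch of the metadata API responsibility is shown below; the endpoint path, model fields, and in-memory store are illustrative assumptions only, and a production service would be backed by a governed metadata store and enterprise authentication.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="CALM pipeline metadata API")  # title is illustrative

class PipelineRun(BaseModel):
    pipeline: str
    status: str
    records_loaded: int

# Hypothetical in-memory store standing in for a governed metadata catalog.
RUNS = {
    "positions_daily": PipelineRun(pipeline="positions_daily",
                                   status="SUCCEEDED",
                                   records_loaded=120000),
}

@app.get("/runs/{pipeline}", response_model=PipelineRun)
def get_run(pipeline: str) -> PipelineRun:
    run = RUNS.get(pipeline)
    if run is None:
        raise HTTPException(status_code=404, detail="pipeline not found")
    return run

# Run locally with: uvicorn metadata_api:app --reload  (module name is an assumption)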
Requirements:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
Hands-on experience with Python, SQL, and bash scripting for automation
Strong experience building big data pipelines using Apache Spark, Hive, Hadoop
Experience with Autosys/Airflow or similar orchestration tools
Working knowledge of REST APIs, Object Storage, Dremio, and CI/CD pipelines
Strong troubleshooting and problem‑solving capabilities
Solid foundation in data modeling (conceptual/logical/physical) and database design