At Schwab, you’re empowered to make an impact on your career. Here, innovative thought meets creative problem solving, helping us “challenge the status quo” and transform the finance industry together. Schwab Technology Services enables the future of how clients manage their money by providing innovative and reliable technology products and services as part of our ongoing commitment to democratize access to investing and financial planning. As part of Schwab’s AI Engineering & Operations team, you will build the next generation of Generative AI solutions that shape the future of technology at Schwab. In this role, you will contribute to the development and deployment of AI products that are instrumental in driving data-informed business decisions and elevating client experiences. You’ll collaborate across teams to deliver scalable, secure, and high-performing AI systems that align with Schwab’s innovation strategy and operational goals.
Job Responsibilities:
Build the next generation of Generative AI solutions that shape the future of technology at Schwab
Contribute to the development and deployment of AI products that are instrumental in driving data-informed business decisions and elevating client experiences
Collaborate across teams to deliver scalable, secure, and high-performing AI systems that align with Schwab’s innovation strategy and operational goals
Requirements:
8+ years of ETL engineering experience, with 4+ years as a hands-on senior engineer
5+ years designing and developing scalable data pipelines that integrate large, complex datasets across diverse environments, including structured and unstructured data formats. Must be proficient in working with various API types (REST, SOAP, GraphQL), handling authenticated data sources, and implementing complex extraction procedures
5+ years building and maintaining production-grade data pipelines across multiple data delivery modes (streaming, batch) and data classification protocols (internal, confidential, PII)
5+ years of experience with data governance and risk frameworks, including metadata management and classification
Bachelor’s degree in Computer Science, Data Engineering, Mathematics, Analytics, or related field
3+ years working hands-on with containers and cloud-native applications
Applicants must be currently authorized to work in the United States on a full-time basis without employer sponsorship
Nice to have:
Strong ETL/data engineering fundamentals and experience across the tech stack
Commitment to quality, driving high standards that include writing tests at all levels
Strong written and verbal communication skills to clearly convey ideas and feedback
Mentoring junior engineers and supporting their technical growth through code reviews and guidance
Mindset of continuous learning and improvement
Ability to solve complex problems with ambiguous or incomplete data in distributed systems
Demonstrated business domain knowledge relevant to products you have previously worked on
Curiosity about new technologies and processes, proactively sharing knowledge and seeking improvement
Experience with Python
Master’s or advanced degree in Computer Science, Data Engineering, Mathematics, Analytics, or related field
What we offer:
401(k) with company match and employee stock purchase plan
Paid time for vacation, volunteering, and a 28-day sabbatical after every 5 years of service for eligible positions