Resonate is a leading provider of high-quality, AI-powered consumer data, intelligence, and technology, empowering marketers to create a more personalized world that increases customer acquisition and lifetime value. Our SaaS platform, Ignite, and our Data-as-a-Service (DaaS) offerings provide unparalleled insights into consumer motivations, values, and behaviors, enabling our clients to connect with their target audiences in more meaningful and effective ways. We are a dynamic, fast-growing company seeking passionate and innovative people to join our team!

We’re looking for a Senior Data Engineer who combines deep technical expertise with strong communication skills and a proactive mindset. You’ll design, build, and optimize complex data pipelines in a cost-conscious environment while collaborating closely with your squad. This role is ideal for engineers who love sharing their work and navigating ambiguity, and who understand that “it runs” is not the same as “it runs efficiently.” Thriving here means taking ownership, balancing innovation with operational excellence, and proactively seeking out opportunities for optimization.
Job Responsibilities:
Design and develop high-performance data pipelines using Scala/Spark on AWS EMR
Build new features and data products while maintaining operational excellence
Optimize pipelines for both performance and cost efficiency
Debug complex distributed system issues using appropriate tools and methodologies
Implement comprehensive monitoring and observability from day one
Write efficient Snowflake/Snowpark procedures across multiple languages (see the sketch after this list)
Balance new development with continuous improvement of existing systems
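
For a flavor of the Snowflake/Snowpark work mentioned above, here is a minimal Snowpark (Scala) sketch shaped like a Scala stored-procedure handler, where Snowflake passes an authenticated Session as the first argument. The object name DailyRollup, the table parameters, and the CUSTOMER_ID/SPEND columns are illustrative assumptions, not part of Resonate's actual codebase.

    import com.snowflake.snowpark.{Session, SaveMode}
    import com.snowflake.snowpark.functions._

    object DailyRollup {
      // Hypothetical stored-procedure handler: aggregate a source table
      // and persist the rollup, returning a status string to the caller.
      def run(session: Session, sourceTable: String, targetTable: String): String = {
        val rollup = session
          .table(sourceTable)                           // e.g. a daily events table (assumed)
          .groupBy(col("CUSTOMER_ID"))
          .agg(sum(col("SPEND")).as("TOTAL_SPEND"))
        rollup.write.mode(SaveMode.Overwrite).saveAsTable(targetTable)
        s"Rollup written to $targetTable"
      }
    }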
Requirements:
5+ years of hands-on experience with Apache Spark using Scala exclusively (no PySpark)
Proven track record of optimizing large-scale data pipelines for performance and cost
Strong AWS EMR experience, including fleet management and instance optimization
Proficiency in AWS Step Functions
Deep understanding of distributed computing principles and resource management
Experience debugging and tuning multi-terabyte daily workloads
Comfort working across Scala, Python, and SQL as needed
Experience with probabilistic data structures for high-cardinality data processing (see the sketch after this list)
Advanced troubleshooting abilities in distributed systems
Strong understanding of data skew mitigation strategies (also illustrated below)
Metrics-first mindset: measuring before and after optimization
Root cause analysis expertise, preventing issues rather than just fixing them
Cost-conscious, treating company money like your own
Balance between innovation and operational excellence
Proactive about optimization, seeing inefficiencies as opportunities
Communicating early and often, especially around risks and blockers
Ability to interpret needs beyond stated requirements
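
To make the probabilistic-structures and skew-mitigation items concrete, here is a minimal Spark/Scala sketch under assumed names: the events table and its user_id and country columns are hypothetical. It uses Spark's built-in approx_count_distinct, which is backed by the HyperLogLog++ probabilistic structure, and a simple key-salting pattern for skewed aggregations.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object SkewAndCardinalitySketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("skew-and-cardinality").getOrCreate()
        import spark.implicits._

        val events = spark.table("events") // hypothetical fact table

        // Probabilistic cardinality: approx_count_distinct is HyperLogLog++ under
        // the hood, trading a bounded relative error (rsd) for constant memory.
        events
          .groupBy($"country")
          .agg(approx_count_distinct($"user_id", rsd = 0.01).as("approx_users"))
          .show()

        // Skew mitigation via salting: spread each hot key across 32 sub-keys so
        // no single task owns a whole key, then roll the partials back up.
        val partials = events
          .withColumn("salt", (rand() * 32).cast("int"))
          .groupBy($"user_id", $"salt")
          .agg(count(lit(1)).as("partial_count"))
        partials
          .groupBy($"user_id")
          .agg(sum($"partial_count").as("event_count"))
          .show()

        spark.stop()
      }
    }

The two-stage aggregation costs an extra shuffle, which is the usual cost/performance tradeoff: acceptable when a single skewed key would otherwise stall the stage.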
Nice to have:
Experience designing large-scale, observable, and maintainable systems
Proven ability to balance building new capabilities with perfecting existing systems
Strong track record of mentoring and knowledge sharing
Background in cost/performance tradeoff decision-making
Passion for environments where initiative and ownership are rewarded