Patreon is looking for a Senior Software Engineer, Data, to support our mission. The Data Engineering team at Patreon builds pipelines, models, and tooling that power both customer-facing and internal data products. As a Senior Software Engineer on the team, you’ll architect and scale the data foundation that underpins our creator analytics product, discovery and safety ML systems, internal product analytics, executive reporting, experimentation, and company-wide decision-making.
Job Responsibilities:
Design, build, and maintain the pipelines that power all data use cases. This includes ingesting raw data from production databases, object storage, message queues, and vendors into our Data Lake, and building core datasets and metrics
Develop intuitive, performant, and scalable data models (facts, dimensions, aggregations) that support product features, internal analytics, experimentation, and machine learning workloads
Implement robust batch and streaming pipelines using Spark, Python, and Airflow
Build pipelines that adhere to standards for accuracy, completeness, lineage, and dependency management, with monitoring and observability so teams can trust the data they’re using
Work with Product, Data Science, Infrastructure, Finance, Marketing, and Sales to turn ambiguous questions into well-scoped, high-impact data solutions
Pay down technical debt, improve automation, and follow best practices in data modeling, testing, and reliability. Mentor earlier-career engineers on the team
Requirements:
4+ years of experience in software development
2+ years of experience building scalable, production-grade data pipelines
Familiarity with SQL and distributed data processing tools such as Spark, Flink, or Kafka Streams
Strong programming foundations in Python or a similar language, with sound software engineering practices (testing, CI/CD, monitoring)
Familiarity with modern data lakes (e.g., Delta Lake, Iceberg)
Familiarity with data warehouses (e.g., Snowflake, Redshift, BigQuery) and production data stores, including relational databases (e.g., MySQL, PostgreSQL), object storage (e.g., S3), key-value stores (e.g., DynamoDB), and message queues (e.g., Kinesis, Kafka)
Excellent collaboration and communication skills: comfortable partnering with non-technical stakeholders, writing crisp design docs, giving actionable feedback, and influencing without authority across teams
Understanding of data modeling and metric design principles
Passionate about data quality, system reliability, and empowering others through well-crafted data assets
Highly motivated self-starter who thrives in a collaborative, fast-paced environment and takes pride in high-craft, high-impact work
Bachelor’s degree in Computer Science, Computer Engineering, or a related field, or equivalent experience