Staff Software Engineer - Data Cloud
Rippling
About Rippling
Rippling is the first way for businesses to manage all of their HR & IT—payroll, benefits, computers, apps, and more—in one unified workforce platform.
By connecting every business system to one source of truth for employee data, businesses can automate all of the manual work they normally need to do to make employee changes. Take onboarding, for example. With Rippling, you can just click a button and set up a new employee's payroll, health insurance, work computer, and third-party apps—like Slack, Zoom, and Office 365—all within 90 seconds.
Based in San Francisco, CA, Rippling has raised $1.2B from the world’s top investors—including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock—and was named one of America's best startup employers by Forbes.
We prioritize candidate safety. Please be aware that official communication will only be sent from @Rippling.com addresses.
About the Role:
The Data Platform team builds the blocks other teams at Rippling use to create advanced HR applications at lightning speed. At the core of our technological aspirations lies this team, a group dedicated to pushing the boundaries of what's possible with data. We architect high-performance, scalable systems that power the next generation of data products, including reports, analytics, customizable workflows, search, and new products and capabilities that help customers manage and get unprecedented value from their business data.
This is a unique opportunity to work on both the product and platform layers at the same time. We obsess over the scalability and extensibility of our platform, ensuring that solutions meet needs across the breadth of Rippling's product suite, along with the applications of tomorrow. You won't just be crafting features; you'll be shaping the future of business data management.
What You'll Do:
- Develop high-quality software with attention to detail using technologies like Python, MongoDB, change data capture (CDC), and Kafka
- Leverage big data technologies like Presto, Apache Pinot, Apache Flink, and Apache Airflow
- Build an OLAP stack and data pipelines in support of Reporting products
- Build custom programming languages within the Rippling Platform
- Create data platforms, data lakes, and data ingestion systems that work at scale
- Lead mission-critical projects and deliver data ingestion capabilities end to end with high quality
- Have clear ownership of one or many products, APIs, or platform spaces
- Build and grow your engineering skills in different challenging areas and solve hard technical problems
- Influence architecture, technology selections, and trends of the whole company
Qualifications:
- 7+ years of experience in software development, preferably in fast-paced, dynamic environments.
- Solid understanding of CS fundamentals, architectures, and design patterns.
- Proven track record in building large-scale applications, APIs, and developer tools.
- Excellent at cross-functional collaboration, able to articulate technical concepts to non-technical partners.
- You thrive in a product-focused environment and are passionate about making an impact on customer experience.
- Bonus points for contributions to open-source projects (Apache Iceberg, Parquet, Spark, Hive, Flink, Delta Lake, Presto, Trino, Avro)