Data Engineer
Talkdesk
At Talkdesk, we are courageous innovators focused on helping organizations around the world create better customer experiences. Our AI-powered cloud contact center solutions optimize our customers’ most critical customer service processes. We are recognized as a Contact Center as a Service (CCaaS) leader by influential research organizations including Gartner. With $498 million in total funding, a valuation of more than $10 billion, and a ranking of #8 on the Forbes Cloud 100 list, now is the time to be part of the Talkdesk legacy to help accelerate our success in a new decade of transformational growth.
We champion an inclusive and diverse culture representative of the communities in which we live and serve. And we give back to our community by volunteering our time, supporting non-profits and minimizing our global footprint.
Responsibilities:
- Develop, deploy and maintain a Data Mesh solution to power Talkdesk’s Data Science, BI and Reporting products
- Design batch or streaming dataflows capable of processing large quantities of fast-moving unstructured data
- Monitor dataflows and the underlying systems, promoting the changes necessary to ensure scalable, high-performance solutions and to assure data quality and availability
- Work closely with the rest of Talkdesk’s engineering organization to deliver a world-class Data Mesh solution
Requirements:
- Fluency in either Scala or Python, with the ability to work effectively in both;
- Comfort with SQL development and data modeling (RDBMS and NoSQL);
- Ability to work with very large data stores using Big Data processing tools;
- Strength in designing, developing and maintaining batch and real-time data pipelines;
- Experience with Delta Lake and Apache Spark Structured Streaming;
- Strong understanding of distributed computing principles and distributed systems;
- Strong written and verbal English communication skills.
Nice to have / Pluses:
- BS/MS degree in Computer Engineering, Computer Science, Applied Math, or a similar area;
- Agile development methodology/Scrum experience;
- Databricks experience;
- Elasticsearch index modeling, integration and usage;
- Java backend development;
- Experience integrating data from multiple data sources;
- Knowledge of Ansible and Terraform;
- Good understanding of the Lambda and Kappa architectures, along with their advantages and drawbacks;
- Experience with Flink, Kafka Streams, Storm or similar;
- Experience with data warehouses and related concepts; knowledge of Redshift or other data warehousing solutions;
- Experience with messaging systems such as Kafka or RabbitMQ;
- Experience with cloud environments such as AWS or Google Cloud.