There is a major shift happening in how data powers the core of a company's business. Businesses operate in real time, and the software they use is catching up. Rather than processing data only at the end of the day, why not react to it continuously as it arrives? This is the emerging world of stream processing. Apache Kafka® was built with the vision of becoming the central nervous system that makes data available in real time to every application that needs it.
This talk explains how companies are using the concepts of events and streams to transform their businesses to meet the demands of this digital future, and how Apache Kafka serves as a foundation for streaming data applications.
This Tech Talk will be delivered by Gwen Shapira, a principal data architect at Confluent who helps customers succeed with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time, reliable data processing pipelines using Apache Kafka. Gwen is a co-author of “Kafka: The Definitive Guide” and “Hadoop Application Architectures”, and a frequent presenter at industry conferences. She is also a committer on the Apache Kafka and Apache Sqoop projects. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.