Want to learn more about how to build streaming data infrastructure for big data applications? Come meet with Confluent, or attend our talks, at Strata + Hadoop World in New York, September 27–29, 2016. Below are all of our activities at this year's show:
Ian Wrigley | 1:30pm–5:00pm | Location: 1B 03/04
Ian Wrigley demonstrates how to use Apache Kafka to collect, manage, and process stream data for both big data projects and general-purpose enterprise data integration; no prior knowledge of Kafka is required. Ian covers system architecture and use cases, then walks attendees through hands-on exercises in which they publish data to, and subscribe to data from, Kafka and investigate Kafka's Java and REST APIs. Ian also explores other elements of the broader Kafka ecosystem, such as Kafka Connect and Kafka Streams.
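To give a flavor of the kind of setup the hands-on exercises involve, here is a minimal sketch of the configuration a Kafka Java producer typically needs. This is an illustration only, not material from the tutorial; the broker address `localhost:9092` and the `acks` choice are placeholder assumptions.

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Builds the minimal configuration a Kafka Java producer needs.
    // The broker address and serializer choices below are common
    // defaults, not values taken from the tutorial itself.
    static Properties producerConfig() {
        Properties props = new Properties();
        // Initial broker(s) to contact; placeholder address.
        props.put("bootstrap.servers", "localhost:9092");
        // Kafka messages are byte arrays; these serializers turn
        // String keys and values into bytes.
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Wait for acknowledgement from all in-sync replicas.
        props.put("acks", "all");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerConfig();
        System.out.println("bootstrap.servers=" + props.getProperty("bootstrap.servers"));
        System.out.println("acks=" + props.getProperty("acks"));
    }
}
```

With a configuration like this, a `KafkaProducer<String, String>` would be constructed from the properties and used to send records to a topic; the consumer side is configured analogously.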
This tutorial is ideal for application developers, ETL (extract, transform, load) developers, or data scientists who need to interact with Kafka clusters as a source of, or destination for, stream data.
Neha Narkhede | 1:15–1:55pm | Location: 1 E 12/1 E 13
Neha Narkhede explains how Apache Kafka serves as a foundation for streaming data applications that consume and process real-time data streams. She introduces Kafka Connect, a system for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library, and describes the lessons companies like LinkedIn learned while building massive streaming data architectures.
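As a taste of how lightweight Kafka Connect configuration can be, a standalone file-source connector can be described in a short properties file, similar to the one shipped with Kafka's quickstart. The connector name, file path, and topic below are illustrative placeholders.

```properties
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/input.txt
topic=streaming-demo
```

Each new line appended to the file is captured by the connector and published as a record to the `streaming-demo` topic.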
Neha Narkhede | 2:05–2:45pm | Location: O'Reilly Booth (Table A)
Ask questions and learn more about Apache Kafka and Confluent Platform use cases.
Neha Narkhede | 3:35–4:00pm | Location: O’Reilly Booth (Table A)
Get a copy of Kafka: The Definitive Guide (Early Release) signed by Neha Narkhede.
Confluent Executives | 7:00–10:00pm | Location: Casa Nonna (310 West 38th Street, New York, NY 10036)
Join Confluent executives, customers, partners, and fellow stream data lovers as we take over Casa Nonna and relax after a full day of conference sessions.
Ewen Cheslack-Postava | 1:15pm–1:55pm | Location: 1 E 12/1 E 13
Ewen Cheslack-Postava explores resilient multi-data-center architecture with Apache Kafka, sharing best practices for data replication and mirroring as well as disaster scenarios and failure handling. Ewen covers four scenarios: replication and failover for disaster recovery; data produced in one location but consumed in another; an aggregate cluster for data analysis; and bidirectional replication. For each, he discusses the requirements, provides a proven architecture, and explains the benefits and limitations of the solution.
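A common building block in these multi-data-center architectures is Kafka's MirrorMaker tool, which consumes from one cluster and re-produces the data to another. A rough sketch of its two configuration files follows; the cluster addresses and group id are placeholder assumptions, not details from the talk.

```properties
# consumer.properties: reads from the source cluster
bootstrap.servers=source-dc-kafka:9092
group.id=mirror-maker-group
```

```properties
# producer.properties: writes to the destination (e.g. aggregate) cluster
bootstrap.servers=aggregate-dc-kafka:9092
```

MirrorMaker is then launched roughly as `bin/kafka-mirror-maker.sh --consumer.config consumer.properties --producer.config producer.properties --whitelist ".*"`, mirroring every topic matching the whitelist pattern.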
Jun Rao | 2:05–2:35pm | Location: 3D 04/09
Apache Kafka 0.9 introduced a number of features to make data streams secure. Jun Rao explains the motivation for these changes and the threats that Kafka's security features mitigate, discusses the design of Kafka security, and demonstrates how to secure a Kafka cluster. Jun also covers common pitfalls in securing Kafka and talks about ongoing security work.
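The security features introduced in Kafka 0.9 are enabled through broker configuration. As a hedged sketch of what SSL encryption on the broker side looks like, the fragment below uses standard Kafka config keys; the port, file paths, and passwords are placeholders you would replace with your own.

```properties
# Accept encrypted client connections on port 9093 (placeholder port)
listeners=SSL://:9093
# Encrypt broker-to-broker traffic as well
security.inter.broker.protocol=SSL
# Keystore holding this broker's certificate (placeholder paths/passwords)
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
# Truststore holding the CA used to verify peers
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
```

Clients need a matching `security.protocol=SSL` setting and their own truststore; SASL authentication and ACL-based authorization are configured through additional broker settings.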