Eight Apache Kafka-Centric Sessions, Tutorials and Q&A Panels Will Be Delivered by Confluent Co-Founders and Engineers, Highlighting Growing Appetite for Kafka Knowledge Among Data Community
PALO ALTO, Calif.—March 28, 2016—Confluent, founded by the creators of Apache™ Kafka®, today announced that several members of its team will deliver key sessions about Kafka—including best practices for real-world use cases and implementations—at Strata + Hadoop World San Jose, which takes place March 28–31 at the San Jose Convention Center.
Kafka is increasingly being adopted as a central data platform for streaming data, and Confluent’s multiple sessions will help developers, data scientists, analysts and executives develop skills and familiarity with the stream processing platform. To address the skills gap for developers looking to implement Kafka, Confluent will host multiple tutorials, office hours and ask-me-anything (AMA) sessions designed to introduce new users to Kafka through practical explanations and tips for deployments. Sessions include:
- Tutorial: “Introduction to Apache Kafka,” with director of product and data science Joseph Adler, technical trainer Jesse Anderson, engineer Ewen Cheslack-Postava and director of education services Ian Wrigley on March 29 at 9:00 a.m. in room LL21 A
- Tutorial: “Building data pipelines with Apache Kafka,” with Adler, Cheslack-Postava and Wrigley on March 29 at 1:30 p.m. in room 210 A/E
- “Office Hour with Jay Kreps,” on March 30 at 11:50 a.m. at O’Reilly Booth, Table A
- “AMA: Apache Kafka,” with Adler, Anderson, Cheslack-Postava and Confluent co-founders and Kafka co-creators Neha Narkhede and Jun Rao on March 31 at 2:40 p.m. in room 211 A-C
Confluent will also present deeper dives into Kafka and stream processing for more experienced users looking to take their deployments to the next level. Featured sessions include:
- “Distributed stream processing with Apache Kafka,” with Confluent CEO and Kafka co-creator Jay Kreps on March 30 at 11:00 a.m. in room 210 C/G
- “Securing Apache Kafka,” with Rao on March 30 at 11:50 a.m. in room LL21 B
- “Putting Kafka into overdrive,” with system architect Gwen Shapira and LinkedIn staff site reliability engineer Todd Palino on March 30 at 5:10 p.m. in room 210 C/G
- “When one data center is not enough: Building large-scale stream infrastructure across multiple data centers with Apache Kafka” with software engineer Guozhang Wang on March 31 at 11:50 a.m. in room 210 C/G
“Businesses are facing a paradigm shift in data as they must now make sense of a growing influx of data flooding the enterprise, and the use of Apache Kafka as a central data pipeline continues to increase with the growing need for real-time data,” said Confluent CTO Neha Narkhede. “We are thrilled for the opportunity to work with aspiring Kafka engineers looking to embrace stream processing within their own organizations.”
Visit booth #838 to meet the team and learn more. Confluent will also be hosting several book signings during the event:
- “Kafka: The Definitive Guide” (O’Reilly Media, Feb. 2016) with authors Narkhede, Shapira and Palino
- March 30 at 1:00 – 1:30 p.m. at Confluent Booth #838
- March 30 at 6:20 – 6:50 p.m. at the O’Reilly Booth
- March 31 at 1:00 p.m. – 1:30 p.m. at Confluent Booth #838
- “I Heart Logs” (O’Reilly Media, Sept. 2014) with author Kreps
- March 30 at 3:20 – 3:50 p.m. at Confluent Booth #838
- March 31 at 10:30 a.m. – 11:00 a.m. at Confluent Booth #838
To continue the discussion about stream processing and Kafka, Confluent will host the inaugural Kafka Summit on April 26 at the Hilton Union Square in San Francisco. For more information or to register for Kafka Summit, please visit https://kafka-summit.org/.
About Apache Kafka
Apache Kafka is an open source technology that acts as a real-time, fault-tolerant, highly scalable messaging system. It is widely adopted for use cases such as collecting user activity data, logs, application metrics, stock ticker data and device instrumentation. Its key strength is its ability to make high-volume data available as a real-time stream for consumption by systems with very different requirements—from batch systems like Hadoop, to real-time systems that require low-latency access, to stream processing engines that transform the data streams as they arrive. This infrastructure lets companies build around a single central nervous system transmitting messages to all the different systems and applications within the organization.
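The "central nervous system" model described above rests on Kafka's core abstraction: an append-only log that many consumers read at their own pace, each tracking its own position. The following is a toy in-memory sketch of that idea in Python—illustrative only, not Kafka's actual API; the `Log` and `Consumer` names here are invented for the example:

```python
class Log:
    """A toy append-only message sequence, like one Kafka topic partition."""
    def __init__(self):
        self.messages = []

    def append(self, message):
        """A 'producer' appends a message; its offset is its position in the log."""
        self.messages.append(message)
        return len(self.messages) - 1  # offset of the newly appended message


class Consumer:
    """Each consumer keeps its own offset, so a slow batch reader and a
    fast real-time reader can share one log without interfering."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        """Return every message appended since this consumer last polled."""
        new = self.log.messages[self.offset:]
        self.offset = len(self.log.messages)
        return new


log = Log()
log.append("page_view:/home")
log.append("metric:cpu=0.42")

realtime = Consumer(log)
print(realtime.poll())   # sees both messages so far

log.append("log:login ok")
print(realtime.poll())   # sees only the newly appended message

batch = Consumer(log)    # a late joiner replays the full history from offset 0
print(batch.poll())
```

Because consumption is just reading forward from an offset, the same log can simultaneously feed a batch system that catches up periodically and a low-latency consumer that polls continuously—the property the paragraph above describes.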
About Confluent
Confluent, founded by the creators of Apache™ Kafka®, enables organizations to harness business value from stream data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides various industries—from retail, logistics and manufacturing to financial services and online social networking—with a scalable, unified, real-time data pipeline that enables applications ranging from large-volume data integration, to big data analysis with Hadoop, to real-time stream processing. Backed by Benchmark, Data Collective, Index Ventures and LinkedIn, Confluent is based in Palo Alto. To learn more, please visit www.confluent.io. Download Apache Kafka and Confluent Platform at www.confluent.io/download.
Jill Reed or Alex Cardenas
Highwire PR for Confluent
(415) 963-4174, ext. 5