Companies new and old are recognizing the importance of a low-latency, scalable, fault-tolerant data backbone in the form of the Apache Kafka® streaming platform. With Apache Kafka, developers can integrate multiple sources and systems, enabling low-latency analytics, event-driven architectures, and the population of multiple downstream systems.
In this talk, we’ll look at one of the most common integration requirements – connecting databases to Apache Kafka. We’ll consider the idea that all data is a stream of events, including the data residing within a database. We’ll look at why we’d want to stream data from a database, including driving applications in Apache Kafka from upstream events. We’ll discuss the different methods for connecting databases to Apache Kafka, and the pros and cons of each. Techniques including change data capture (CDC) and Kafka Connect will be covered, along with KSQL, streaming SQL for Apache Kafka, for performing transformations such as joins on the inbound data.
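To make the CDC approach concrete, here is a minimal sketch of a Kafka Connect source connector configuration using the Debezium MySQL connector, submitted to the Connect REST API. The hostnames, credentials, and table name below are placeholders for illustration, not values from the talk or the demo:

{
  "name": "mysql-source-orders",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-password",
    "database.server.id": "42",
    "database.server.name": "asgard",
    "table.whitelist": "demo.orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.demo"
  }
}

Once a connector like this is running, every insert, update, and delete against the demo.orders table is captured from the database’s transaction log and written as an event to a Kafka topic (here, asgard.demo.orders), with no changes required to the upstream application.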
Watch now to learn more.
You can try out all the examples shown in the talk here: https://github.com/confluentinc/demo-scene/blob/master/no-more-silos/no-more-silos.adoc
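As a taste of the kind of KSQL covered in the talk, here is a minimal sketch of a stream-table join that enriches inbound order events with customer details; the topic names, key, and column names are illustrative assumptions rather than the exact ones used in the demo:

-- Register the CDC topics as a stream of orders and a table of customers
CREATE STREAM orders WITH (KAFKA_TOPIC='asgard.demo.orders', VALUE_FORMAT='AVRO');
CREATE TABLE customers WITH (KAFKA_TOPIC='asgard.demo.customers', VALUE_FORMAT='AVRO', KEY='id');

-- Enrich each order event with customer details as it arrives
CREATE STREAM orders_enriched AS
  SELECT o.order_id, o.amount, c.name, c.email
  FROM orders o
  LEFT JOIN customers c ON o.customer_id = c.id;

The enriched stream is itself backed by a Kafka topic, so it can be sent straight on to downstream systems via Kafka Connect sink connectors.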