Apache Kafka® is the technology behind event streaming, which is fast becoming the central nervous system of flexible, scalable, modern data architectures. Customers want to connect their databases, data warehouses, applications, microservices, and more to their event streaming platform. To connect to Apache Kafka, you need a connector!
This two-part online talk series helps you understand the value of building a connector to Kafka, how to build connectors, and why you should work with Confluent, the original authors of Apache Kafka, to get it done.
This online talk focuses on the key business drivers behind connecting to Kafka and introduces the new Confluent Verified Integrations Program. We’ll discuss what it takes to participate, along with the process and benefits of the program.
Audience: Business and technical audiences from companies that develop BI or analytics applications, databases, data storage solutions, IoT applications, messaging, monitoring, reporting or visualization applications; or any solution that could connect to Kafka.
This online talk dives into the new Verified Integrations Program and its integration requirements, the Connect API, and the sources and sinks built on Kafka Connect. We cover the verification steps and provide code samples created by popular application and database companies. We will also discuss the resources available to support you through the connector development process.
Audience: Technically focused developers, engineers, and solution architects from companies that build BI or analytics applications, databases, data storage solutions, IoT applications, messaging, monitoring, reporting or visualization applications, or any solution that could connect to Kafka. Part 1 is recommended, but not required, to attend this webinar.
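To give a rough sense of what a Kafka Connect integration looks like in practice, a sink connector is typically driven by a small properties file. The sketch below uses the FileStreamSinkConnector that ships with Apache Kafka; the connector name, topic, and file path are hypothetical placeholders:

```properties
# Minimal standalone-mode sink connector configuration (illustrative only).
# Reads records from a Kafka topic and appends them to a local file.
name=local-file-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=test-topic
file=/tmp/sink-output.txt
```

A custom connector built through the program follows the same pattern: your connector class implements the Connect API, and users configure it with properties like these.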
Learn more about Confluent's Verified Integrations Program