
Celebrating Over 100 Supported Apache Kafka Connectors

We just released Confluent Platform 5.4, one of our most important releases to date in terms of features that help enterprises take Apache Kafka® and event streaming into production. These include Role-Based Access Control (RBAC), Structured Audit Logs, Multi-Region Clusters, and Schema Validation.

In the spirit of innovation around Kafka, we are excited to announce that we have now reached over 100 supported connectors for getting data in and out of Kafka.

A rich ecosystem of 100+ prebuilt connectors

One of our main goals here at Confluent is to enhance productivity for Kafka developers. This means delivering capabilities that help developers spend more time building the event streaming applications that will actually change their business, and less time figuring out the inner workings of Kafka.

If you are a developer working with open source Apache Kafka, you have two options to connect existing data sources and sinks:

  1. Develop your own connectors using the Kafka Connect framework: the challenge here is the time and effort required. Building a production-quality connector can take several weeks, and that excludes the time needed to fix issues that arise during actual operations.
  2. Leverage existing open source connectors already built by the community: the challenge in this case is the inherent risk of using technology that isn’t supported by an expert vendor. If your organization is deploying Kafka and event streaming into production, this is usually a gamble you cannot afford to make.
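Whichever option you choose, deploying a connector is the same: you POST a JSON configuration to a Kafka Connect worker's REST API. The sketch below illustrates that flow; the connector class, connection settings, and worker URL are illustrative assumptions, not details from this post.

```python
import json


def connector_payload(name, config):
    """Build the JSON body expected by Kafka Connect's POST /connectors endpoint."""
    return {"name": name, "config": config}


# Hypothetical example: a JDBC source connector reading an "orders" table.
payload = connector_payload(
    "orders-source",
    {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db:5432/orders",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "topic.prefix": "pg-",
    },
)

print(json.dumps(payload, indent=2))

# To actually deploy it, you would submit the payload to a running Connect
# worker (commented out here because it requires a live cluster):
#
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8083/connectors",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

Because every connector, whether homegrown, community-built, or Confluent-supported, is driven through this same configuration interface, swapping in a supported connector is a configuration change rather than a code change.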

That’s why in 2019, we decided to supercharge our efforts in this space. We started the year with fewer than 10 supported connectors and ended it with more than 100.

Most of these connectors are developed and supported by Confluent, but we have also worked closely with our technology partners. We have developed a program for independent software vendors (ISVs) and partners to verify connectors, assuring customers and users of connector interoperability and functionality with Kafka and Confluent Platform. About a quarter of the 100+ connectors are graduates of this program.

We also gathered extensive customer feedback to prioritize building the connectors you need, so we are confident that you will find the most popular and valuable connectors in our catalog, including Salesforce, InfluxDB, Google BigQuery, Azure Blob Storage, AWS Lambda, Syslog, and more.

Confluent Hub: Your one-stop shop for Kafka connectors

To further simplify how you leverage our connector portfolio, we offer Confluent Hub, an online marketplace to easily browse, search, and filter for connectors and other plugins that best fit your data movement needs for Kafka.

You may already know Confluent Hub, which we first introduced back in June 2018. What’s newsworthy is that we’ve given it a complete makeover. The new Confluent Hub features updated graphics, cleaner layouts, and refreshed content. Most importantly, it delivers a dramatically enhanced user experience: a left-hand filtering panel lets you granularly search for plugins based on critical attributes, such as:

  • Type: a sink connector, source connector, converter, or transform
  • Enterprise support: Confluent or partner supported
  • Licensing: commercially licensed or free (open source or community licensed)
  • Confluent Cloud availability: whether it’s available fully managed in our hosted SaaS offering


You can read about how the new Confluent Hub makes finding connectors easier than ever in this blog post by Ethan Ruhe.

Ready to get started?

Thanks to this important milestone of 100+ supported connectors and a revamped Confluent Hub, it has never been easier for you and your organization to instantly connect your most popular data sources and sinks to Kafka. We encourage you to explore Confluent Hub to find the Kafka connectors that are right for your use cases.

If you’d like to try the rest of our enterprise features from the 5.4 release, you can download Confluent Platform to take Kafka into production for your mission-critical use cases.

Mauricio Barra is a product marketing manager at Confluent, responsible for the go-to-market strategy of Confluent Platform. His primary goal is to drive clarity and awareness within the Apache Kafka community about the value proposition of Confluent Platform as an enterprise-ready event streaming platform. Mauricio has more than seven years of experience in enterprise technology, having previously worked on storage, availability, and integrated systems products at VMware.
