
Deploying Apache Kafka on AWS Elastic Block Store (EBS)

Apache Kafka is designed to be highly performant, reliable, scalable, and fault tolerant. At the same time, the performance and reliability of a Kafka cluster are highly dependent on the underlying infrastructure. That interdependence makes the right infrastructure choices critical to any successful deployment. For users who have decided to deploy Kafka on the AWS Cloud, making the right choices for storage infrastructure can seem daunting. The reality is that selecting reasonable infrastructure is easier than you might think.

Let’s start by thinking about the Kafka cluster at a high level. At its core, the Kafka cluster is a set of servers that offer a shared service where data can be published and retrieved by external clients. Each server is referred to as a Kafka broker, and the data managed by the brokers is logically divided into distinct topics. Data for each topic is persisted locally on the brokers, in a replicated and partitioned manner that prevents data loss or catastrophic disruption if a broker fails. By design, Kafka clusters will automatically re-replicate data and re-balance the client connections when a broker node is lost from the cluster. The brokers are optimized to aggregate the physical I/O for the topic data, resulting in a general pattern of sequential operations against the storage tier. Readers interested in a more comprehensive discussion of the Kafka architecture can refer to the documentation.
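
To make the topic, partition, and replica relationship concrete, here is a minimal sketch that creates a replicated, partitioned topic with the confluent-kafka Python client. The broker address, topic name, and partition and replica counts are illustrative assumptions, not recommendations.

```python
# A minimal sketch: create a replicated, partitioned topic with the
# confluent-kafka Python AdminClient. Broker address, topic name, and
# partition/replica counts are placeholders for illustration.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092"})

# Six partitions spread load across brokers; a replication factor of 3
# keeps a copy of each partition on three different brokers.
topic = NewTopic("clickstream", num_partitions=6, replication_factor=3)

for name, future in admin.create_topics([topic]).items():
    try:
        future.result()  # block until the brokers confirm creation
        print(f"Created topic {name}")
    except Exception as exc:
        print(f"Failed to create topic {name}: {exc}")
```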

Consider what this implies for the underlying storage infrastructure in a Kafka cluster. Absolute performance is critically important: higher performance reduces the time needed to persist data as it arrives in the cluster, as well as the time needed to retrieve data for a consumer or for a new cluster node when re-replication is required. EBS volumes in AWS are an excellent option here. They provide consistent levels of I/O performance (IOPS) and great flexibility in how they are deployed. A properly designed Kafka cluster built on EBS storage can virtually eliminate the re-replication overhead that would be triggered by an instance failure, because the EBS volumes can be reassigned to a new instance quickly and easily. And from an operations perspective, a Kafka cluster deployed on EBS storage can be shut down cleanly without risk of data loss, a capability that is not possible when using EC2 instance store volumes.
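
To illustrate how a volume can follow the data to a replacement broker, here is a hedged boto3 sketch that detaches a data volume from a failed broker instance and attaches it to a new one. The volume and instance IDs are hypothetical placeholders, and the replacement instance would typically be configured with the same broker.id and log.dirs as the broker it replaces so the cluster recognizes the existing data.

```python
# A sketch (not a production runbook): move a broker's EBS data volume to a
# replacement EC2 instance with boto3. The IDs below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0123456789abcdef0"      # data volume of the failed broker (placeholder)
NEW_INSTANCE_ID = "i-0fedcba9876543210"  # replacement broker instance (placeholder)

# Detach the volume from the failed instance and wait until it is free.
ec2.detach_volume(VolumeId=VOLUME_ID, Force=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Attach the same volume, with the broker's log directories intact,
# to the replacement instance.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=NEW_INSTANCE_ID, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])
```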

This is why we view the new st1 and sc1 EBS offerings from Amazon as very promising. These volumes are optimized for sequential I/O workloads and cost up to 50% less than earlier EBS offerings, and we observed that they delivered the performance and reliability needed for Kafka environments. We will conduct more detailed testing and welcome hearing what others have found. (See the Amazon blog post: EBS Update – New Cold Storage and Throughput Options.)
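
As a concrete example of provisioning such a volume, the boto3 sketch below creates a throughput-optimized (st1) volume for a broker's log directory. The size, availability zone, and tag are assumptions for illustration; st1 baseline throughput scales with volume size, so real sizing depends on your workload.

```python
# A sketch: provision an st1 (throughput-optimized HDD) EBS volume for a
# Kafka broker's log directory. Size, AZ, and tags are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # must match the broker instance's AZ
    Size=2000,                       # GiB; st1 baseline throughput scales with size
    VolumeType="st1",                # optimized for large, sequential I/O
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Name", "Value": "kafka-broker-1-data"}],
    }],
)
print("Created volume", volume["VolumeId"])
```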

The other infrastructure components (CPU, memory, networking) also play an important role in the capabilities of any Kafka cluster. In future blogs, I’ll discuss the considerations for those sub-systems in greater detail. It was important to start with storage, because reliable, persistent data platforms such as Kafka are impossible without it.
