
Log Compaction | Highlights in the Apache Kafka and Stream Processing Community | April 2016

The Apache Kafka community was crazy-busy last month. We released a technical preview of Kafka Streams and then voted on a release plan for Kafka 0.10.0. We accelerated the discussion of a few key proposals in order to make the release, rolled out two release candidates, and then decided to put the release on hold in order to get a few more changes in.

  • Kafka Streams tech preview! If you are interested in a new, lightweight, easy-to-use way to process streams of data, I highly recommend you take a look.
  • If you are interested in the theory of stream processing, check out Making Sense of Stream Processing and download the eBook while it’s still available. The book is written by Martin Kleppmann, and if you’ve been interested in Kafka and stream processing for a while, you know his work is always worth reading.
  • Wondering what will be included in the 0.10.0 release? Worried whether any critical issues are left? Take a look at our release plan.
  • Pull request implementing KIP-36 was merged. KIP-36 adds rack-awareness to Kafka. Brokers can now be assigned to specific racks, and when topics and partitions are created, the replicas will be assigned to nodes based on their rack placement.
  • Pull request implementing KIP-51 was merged. KIP-51 is a very small change to the Connect REST API, allowing users to ask for a list of available connectors.
  • Pull request implementing KIP-45 was merged. KIP-45 is a small change to the new consumer API which standardizes the types of containers accepted by the various consumer API calls.
  • KIP-43, which adds support for standard SASL mechanisms in addition to Kerberos, was voted in. We will try to get this merged into Kafka in release 0.10.0.
  • There are quite a few KIPs under very active discussion:
    • KIP-4, adding an API for administrative actions such as creating new topics, requires some modifications to MetadataRequest.
    • KIP-35 adds a new protocol for getting the current version of all requests supported by a Kafka broker. This protocol improvement will make it possible to write Kafka clients that will work with brokers of different versions.
    • KIP-33 adds time-based indexes to Kafka, supporting both time-based log purging and time-based data lookup.
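To make the rack-awareness of KIP-36 above concrete, here is a minimal Python sketch of rack-aware replica assignment. It only illustrates the core idea of interleaving brokers from different racks so that each partition's replicas span racks where possible; Kafka's actual algorithm also randomizes starting positions and handles uneven rack sizes, and the function and parameter names here are made up for this sketch.

```python
def assign_replicas(brokers_by_rack, num_partitions, replication_factor):
    """Illustrative rack-aware assignment (not Kafka's real code).

    Interleave brokers from different racks, then assign replicas
    round-robin over that interleaved list, so consecutive replicas
    of a partition tend to land on different racks.
    """
    # Build a rack-alternating broker list, e.g. rackA:b0, rackB:b2, rackA:b1, ...
    remaining = {rack: iter(brokers) for rack, brokers in sorted(brokers_by_rack.items())}
    interleaved = []
    while remaining:
        for rack in list(remaining):
            try:
                interleaved.append(next(remaining[rack]))
            except StopIteration:
                del remaining[rack]  # this rack has no brokers left

    # Round-robin partitions over the interleaved broker list.
    n = len(interleaved)
    return {
        p: [interleaved[(p + i) % n] for i in range(replication_factor)]
        for p in range(num_partitions)
    }
```

For example, with brokers 0 and 1 on one rack and 2 and 3 on another, every partition's two replicas end up on different racks, which is the failure-isolation property KIP-36 is after.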
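The time-based index described in KIP-33 above can be sketched as a sorted list of (timestamp, offset) entries searched with binary search. This is an illustrative in-memory model under assumed names only; Kafka's real time index is a memory-mapped file per log segment with additional sparsity and validation rules.

```python
import bisect

class TimeIndex:
    """Toy model of a KIP-33-style time index (names are illustrative)."""

    def __init__(self):
        self._entries = []  # (timestamp, offset) pairs, timestamps strictly increasing

    def maybe_append(self, timestamp, offset):
        # Only index entries whose timestamp advances, keeping the list sorted.
        if not self._entries or timestamp > self._entries[-1][0]:
            self._entries.append((timestamp, offset))

    def lookup(self, target_ts):
        """Offset of the first indexed entry with timestamp >= target_ts,
        or None if every indexed timestamp is older than target_ts."""
        i = bisect.bisect_left(self._entries, (target_ts, -1))
        return self._entries[i][1] if i < len(self._entries) else None
```

Time-based lookup ("give me data from timestamp T onward") is then a single binary search, and time-based purging amounts to dropping segments whose newest indexed timestamp is older than the retention cutoff.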

That’s all for now! Got a newsworthy item? Let us know. If you are interested in contributing to Apache Kafka, check out the contributor guide to help you get started.
