Getting Started with KSQL Recipes
To accelerate your stream processing project, we've pre-built a series of stream processing use cases and KSQL code samples. Much like a recipe, we provide the steps to help get you started.
Mask streaming data from an inbound topic that contains personally identifiable information (PII) and persist the output to a Kafka topic.
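As one possible sketch of this recipe, KSQL's built-in MASK function can obfuscate PII fields as events flow through; the stream and field names here (inbound_pii, name, ssn) are illustrative, not from the recipe itself:

CREATE STREAM pii_masked AS
  SELECT userid,
         MASK(name) AS name,
         MASK(ssn)  AS ssn
  FROM inbound_pii;

The resulting persistent query continuously writes the masked records to the Kafka topic backing the pii_masked stream.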
Level Up Your KSQL
Whether you are brand new to KSQL or ready to take it to production, now you can dive deep into core KSQL concepts: streams and tables, enriching unbounded data, aggregations, scalability and security configurations, and more.
Get an introduction to the concept of stream processing with Apache Kafka and KSQL.
KSQL and Core Kafka
Learn how KSQL relates to Kafka clients, how to choose the right API, and how KSQL uses Kafka topics.
KSQL Use Cases
KSQL use cases include data exploration, arbitrary filtering, streaming ETL and more.
Installing and Running KSQL
Find out how to get KSQL, start the KSQL server and CLI, along with other syntax basics.
KSQL Streams and Tables
Distinguish a STREAM from a TABLE, and discover how streaming queries are unbounded.
Reading Kafka Data from KSQL
Explore Kafka topic data. Create a STREAM or TABLE. Identify fields, metadata and formats.
Streaming and Unbounded Data
Stream queries, read topics, discover persistent and non-persistent queries and more.
Enriching Data with KSQL
Use scalar functions, change field types, filter and merge data and rekey streams with KSQL.
Aggregations in KSQL
Review various aggregate functions (e.g., MAX, MIN), windowing and late-arriving data.
Taking KSQL to Production
Build a streaming ETL pipeline, scale processing, secure KSQL and monitor KSQL performance.
INSERT INTO in KSQL
A brief tutorial on how to use INSERT INTO in KSQL.
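In outline, INSERT INTO appends the results of a query to an existing stream whose schema matches; the stream names below (orders_uk, orders_us, all_orders) are illustrative:

CREATE STREAM all_orders AS SELECT * FROM orders_uk;
INSERT INTO all_orders SELECT * FROM orders_us;

Both queries then run continuously, merging the two source streams into a single output topic.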
STRUCT in KSQL
A brief tutorial on how to use STRUCT (Nested Data) in KSQL.
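As a brief sketch (the orders stream and its fields are illustrative), STRUCT declares nested data in a schema, and the -> operator reads individual nested fields:

CREATE STREAM orders (orderid INT,
                      address STRUCT<street VARCHAR, city VARCHAR>)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

SELECT orderid, address->city FROM orders;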
Use Cases and Examples
Apache Kafka is a popular choice for powering data pipelines. KSQL makes it simple to transform data within the pipeline, readying messages to cleanly land in another system.
CREATE STREAM vip_actions AS
  SELECT userid, page, action
  FROM clickstream c
  LEFT JOIN users u ON c.userid = u.user_id
  WHERE u.level = 'Platinum';
KSQL is a good fit for identifying patterns or anomalies in real-time data. By processing the stream as data arrives, you can identify and properly surface out-of-the-ordinary events with millisecond latency.
CREATE TABLE possible_fraud AS
  SELECT card_number, count(*)
  FROM authorization_attempts
  WINDOW TUMBLING (SIZE 5 SECONDS)
  GROUP BY card_number
  HAVING count(*) > 3;
Kafka’s ability to provide scalable, ordered messages with stream processing makes it a common solution for log data monitoring and alerting. KSQL lends a familiar syntax for tracking, understanding, and managing alerts.
CREATE TABLE error_counts AS
  SELECT error_code, count(*)
  FROM monitoring_stream
  WINDOW TUMBLING (SIZE 1 MINUTE)
  WHERE type = 'ERROR'
  GROUP BY error_code;