Build stream processing applications with Kafka Streams DSL
You are a Kafka Streams architect. The user wants to build a stream processing application using the Kafka Streams DSL to process real-time event data with stateless and stateful operations.
What to check first
- Verify the Kafka broker is running on `localhost:9092` (or your configured bootstrap server) with `kafka-broker-api-versions.sh --bootstrap-server localhost:9092`
- Confirm the Kafka topics exist or will be auto-created: check with `kafka-topics.sh --list --bootstrap-server localhost:9092`
- Ensure Java 11+ and Maven/Gradle are installed with `java -version` and `mvn -version`
Steps
- Add the Kafka Streams dependency to `pom.xml`; the version should match your Kafka broker version (e.g., 3.6.0)
- Create a `StreamsBuilder` instance; this is the DSL entry point for topology construction
- Define the source of the topology using `builder.stream()` or `builder.table()` for your input topic(s)
- Apply stateless transformations: `map()`, `filter()`, `flatMap()`, or `split()` (which replaces the deprecated `branch()`) based on your processing logic
- For stateful operations, use `aggregate()`, `reduce()`, or `count()` with an explicit state store name
- Configure windowing if needed: `TimeWindows.ofSizeWithNoGrace()`, `SessionWindows.ofInactivityGapWithNoGrace()`, or `SlidingWindows.ofTimeDifferenceWithNoGrace()`; the older `of()`/`with()` factories are deprecated as of Kafka 3.0 (a windowed-count sketch follows the code example below)
- Write results to the output topic(s) using `.to()` for a KStream or `.toStream().to()` for a KTable
- Build the topology with `builder.build()`, configure the `StreamsConfig` properties, and create and start a `KafkaStreams` instance
- Add a shutdown hook to close the streams application gracefully
Code
```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import java.util.Properties;

public class OrderProcessingApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // The source example was truncated at this point; the topology below is a
        // minimal reconstruction. Topic names "orders" and "orders-processed" are
        // placeholders -- substitute your own.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream(
                "orders", Consumed.with(Serdes.String(), Serdes.String()));
        orders.filter((key, value) -> value != null && !value.isEmpty()) // drop empty events
              .mapValues(String::trim)                                   // stateless transform
              .to("orders-processed", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); // graceful shutdown
        streams.start();
    }
}
```
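The example above stops at stateless operations. Below is a minimal sketch of the stateful windowing steps: a per-key count over 5-minute tumbling windows. The topic names (`orders`, `order-counts`), the store name, and the window size are illustrative assumptions, not from the source.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;
import java.time.Duration;

public class OrderCountTopology {
    // Counts events per key in 5-minute tumbling windows.
    static void addWindowedCount(StreamsBuilder builder) {
        KStream<String, String> orders = builder.stream(
                "orders", Consumed.with(Serdes.String(), Serdes.String()));
        KTable<Windowed<String>, Long> counts = orders
                .groupByKey()
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
                .count(Materialized.as("order-counts-store")); // explicit state store name
        counts.toStream() // a windowed KTable must become a KStream before .to()
              .map((windowedKey, count) -> KeyValue.pair(
                      windowedKey.key() + "@" + windowedKey.window().start(), count))
              .to("order-counts", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
```

Note that a windowed `count()` yields a `KTable<Windowed<String>, Long>`, which is why `.toStream()` precedes `.to()` here, matching the KTable step in the list above.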
Common Pitfalls
- Treating this skill as a one-shot solution — most workflows need iteration and verification
- Skipping the verification steps — you don't know it worked until you measure
- Applying this skill without understanding the underlying problem — read the related docs first
When NOT to Use This Skill
- When a simpler manual approach would take less than 10 minutes
- On critical production systems without testing in staging first
- When you don't have permission or authorization to make these changes
How to Verify It Worked
- Produce a test record to the input topic (e.g., with `kafka-console-producer.sh`) and confirm the transformed record arrives on the output topic: `kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic orders-processed --from-beginning`
- Compare the output against your expected baseline
- Check logs for any warnings or errors — silent failures are the worst kind
Production Considerations
- Test in staging before deploying to production
- Have a rollback plan — every change should be reversible
- Monitor the affected systems for at least 24 hours after the change
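For the monitoring point, Kafka Streams exposes hooks you can wire into alerting. A minimal sketch of a helper you might add to `OrderProcessingApp` and call before `streams.start()`; the stdout/stderr logging and the choice of `REPLACE_THREAD` are assumptions to adapt to your setup:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

// Call before streams.start(); route output to your real logging/alerting.
static void addMonitoringHooks(KafkaStreams streams) {
    // Log every state transition (e.g., REBALANCING -> RUNNING, or a drop to ERROR)
    streams.setStateListener((newState, oldState) ->
            System.out.printf("Streams state: %s -> %s%n", oldState, newState));

    // Replace a crashed stream thread instead of letting the whole app die
    streams.setUncaughtExceptionHandler(exception -> {
        System.err.println("Stream thread failed: " + exception);
        return StreamThreadExceptionResponse.REPLACE_THREAD;
    });
}
```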
Related Kafka Skills
- Kafka Producer: build Kafka producers with serialization, partitioning, and delivery guarantees
- Kafka Consumer: build Kafka consumers with consumer groups, offsets, and error handling
- Kafka Connect: configure source and sink connectors for data integration
- Kafka Schema Registry: manage Avro/Protobuf schemas with Confluent Schema Registry
- Kafka Monitoring: monitor Kafka clusters with metrics, consumer lag, and alerting
- Kafka Consumer Group Setup: configure Kafka consumer groups for parallel processing and fault tolerance