Build Kafka producers with serialization, partitioning, and delivery guarantees
You are a Kafka infrastructure engineer. The user wants to build a production-ready Kafka producer with proper serialization, partition strategy, and delivery guarantee configuration.
What to check first
- Run kafka-broker-api-versions.sh or check the Kafka broker version to confirm producer API compatibility
- Verify the bootstrap.servers configuration points to valid broker addresses (test with nc -zv broker:9092)
- Confirm the topic exists with kafka-topics.sh --list --bootstrap-server localhost:9092 (a programmatic AdminClient check is sketched after this list)
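If you would rather run these checks from Java, here is a minimal sketch using AdminClient from kafka-clients; the broker address and the topic name my-topic are placeholders, not values defined by this skill.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import java.util.Properties;
public class ProducerPreflightCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // Connectivity check: fails fast if no broker is reachable
            System.out.println("Cluster id: " + admin.describeCluster().clusterId().get());
            // Topic existence check before producing
            boolean exists = admin.listTopics().names().get().contains("my-topic"); // placeholder topic
            System.out.println("Topic exists: " + exists);
        }
    }
}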
Steps
- Add the Kafka client dependency: kafka-clients, with a version that matches your broker (typically 3.x for current releases)
- Configure bootstrap.servers with comma-separated broker addresses and client.id for producer identification
- Set acks to all (or -1) for maximum durability, 1 for leader-only acknowledgment, or 0 for fire-and-forget
- Choose a serializer: StringSerializer for keys/values, or implement Serializer<T> for custom objects (see the serializer sketch after this list)
- Set retries to handle transient broker failures and max.in.flight.requests.per.connection to balance throughput against ordering
- Configure the partitioner: keep the default hash-based partitioner or implement Partitioner for custom routing (see the partitioner sketch after this list)
- Set compression.type to snappy or lz4 to reduce network I/O in high-volume scenarios
- Add a callback to send() to track delivery status and handle errors explicitly
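For the custom-serializer path, here is a minimal sketch of a Serializer<T> implementation. The OrderEvent domain class and the use of Jackson for JSON are illustrative assumptions, not part of this skill; Avro or Protobuf with a schema registry is the more common production choice.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;
public class OrderEventSerializer implements Serializer<OrderEvent> { // OrderEvent is a hypothetical domain class
    private final ObjectMapper mapper = new ObjectMapper();
    @Override
    public byte[] serialize(String topic, OrderEvent data) {
        if (data == null) {
            return null; // a null value acts as a tombstone on compacted topics
        }
        try {
            return mapper.writeValueAsBytes(data); // JSON for simplicity; swap in Avro/Protobuf as needed
        } catch (Exception e) {
            throw new SerializationException("Failed to serialize OrderEvent", e);
        }
    }
}
Register it with props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, OrderEventSerializer.class) in place of StringSerializer.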
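For the custom-partitioner path, here is a sketch that pins keys with a hypothetical audit- prefix to partition 0 and hashes everything else the same way the default partitioner does; the routing rule is invented purely for illustration.
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;
import java.util.Map;
public class AuditAwarePartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // sketch only: unkeyed records go to partition 0 (real code would spread them)
        }
        if (key instanceof String && ((String) key).startsWith("audit-")) {
            return 0; // hypothetical rule: keep all audit events on one partition for strict ordering
        }
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions; // same murmur2 hash as the default
    }
    @Override
    public void close() {}
    @Override
    public void configure(Map<String, ?> configs) {}
}
Enable it with props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, AuditAwarePartitioner.class).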
Code
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;
import java.util.concurrent.Future;
public class KafkaProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker connectivity
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "my-producer");
        // Serialization
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Delivery guarantees
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // Wait for all in-sync replicas
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1); // Enforce ordering; costs throughput (with enable.idempotence=true, up to 5 in flight also preserves order)
        // Performance tuning
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // 16KB batches
        // The original example was truncated here; the send-and-close flow below is a typical reconstruction
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-topic", "order-123", "example payload");
            Future<RecordMetadata> result = producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // Delivery failed after retries: log, alert, or route to a dead-letter path
                    exception.printStackTrace();
                } else {
                    System.out.printf("Delivered to %s-%d @ offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush(); // push any buffered records before close
        }
    }
}
Note: the original example was truncated at the batch-size line; the send-and-close section above is a reconstruction. See the GitHub repo for the latest full version.
Common Pitfalls
- Treating this skill as a one-shot solution — most workflows need iteration and verification
- Skipping the verification steps — you don't know it worked until you measure
- Applying this skill without understanding the underlying problem — read the related docs first
When NOT to Use This Skill
- When a simpler manual approach would take less than 10 minutes
- On critical production systems without testing in staging first
- When you don't have permission or authorization to make these changes
How to Verify It Worked
- Run the verification steps documented above
- Compare the output against your expected baseline
- Check logs for any warnings or errors — silent failures are the worst kind
Production Considerations
- Test in staging before deploying to production
- Have a rollback plan — every change should be reversible
- Monitor the affected systems for at least 24 hours after the change
Related Kafka Skills
Other Claude Code skills in the same category — free to download.
Kafka Consumer
Build Kafka consumers with consumer groups, offsets, and error handling
Kafka Streams
Build stream processing applications with Kafka Streams DSL
Kafka Connect
Configure source and sink connectors for data integration
Kafka Schema Registry
Manage Avro/Protobuf schemas with Confluent Schema Registry
Kafka Monitoring
Monitor Kafka clusters with metrics, consumer lag, and alerting
Kafka Consumer Group Setup
Configure Kafka consumer groups for parallel processing and fault tolerance
Want a Kafka skill personalized to YOUR project?
This is a generic skill that works for everyone. Our AI can generate one tailored to your exact tech stack, naming conventions, folder structure, and coding patterns — with 3x more detail.