Manage Avro/Protobuf schemas with Confluent Schema Registry
You are a Kafka infrastructure engineer. The user wants to manage Avro and Protobuf schemas using Confluent Schema Registry, including registering schemas, retrieving them, and validating compatibility.
What to check first
- Verify Schema Registry is running: curl http://localhost:8081/subjects (a programmatic equivalent is sketched just below this list)
- Confirm Kafka broker connectivity and the Schema Registry URL in your client configuration
- Check installed dependencies: mvn dependency:tree | grep schema-registry or pip list | grep confluent
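If the endpoint responds and the client dependency is installed, the same check can be run programmatically with the client library used later in this skill. A minimal sketch, assuming the registry is reachable at http://localhost:8081; the class name and URL are illustrative, not part of the skill:

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class RegistryHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the registry listens on localhost:8081; adjust for your environment
        SchemaRegistryClient client = new CachedSchemaRegistryClient("http://localhost:8081", 10);
        // Lists every registered subject (equivalent to GET /subjects)
        System.out.println(client.getAllSubjects());
    }
}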
Steps
- Start with Schema Registry running (Docker: docker run -d -p 8081:8081 confluentinc/cp-schema-registry:7.x, which requires Zookeeper/Kafka)
- Create an Avro schema file (.avsc) with namespace, type, and fields clearly defined
- Register the schema via POST to http://{registry-host}:8081/subjects/{subject-name}/versions with Content-Type: application/vnd.schemaregistry.v1+json
- Retrieve a registered schema using GET /subjects/{subject-name}/versions/{version} or /subjects/{subject-name}/versions/latest
- Set the compatibility mode via PUT /config/{subject-name} with body {"compatibility":"BACKWARD"} (or FORWARD, FULL, NONE)
- Validate a record against the schema before publishing by using the schema ID returned during registration (see the compatibility sketch after this list)
- Use the SchemaRegistryClient in your producer/consumer to automatically encode/decode with schema ID prefixes (see the producer sketch after this list)
- Monitor schema versions with GET /subjects/{subject-name}/versions to see the full history
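The registration, compatibility, and pre-publish validation steps can also be driven from Java rather than raw HTTP. The sketch below uses the Avro-flavoured client methods that also appear in the Code section (newer kafka-schema-registry-client versions wrap the schema in io.confluent.kafka.schemaregistry.avro.AvroSchema instead); the registry URL, the subject name user-value, and the example schema are assumptions:

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import org.apache.avro.Schema;

public class CompatibilityExample {
    public static void main(String[] args) throws Exception {
        SchemaRegistryClient client = new CachedSchemaRegistryClient("http://localhost:8081", 100);
        String subject = "user-value"; // hypothetical subject name

        // Example schema with namespace, type, and fields defined (normally loaded from an .avsc file)
        String avsc = "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"email\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(avsc);

        // Enforce BACKWARD compatibility for this subject (PUT /config/{subject-name})
        client.updateCompatibility(subject, "BACKWARD");

        // Check the candidate schema against the latest registered version before publishing;
        // skip the check on the very first registration, when no version exists yet
        boolean compatible = !client.getAllSubjects().contains(subject)
                || client.testCompatibility(subject, schema);

        if (compatible) {
            // POST /subjects/{subject-name}/versions; returns the global schema ID
            int schemaId = client.register(subject, schema);
            System.out.println("Registered schema id: " + schemaId);
        }
    }
}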
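For the producer/consumer step, the usual pattern is to configure KafkaAvroSerializer with schema.registry.url and let it look up (or register) the schema and prefix each message with the schema ID. A minimal producer sketch; the broker address, the topic name users, and the record fields are assumptions for illustration:

import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class AvroProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumed broker address
        props.put("schema.registry.url", "http://localhost:8081"); // assumed registry address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", KafkaAvroSerializer.class.getName());

        String avsc = "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"email\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(avsc);

        GenericRecord user = new GenericData.Record(schema);
        user.put("id", 42L);
        user.put("email", "jane@example.com");

        // The serializer talks to the registry and prefixes each message with its schema ID
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("users", "42", user));
        }
    }
}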
Code
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException;
import org.apache.avro.Schema;
import java.io.IOException;

public class SchemaRegistryManager {

    private final SchemaRegistryClient client;
    private final String schemaRegistryUrl;

    public SchemaRegistryManager(String schemaRegistryUrl) {
        this.schemaRegistryUrl = schemaRegistryUrl;
        // Cache up to 100 schemas per subject to avoid repeated registry lookups
        this.client = new CachedSchemaRegistryClient(schemaRegistryUrl, 100);
    }

    // Registers the schema under the given subject and returns its global schema ID
    public int registerSchema(String subject, String avroSchemaJson)
            throws IOException, RestClientException {
        Schema schema = new Schema.Parser().parse(avroSchemaJson);
        return client.register(subject, schema);
    }

    // Fetches the latest registered version for the subject and parses it into an Avro Schema
    public Schema getLatestSchema(String subject)
            throws IOException, RestClientException {
        String schemaJson = client.getLatestSchemaMetadata(subject).getSchema();
        return new Schema.Parser().parse(schemaJson);
    }
}
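A short usage sketch for the class above; the registry URL, subject name, and schema string are placeholders rather than fixed values:

import org.apache.avro.Schema;

public class SchemaRegistryManagerDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and subject; adjust to your environment
        SchemaRegistryManager manager = new SchemaRegistryManager("http://localhost:8081");
        String avsc = "{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"com.example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}";
        int id = manager.registerSchema("user-value", avsc);
        System.out.println("Registered with id " + id);
        Schema latest = manager.getLatestSchema("user-value");
        System.out.println("Latest schema: " + latest);
    }
}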
Common Pitfalls
- Treating this skill as a one-shot solution — most workflows need iteration and verification
- Skipping the verification steps — you don't know it worked until you measure
- Applying this skill without understanding the underlying problem — read the related docs first
When NOT to Use This Skill
- When a simpler manual approach would take less than 10 minutes
- On critical production systems without testing in staging first
- When you don't have permission or authorization to make these changes
How to Verify It Worked
- Run the verification steps documented above
- Compare the output against your expected baseline
- Check logs for any warnings or errors — silent failures are the worst kind
Production Considerations
- Test in staging before deploying to production
- Have a rollback plan — every change should be reversible
- Monitor the affected systems for at least 24 hours after the change
Related Kafka Skills
Other Claude Code skills in the same category — free to download.
Kafka Producer
Build Kafka producers with serialization, partitioning, and delivery guarantees
Kafka Consumer
Build Kafka consumers with consumer groups, offsets, and error handling
Kafka Streams
Build stream processing applications with Kafka Streams DSL
Kafka Connect
Configure source and sink connectors for data integration
Kafka Monitoring
Monitor Kafka clusters with metrics, consumer lag, and alerting
Kafka Consumer Group Setup
Configure Kafka consumer groups for parallel processing and fault tolerance
Want a Kafka skill personalized to YOUR project?
This is a generic skill that works for everyone. Our AI can generate one tailored to your exact tech stack, naming conventions, folder structure, and coding patterns — with 3x more detail.