Disadvantages of Kafka Message Compression
- When compression is enabled, producers must spend extra CPU cycles compressing each batch of messages before sending it.
- Likewise, consumers must spend CPU cycles decompressing the data they receive.
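The trade-off described above (CPU time spent on both ends in exchange for smaller payloads) can be illustrated with a minimal, hedged sketch using Python's standard-library `gzip` module; the payload here is a hypothetical stand-in for a batch of repetitive event records, not actual Kafka wire format:

```python
import gzip
import time

# Hypothetical payload: repetitive JSON-like records, the kind of
# batch data that compresses well in Kafka.
payload = b'{"user_id": 42, "event": "click", "page": "/home"}\n' * 1000

# Producer side: compression costs CPU time but shrinks the payload.
start = time.perf_counter()
compressed = gzip.compress(payload)
compress_secs = time.perf_counter() - start

# Consumer side: decompression also costs CPU time.
start = time.perf_counter()
restored = gzip.decompress(compressed)
decompress_secs = time.perf_counter() - start

assert restored == payload  # round trip is lossless
print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")
```

For highly repetitive data like this, the compressed size is a small fraction of the original, which is why the CPU cost is usually worth paying for high-throughput producers.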
Apache Kafka – Message Compression
Kafka producers write data to topics, and topics are divided into partitions. A producer automatically determines which broker and which partition to write to based on the message, and if a broker in the cluster fails, the producer automatically recovers and keeps writing. This resilience is a large part of what makes Kafka so widely used today. Pictured as a diagram, a producer on the left-hand side sends data into each of the partitions of a topic.
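The partition choice for a keyed message can be sketched in a few lines. This is a simplified illustration, not Kafka's actual implementation (the real default partitioner hashes the key with murmur2); the function name and the use of `hashlib` here are stand-ins for that idea:

```python
import hashlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition index (simplified sketch;
    Kafka's default partitioner uses murmur2, not MD5)."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always maps to the same partition, which is how Kafka
# preserves ordering per key.
assert choose_partition(b"user-42", 3) == choose_partition(b"user-42", 3)
```

The important property is determinism: every message with the same key lands in the same partition, so consumers see those messages in order.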
Another important producer setting is Message Compression. Before looking at it, let's first understand the anatomy of a Kafka message.
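As a preview of the setting itself, compression is enabled on the producer side through the `compression.type` property. A minimal sketch of a producer configuration is shown below; the broker address is a placeholder, and `snappy` is just one of the supported codecs (`none`, `gzip`, `snappy`, `lz4`, `zstd`):

```properties
bootstrap.servers=localhost:9092
# Enable producer-side compression; valid values include
# none, gzip, snappy, lz4, and zstd.
compression.type=snappy
```

Consumers need no matching setting: they detect the codec from the message batch and decompress automatically.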