Understanding Kafka Topics and Partitions

Apache Kafka · Kafka Consumer API · Kafka Producer API

Apache Kafka Problem Overview


I am starting to learn Kafka. During my reading, some questions came to my mind:

  1. When a producer is producing a message - it will specify the topic it wants to send the message to, is that right? Does it care about partitions?

  2. When a subscriber is running - does it specify its group id so that it can be part of a cluster of consumers of the same topic or several topics that this group of consumers is interested in?

  3. Does each consumer group have a corresponding partition on the broker or does each consumer have one?

  4. Are the partitions created by the broker, and therefore not a concern for the consumers?

  5. Since this is a queue with an offset for each partition, is it the responsibility of the consumer to specify which messages it wants to read? Does it need to save its state?

  6. What happens when a message is deleted from the queue? - For example, the retention was for 3 hours, then the time passes, how is the offset being handled on both sides?

Apache Kafka Solutions


Solution 1 - Apache Kafka

> This post already has answers, but I am adding my view with a few pictures from Kafka: The Definitive Guide.
>
> Before answering the questions, let's look at an overview of producer components:

(image: overview of producer components)

>### 1. When a producer is producing a message - It will specify the topic it wants to send the message to, is that right? Does it care about partitions?

The producer decides the target partition for each message, based on:

  • The partition id, if it is specified within the message
  • A hash of the message key modulo the number of partitions, if a key is present but no partition id
  • Round robin, if neither a partition id nor a message key is present (i.e. only the value is available)
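The decision order above can be sketched in plain Java. This is an illustrative model, not Kafka's actual `DefaultPartitioner` (which uses murmur2 hashing and, in newer clients, sticky batching instead of strict round robin for keyless records):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the producer's partition choice: explicit partition wins,
// then key hashing, then round robin for keyless records.
class PartitionChooser {
    private final int numPartitions;
    private final AtomicInteger roundRobin = new AtomicInteger(0);

    PartitionChooser(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    int choose(Integer explicitPartition, String key) {
        if (explicitPartition != null) {
            return explicitPartition;                        // 1. partition id set in the record
        }
        if (key != null) {
            // 2. hash the key; mask keeps the value non-negative before the modulo
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
        // 3. no partition id and no key: spread records round robin
        return roundRobin.getAndIncrement() % numPartitions;
    }
}
```

Note that with key-based partitioning, all messages with the same key land on the same partition, which is what gives Kafka per-key ordering.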

>### 2. When a subscriber is running - Does it specify its group id so that it can be part of a cluster of consumers of the same topic or several topics that this group of consumers is interested in?

You should always configure group.id unless you are using the simple assignment API and you don't need to store offsets in Kafka; in that case, the consumer will not be part of any group. (source)
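As a concrete illustration, a consumer configuration with group.id might look like this. The broker address and group name are placeholder values, not anything from the original post:

```java
import java.util.Properties;

class ConsumerConfigExample {
    // group.id makes the consumer join (or create) the named group, so that
    // partition assignment and committed offsets are shared across members.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder broker
        props.put("group.id", "invoice-processors");          // hypothetical group name
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```

Without group.id you can still `assign()` specific partitions manually, but `subscribe()` and Kafka-managed offset commits require a group.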

>### 3. Does each consumer group have a corresponding partition on the broker or does each consumer have one?

In one consumer group, each partition will be processed by exactly one consumer. These are the possible scenarios:

  • Fewer consumers than topic partitions: multiple partitions are assigned to some of the consumers in the group.
  • As many consumers as topic partitions: each consumer is assigned exactly one partition.
  • More consumers than topic partitions: the excess consumers sit idle (not effective; see Consumer 5 in the original diagram).

>### 4. Are the partitions created by the broker, and therefore not a concern for the consumers?

Consumers should be aware of the number of partitions, as discussed in question 3.

>### 5. Since this is a queue with an offset for each partition, is it the responsibility of the consumer to specify which messages it wants to read? Does it need to save its state?

Kafka (specifically, the Group Coordinator) takes care of offset state by producing messages to an internal __consumer_offsets topic. This behavior can be switched to manual by setting enable.auto.commit to false; in that case, consumer.commitSync() and consumer.commitAsync() can be used to manage offsets.
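The distinction between the consumer's current position and the committed offset can be modeled in a few lines of plain Java. This is a conceptual sketch of the semantics, not the Kafka client API:

```java
import java.util.HashMap;
import java.util.Map;

// Model of per-partition offset bookkeeping: "position" is how far the
// consumer has read in this session; "committed" is what has been durably
// stored (in Kafka, via the __consumer_offsets topic) and survives restarts.
class OffsetTracker {
    private final Map<Integer, Long> position = new HashMap<>();
    private final Map<Integer, Long> committed = new HashMap<>();

    // Each poll advances the position by the number of records fetched.
    void poll(int partition, int records) {
        position.merge(partition, (long) records, Long::sum);
    }

    // What commitSync() does conceptually: persist the current positions.
    void commitSync() {
        committed.putAll(position);
    }

    // After a restart, the consumer resumes from the committed offset;
    // any uncommitted progress is re-read.
    long resumeFrom(int partition) {
        return committed.getOrDefault(partition, 0L);
    }
}
```

This is why disabling auto-commit and committing only after successful processing gives at-least-once delivery: a crash between poll and commit replays the uncommitted records.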

More about Group Coordinator:

  1. It is one of the brokers in the cluster, elected on the Kafka server side.
  2. Consumers interact with the Group Coordinator for offset commits and fetch requests.
  3. Consumers send periodic heartbeats to the Group Coordinator.

>### 6. What happens when a message is deleted from the queue? - For example, the retention was for 3 hours, then the time passes, how is the offset being handled on both sides?

If a consumer starts after the retention period, messages will be consumed according to the auto.offset.reset configuration, which can be latest or earliest. In practice either setting starts with fresh data (processing new messages), because all the older messages have expired by that time; note that retention is a topic-level configuration.

Solution 2 - Apache Kafka

Let's take those in order :)

> 1 - When a producer is producing a message - It will specify the topic it wants to send the message to, is that right? Does it care about partitions?

By default, the producer doesn't care about partitioning. You have the option to use a customized partitioner for finer control, but it's totally optional.


> 2 - When a subscriber is running - Does it specify its group id so that it can be part of a cluster of consumers of the same topic or several topics that this group of consumers is interested in?

Yes, consumers join (or create if they're alone) a consumer group to share load. No two consumers in the same group will ever receive the same message.


> 3 - Does each consumer group have a corresponding partition on the broker or does each consumer have one?

Neither. All consumers in a consumer group are assigned a set of partitions, under two conditions: no two consumers in the same group share a partition, and the consumer group as a whole is assigned every existing partition.


> 4 - Are the partitions created by the broker, therefore not a concern for the consumers?

They're not, but as you can see from 3, it's useless to have more consumers than existing partitions, so the partition count is your maximum parallelism level for consuming.


> 5 - Since this is a queue with an offset for each partition, is it responsibility of the consumer to specify which messages it wants to read? Does it need to save its state?

Yes, consumers save an offset per topic per partition. This is totally handled by Kafka, no worries about it.


> 6 - What happens when a message is deleted from the queue? - For example: The retention was for 3 hours, then the time passes, how is the offset being handled on both sides?

If a consumer ever requests an offset that is no longer available for a partition on the brokers (for example, due to deletion), it enters an error mode, ultimately resets itself for this partition to either the most recent or the oldest message available (depending on the auto.offset.reset configuration value), and continues working.

Solution 3 - Apache Kafka

Kafka uses the concept of topics to bring order to the message flow.

To balance the load, a topic may be divided into multiple partitions and replicated across brokers.

Partitions are ordered, immutable sequences of messages that are continually appended to, i.e. a commit log.

Messages in the partition have a sequential id number that uniquely identifies each message within the partition.

Partitions allow a topic’s log to scale beyond a size that will fit on a single server (a broker) and act as the unit of parallelism.

The partitions of a topic are distributed over the brokers in the Kafka cluster where each broker handles data and requests for a share of the partitions.

Each partition is replicated across a configurable number of brokers to ensure fault tolerance.

Well explained in this article: http://codeflex.co/what-is-apache-kafka/

Solution 4 - Apache Kafka

  1. When a producer is producing a message - it will specify the topic it wants to send the message to, is that right? Does it care about partitions?

Yes, the producer specifies the topic:

producer.send(new ProducerRecord<byte[], byte[]>(topic, partition, key1, value1), callback);

The more partitions there are in a Kafka cluster, the higher the throughput one can achieve. A rough formula for picking the number of partitions is based on throughput: you measure the throughput that you can achieve on a single partition for production (call it p) and consumption (call it c); for a target throughput t, you then need at least max(t/p, t/c) partitions.
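The sizing rule above is a one-liner; the function name and sample numbers here are illustrative, not from the original post:

```java
class PartitionSizing {
    // Minimum partition count for target throughput t, given measured
    // per-partition production throughput p and consumption throughput c
    // (all in the same unit, e.g. MB/s): at least max(t/p, t/c) partitions.
    static int minPartitions(double t, double p, double c) {
        return (int) Math.ceil(Math.max(t / p, t / c));
    }
}
```

For example, to sustain 100 MB/s when a single partition produces at 10 MB/s and consumes at 20 MB/s, production is the bottleneck and you need at least 10 partitions.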


  2. When a subscriber is running - does it specify its group id so that it can be part of a cluster of consumers of the same topic or several topics that this group of consumers is interested in?

When the Kafka consumer is constructed and group.id does not exist yet (i.e. there are no existing consumers that are part of the group), the consumer group will be created automatically. If all consumers in a group leave the group, the group is automatically destroyed.


  3. Does each consumer group have a corresponding partition on the broker or does each consumer have one?

Within each consumer group, partitions are assigned to consumers. Multiple consumer groups can read from a single partition, but no two consumers belonging to the same group are assigned the same partition, because consumers within a group consume messages sequentially; if multiple consumers from a single group consumed the same partition, that ordering could be lost. Groups, being logically independent, can consume from the same partition.


  4. Are the partitions created by the broker, and therefore not a concern for the consumers?

Brokers already have the partitions. As a rule of thumb, each broker can host up to about 4,000 partitions and each cluster up to about 200,000 partitions.

Whenever a consumer enters or leaves a consumer group, the brokers rebalance the partitions across consumers, meaning Kafka handles load balancing with respect to the number of partitions per application instance for you.

Before assigning partitions to a consumer, Kafka first checks whether there are any existing consumers with the given group id. If there are none, it assigns all the partitions of the topic to the new consumer. If two consumers already exist with the given group id and a third consumer joins with the same group id, the partitions are divided equally among all three consumers. No two consumers with the same group id are assigned the same partition. (source)
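The even split described above can be sketched with a round-robin style assignment. The real client supports several pluggable assignment strategies (range, round robin, sticky); this is only a model of the invariant that no partition is shared within a group:

```java
import java.util.ArrayList;
import java.util.List;

class RoundRobinAssignSketch {
    // Distribute partitions 0..numPartitions-1 across numConsumers group
    // members so every partition is owned by exactly one consumer.
    static List<List<Integer>> assign(int numPartitions, int numConsumers) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int i = 0; i < numConsumers; i++) {
            assignment.add(new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            assignment.get(p % numConsumers).add(p);  // no partition is shared
        }
        return assignment;
    }
}
```

With 6 partitions and 3 consumers, each consumer owns 2 partitions; on a rebalance (a consumer joining or leaving), Kafka recomputes such an assignment across the remaining members.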


  5. Since this is a queue with an offset for each partition, is it the responsibility of the consumer to specify which messages it wants to read? Does it need to save its state?

Offsets are handled internally by Kafka. The current offset is a pointer to the last record that Kafka has sent to a consumer in the most recent poll, so the consumer doesn't receive the same record twice. The offset doesn't need to be specified explicitly.


  6. What happens when a message is deleted from the queue? - For example, the retention was for 3 hours, then the time passes, how is the offset being handled on both sides?

The consumer reconfigures itself automatically: if it requests an offset that has already been deleted, it gets an out-of-range error and resets its position according to the auto.offset.reset configuration.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | Pinidbest | View Question on Stackoverflow |
| Solution 1 - Apache Kafka | mrsrinivas | View Answer on Stackoverflow |
| Solution 2 - Apache Kafka | C4stor | View Answer on Stackoverflow |
| Solution 3 - Apache Kafka | ybonda | View Answer on Stackoverflow |
| Solution 4 - Apache Kafka | Shristi Bisht | View Answer on Stackoverflow |