Apache Kafka Partitions with three brokers example

Category : Apache Spark | Sub Category : Apache Spark Programs | By Prasad Bonam Last updated: 2023-08-05 09:45:52


Apache Kafka Partitions with three brokers example:

In Apache Kafka, partitions are distributed across multiple brokers to achieve scalability, fault tolerance, and high availability. Each broker in the Kafka cluster can host one or more partitions of a topic. Let's go through an example with three brokers to understand how Kafka partitions work:

  1. Setting Up Kafka: For this example, assume you have set up a Kafka cluster with three brokers: Broker-1, Broker-2, and Broker-3. Each broker is running on a different host, and their addresses are localhost:9092, localhost:9093, and localhost:9094, respectively.

  2. Topic Creation: Create a Kafka topic named "my_topic" with three partitions and a replication factor of 2. This means there will be three partitions, and each partition will have two replicas (one leader and one follower) placed on different brokers.

    Using the command-line tool on Unix/Linux/Mac:

    bash
    bin/kafka-topics.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --create --topic my_topic --partitions 3 --replication-factor 2

    Using the command-line tool on Windows:

    batch
    bin\windows\kafka-topics.bat --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --create --topic my_topic --partitions 3 --replication-factor 2

  3. Partition Assignment: The Kafka cluster will distribute the three partitions of "my_topic" across the three brokers as follows:

    • Partition 0: Replica in Broker-1, Replica in Broker-2
    • Partition 1: Replica in Broker-2, Replica in Broker-3
    • Partition 2: Replica in Broker-3, Replica in Broker-1

    Kafka ensures that each partition has one leader and one or more followers (replicas). The leader is responsible for handling all read and write requests for the partition, while followers replicate data from the leader for fault tolerance.
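The assignment above follows a simple round-robin pattern. As a rough sketch (ignoring details of Kafka's real placement algorithm, such as its randomized starting broker and rack awareness), replica placement can be modeled like this, with broker indices 0-2 standing in for Broker-1 through Broker-3:

```python
def assign_replicas(num_partitions, num_brokers, replication_factor):
    # Simplified round-robin placement: the first replica (the leader) of
    # partition p lands on broker (p % num_brokers), and each additional
    # replica goes on the next broker in the ring.
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [(p + r) % num_brokers for r in range(replication_factor)]
    return assignment

print(assign_replicas(3, 3, 2))
# {0: [0, 1], 1: [1, 2], 2: [2, 0]}
```

With three partitions, three brokers, and replication factor 2, this reproduces the layout listed above: partition 0 on Broker-1 and Broker-2, partition 1 on Broker-2 and Broker-3, and partition 2 on Broker-3 and Broker-1.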

  4. Producing Messages: Start a producer to send messages to the "my_topic" topic.

    Using the command-line tool on Unix/Linux/Mac:

    bash
    bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my_topic

    Using the command-line tool on Windows:

    batch
    bin\windows\kafka-console-producer.bat --bootstrap-server localhost:9092 --topic my_topic

    Now, you can enter messages in the console, and Kafka will publish them to one of the three partitions of the "my_topic" topic. Messages sent without a key are spread across partitions (round-robin in older Kafka versions; since Kafka 2.4 the default "sticky" partitioner fills a batch for one partition before moving to the next). Messages that share a key are always hashed to the same partition, which preserves per-key ordering.
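The key-to-partition mapping can be illustrated with a small sketch. Note that Kafka's default partitioner actually uses a murmur2 hash; the CRC32 hash below is only a stand-in to show the idea, that any deterministic hash sends the same key to the same partition:

```python
from zlib import crc32

NUM_PARTITIONS = 3  # matches the "my_topic" example above

def choose_partition(key: bytes) -> int:
    # Illustrative only: Kafka's DefaultPartitioner hashes the key with
    # murmur2, not CRC32, but the principle is identical -- a deterministic
    # hash modulo the partition count.
    return crc32(key) % NUM_PARTITIONS

# The same key always lands on the same partition.
assert choose_partition(b"user-42") == choose_partition(b"user-42")
```

This is why keyed messages keep their relative order: all records for a given key go through a single partition.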

  5. Consuming Messages: Start a consumer to read messages from the "my_topic" topic.

    Using the command-line tool on Unix/Linux/Mac:

    bash
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my_topic --group my_group

    Using the command-line tool on Windows:

    batch
    bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic my_topic --group my_group

    Kafka automatically assigns partitions to consumers within the consumer group: a single consumer in the group reads from all three partitions, while additional consumers (up to one per partition) split them between themselves. Each consumer resumes from the last committed offset of its assigned partitions.
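The group assignment can be sketched in the same spirit as Kafka's round-robin assignor (the real assignors, such as the range and cooperative-sticky strategies, handle multiple topics and rebalancing; this only shows the dealing-out idea):

```python
def assign_partitions(partitions, consumers):
    # Deal partitions out to consumers in turn, similar in spirit to
    # Kafka's RoundRobinAssignor for a single topic.
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

print(assign_partitions([0, 1, 2], ["c1", "c2"]))
# {'c1': [0, 2], 'c2': [1]}

print(assign_partitions([0, 1, 2], ["c1"]))
# {'c1': [0, 1, 2]}
```

Because at most one consumer per group reads a given partition, adding consumers beyond the partition count leaves the extras idle; three partitions support at most three active consumers in "my_group".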

Kafka partitions provide data parallelism and enable multiple producers and consumers to work concurrently and process data in a distributed manner. The distribution of partitions across brokers ensures high availability and load balancing, making Kafka an excellent choice for handling real-time data feeds and large volumes of event data.

