Institute of Computer Science

Enterprise Systems Integration (MTAT.03.229), 2024/25 spring


[Pre Lab] Session 7.1: Microservices integration patterns - Event-Driven Architecture - Kafka

1. Clone the following repository.

$ git clone https://github.com/M-Gharib/ESI-W7.1.git

Note: if you want to create a new Spring Boot project from scratch instead, you need to add the following dependencies to both the Product and Payment services:

  • Spring Web
  • Spring for Apache Kafka
  • Lombok

2. Create a file and name it docker-compose.yml within the root directory of your project.

3. Copy the following content into docker-compose.yml.

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:7.3.2
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1

4. Start Docker Desktop.

5. From the directory containing the docker-compose.yml file created in the previous step, run the following command to start Zookeeper and the Kafka broker.

$ docker compose up -d

A simplified representation of the Apache Kafka architecture is shown in the following diagram, where orderservice plays the role of the message/event producer and paymentservice is the message/event consumer.


[Figure] A simplified representation of the Apache Kafka architecture

The orderservice has the following structure.

orderservice
├── configuration
│   └── KafkaTopicConfiguration.java
├── controller
│   └── ProducerController.java
├── model
│   └── Order.java
└── service
    └── ProducerService.java

6. Check the code in KafkaTopicConfiguration.java: it creates two topics, orderTopic and orderTopicJson, to be used for sending String messages and Order objects, respectively.
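In outline, the topic configuration can be sketched as follows. This is a minimal sketch, assuming the repository uses Spring Kafka's TopicBuilder; the actual file may differ in details such as partition and replica settings.

```java
package com.esi.orderservice.configuration;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class KafkaTopicConfiguration {

    // Topic for plain String messages (step 11)
    @Bean
    public NewTopic orderTopic() {
        return TopicBuilder.name("orderTopic").build();
    }

    // Topic for Order objects serialized as JSON (step 13)
    @Bean
    public NewTopic orderTopicJson() {
        return TopicBuilder.name("orderTopicJson").build();
    }
}
```

Declaring NewTopic beans lets Spring's KafkaAdmin create the topics automatically on startup if they do not already exist on the broker.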

7. Check the code in ProducerController.java: it contains two request handlers, each of which invokes a function in ProducerService.java that uses kafkaTemplate.send(...) to send String messages or Order objects to the topics orderTopic and orderTopicJson, respectively.
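The producer side might look roughly like this. This is a sketch, not the repository's exact code: the handler paths are taken from the requests in steps 11 and 13, while the method names and the KafkaTemplate generic types are assumptions.

```java
// File: ProducerController.java
package com.esi.orderservice.controller;

import com.esi.orderservice.model.Order;
import com.esi.orderservice.service.ProducerService;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/kafka")
public class ProducerController {

    private final ProducerService producerService;

    public ProducerController(ProducerService producerService) {
        this.producerService = producerService;
    }

    // GET /api/kafka/publish?message=... -> String message to orderTopic
    @GetMapping("/publish")
    public String publishMessage(@RequestParam String message) {
        producerService.sendMessage(message);
        return "String message sent to orderTopic";
    }

    // POST /api/kafka/publish (JSON body) -> Order object to orderTopicJson
    @PostMapping("/publish")
    public String publishOrder(@RequestBody Order order) {
        producerService.sendOrder(order);
        return "Order sent to orderTopicJson";
    }
}

// File: ProducerService.java
package com.esi.orderservice.service;

import com.esi.orderservice.model.Order;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class ProducerService {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public ProducerService(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String message) {
        kafkaTemplate.send("orderTopic", message);
    }

    public void sendOrder(Order order) {
        kafkaTemplate.send("orderTopicJson", order);
    }
}
```

Note that the value serializer configured in application.properties must match what is sent: with StringSerializer only the String path works; sending Order objects requires the JsonSerializer configured in step 12.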

The paymentservice, on the other hand, has the following structure.

paymentservice
├── model
│   └── Order.java
└── service
    └── ConsumerService.java

8. Check the code in ConsumerService.java: it contains two @KafkaListener methods listening to orderTopic and orderTopicJson, respectively. When a message/object is sent to either topic, the corresponding method is triggered and prints the message/object to the log.
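The consumer side can be sketched as follows, assuming the group id from application.properties; method names and log messages are illustrative, not the repository's exact code.

```java
package com.esi.paymentservice.service;

import com.esi.paymentservice.model.Order;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class ConsumerService {

    private static final Logger log = LoggerFactory.getLogger(ConsumerService.class);

    // Triggered for each String message published to orderTopic
    @KafkaListener(topics = "orderTopic", groupId = "orderEventGroup")
    public void consumeMessage(String message) {
        log.info("Message received: {}", message);
    }

    // Triggered for each Order object published to orderTopicJson
    @KafkaListener(topics = "orderTopicJson", groupId = "orderEventGroup")
    public void consumeOrder(Order order) {
        log.info("Order received: {}", order);
    }
}
```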

9. Check application.properties in both orderservice and paymentservice. They contain the configuration for serializing/deserializing String messages:

# producer - orderservice
...
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
...
# consumer - paymentservice
...
spring.kafka.consumer.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=orderEventGroup
spring.kafka.consumer.auto-offset-reset=earliest

spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
...

10. Run orderservice, then run paymentservice.

11. Send the following request from RestClientFile.rest, then check the console of both services: you should see that the String message has been sent by the order service and received by the payment service.

### send a string message to the orderTopic topic
GET http://localhost:8082/api/kafka/publish?message=orderPlacedAgain

12. Modify application.properties in both orderservice and paymentservice as follows:

# producer - orderservice
...
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
#spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.producer.properties.spring.json.type.mapping=order:com.esi.orderservice.model.Order
...
# consumer - paymentservice
...
spring.kafka.consumer.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=orderEventGroup
spring.kafka.consumer.auto-offset-reset=earliest

spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
#spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.type.mapping=order:com.esi.paymentservice.model.Order
...

This modification allows the producer to send objects that the consumer can receive and deserialize. The spring.json.type.mapping property in each snippet maps the logical type name order to the Order class of that service, so a JSON payload produced from com.esi.orderservice.model.Order can be deserialized into com.esi.paymentservice.model.Order. Note that we have defined the exact same class in both services.
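For reference, given the JSON body in the next step and the Lombok dependency listed at the start, the shared Order model presumably looks something like this in both services (only the package differs):

```java
package com.esi.orderservice.model; // com.esi.paymentservice.model in paymentservice

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// Must declare the same fields in both services so that JSON
// produced from one class can be deserialized into the other.
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Order {
    private String id;
    private String description;
}
```

Lombok's @Data generates the getters, setters, equals/hashCode, and toString that the JSON (de)serializer and the log output rely on.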

13. Send the following request from RestClientFile.rest, then check the console of both services: you should see that the Order object has been sent by the order service and received by the payment service.

### send an order object through a post request to the orderTopicJson topic
POST  http://localhost:8082/api/kafka/publish HTTP/1.1
content-type: application/json

{
    "id": "11114567-e89b-12d3-a456-426614174000",
    "description": "order description..."
}

14. When you are finished, stop Zookeeper and the Kafka broker with the following command.

$ docker compose down

Note: you can read messages from any created topic using kafka-console-consumer. In the command below, replace <topic name> with the topic name (orderTopic or orderTopicJson in this example), then run the command in your terminal to launch kafka-console-consumer. You should see all messages that have been written to the specified topic. The --from-beginning argument means that messages are read from the start of the topic. This tool can be very handy when you are learning Kafka.

$ docker exec --interactive --tty broker kafka-console-consumer --bootstrap-server broker:9092  --topic <topic name> --from-beginning