18 July 2022
Event-driven architecture is based on publishing events somewhere in an application and receiving them somewhere else, either in the same application or in another one. In this post, I present a simple Java application to show how this approach works. Of course, a real application would need more configuration details on both the event publisher (producer) and the event receiver (consumer) sides. A production environment would also require more configuration for Kafka/Zookeeper, which are mostly set up and configured on bare-metal servers.
Most applications today are built on the Spring Boot ecosystem, which uses a different code style to interact with Kafka. This post uses "kafka-clients", a dedicated library for Kafka interaction.
It is assumed that the reader has a basic understanding of Kafka and event-driven architecture, and has already installed Java 16 (or a later version), Maven, Docker, and Docker Compose.
As the first step, I created a Maven project and added "kafka-clients" as a dependency, like this:
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.ahmadsedighi.kafka</groupId>
    <artifactId>kafka-simple-producer-consumer</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.4.0</version>
        </dependency>
    </dependencies>
</project>
Then I configured it to use Java 16, the latest Java version I had on my machine.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.9.0</version>
    <configuration>
        <release>16</release>
    </configuration>
</plugin>
Because I wanted to run the application as a Docker image, I needed to bundle the final jar file together with all of its dependencies. So I added "maven-assembly-plugin" to create a fat jar, a jar containing all the dependencies needed to run.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.3.0</version>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
    <executions>
        <execution>
            <id>assemble-all</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>
The project is now ready to develop! But before that, we need to explore the kafka-clients library a little bit. It provides a Producer interface for publishing events and a Consumer interface for receiving them. On the producer side, an event is wrapped in a ProducerRecord; on the consumer side, it arrives as a ConsumerRecord, which differs from a ProducerRecord since it contains additional information such as the record offset, the checksum, and more.
I have implemented two classes, BasicProducer and BasicConsumer, to publish and consume events on each side. Both classes have a simple public static void main(String[] args) method that connects to Kafka with basic configuration.
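As a rough sketch of what these two classes can look like (the topic name "events", the broker address kafka:9092, the group id, and the String serializers are my assumptions here; the actual classes are in the repository):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// BasicProducer.java -- publishes one event per second
class BasicProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // broker address inside docker-compose
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                // each event is wrapped in a ProducerRecord: topic, key, value
                producer.send(new ProducerRecord<>("events", "greeting",
                        "hello at " + System.currentTimeMillis()));
                Thread.sleep(1000);
            }
        }
    }
}

// BasicConsumer.java -- polls the same topic and prints each record
class BasicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "basic-consumer-group"); // consumer group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                // the ConsumerRecord carries the extra metadata mentioned above, e.g. the offset
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

Note that both classes need a running broker to do anything useful, which is exactly what the Docker setup below provides.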
Two Dockerfiles, Dockerfile-consumer and Dockerfile-producer, are used to create the Docker images. Since both have the same structure, I will explain only one of them.
FROM adoptopenjdk:16_36-jre-hotspot
ADD ./target/kafka-simple-producer-consumer-1.0-SNAPSHOT-jar-with-dependencies.jar app.jar
ENTRYPOINT ["java","-classpath","./app.jar","com.ahmadsedighi.kafka.simple.producer.BasicProducer"]
As you can see, I just extended adoptopenjdk:16_36-jre-hotspot to get a container with a Java 16 runtime, added the fat jar to the classpath, and ran BasicProducer through the java command.
The docker-compose.yml bundles all services to run inside docker containers, including Kafka and Zookeeper.
kafka:
  image: wurstmeister/kafka:2.12-2.5.0
  mem_limit: 512m
  ports:
    - "9092:9092"
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=kafka
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
  depends_on:
    - zookeeper
zookeeper:
  image: wurstmeister/zookeeper:3.4.6
  mem_limit: 512m
  ports:
    - "2181:2181"
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=zookeeper
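Alongside Kafka and Zookeeper, the compose file also declares the producer and consumer services built from the two Dockerfiles. A sketch of what those entries can look like (the build contexts and the consumer-service name are assumptions on my part; producer-service is the name used in the log command further down):

```yaml
producer-service:
  build:
    context: .
    dockerfile: Dockerfile-producer
  depends_on:
    - kafka
consumer-service:
  build:
    context: .
    dockerfile: Dockerfile-consumer
  depends_on:
    - kafka
```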
Running the application is very simple; just execute the following command:
docker-compose up -d
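Keep in mind that the fat jar and the images have to exist before the containers can start. A possible full build-and-run sequence (assuming the compose file defines build entries for the producer and consumer services) is:

```shell
# build the fat jar into ./target (the assembly plugin is bound to the package phase)
mvn clean package

# (re)build the service images and start all containers in the background
docker-compose up -d --build
```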
You should see logs indicating that the containers started successfully. You can then inspect the logs of each individual container to see what is happening inside:
docker-compose logs producer-service
The full source code for this article can be found in my GitHub repository.