
Event-Driven Architecture With Apache Kafka for .NET Developers Part 2: Event Consumer


In this series: Development environment and event producer · Event consumer (this article) · Azure integration (coming soon)
Let's carry our discussion forward and implement a consumer of the events published by the Employee service to the leave-applications Kafka topic. We will extend the application that we developed earlier with two new services that demonstrate how Kafka consumers work: the Manager service and the Result reader service. The complete source code of the application and other artifacts is available in my GitHub repository.

The Manager service acts as both a consumer and a producer of events. It reads leave applications from the leave-applications topic (consumer), asynchronously records the manager's decision on each application, and publishes the result as a leave application processed event to the leave-applications-results Kafka topic (producer). Since we discussed the Publisher API and its implementation in the Employee service in detail in the previous article, I will not cover the event producer feature again. I encourage you to build the publisher feature of this service yourself, using my version of the source code as a guide.

Launch Visual Studio or VS Code and create a new .NET Core console application named TimeOff.Manager in the same solution as the Employee service. For reference, you can find this project under the same name in the GitHub repository. As before, install the NuGet packages that enable the application to produce and consume messages.

Open the Program class file in your editor and begin populating the Main method. You can access the Kafka consumer API through an instance of the IConsumer class. As before, we need the Schema Registry client (CachedSchemaRegistryClient) to enforce schema constraints on the consumer.
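As a rough sketch of how these pieces fit together, the snippet below builds a Schema Registry client and an Avro-aware consumer with the Confluent.Kafka client. The event type LeaveApplicationReceived, the server addresses, and the group id are assumptions for illustration; substitute your own types and endpoints.

```csharp
using Confluent.Kafka;
using Confluent.Kafka.SyncOverAsync;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

// Schema Registry client used to enforce schema constraints on the consumer.
// The URL is an assumed local development address.
var schemaRegistryConfig = new SchemaRegistryConfig { Url = "http://localhost:8081" };
using var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);

// Build an IConsumer whose key and value deserializers resolve Avro schemas
// through the registry. LeaveApplicationReceived is a placeholder event class.
using var consumer =
    new ConsumerBuilder<string, LeaveApplicationReceived>(
            new ConsumerConfig
            {
                BootstrapServers = "localhost:9092", // assumed local broker
                GroupId = "manager"                  // assumed consumer group id
            })
        .SetKeyDeserializer(new AvroDeserializer<string>(schemaRegistry).AsSyncOverAsync())
        .SetValueDeserializer(new AvroDeserializer<LeaveApplicationReceived>(schemaRegistry).AsSyncOverAsync())
        .Build();
```

The AsSyncOverAsync() adapter is needed because the Avro deserializer is asynchronous, while the Consume call expects synchronous deserializers.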
Like the producer client, the consumer client requires certain initialization parameters, such as the list of bootstrap servers (the brokers to which the client initially connects). Use the following code to create the configuration that will initialize the client.

Let's discuss the initialization properties and their values in a little more detail. Multiple consumers can be grouped into a consumer group, uniquely identified by a GroupId. Kafka automatically balances the allocation of partitions among consumers belonging to the same consumer group.

As consumers read messages from a partition, they store a pointer to their position in the partition (called the offset) within Kafka, in an internal topic named __consumer_offsets. If a consumer resumes processing after a delay, for example after a scheduled shutdown or an application crash, it can continue from where it left off. The Kafka .NET client can record and store offsets in Kafka automatically and periodically. For better control, we can turn off this automatic offset persistence by setting the EnableAutoCommit property to false and committing offsets ourselves.
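A minimal sketch of the configuration and the manual-commit pattern described above, assuming a local broker and a consumer built as shown earlier in the article (the group id, topic name, and AutoOffsetReset choice are illustrative assumptions):

```csharp
using Confluent.Kafka;

var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",       // assumed local broker address
    GroupId = "manager",                       // consumers sharing this id split the topic's partitions
    EnableAutoCommit = false,                  // disable periodic automatic offset commits
    AutoOffsetReset = AutoOffsetReset.Earliest // where to start when no committed offset exists (assumption)
};

// With auto-commit off, commit the offset explicitly only after a message has
// been processed, so a crash mid-processing re-delivers the message on restart.
// `consumer` is the IConsumer instance built from this config; `cancellationToken`
// signals application shutdown.
consumer.Subscribe("leave-applications");
while (!cancellationToken.IsCancellationRequested)
{
    var result = consumer.Consume(cancellationToken);
    // ... record the manager's decision for result.Message.Value ...
    consumer.Commit(result); // stores the offset in the __consumer_offsets topic
}
```

Committing after processing gives at-least-once delivery; committing before processing would risk losing messages that crash mid-flight.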
