
2 Approaches to Microservices Monitoring and Logging


What are the challenges of manually setting up monitoring and logging for your microservices? How can Kong Konnect help simplify the process? Find out!
We're seeing a massive shift in how companies build their software. More and more, companies are building, or rapidly transitioning, their applications to a microservice architecture. The monolithic application is giving way to the rise of microservices.

With an application segmented into dozens (or hundreds!) of microservices, monitoring and consolidated logging become imperative. At any given moment, one of your microservices could fail, throw an error, or begin hogging resources. You need to monitor for this so that you can respond quickly and appropriately. In addition, your ability to troubleshoot errors and understand system behavior depends heavily on the existence and effectiveness of your logging tools.

Sadly, setting up effective monitoring and logging for all of your microservices is not so simple, though it could be. This article will look at the challenges of manually setting up monitoring and logging for your microservices. Then we'll look at how simple the same task becomes with a service connectivity platform: Kong Konnect.

Before we dive into monitoring and logging, let's briefly go over our sample microservices application. We have three API services: Users, Products, and Orders. Each service has two GET endpoints and one POST endpoint. The code for these simple services, which were built with Node.js and Express, is publicly available, and we have deployed all three services to GCP Cloud Functions. Sample curl requests to the Users service exercise each endpoint (a sketch of these requests appears at the end of this section); the Products and Orders services work similarly.

We would like monitoring for each of these services so that we can see response status codes, response times, and traffic throughput. We already have Prometheus and Grafana up and running, ready to capture and display metrics; we just need to hook this monitoring solution into our services. We'd also like consolidated logging sent to a single location for all services. We already have a Loggly account set up and ready to receive logs; again, we just need to add this logging tool to our services.

Let's consider the level of effort for manually hooking our services into these monitoring and logging solutions. Since we're running Prometheus, and our services all happen to be Node.js Express servers, perhaps the most straightforward approach is the express-prom-bundle package. This package is Prometheus metrics middleware that captures request metrics and exposes them at the server's /metrics endpoint. Simple enough. Of course, that means we'll need to modify the package.json and server.js files for each of our three services: add the package to the project, add the lines of code to server.js that use the middleware (see the middleware sketch below), and then redeploy the newly updated service.

Our three services now expose their metrics at the /metrics endpoint of their respective URLs. Next, we'll need to update the configuration for our Prometheus service, making sure the scrape configs include three targets, one for each service (see the scrape config sketch below).

Similarly, if we want our services' log messages sent to a centralized location like Loggly, then we'll probably reach for the most straightforward Node.js package for the job. That's likely the winston-loggly-bulk package. As with the Prometheus integration, we'll need to add this logging package to each of our three projects and modify server.js to use it (see the logging sketch below). And, of course, we'll also need to redeploy our services after we've updated them.
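To make the sample application concrete, here is roughly what the curl requests might look like. The URLs and payload are hypothetical stand-ins, not the actual endpoints from the original deployment:

```shell
# Hypothetical URLs; a real GCP Cloud Functions deployment will have its own.
curl https://us-central1-example.cloudfunctions.net/users      # GET: list all users
curl https://us-central1-example.cloudfunctions.net/users/1    # GET: fetch one user
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"name": "Alice"}' \
     https://us-central1-example.cloudfunctions.net/users      # POST: create a user
```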
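The middleware change itself is small. Here is a minimal sketch of what each service's server.js might gain; the options and the route shown are illustrative placeholders, not the services' actual code:

```javascript
// server.js (sketch): wire Prometheus middleware into an existing Express app.
const express = require('express');
const promBundle = require('express-prom-bundle');

const app = express();

// Records request counts and durations, labeled by HTTP method and path,
// and serves them in Prometheus text format at GET /metrics.
app.use(promBundle({ includeMethod: true, includePath: true }));

// ...the service's existing routes, for example:
app.get('/users', (req, res) => res.json([{ id: 1, name: 'Alice' }]));

app.listen(3000);
```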
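On the Prometheus side, the scrape configuration would grow one entry per service. The hostnames below are placeholders for the real service URLs:

```yaml
# prometheus.yml (excerpt): one scrape job per microservice.
scrape_configs:
  - job_name: "users-service"
    metrics_path: /metrics
    static_configs:
      - targets: ["users.example.com"]
  - job_name: "products-service"
    metrics_path: /metrics
    static_configs:
      - targets: ["products.example.com"]
  - job_name: "orders-service"
    metrics_path: /metrics
    static_configs:
      - targets: ["orders.example.com"]
```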
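And the logging change looks much the same in each service. Here is a minimal sketch using winston-loggly-bulk, with a placeholder token and subdomain:

```javascript
// server.js (sketch): ship logs to Loggly through winston.
const winston = require('winston');
const { Loggly } = require('winston-loggly-bulk');

winston.add(new Loggly({
  token: 'YOUR_LOGGLY_CUSTOMER_TOKEN', // placeholder: your Loggly customer token
  subdomain: 'your-subdomain',         // placeholder: your Loggly subdomain
  tags: ['users-service'],
  json: true
}));

winston.log('info', 'Users service started');
```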
A full-featured, robust business application probably has far more than three microservices; it may have dozens or even hundreds, and they won't all be uniform Node.js Express servers. With just three services to hook into a monitoring and logging solution, the manual approach is already bad enough.
