A software architect gives a tutorial on how to use various tools to make two Kubernetes-based microservices communicate with one another in a single app.
This is the seventh article of my Kubernetes Know-How series.
In the first six articles, we learned how to use Pods, Services, ReplicaSets, Deployments, ConfigMaps, and Secrets in K8s.
In this article, we will see how K8s provides networking and service discovery features. In the microservices world, where services are distributed, it’s absolutely essential to have service discovery and load balancing. Imagine you have a front-end talking to a backend that runs as one or more instances. For the front-end, the most important thing is to find a backend instance and get the job done. This is nothing but networking your containers. In fact, the service discovery mechanisms in K8s are very similar to those of Docker Swarm.
Before I get into the details of K8s service discovery, let me clarify something. Many developers ask, “Why can’t we have multiple containers in the same pod?” One needs to understand that, even though it is technically possible to pack multiple containers into the same pod, it comes at the cost of higher maintenance and lower flexibility. Imagine you have MySQL and a Java-based web app in the same pod. How cumbersome it would be to manage that pod: upgrades, maintenance, and releases would all become complicated. Even if you are ready to take accountability for it, just imagine a scenario in which the pod crashed. Prima facie, you cannot assume anything, as either MySQL or your web app could be the root cause. It’s a pain in the neck, and don’t forget that different teams may be responsible for managing these containers. Therefore, in almost all cases, one pod contains one container.
K8s Services provide discovery and load balancing. We have already seen what a K8s Service is, how to use it, and the different types of services: ClusterIP, NodePort, and so on. Let’s refresh our memory by going through an example.
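As a refresher, a Service of type NodePort along the lines of the earlier articles might look like the sketch below. The service name, labels, and port numbers here are placeholders, not the exact manifest from the series:

```yaml
# Hypothetical NodePort Service exposing a web console outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web-console
spec:
  type: NodePort
  selector:
    app: web-console        # must match the labels on the target pods
  ports:
    - port: 8080            # port of the built-in ClusterIP service
      targetPort: 8080      # port the container listens on
      nodePort: 30080       # port opened on every node (30000-32767 by default)
```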
Now let me slightly change the example to have two services: a front-end and a backend. In addition, I am changing the type of the service from NodePort to ClusterIP. We used NodePort because we wanted to access the web console running inside a K8s cluster from the outside world. However, in a production environment you would not want to do so. In fact, you just want service collaboration inside the K8s cluster, and for that we only require ClusterIP. Notice that I have used the word “only.” I am emphasizing this because NodePort involves two services: one that exposes the service on a port of each node, and the ClusterIP service that comes included by default.
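Switching the service type is a small change in the manifest; a minimal sketch, again with illustrative names and ports:

```yaml
# Hypothetical backend Service reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP           # the default type; the line could also be omitted
  selector:
    app: backend
  ports:
    - port: 8080            # port clients inside the cluster connect to
      targetPort: 8080      # port the backend container listens on
```

Because ClusterIP is the default, omitting `type` entirely gives the same result; spelling it out just makes the intent explicit.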
Now, in the example, we are going to have backend and front-end services. The front-end service receives the HTTP request and then invokes the backend service, which is REST-based and uses a RestController.
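Wiring the two together could look like the following sketch, where the front-end pods are handed the backend’s service name through an environment variable. All names and the image are assumptions for illustration, not the exact manifests of this series:

```yaml
# Hypothetical front-end Deployment; it reaches the backend through the
# stable service name "backend", which cluster DNS resolves to the ClusterIP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0       # placeholder image
          env:
            - name: BACKEND_URL
              value: "http://backend:8080"  # <service-name>:<service-port>
```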
A basic service is assigned a virtual IP address, referred to as the ClusterIP, drawn from a defined range (10.32.0.0/16 by default). However, accessing any entity in a K8s cluster directly by IP address is a naïve thing to do.
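Instead of the IP, clients should use the stable DNS names that the cluster DNS add-on (kube-dns or CoreDNS) publishes for every service. Assuming a service named `backend` in the `default` namespace, the following fragment summarizes the naming scheme; the ClusterIP behind these names can change, but the names cannot:

```yaml
# DNS names under which the hypothetical "backend" service is resolvable:
#   backend                               # short form, same namespace only
#   backend.default                       # <service>.<namespace>
#   backend.default.svc.cluster.local     # fully qualified name
```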