Know About the Kubernetes Services

In this blog, we will take a deep dive into another Kubernetes component: the Service. Services are critical because in production we rarely deploy bare pods; we deploy Deployments, and for most Deployments we also create a Service.

Why Do We Need Kubernetes Services?

In Kubernetes, a DevOps engineer deploying a pod as part of a deployment might face challenges when managing dynamic IP addresses, especially when scaling the application. For instance, if a pod goes down and a new one is created by the replica set, the associated IP address change can disrupt end-user access.

In the real world, this is not acceptable. A website does not ask its users to switch to a new IP address every time traffic increases or a pod is replaced. Auto-healing keeps the pods running, but it does not preserve the IP addresses that users rely on to reach them.

  • Load Balancing

To solve the issues in the scenario above, we can create a Service on top of the Deployment. The Service acts as a load balancer, implemented through the kube-proxy component of Kubernetes. Because the Service load-balances, users can access the application through the Service name provided by Kubernetes instead of pod IP addresses, which change frequently. The Service then distributes incoming requests across the different pods.

  • Service Discovery

A Service also solves the problem of users trying to reach a pod whose IP has changed. The Service does not track pod IP addresses, because pods are recreated frequently and each one gets a new IP; in a large project with many pods and constantly changing IPs, tracking them would quickly break down. Instead, the Service uses labels and selectors for service discovery: every pod is created with labels, and the Service watches for those labels rather than for IP addresses.

  • Expose to the external World

A Service can also expose our application outside the Kubernetes cluster, and there are several ways to do this. Kubernetes provides three types of Service:

  1. ClusterIP service
    If we create a Service in ClusterIP mode, which is the default, the application can be accessed only from inside the Kubernetes cluster (a minimal example follows this list).

  2. NodePort service
    If we create a Service in NodePort mode, the application can be accessed within the organization, i.e. anybody on the same network can reach it. Users do not get access to the master node, only to the port exposed on the worker nodes.

  3. LoadBalancer service
    If we create a Service of LoadBalancer type, we get a public IP address from the cloud provider's load balancer (for example, an elastic load balancer on AWS), so anyone in the world can reach the application. This type therefore works only on the cloud, because the cloud provider provisions the public IP.
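As a quick illustration of the default type, here is a minimal ClusterIP manifest. This is only a sketch: the service name is a placeholder, and the selector matches the label of the deployment we create later in this post.

apiVersion: v1
kind: Service
metadata:
  name: python-clusterip-service   # placeholder name for illustration
spec:
  type: ClusterIP                  # the default type, written out here for clarity
  selector:
    app: sample-python-app         # must match the pod labels in the deployment
  ports:
    - port: 80                     # port the Service listens on inside the cluster
      targetPort: 8000             # port the application container listens on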

Let's dive into the practical part

Start the Minikube cluster
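A sketch of the command, assuming Minikube is already installed:

minikube start   # starts a local single-node Kubernetes cluster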

Our Minikube is ready to use now.

This is our Python application directory. If you want to use this application you can clone the repo here.

Let's build the Docker image and push it to a public registry like Docker Hub. If you do not push it to a public registry, you will face an ImagePullBackOff error and the pods will not become ready or start.

docker build -t image_name path_of_dockerfile

So our image is built.
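To make it pullable by the cluster, the image also has to be pushed. A sketch, where your-username/python-app:latest is a placeholder tag:

docker tag image_name your-username/python-app:latest   # retag the local image for Docker Hub
docker login                                            # authenticate to Docker Hub once
docker push your-username/python-app:latest             # push the image to the public registry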

Let's create a deployment file called "deployment.yml"

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-python-app
  labels:
    app: sample-python-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-python-app
  template:
    metadata:
      labels:
        app: sample-python-app
    spec:
      containers:
      - name: python-app
        image: subash07/python-app:latest #after I pushed the image to dockerhub
        ports:
        - containerPort: 8000

Apply the deployment:

kubectl apply -f deployment.yml

Created.

To get the deployment information

kubectl get deploy

Here we can see the status as ImagePullBackOff and the pods are not ready.

This occurs because my Docker image is not in a public registry, which is why Kubernetes could not pull the image mentioned in the YAML file.

In case you want to use your local image run the following commands.

Execute the Minikube Docker daemon configuration:

eval $(minikube docker-env)

Build your Docker image:

docker build -t demo-pyapp .

This ensures that the Docker image is built using the Minikube Docker daemon, making it available for use within your Minikube cluster.

If that does not work, push the image to Docker Hub or any public registry and reference that image name in the deployment, as I have done.

We can get the IP addresses of the pods using:

kubectl get pods -o wide

Our two pods are running as we set the replicas: 2 in deployment.yml

To get more detailed output about pods, we can add a verbosity flag, which ranges from 0 to 9:

  • Level 0: Normal output

  • Levels 1-3: Informational messages

  • Levels 4-6: Debugging information

  • Levels 7-9: Tracing information

kubectl get pods -v=7

This helps us debug pod errors.

Using these IP addresses, we can access our application only from inside the cluster, not from outside it.

We can see our app running inside the Minikube cluster, and we can check both pod IP addresses to confirm it.
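For example, from inside the Minikube node (a sketch; substitute the pod IPs reported by kubectl get pods -o wide):

minikube ssh                    # open a shell on the Minikube node
curl http://10.244.0.66:8000/   # hit the first pod directly by its IP
curl http://10.244.0.67:8000/   # hit the second pod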

But if we try to access the application from outside the cluster, we won't be able to reach it: the pod IPs and the default ClusterIP are only routable inside the cluster.

If we delete a pod, the ReplicaSet will immediately create a new pod with a different IP, but in the same subnet.
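We can observe this with a quick sketch, where <pod-name> is one of the names shown by kubectl get pods:

kubectl delete pod <pod-name>   # delete one of the two pods
kubectl get pods -o wide -w     # watch the ReplicaSet create a replacement with a new IP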

So our application is only accessible from inside the cluster network. To make the app reachable from the same organizational network or from the external world, we need a Service:

Type = NodePort: to access the app within the organization (same network).

Type = LoadBalancer: to access the application from the external world, i.e. anyone with internet access.

Let's create the service file, service.yml:

apiVersion: v1
kind: Service
metadata:
  name: python-django-service
spec:
  type: NodePort
  selector:
    app: sample-python-app
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30007

The selector in service.yml must match the label defined in the template section of deployment.yml. The targetPort is set to 8000 because our application listens on port 8000.

Apply the service.yml file:

kubectl apply -f service.yml

To list the services created:

kubectl get svc
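To confirm that the Service's selector really matched the pods from deployment.yml, we can also inspect its endpoints (a quick sanity check using the service name from service.yml):

kubectl describe svc python-django-service    # the Endpoints field should list both pod IPs
kubectl get endpoints python-django-service   # the same information in compact form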

We can see the app exposed on the node port, which is different from the ClusterIP. We can access the app either by running minikube ssh or by using the Minikube IP address in our browser.

This is inside the cluster.

To access from the external browser within the same network.
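The URL takes the form http://<minikube-ip>:<nodePort>; a sketch assuming the nodePort 30007 from service.yml:

minikube ip                         # prints the Minikube node's IP address
curl http://$(minikube ip):30007/   # same URL you would open in the browser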

Here we can access the application from our browser as well. People on other networks still won't be able to reach the application, because it is not a LoadBalancer-type service.

To access the application from an external browser on another network, we need to use the LoadBalancer type.

We changed the Service type from NodePort to LoadBalancer, and the external IP shows <pending> because we are running in a local environment. A load balancer IP is only provided by cloud providers, so this type is meant for managed cloud services like EKS, GKE, and AKS.

We can simply edit the existing service file or create a new file, Loadbalancer.yml:

apiVersion: v1
kind: Service
metadata:
  name: python-django-service
spec:
  type: LoadBalancer
  selector:
    app: sample-python-app
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30007
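As a side note, Minikube can simulate an external IP locally with its tunnel feature. A sketch, with minikube tunnel kept running in a separate terminal:

kubectl apply -f Loadbalancer.yml   # switch the service to type LoadBalancer
minikube tunnel                     # keeps running and assigns an external IP to LoadBalancer services
kubectl get svc                     # EXTERNAL-IP should now show an address instead of <pending>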

Kubeshark

Kubeshark is a simple tool that shows the real-time flow of traffic requests. It offers real-time, cluster-wide, identity-aware, protocol-level visibility into API traffic, empowering its users to see with their own eyes what's happening in all (hidden) corners of their K8s clusters.

To install it on Linux:

sh <(curl -Ls https://kubeshark.co/install)

To run the CLI, use the tap command.

kubeshark tap

Kubeshark is now running in one terminal tab.

In the next tab, we send 8-9 requests to our application and should see the traffic being distributed across the two pods with the IPs 10.244.0.67 and 10.244.0.66. Let's check it out.
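One simple way to generate those requests is a small loop (a sketch assuming the NodePort service on port 30007 from earlier):

for i in $(seq 1 9); do
  curl -s "http://$(minikube ip):30007/" > /dev/null   # send a request and discard the output
done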

This is a sample image taken from the internet, because I got an error on my Minikube: the kubeshark-worker-daemon-set pod was not created, so the requests sent from my terminal never reached Kubeshark. The error message "Error response from daemon: invalid CapAdd: capability not supported by your kernel or not available in the current environment: 'CAP_CHECKPOINT_RESTORE'" indicates that the CAP_CHECKPOINT_RESTORE capability is not supported or available in the Minikube kernel. But you can try it on your side.

In summary, Kubernetes Services emerge as an important component in managing the complexities of deploying applications at scale. By adopting a label-based service discovery approach, Services overcome the limitations of tracking individual pod IPs. The practical implementation within a Minikube cluster highlighted the significance of Services in providing external access to applications.

Happy Learning!!