Kubernetes API Server Explained

👋 Hi! I’m Bibin Wilson. In each edition, I share practical tips, guides, and the latest trends in DevOps and MLOps to make your day-to-day DevOps tasks more efficient. If someone forwarded this email to you, you can subscribe here to never miss out!

The kube-apiserver is the core component of a Kubernetes cluster, serving as the central hub that exposes the Kubernetes API.

It is designed to be highly scalable, capable of handling a large number of concurrent requests efficiently.

End users and other cluster components talk to the cluster via the API server. Occasionally, monitoring systems and third-party services also talk to the API server to interact with the cluster.

So when you use kubectl to manage the cluster, you are actually communicating with the API server through its HTTP REST API. The API server, in turn, talks to the etcd component over gRPC.
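You can see these REST calls for yourself by raising kubectl's log verbosity (a quick sketch; the exact URLs and log format depend on your cluster):

```shell
# Verbosity level 8 logs the HTTP requests and responses that kubectl
# sends to the API server, e.g. a line similar to:
#   GET https://<api-server>:6443/api/v1/namespaces/default/pods
kubectl get pods -v=8
```

This makes it clear that kubectl is just a convenient REST client for the API server.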

All communication between the API server and other components within the cluster is encrypted using TLS (Transport Layer Security) to ensure secure access and prevent unauthorized interventions in the cluster's operations.

⚠️ Note: If you're new to Kubernetes, the information below might be challenging to grasp at first. However, with hands-on experience, it will start to make sense.

The Kubernetes API server is responsible for the following.

  1. API management: Exposes the cluster API endpoint (REST) and handles all API requests. The API is versioned, and it supports multiple API versions simultaneously.

    The API request could be internal or from external users, K8s SDKs, third party apps etc.

  2. Authentication: The API server supports several authentication methods such as client certificates, bearer tokens, and HTTP Basic Authentication.

  3. Authorization: Once the API server has authenticated a request, it evaluates the request against its authorization policies. The API server supports Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) for authorization.

  4. Processing API requests and validating data for the API objects like pods, services, etc. (Validation and Mutation Admission controllers).

    For example, if you try to create a pod with an invalid memory limit (e.g., specifying memory as "1TB" instead of "1024Gi"), the API server will reject the request, preventing invalid configurations from entering the system.

  5. etcd is the only component that the kube-apiserver initiates a connection to; all the other components connect to the API server.

  6. Each component (Kubelet, scheduler, controllers) independently watches the API server to figure out what it needs to do.
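To see the authorization step from point 3 in action, you can ask the API server whether a given action would be allowed (the service account name below is hypothetical):

```shell
# Ask the API server: can the current user create pods in "default"?
# The answer is evaluated against the cluster's RBAC policies.
kubectl auth can-i create pods --namespace default

# Check on behalf of another identity, e.g. a hypothetical service account:
kubectl auth can-i list secrets --as system:serviceaccount:default:my-app
```

Each command prints `yes` or `no` depending on the policies configured in your cluster.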

Let’s break down what watch is and how this communication happens:

The watch mechanism in Kubernetes is used to observe changes to resources. Instead of continuously polling the API server for changes (which would be inefficient), components like the Kubelet and Controllers can establish a watch to listen for events or changes in resource objects (like Pods, Services, etc.).

This is done through a long-running HTTP request.

Here’s how it works:

  • A component (e.g., Kubelet or Kube-scheduler) tells the API server that it wants to watch a specific resource, say Pods or Nodes.

  • The API server responds by maintaining a persistent HTTP connection and sends real-time updates (events) when changes occur to the resource.

  • The component then takes appropriate action based on the event it received (e.g., the scheduler schedules a Pod, the Kubelet starts/stops a Pod, etc.).
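The steps above can be sketched from the command line. At the HTTP level, a watch is just a GET request with `?watch=true` that the API server keeps open (the namespace and port below assume a `kubectl proxy` running on its default port 8001):

```shell
# High-level: kubectl holds a watch open and prints Pod changes as they happen.
kubectl get pods --watch

# Low-level: the same mechanism as a raw long-running HTTP request.
# Run `kubectl proxy` first so the request is authenticated for you.
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=true"
```

Both commands block and stream events until you interrupt them, which is exactly the long-running connection described above.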

API Server Proxy

By default, services running inside the cluster are not accessible outside the cluster unless they are exposed using NodePort or LoadBalancer services.

There are use cases where an administrator or a developer needs to access internal services that are not exposed outside the cluster, such as for administrative tasks or debugging.

For this purpose, the API server has a built-in apiserver proxy. It is part of the API server process and is primarily used to enable access to ClusterIP services from outside the cluster.

You can start the API server proxy using the following command.

kubectl proxy --port=8080
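With the proxy running, you can reach a ClusterIP service through the API server's service proxy path (the service and namespace names below are hypothetical):

```shell
# Generic form of the service proxy path:
#   /api/v1/namespaces/<namespace>/services/<service-name>:<port>/proxy/
# Here we reach a hypothetical "my-service" on port 80 in "default":
curl http://127.0.0.1:8080/api/v1/namespaces/default/services/my-service:80/proxy/
```

The request travels from your machine to the API server, which proxies it to the service inside the cluster and returns the response.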

The API server proxy is not just limited to kubectl proxy. It is also used in several other commands that involve accessing resources inside the cluster.

Here are two common examples:

The kubectl port-forward command establishes a direct link between your local machine and a pod’s port, allowing you to access services running inside the pod as if they were local to your machine.

kubectl port-forward <pod-name> 8080:80

The kubectl exec command allows you to run commands inside a container within a pod, such as checking logs or debugging.

kubectl exec -it <pod-name> -- /bin/bash

Here is how it works.

The API server proxy is part of the API server process, meaning that the traffic flow is managed through the API server itself:

  • When you issue a command like kubectl proxy or kubectl port-forward, your request first goes to the API server.

  • The API server forwards the request to the appropriate service or pod within the cluster.

  • The response is then sent back to the client (your local machine) via the API server.

API Server Aggregation Layer

The API server contains an aggregation layer which allows you to extend the Kubernetes API with custom API resources and controllers that are not natively available in Kubernetes.

A real-world example: the Prometheus Adapter provides a custom API that extends Kubernetes to expose custom metrics stored in Prometheus. This API is served under the path /apis/custom.metrics.k8s.io/v1beta1.

This setup allows Kubernetes users to make autoscaling decisions based on application-specific metrics that are more relevant to the application's performance and health, rather than relying solely on general system metrics like CPU and memory usage.
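You can inspect the aggregation layer directly; the custom metrics query below assumes a Prometheus Adapter (or similar aggregated API) is installed in the cluster:

```shell
# List registered APIService objects; aggregated APIs appear alongside
# the built-in API groups.
kubectl get apiservices

# Query the custom metrics API through the aggregation layer.
# This only returns data if an adapter serving this group is installed.
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
```

From the client's point of view, the aggregated API is indistinguishable from a built-in one: the API server routes the request to the extension server behind the scenes.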

Security Note: To reduce the cluster attack surface, it is important to secure the API server. A scan by the Shadowserver Foundation discovered over 380,000 publicly accessible Kubernetes API servers.
