Init containers are containers that run once during the startup process of a Pod. A Pod can have any number of init containers, and they each run to completion, one at a time and in order, before the app containers start.

A Kubernetes Pod can have one or more containers. A Pod with more than one container is a Multi-Container Pod.
In a multi-container Pod, the containers share resources such as network and storage. They can interact with one another, working together to provide functionality.

Best practice tip: always keep containers in separate Pods unless they need to share resources.

An example of a multi-container Pod use case:
The user has an application that is hard-coded to write log output to a file on disk. A second (sidecar) container in the same Pod can read that file from a shared volume and stream the logs elsewhere.
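A minimal sketch of this sidecar pattern — the image, volume name, file path, and commands here are illustrative assumptions, not a prescribed setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app                      # the app, hard-coded to log to a file
    image: busybox:stable
    command: ['sh', '-c', 'while true; do date >> /var/log/app.log; sleep 5; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-reader               # sidecar: streams the same file to stdout
    image: busybox:stable
    command: ['sh', '-c', 'tail -F /var/log/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs                     # shared storage both containers mount
    emptyDir: {}
```

Both containers mount the same `emptyDir` volume, which is how containers in one Pod share storage.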

Kubernetes can automatically restart containers whenever they fail. Using restart policies, the user can customize this behavior by defining when a Pod's containers should be automatically restarted.

Restart policies are an important component of self-healing applications, which are automatically repaired when a problem arises.

There are three possible values for a Pod's restart policy in Kubernetes: Always, OnFailure, and Never.

In Kubernetes, Always is the default restart policy. With this policy, containers will always be restarted if they stop, even if they completed successfully. OnFailure restarts containers only if they exit with a non-zero status or are found unhealthy, and Never means containers are never restarted automatically.
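The restart policy is set at the Pod level in the spec. A small sketch (the Pod name, image, and command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure            # Always (default) | OnFailure | Never
  containers:
  - name: task
    image: busybox:stable
    command: ['sh', '-c', 'exit 1']   # non-zero exit triggers a restart under OnFailure
```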

Kubernetes provides a number of features that allow users to build robust solutions, such as the ability to automatically restart unhealthy containers. To make the most of these features, K8s needs to be able to accurately determine the status of user applications.
This means actively monitoring Container Health.

Health checks are a good way to let Kubernetes know whether an instance of the application is working or not. Other containers or services should not send requests to, or try to access, a container that is in a bad state. Kubernetes implements health checks with probes: for example, a liveness probe tells Kubernetes when to restart an unhealthy container, while a readiness probe tells it when to stop sending the container traffic.
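A minimal sketch of a Pod using both probe types — the image, port, paths, and timings are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:stable
    ports:
    - containerPort: 80
    livenessProbe:                 # restart the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:                # withhold traffic until this check passes
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```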

Kubernetes provides various mechanisms, or knobs, that let the user define the amount of resources (CPU, memory) a container may require, or at least the amount the user thinks it may require.
There are also scenarios where the user may want to limit the resources given to containers.

In this post we will discuss such mechanisms.

  1. Resource Requests:

Resource requests allow the user to define the amount of resources (such as CPU or memory) a container is expected to use. The Kubernetes scheduler uses resource requests to avoid scheduling Pods on nodes that do not have enough available resources.
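An example of resource requests in a Pod definition, with limits alongside since the two are usually set together — the container name, image, and the specific amounts are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: busybox:stable
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:                # used by the scheduler when placing the Pod
        cpu: 250m              # 0.25 of a CPU core
        memory: 128Mi
      limits:                  # hard ceiling enforced at runtime
        cpu: 500m
        memory: 256Mi
```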

(Figure: different applications deployed in Kubernetes Pods)

As a user, when you are running applications in Kubernetes, you may want to pass dynamic values to your applications at runtime to control how they behave.
This is known as application configuration.

Two ways to store application configuration are:
1) ConfigMaps
2) Secrets

With ConfigMaps, one can store configuration data in Kubernetes. ConfigMaps store data in the form of a key-value map, and this data can be passed to a user's containerized applications.

Simple example of a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sales-dev
data:
  key1: value1
  key2: value2
  morekeys: data

Secrets are similar to ConfigMaps, but they are intended for sensitive data such as passwords, tokens, and keys, and their values are stored base64-encoded rather than in plain text.
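A minimal sketch of a Secret, plus a container consuming one of its keys as an environment variable. The names and values are illustrative assumptions (`cGFzc3dvcmQ=` is base64 for `password`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  db-password: cGFzc3dvcmQ=       # base64-encoded value
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: busybox:stable
    command: ['sh', '-c', 'sleep 3600']
    env:
    - name: DB_PASSWORD           # injected from the Secret at container start
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: db-password
```

ConfigMap data can be injected the same way, using `configMapKeyRef` instead of `secretKeyRef`.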

(Figure: Pods deployed on a Kubernetes worker node)

In this part of our ongoing Kubernetes series, let's talk about containers and Pods, which are essentially the building blocks of the Kubernetes ecosystem.
In some texts you may see the terms container and Pod used interchangeably, but in Kubernetes terminology each has its own distinct meaning.

A software container is a means of packaging an application or service together with everything required for it to run (dependencies such as the code, runtime, system libraries, etc.) in a single unit, regardless of environment.

But Kubernetes cannot create a container by itself. Kubernetes needs a container runtime, such as containerd or CRI-O, to actually create and run containers.

Backing up your Kubernetes cluster data by backing up etcd is of crucial importance. etcd is the backend storage for the deployed Kubernetes cluster: all the Kubernetes objects, applications, and configuration are stored in etcd.

1. Backing up etcd data is done using the etcd command-line tool, etcdctl:

ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save <filename>

2. Restoring etcd from a backup is done with the etcdctl snapshot restore command:

ETCDCTL_API=3 etcdctl snapshot restore <filename>
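In practice, etcdctl usually also needs TLS credentials to talk to the etcd server. A hedged sketch of the full flow — the endpoint, certificate paths, and snapshot/data-dir locations are typical kubeadm defaults, but assumptions here:

```shell
# Back up etcd to a snapshot file (adjust cert paths for your cluster)
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert   /etc/kubernetes/pki/etcd/server.crt \
  --key    /etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-snapshot.db

# Restore the snapshot into a fresh data directory,
# then point etcd at that directory when bringing it back up
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored
```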

Let's have a hands-on demo of the above operations. The steps are as follows:

1) Execute an etcdctl command to check the cluster name. Look up the…

In Kubernetes, while performing maintenance, it is sometimes necessary to remove a node from service. To do this, one can drain the node. In this process, containers running on the node being drained are gracefully terminated (and potentially rescheduled on another node).
Draining can be performed on a Control Plane node as well as a Worker node.

Command to drain a Kubernetes node:

kubectl drain <node-name> --ignore-daemonsets

--ignore-daemonsets is used because there may be DaemonSet Pods tied to the node; since DaemonSet Pods cannot be evicted in the usual way, the user ignores them while executing the drain.
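A hedged sketch of a typical maintenance flow (the node name is an assumption); `kubectl uncordon` returns the node to service once maintenance is done:

```shell
# Take the node out of service; its Pods are evicted and rescheduled elsewhere
kubectl drain worker-1 --ignore-daemonsets

# ... perform maintenance on the node ...

# Mark the node schedulable again so new Pods can land on it
kubectl uncordon worker-1
```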


(Figure: API interaction in a highly available Kubernetes cluster)

Kubernetes is already designed to deploy Pods (containers) in a highly available way. But what if the user wants the Kubernetes cluster itself to be HIGHLY AVAILABLE?
To accomplish this, the cluster needs multiple control plane nodes. With multiple control plane nodes come multiple instances of kube-apiserver, and with this arrangement a load balancer is needed so that worker nodes can communicate with the cluster using the Kubernetes APIs.

Therefore, in order to reach one of the control plane nodes' kube-apiserver instances, kubectl goes through the load balancer, as shown in the figure above.
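With kubeadm, this is typically set up by pointing the cluster at the load balancer's address when initializing the first control plane node. A sketch — the DNS name and port are illustrative assumptions:

```shell
# Initialize the first control plane node behind a load balancer endpoint
kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs

# Additional control plane nodes then join using the 'kubeadm join ... --control-plane'
# command printed by the init step (with the token and certificate key it produces)
```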


Gaurav Kaushik

Cloud, DevOps Enthusiast :)
