Introduction
Kubernetes has become the de facto standard for orchestrating containerized applications. There are many alternatives to Kubernetes, but it remains the most widely used tool for managing containerized workloads.
Through the E2E MyAccount portal, you can launch a Kubernetes master and worker nodes in a blink and get working with your Kubernetes cluster in no time.
This guide will show how to deploy Managed Kubernetes on E2E Cloud.
Getting Started
Here are the steps to be followed:
- Log in to MyAccount: Please go to ‘MyAccount’ and log in using the credentials you set up when creating and activating your E2E Networks ‘MyAccount’.
- Browse to the “Kubernetes” label under the Compute dropdown: After logging in to the E2E Networks ‘MyAccount’, your dashboard will appear. Just below the dashboard icon, click on Compute and choose Kubernetes from the available options.
Now you can create a Kubernetes cluster. How? Let us guide you through:
On the top-right section of the Managed Kubernetes dashboard, click the “Create Kubernetes” button. This will take you to the cluster page, where you select the configuration and enter the details of your cluster.
Kubernetes Configuration and Setup
- After clicking “Create Kubernetes”, you will see a configuration summary that gives you a glimpse of the plan, such as the cluster name, version, and price. Click “Add Plan” to select the required configuration and settings for your Kubernetes cluster.
- A mini screen will then appear displaying a list of plans to choose from, with tabs for filtering plans by vCPUs, SSD storage, and RAM. Pick one from the drop-down menu according to your use case.
- Now that you have chosen the plan, increase or decrease the worker count and enter a label name.
- Click on “Add Plan” under Actions.
- You still have a choice to make modifications in your plan.
- Once you are finally set, Click on “Add Plan” from the bottom right.
- It will take you back to the previous screen with your chosen configurations automatically filled up under “Add Node Pool Plan”.
- Now, you have to “Select VPC”. A VPC must be launched along with the Kubernetes cluster to improve the security of your infrastructure. If you don’t have a VPC launched yet, create one first.
- You are now ready to create a cluster. Click on “Create Cluster”: it will take a few minutes to set up the scale group, and you will then be taken to the ‘Manage Kubernetes’ page.
Manage your Kubernetes
The following will be visible once you create a cluster:
- Cluster Details: You will be able to check all the basic details of your Kubernetes cluster, including the cluster name and Kubernetes version.
- Node Pool: Here you can resize, i.e. increase or decrease, the pool size.
The Kubeconfig.yaml file and Token are the two most important tabs for managing the Kubernetes cluster.
Note: How To Download Kubeconfig.yaml File?
- After downloading the kubeconfig file, please make sure kubectl is installed on your system.
- To install kubectl, follow these steps:
(Before you begin: you must use a kubectl version that is within one minor version of your cluster. For example, a v1.25 client can communicate with v1.24, v1.25, and v1.26 control planes. Using the latest compatible version of kubectl helps avoid unforeseen issues.)
Install kubectl on Linux
The following methods exist for installing kubectl on Linux:
- Install kubectl binary with curl on Linux
- Install using native package management
- Install using other package management
We will explain how to install the kubectl binary with the curl command on Linux.
- Download the latest release with the command:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
- Validate the binary (optional):
Download the kubectl checksum file:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
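With both files downloaded, the binary is verified by feeding sha256sum --check a line of the form "checksum  filename". The sketch below demonstrates the pattern on a stand-in file so it is self-contained and runnable anywhere; for kubectl itself, substitute the downloaded files as shown in the comments.

```shell
# Validation pattern for a downloaded binary: "<checksum>  <filename>" piped
# into sha256sum --check. A stand-in file keeps this example self-contained.
printf 'example content' > demo-binary
sha256sum demo-binary | awk '{print $1}' > demo-binary.sha256

# For the real kubectl binary, the equivalent command would be:
#   echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# After a successful check, install it with:
#   sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
echo "$(cat demo-binary.sha256)  demo-binary" | sha256sum --check
# prints: demo-binary: OK
```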
- Run kubectl --kubeconfig="download_file_name" proxy
Now open the URL below in the browser to check your Kubernetes dashboard, which is ready to connect to and serve your cluster:
The screen will appear like this
Now, go to MyAccount. On the cluster details screen, you will find “Kubeconfig Token”. Click on “Show Token”, copy this token, and paste it into the Kubernetes dashboard.
With this, you get a fully featured Kubernetes installation that can run and orchestrate any pod in the cluster.
Now we will explain the various modules available in MyAccount.
Active Node Pool details
Here, you can easily see the plan name, state, etc. under Node Pool. The Active Node Pool Details tab provides information about the worker nodes. Users can also increase the number of worker nodes or delete them.
Persistent Volume (PVC)
When you click on PVC (Persistent Volume Claim, provisioned through the Container Storage Interface, CSI), there will be no PVC pre-added. You will see a “Click here” option to add a persistent volume. Based on customer feedback, MyAccount now allows a customer to choose a PVC as small as 10 GB. A PVC is required to create stateful applications (stateful applications save data to persistent disk storage for use by the server, by clients, and by other applications; an example of a stateful application is a database or key-value store to which data is saved and retrieved by other applications).
E2E offers several persistent volume sizes, such as 10 GB, 20 GB, and 50 GB.
You can select the required persistent volume size from the drop-down and give it a name before creating it.
It will take a few minutes to be created, after which it will appear under the PVC module.
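Behind the scenes, such a request corresponds to a PersistentVolumeClaim object. A minimal sketch of a 10 GB claim is shown below; the claim name is an illustrative placeholder, not an E2E-specific value, and the storage class is left to the cluster default.

```yaml
# Hypothetical 10 GB claim; metadata.name is an illustrative placeholder.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce        # volume mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
```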
LB IP Pool
You can use public addresses for communication between your Kubernetes cluster and the Internet. When you launch an E2E Kubernetes cluster, we assign it a public IP address by default. This public IP is not a reserved IP address by default. You can reserve the default assigned public IP address for your account, and it will remain mapped to your MyAccount until you release it.
Here, you will see an option to reserve your Public IP.
Reserving a new public IP ensures that an IP will be reserved for your MyAccount and remain with you until you release it. The reserved IP can be attached to the Kubernetes master node.
Please note: standard monthly charges are 199.0 infra credits for each reserved IP.
Now, when you click on “Reserve a new IP”, you will be asked to select one of the IPs from the list to attach to the Kubernetes cluster. If you have not reserved any IP before, the drop-down list will be empty.
You will be able to see the attached IP, and you will also be given an option to attach more IPs in case you want.
You can also use private IPv4 addresses for communication between instances in the same VPC. When you launch an E2E Kubernetes, we allocate a private IPv4 address for your Kubernetes.
Kubernetes Security Checklist:
The following guidelines are important when creating a robust and reliable Kubernetes production setup for running critical applications.
- Authentication & Authorization
- system:masters group should not be used for user or component authentication after bootstrapping.
- The kube-controller-manager should be running with --use-service-account-credentials enabled.
- The root certificate should be protected (either an offline CA, or a managed online CA with effective access controls).
- Intermediate and leaf certificates should have an expiry date no more than 3 years in the future.
- There should be a process for periodic access review, and reviews occur no more than 24 months apart.
- The Role Based Access Control Good Practices should be followed for guidance related to authentication and authorization.
- Keep security vulnerabilities and attack surfaces to a minimum for the Cluster and Applications.
Lock down the pods and nodes, with traceable break-glass policies. Ensure that the applications you are running are secure and that the data you are storing is secured against attack. And because Kubernetes is a rapidly evolving open source project, stay on top of updates and patches so that they can be applied in a timely manner.
- Segregate the Kubernetes Cluster and Configure usage limits.
Segregate Production Kubernetes Cluster to make sure that rapid changes happening in Infrastructure and application level do not impact production workloads. This segregation could be physical or logical, and based on the setup proper guardrails need to be implemented. As Kubernetes is mostly used as a shared infrastructure, proper usage limits need to be applied for running applications based on type and criticality of workloads, to minimize the impact of an outlier. Namespace level isolation and resource limits are common practice for this type of enforcement.
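Namespace-level resource limits of the kind described above can be sketched with a ResourceQuota; the namespace name and quota values below are illustrative assumptions, to be tuned to the workload's actual criticality.

```yaml
# Illustrative namespace-level usage limits; names and values are placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a        # illustrative namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requested across the namespace
    requests.memory: 8Gi
    limits.cpu: "8"        # total CPU limit across the namespace
    limits.memory: 16Gi
    pods: "20"             # cap on pod count to contain outliers
```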
Kubernetes Error Handling:
Most problems with Kubernetes adoption ultimately stem from the complexity of the technology itself. There are non-obvious difficulties and nuances of implementation and operation, as well as underutilized advantages.
1. The selector of the labels on the service does not have a match with the pods: In order to function correctly as a network balancer, a service generally specifies selectors that allow you to find the pods that are part of the balancing pool. If there is no match, the service has no endpoints to forward traffic to and an error occurs. Bear in mind that the load balancing towards the pods is of a random type.
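For example, a Service's spec.selector must match the labels on the pods' template; a minimal sketch of a correct match (names, labels, and the image are illustrative placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # illustrative name
spec:
  selector:
    app: web               # must match the pod labels below
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web           # if this did not match the Service selector,
                           # the Service would have no endpoints
    spec:
      containers:
        - name: web
          image: nginx     # illustrative image
```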
2. Wrong container port mapped to the service: Each service has two fundamental parameters, “targetPort” and “port”, which are often confused and misused. This confusion then results in error messages claiming that the connection was refused or there was a lack of response to the request. To avoid this error, remember that “targetPort” is the destination port in the pods, the one to which a service goes to forward traffic. The “port” parameter, on the other hand, refers to the port exposed by the service to the clients. They can be the same, so it is essential to know their meanings!
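The distinction can be sketched in a Service spec (the name and port values are illustrative assumptions):

```yaml
# "port" is what clients connect to on the Service; "targetPort" is the
# container port in the pods to which traffic is forwarded.
apiVersion: v1
kind: Service
metadata:
  name: api                # illustrative name
spec:
  selector:
    app: api
  ports:
    - port: 80             # exposed by the Service to clients
      targetPort: 8080     # destination port inside the pods
```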
3. CrashLoopBackOff: Another frequent Kubernetes error is the CrashLoopBackOff error. It occurs when a pod is scheduled, but one of its containers keeps terminating (usually abnormally) and restarting. In other words, the container has fallen into a start-crash-start-crash loop.
Log of CrashLoopBackOff error: The CrashLoopBackOff error can occur due to various reasons — the wrong deployment of Kubernetes, liveness probe misconfiguration, and init-container misconfiguration. An easy way to resolve this error is by properly configuring and deploying Kubernetes. However, you can also bypass the error by creating a separate deployment with the help of a blocking command.
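The "blocking command" workaround mentioned above can be sketched as a Deployment whose container runs a long-lived no-op command, so the pod stays up for debugging instead of crash-looping (the names and image are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debug-app          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debug-app
  template:
    metadata:
      labels:
        app: debug-app
    spec:
      containers:
        - name: app
          image: myapp:latest           # illustrative image
          # Blocking command: overrides the failing entrypoint and keeps the
          # container alive so you can exec in and investigate.
          command: ["sleep", "infinity"]
```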
4. Liveness and readiness probes: Several mistakes are made regarding probes. The first is not defining any health check for the application, which will never be restarted in case of problems and will always remain within the load-balancing pool of a service. The second type of error concerns defining equal liveness and readiness probes by contacting the same HTTP endpoint, for example. It may be due to a misunderstanding of these types of tests. The liveness probe is linked to the concept of a healthy application, so if it fails, the pod will be restarted.
Kubernetes Best Practices:
- Using Namespaces: Namespaces in Kubernetes are important to utilize when organizing your objects, creating logical partitions within your cluster, and for security purposes. By default, there are three namespaces in a Kubernetes cluster: default, kube-public, and kube-system. RBAC security controls can be used to restrict access to particular namespaces in order to limit a group's access.
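A sketch of namespace-scoped RBAC, granting one group read-only access to pods in a single namespace (the namespace, role, and group names are illustrative assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # illustrative namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs      # illustrative group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```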
- Liveness Probes: Readiness and liveness probes are types of health checks, and another very important concept to utilize in Kubernetes. Readiness probes ensure that requests to a pod are only directed to it when the pod is ready to serve them. A liveness probe checks container health according to the check we define, and if the probe fails, the container is restarted.
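Distinct readiness and liveness probes on a container might be sketched like this; the HTTP endpoints, ports, and timings are illustrative assumptions your application would need to match.

```yaml
# Container snippet from a pod spec; /healthz and /ready are illustrative
# endpoints the application is assumed to expose.
containers:
  - name: app
    image: myapp:latest    # illustrative image
    livenessProbe:         # failing this restarts the container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:        # failing this removes the pod from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```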
- Autoscaling: Auto Scaling can be employed appropriately to adjust the number of pods in a dynamic way. The amount of resources consumed by the pods, or the number of nodes in the cluster (cluster autoscaler), depends on the demand for the resources. Kubernetes allows you to scale the pods automatically to optimize resource usage and make the backend ready according to the load in your service. Horizontal Pod Autoscaler which is a built-in component can scale your pods automatically. Firstly, we are required to have a Metrics Server to collect the metrics of the pods. To provide metrics via the Metrics API, a metric server monitoring must be deployed on the cluster. Horizontal Pod Autoscaler uses this API to collect metrics.
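Given a running Metrics Server, a Horizontal Pod Autoscaler targeting CPU utilization can be sketched as below; the names, replica bounds, and utilization target are illustrative assumptions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```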
- Using Resource Requests & Limits: If the node (where a Pod is running) has enough of a resource available, it's possible (and allowed) for a container to use more of that resource than its request specifies. However, a container is not allowed to use more than its resource limit. For example, if you set a memory limit for a container, the kubelet and container runtime enforce it; a container that tries to allocate more memory than its limit is terminated with an out-of-memory error.
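Requests and limits are set per container; a minimal sketch (the name, image, and values are illustrative assumptions, to be sized to the actual workload):

```yaml
# Container snippet: the scheduler places the pod based on "requests";
# the runtime enforces "limits".
containers:
  - name: app
    image: myapp:latest    # illustrative image
    resources:
      requests:
        memory: 256Mi      # guaranteed minimum used for scheduling
        cpu: 250m
      limits:
        memory: 512Mi      # exceeding this gets the container OOM-killed
        cpu: "1"           # CPU beyond this is throttled
```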