
Why Kubernetes - k8s

Why you might want to use Kubernetes, and why you might not

Kubernetes (k8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. K8s has become increasingly popular in recent years as more organizations adopt containerization to simplify their application development and deployment processes. While I have been involved in large-scale Kubernetes deployments, either through a PaaS (Platform as a Service) solution such as Rancher, OpenShift, or VMware Tanzu, or as a managed offering from AWS, Google, or Azure, I sometimes feel it is overkill to leverage k8s for certain types of workloads.

So why should you, or shouldn't you, leverage K8s?

Scalability

K8s makes it easy to scale your applications up or down depending on your workload type, especially in microservices use cases. With autoscaling tools like the HPA (Horizontal Pod Autoscaler) or VPA (Vertical Pod Autoscaler), you can declare your scaling metrics for a workload in a manifest and K8s handles the rest. Metrics can range from application resource utilization to traffic at an ingress point, and sometimes external or custom metrics.
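
To make this concrete, here is a minimal HPA manifest sketch. The target Deployment name "my-app" and the replica and CPU thresholds are illustrative assumptions, not taken from any real setup:

```yaml
# Scale a hypothetical "my-app" Deployment between 2 and 10 replicas,
# aiming for an average CPU utilization of 70% across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```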

Availability

K8s is designed to ensure high availability of applications. It offers features like automatic failover and self-healing: in the case of an infrastructure failure on a k8s node, or an application failure in the pod itself, K8s will automatically spin up a new instance to take its place, or reschedule the affected pods onto healthy nodes when a node fails. This keeps your application available even in the face of hardware or software failures. Bear in mind that you may need to configure the availability conditions yourself, with things like liveness and readiness probes.
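
Here is a minimal sketch of what those probes can look like on a container. The /healthz and /ready endpoints and the image name are hypothetical; your application has to actually expose those endpoints for this to work:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # hypothetical image
      livenessProbe:              # restart the container if this check fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:             # pull the pod out of Service endpoints if this fails
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```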

Portability

K8s is a platform-agnostic tool, which means it can run on any cloud provider or on-premises infrastructure. This makes it easy to migrate your applications between different environments, and ensures that you aren't locked into a specific vendor or technology stack. Just package your workload into containers and you are ready to go.

DevOps integration

K8s is designed to work seamlessly with modern DevOps tools and practices. It provides APIs and automation capabilities that can be integrated with your existing CI/CD pipelines, allowing you to automate your whole software lifecycle from development to deployment.
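
As a rough illustration, a deploy step in a pipeline can boil down to a couple of kubectl calls. The sketch below assumes GitHub Actions syntax; the registry, deployment name, and KUBECONFIG secret are all hypothetical placeholders:

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure cluster access
        run: echo "${{ secrets.KUBECONFIG }}" > kubeconfig   # hypothetical secret
      - name: Roll out the new version
        run: |
          kubectl --kubeconfig kubeconfig set image deployment/my-app \
            my-app=registry.example.com/my-app:${{ github.sha }}
          kubectl --kubeconfig kubeconfig rollout status deployment/my-app
```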

While K8s offers many benefits, it is also a complicated system with a steep learning curve. With that said, here are a few notable complexities of working with K8s that you should look at critically before taking the plunge.

Complexity of configuration

K8s configuration files can be complicated and difficult to manage, particularly for large-scale applications; you need to spend a significant amount of time learning the syntax and structure of these files in order to create effective configurations. And do not get me started on the extra tools that assist with templating and configuration management on K8s, such as Helm and Kustomize, or GitOps operations with Flux CD, Spinnaker, Argo CD, and many more. Each tool is a huge ecosystem on its own, and one can drown in how much there is to handle.
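
For a small taste of that ecosystem, here is a minimal Kustomize overlay sketch. The base directory, patch file, and image tag are illustrative assumptions:

```yaml
# kustomization.yaml for a hypothetical "prod" overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base                      # shared manifests live in a "base" directory
patches:
  - path: replica-patch.yaml     # hypothetical patch bumping replicas for prod
commonLabels:
  env: prod                      # stamp every resource with the environment
images:
  - name: my-app
    newTag: "1.4.2"              # pin the image tag per environment
```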

Resource management

K8s offers a large arsenal of resource types, including Pods, ReplicaSets, Deployments, StorageClasses, DaemonSets, StatefulSets, Operators, and CRDs (Custom Resource Definitions), which allow you to extend the K8s API with your own types. Managing these resources, and knowing the interrelationships needed to get things working seamlessly, can be a huge task; it might require a large team in the case of large-scale deployments.
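
To illustrate how these resources interrelate, here is a minimal Deployment sketch: the Deployment manages a ReplicaSet, which in turn manages the Pods. All names and resource figures are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                    # the ReplicaSet keeps 3 pods running
  selector:
    matchLabels:
      app: my-app
  template:                      # pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # hypothetical image
          resources:
            requests:            # what the scheduler reserves per pod
              cpu: 100m
              memory: 128Mi
            limits:              # hard cap before the pod is throttled/killed
              memory: 256Mi
```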

Monitoring and logging

K8s provides basic monitoring and logging capabilities out of the box, but these won't be enough for more complicated applications. There is always a need to integrate additional monitoring and logging tools in order to operate your application efficiently. This is another vast ecosystem: notable tools such as Prometheus, the ELK stack, Datadog, New Relic, Splunk, and many more are available, each introducing a new learning curve.
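
For example, a common (but not universal) convention is to annotate pods so that a suitably configured Prometheus discovers and scrapes them. This is a sketch under that assumption; the annotations do nothing unless your Prometheus deployment is configured to honor them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"    # opt this pod into scraping
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"  # hypothetical metrics endpoint
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # hypothetical image
      ports:
        - containerPort: 8080
```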

Cluster management

K8s requires a cluster of nodes to run your applications and to meet critical business requirements (HA and DR). Managing this cluster, including scaling, upgrading, and securing the nodes, can be complex and time-consuming. Most cloud providers offer managed K8s solutions to combat this, but a lot of investigation needs to go into understanding these offerings to be sure they meet the desired intent.

Networking

K8s networking is governed by the CNI (Container Network Interface), which defines the networking model for the container runtime. However, setting up and managing this networking can be difficult, and the choice of CNI plugin determines the tool set available for managing cluster networking. This is not an easy choice, especially in large-scale application deployment environments. I usually stick to the defaults from cloud providers, but in some cases I have noticed that features like network policies might not be readily available.
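
As an example of what you gain when the CNI plugin does support them, here is a minimal NetworkPolicy sketch restricting traffic to a hypothetical backend. The app labels and port are illustrative:

```yaml
# Only pods labeled app=frontend may reach pods labeled app=backend
# on TCP 8080. Enforced only if the CNI plugin supports network policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```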

Security

K8s provides many built-in security capabilities, including network policies and RBAC (role-based access control). However, configuring and managing these features can be complex and calls for a deep understanding of K8s security best practices.
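
For a flavor of what that configuration looks like, here is a minimal RBAC sketch: a namespaced Role that can only read pods, bound to a hypothetical user. The namespace and user name are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]              # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane                   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```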

Upgrades and maintenance

K8s is a rapidly evolving platform, with new versions and features released frequently. Keeping your cluster up to date and managing upgrades can be difficult, especially if you have a large and complex deployment.

Running Cost

While K8s itself is open source and free to use, running and managing a K8s cluster in production can be expensive. K8s requires a significant amount of computing resources, which can quickly add up in cloud hosting fees. Moreover, managing a K8s cluster requires specialized knowledge and expertise, which may mean additional staffing or outsourcing costs.

Overkill

For smaller teams or applications with limited scalability requirements, K8s may be overkill. Other container orchestration solutions, such as Docker Swarm or Nomad, can be simpler and more cost-effective options.

Compatibility

While K8s is a platform-agnostic tool, it may not be compatible with every application or technology stack. Legacy systems may require particular infrastructure or networking configurations that aren't supported by K8s. Moreover, some cloud vendors have their own proprietary container orchestration tools that are more tightly integrated with their infrastructure.

Summary

While K8s is a popular container orchestration tool, there are certain scenarios where it may not be the best fit:

  1. Small-scale applications: If you have a simple application with little to no complexity and no scaling requirements, K8s is a poor fit. K8s adds complexity to small-scale applications, which can make them harder to maintain. Some frontend or static web apps fall into this category.
  2. Static applications: If you have a static application that does not change frequently and is designed for short bursts of compute over a short period, then K8s would be overkill; you are better off working with FaaS (Functions as a Service). K8s is designed to manage applications that are dynamic and have changing resource requirements.
  3. Limited resources: If you have limited resources, such as a small team or a tight budget, then it is best to leverage a managed solution. K8s can be complex to set up and manage, which can require a significant investment of time and resources.
  4. Non-containerized applications: K8s is used chiefly to orchestrate containerized workloads; if your application is not containerized, K8s has little to offer it.
  5. Unpredictable workloads: If your workloads are unpredictable, with large spikes in resource usage, then K8s might not be the best fit; it is designed to manage workloads that are predictable and have a consistent resource usage pattern. There are now some offerings from cloud providers that let you specify a cluster resource band and leverage a cluster autoscaler to scale the whole cluster to meet workload demand, but this in itself requires a lot of checks and balances, and keeping a close eye on the billing as time goes.
This post is licensed under CC BY 4.0.