Kubernetes vs. serverless: when to use and how to choose? — part 1

Alex Ivanov
June 2, 2021

Introduction

Nowadays, when it comes to developing and hosting a modern application - or even an entire IT infrastructure - there are two buzzwords you hear all the time: Kubernetes (k8s) and Serverless. Some participants in the k8s vs. Serverless debate even claim that one is beginning to replace the other, which makes us smile. The reason behind our reaction is simple: every business case needs an individually selected, well-planned, and manageable solution, whatever combination of technologies that requires.

Both businesses and developers care about how efficiently each specific technology helps them reach business goals and how much time and resources it can save.

Let's take a closer look at Serverless vs. Kubernetes, see what they have in common and where they differ, and why you would go with one over the other from a business standpoint or for development tasks. This article will introduce both Kubernetes and Serverless and explain how to choose between them wisely.

Basic glossary, or a quick refresher on terms

Infrastructure - a set of fundamental facilities and systems that support the sustainable functionality of your applications.

Container - an isolated environment in which an application runs with all the necessary components and dependencies packed inside; accordingly, containerization is the process of packing an app and everything it needs to run flawlessly into containers.

Microservices - a design approach to complex programs or services that breaks the system into small, independently deployable services and components.

Kubernetes (k8s) - a tool for launching and managing containerized apps according to a declared container configuration.

Serverless - a model of managing computing resources in which you buy only the computing time your functions consume in the cloud, and everything else is the cloud provider's headache.

fPaaS (Function Platform as a Service) - a platform (typically a cloud one) that provides a set of interconnected functions for computing, data processing, and storage; popularized by AWS Lambda.

IaaS (Infrastructure as a Service) - virtualized cloud computing resources, connected and managed by cloud providers, on which you run your virtual machines.

The reasons k8s and serverless technologies appeared

Let's take a small trip back to 2013. Google had to deal with increasing traffic loads on services like Gmail and Maps, and handled them with containerization, run by the engineers behind its internal projects known as Project Seven and Borg.

Meanwhile, a startup from sunny San Francisco named dotCloud presented Docker at the 2013 PyCon conference. Docker became a game-changing tool for software container management, easy to use from the command line. A year later, Google published Kubernetes as open source code written in Go, a remake of the old Borg, which had been written in C++. This leads to one of the biggest Kubernetes pros: it is still an open-source technology with an active community, broad standards, and a blooming list of services and tools built on top of k8s.

Everyone was excited: by 2018, the k8s project had become one of the most popular repositories on GitHub, and today it is a well-known industry standard. Being free, portable, extensible, and built around declarative configuration, it quickly captured the hearts and minds of DevOps engineers and businesses.

Why so?

Because over the last 10 years, more and more developers have adopted the DevOps culture, which is built on blurring the borders between development and operations teams for the sake of more stable work, shorter release cycles, faster issue fixing, fewer security flaws, and so on.

DevOps lifecycle.

In companies that adopted DevOps practices, the whole infrastructure changed: everything was aimed at reducing costs, improving security, and making changes easier to roll out. Both containers and microservices turned out to fit the needs of these evolving teams. Containers let DevOps engineers run apps without limitations of language, platform, or environment, simply by reusing a previously defined setup. Microservices turn massive systems with vertical hierarchies and dependencies into a well-connected horizontal set of independently launched services that together do the same job as the previous infrastructure.

As both technologies were adopted by more and more companies, developers used and improved k8s as the most popular tool for container management. One of the best things about Kubernetes is its declarative nature: a developer describes the desired environment once, and Kubernetes takes care of bringing together everything needed - resources, APIs, dependencies, and so on.
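
To make the declarative idea more concrete, here is a minimal sketch of describing a small Deployment with the official Kubernetes Python client. The image, replica count, and labels are illustrative assumptions, not a recipe from any real project:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. a minikube or kind cluster).
config.load_kube_config()

# Declare the desired state: three replicas of a hypothetical "hello-web" app.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.21",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Submit the declaration; Kubernetes reconciles the cluster toward this state.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Notice that the code only says what should exist; scheduling, restarts, and placement remain on the Kubernetes side.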

Thanks to this declarative nature, programmers get closer to a microservices implementation and deal with smaller tasks that fit more easily into an Agile workflow. These infrastructure changes can happen not only on physical servers but also on virtual or local machines - anywhere Kubernetes can be launched.

This is where cloud technologies gave companies a few insights.

When companies started to move into the cloud and settle there, they found out that their apps and services would be competitive and more stable than others only if they could scale in response to spikes in traffic or requests - and scale down resources that aren't currently in use. This drastically decreases the cost of cloud-hosted apps.

Also, lots of companies break their apps and programs down into simple linear or tree-like event-driven structures, and they need them to run somewhere cheaply and efficiently. And there are always those who build programs so simple that their entire functionality map fits on a single A4 sheet. These circumstances gave birth to Serverless as a software architecture style.

After this short historical trip, let's dive into each technology separately and go through the debatable Kubernetes pros and cons in comparison to the Serverless pros and cons.

More on Kubernetes

Let's have a look at how this technology works to give you more background for evaluating Kubernetes' advantages and disadvantages.

You can set up k8s practically everywhere. It supports a wide variety of cloud providers around the globe, and that's one of the top advantages of Kubernetes. In production, the platform normally consists of a control plane that handles k8s cluster management and worker nodes that run your application workload.

If you are hosted in the cloud, you can not only run the entire k8s cluster on VM (IaaS) services, but also offload the installation, management, and support of master nodes and the whole cluster control plane to the cloud provider and focus on what really matters for your applications.

Typical Kubernetes project structure. Source

This option is called a "managed Kubernetes service," and among the Big Three it is represented by Elastic Kubernetes Service (AWS), Azure Kubernetes Service (Azure), and Google Kubernetes Engine (GCP). Normally, a vendor charges a small fee to run and manage a Kubernetes cluster for you; on AWS, for example, it costs $0.10/hour per cluster.

And that's not all. Here you'll find a set of Kubernetes pros and cons that developers keep debating: on the one hand, you can pick every component to fit your customized needs; on the other, you might prefer to delegate all of this and not think about it too much:

  • An API to manage your cluster and container workload;
  • DB (etcd) to store cluster and container workload configuration;
  • Storage and network plug-ins to provide software-defined storage and a network to operate your cluster and the applications it runs;
  • CoreDNS implementation to support name resolution;
  • Authentication, load balancing, scaling mechanisms, and probes (see the probe sketch right after this list).
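
As a small illustration of the last item, here is a hedged sketch of how liveness and readiness probes could be declared with the Kubernetes Python client; the /healthz and /ready endpoints are assumptions about the application, not a required convention:

```python
from kubernetes import client

# Hypothetical container with HTTP health checks; adjust paths and ports to your app.
container = client.V1Container(
    name="web",
    image="nginx:1.21",
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=80),
        initial_delay_seconds=5,  # give the process time to boot
        period_seconds=10,        # restart the container if this keeps failing
    ),
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/ready", port=80),
        period_seconds=5,         # only route traffic to the pod once this succeeds
    ),
)
```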

Anyway, this whole setup looks a bit massive, doesn't it? That's one of the famous Kubernetes cons that scares away developers and businesses, who start to think it's only for massive international companies, or that you have to be really rich to use this kind of tool.

Well, to be honest, one of the hidden Kubernetes disadvantages is that running a full-fledged setup might be a little pricey. But given the Kubernetes benefits for corporate clients, these expenses are totally worth it.

When you try the famous k8s, you feel the difference of Kubernetes' advantages right away - scalability, for example. As a developer, you can start using Kubernetes in seconds by installing and running micro-deployments of k8s such as "minikube" or "kind" on your own laptop on top of Docker. As a company, you can run a Kubernetes cluster in your own facilities, across rows of racks full of mighty servers, with up to 5,000 nodes per cluster.

Do you know which benefits of using Kubernetes come out of this scalability? Its fast start and cross-platform nature. We'll describe them in more detail, and then we'll discuss Kubernetes' business benefits.

What would you really need on top of your k8s infrastructure?

For a production case, you would want your Kubernetes deployment to be mature as well as easy to manage and troubleshoot. This will likely include things such as monitoring, logging, enhanced security, and backups. There is an abundance of solutions available, which is perhaps one of the main benefits of Kubernetes as well as Docker.

Here is a list of a few top Open Source technologies to complement your k8s deployment:

  • Monitoring (Prometheus / Grafana; see the instrumentation sketch after this list);
  • Logging (Elasticsearch stack);
  • Certificate management (Certificate manager with automated Let's Encrypt certificates);
  • Reverse proxy and load balancing (Nginx).
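
For the monitoring item, a service usually has to expose metrics before Prometheus can scrape them and Grafana can chart them. Below is a minimal, hypothetical sketch using the prometheus_client library; the metric names and port are illustrative:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names; pick ones that match your own service.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    REQUESTS.inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```

Prometheus is then pointed at the pod's /metrics endpoint, and Grafana visualizes whatever it collects.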

With all these things in place, you've got a meticulously crafted infrastructure that takes advantage of all the benefits of Kubernetes. But to stay objective, we also know how to describe all the disadvantages of Kubernetes, as Dysnix has worked with it since 2016.

Let's proceed to the Serverless part, where we'll review the essential details about Serverless, especially its disadvantages and benefits.

More on Serverless

Serverless is a cloud computing execution model in which a provider lets you use its capacity as a runtime and allocates machine resources on demand, taking care of all the management for you. That's one of the huge Serverless advantages. Serverless pushes you toward a cloud-native architectural approach to developing, building, and running your applications. And it roughly removes the need for Ops (everything concerning the operational part) from the DevOps paradigm.

Through the eyes of a developer, Serverless looks like this:

  • Implementing core functions as microservices (a minimal function sketch follows this list);
  • Tying them together with all the needed vendor and cloud services, like third-party integrations;
  • Managing and synchronizing everything together based on events.
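
In code, such a core function is usually nothing more than a handler that the platform calls once per event. Below is a minimal, hypothetical sketch in the AWS Lambda style, assuming an API Gateway-like HTTP event; the field names follow that convention and aren't tied to any particular project:

```python
import json

def handler(event, context):
    # The platform invokes this function for each incoming request/event.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Return an HTTP-style response that the gateway passes back to the client.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```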

Serverless in the form of Function Platform as a Service (fPaaS) has been around for a long time. But it was not until 2014, when Amazon announced its AWS Lambda service, that it gained real traction and widespread acceptance by developers and businesses. Its quick rise in popularity and appreciation among developers can also be counted among the big Serverless pros. It didn't happen in one day: AWS developed a wide range of neighboring services such as Amazon Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Amazon Simple Notification Service (SNS), to name a few. As their functionality expanded, developers' possibilities for writing simple services expanded with it, while AWS took care of process monitoring, event logging, and compute capacity scaling. This is where all the AWS Lambda pros come from.

This strategy works well for web apps whose traffic fluctuates in a wide range and produces lots of spikes, which is really hard for a human to manage. fPaaS helps with this challenge through automatic scaling of computing power and a simple setup configuration.

According to Gartner, the share of enterprises running their applications on fPaaS will jump from 20% in 2020 to 50% by 2025.

But let's get back to the point. Talking about AWS Lambda pros and cons, let's see what AWS Lambda disadvantages we can recall. Take a closer look at the fPaaS structure, and they will reveal themselves.

Typical Serverless Web Application
  1. Pack your application and prepare it for migration.
  2. Upload it to the cloud and pay only per request.
  3. React to events and perform actions thanks to SNS queues (see the sketch below).
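
For step 3, an event-driven function simply unpacks the records the platform hands it. The sketch below assumes a hypothetical JSON message with order_id and status fields; the Records/Sns/Message layout is the standard shape of an SNS event delivered to a Lambda function:

```python
import json

def handler(event, context):
    # SNS deliveries arrive as a list of records inside the event payload.
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        # React to the event; here we just log the (hypothetical) order update.
        print(f"Order {message.get('order_id')} is now {message.get('status')}")
```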

From the business perspective, there's nothing to worry about: if nobody uses your app, you won't pay for it sitting idle in the cloud. If your traffic increases significantly, everything will be scaled automatically. Both features reduce costs. Awesome! What else can be said about the pros and cons of Serverless?

And here's where the Serverless cons strike back: at some magic moment, you might discover that your cloud provider does not support the third-party services you need. Azure, GCP, and AWS are famous for their cloud services, but each offers a different stack of integrations. You might say that's a question of choice, not of Serverless advantages and disadvantages, but what if none of the providers has your ideal pack - will you abandon part of your app's functionality to make it Serverless? And what if you ever decide to move off the cloud? That's still a headache for developers and the whole infrastructure.

Therefore, providers do as much as they can to integrate the fPaaS solutions they offer into the ecosystem of products they provide. Most of the time, all you need is to pick the managed services your application requires and let your code connect to them through the usually well-documented libraries provided by the fPaaS vendor.
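
As a rough example of that "pick a managed service and connect to it" flow, here is a hedged sketch of a function writing to a DynamoDB table through boto3, the AWS SDK for Python; the table name and item fields are assumptions made for illustration:

```python
import boto3

# Hypothetical table name; in practice it usually comes from configuration.
table = boto3.resource("dynamodb").Table("orders")

def handler(event, context):
    # Persist the incoming event in a managed datastore instead of running your own DB.
    table.put_item(Item={"order_id": event["order_id"], "status": "received"})
    return {"statusCode": 200}
```

Credentials, retries, and table scaling stay on the provider's side - exactly the trade-off described above.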

To sweeten the pot, all you have to worry about with Serverless is:

  • Runtime platform to choose from (differs per fPaaS);
  • Integration with the other services of an fPaaS provider.

And if you find a setup that fits your app, and you've weighed all the Serverless architecture pros and cons twice, then we have a few more recommendations to improve your app's life in the cloud.

One of the most popular technologies to complement your development and deployment experience across the Big Three cloud providers (and not only them) is the Serverless Framework. Each cloud provider also offers its own toolbox for the most important actions:

  • Configure and bind your functions with input and output (clients / services) that would trigger the functions and use the result;
  • Manage third party integrations with APIs;
  • Monitor events and traffic changes;
  • Manage your digital authentication secrets;
  • Log the app statuses;
  • Edit and update your code, and so on.

To sum up this part, the most attractive benefits of Serverless are its simple "pay-as-you-go" model and great scalability. It keeps growing in popularity because cloud providers have added lots of technologies to their integration stacks and minimized the operational side of app management. Still, some cases and scenarios cannot be implemented in Serverless.

That's pretty much everything we could briefly mention about the pros and cons of serverless architecture, and we're ready to start the direct comparison of both technologies.

Proceed to the second part of Dysnix's Kubernetes vs. Serverless research.

Alex Ivanov
Senior DevOps at Dysnix
Specialized in Kubernetes, infrastructure optimization, traveling, and compliments.