Migrating to Kubernetes: benefits and best practices

Changing server architecture is always a big step for any project. Here at Dysnix, we lead most of our clients through this process by providing Kubernetes services. And each time, it has been a different experience.

If your company is choosing a Kubernetes migration strategy, there is probably a good reason for it: increased stability, environment unification, and quick autoscaling.

Kubernetes best fits microservice architectures. As a matter of fact, the more distinct the cluster entities are, the better. This allows:

  • Setting precise limits to each service
  • Establishing only necessary connections
  • Choosing each service's unique Kubernetes entity type (Deployment, ReplicationController, DaemonSet, etc.)

Before I get closer to explaining the process itself, I'd like you to define the real goal of your migration.

  • Why are you planning to migrate to Kubernetes?
  • If there is a need to change the logic of your app, will you have enough resources to modify it? Can your app's design be called "container-native"?
  • What Kubernetes benefits do you expect to get afterwards?

These and other questions should be considered before you start to migrate to Kubernetes, as this process is no joke for developers or for the business in general. Below the tip of the iceberg, a whole mountain of rework may await your team, so your staff has to be ready for it and know exactly what these actions are for. On the other hand, even if the Kubernetes migration ends smoothly, your app design or business logic may extract zero benefit from Kubernetes in your particular case.

In this article, I'll share Dysnix's expertise on the Kubernetes migration process and tools, describe common mistakes, and explain how to avoid them.

Ground Zero: Decomposition of an app and reinventing it as Kubernetes-native

As a logical continuation of the previous point, I'm moving from strategy to tactics. Let's see what you have to consider and what to do with the answers you got to the “goals” questions.

Check and visualize your current architecture

Your documentation, visualization tools, and thorough planning will help you estimate the time and resources you'll need for migration. Describe each part of your app and the connections between them, and mark them on a schematic. Use a deployment or hexagonal view for convenience; even a simple data flow chart will do. After this step is done, you'll have a full map of the modules and how they are connected. This will give you a complete picture of what exactly will be migrated to Kubernetes.

Rethink the current app architecture

Despite the enthusiasm you might feel at this stage ("Let's rewrite everything!"), you have to stay cool and order your modules' migration from simplest to hardest. With this approach, you'll be able to train your team and prepare for the most "Herculean tasks". Or build your plan around another concept: choose the most essential modules, those responsible for business logic, for example, and mark them as the most important. Other modules can be marked as secondary and worked on after the core of the app is migrated to k8s.

The tasks you'll need to solve here will be:

  1. Find an applicable logging method for your app;
  2. Choose how your sessions will be stored (in shared memory, for example);
  3. Think how you'll implement file storage for your future k8s app;
  4. Consider new challenges of testing and troubleshooting for your app.

I should mention that depending on the type of your app, some stages might shrink or expand in time, and that's okay. Moreover, you might need to hire additional staff and broaden your team's expertise. Each business experiences migration individually.

But let's get back to the process as a whole, briefly:

  • Containerization stage. Here, you'll prepare a Docker container configuration for your app. It describes the environment, language runtimes, and other settings for the Docker image. Later on, I'll describe how to launch a container in a practical use case.
  • Get your app modules' schematic and choose a Kubernetes object for each module. This stage typically goes smoothly, as there is a great variety of types and options for your app's components. Afterwards, you write YAML files to create the mapped Kubernetes objects (see the brief example after this list).
  • Database adaptation. The most common practice is to leave the database as it is and connect it to the new Kubernetes-based application. Once you are comfortable launching Docker containers, containerizing the whole app, including the database, is mostly a matter of an executive decision.
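
In practice, the first two stages boil down to building an image and applying the mapping files to the cluster. A minimal sketch, assuming a hypothetical registry, image name, and file name:

    docker build -t registry.example.com/my-app:1.0 .
    kubectl apply -f my-app-deployment.yaml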

Now you have a general understanding of Kubernetes adoption. Let's dive deeper into the technical peculiarities, application migration best practices, and Kubernetes use cases.

Migrating an application to Kubernetes step by step

Storing persistent data

When developing a project's architecture, we try to completely avoid storing data in files. Why?

  • Most platforms (AWS, Google) only allow attaching block-level storage to a single node at a time. This limits the horizontal scaling of containers that use a Persistent Volume.
  • When a file system contains a lot of files, access to it slows down, which significantly impedes the overall responsiveness of the resource.

Ways to avoid this are as follows:

  • We store static content in Object Storage. If it's Amazon we are dealing with, we use S3. If it's a hardware cluster: Ceph RBD for persistent storage and Ceph RADOS Gateway for S3-compatible storage.
  • We try to store most of the data in databases and/or NoSQL storage (such as Elasticsearch).
  • We store sessions and cache in in-memory databases (Redis/Memcached).

Nevertheless, if a Persistent Volume is required, it should be prepared properly.

  1. First, collect the list of ALL the directories that store persistent data. If you miss one, the data will be written without any errors, but after a container restart or migration to a different node, ALL of it will be LOST.
  2. Try arranging the directories so that all your data sits under a single root directory, because ideally you want to use only one Persistent Volume per container. This rule is not always applicable, and sometimes it's simply necessary to distribute data among several PVs. Only the application's architect, who knows the purpose of the persistent storage and the intended data volume, can give the ultimate answer.
  3. Select a suitable file system. Ext4 is good for most tasks, but sometimes a more suitable file system can benefit performance.
  4. Select an optimally sized PV. Don't worry, you will be able to extend it easily if necessary. However, if a file system is nearly full, resizing will take even more resources and can affect performance.

When all requirements are met, write a YAML file for the Kubernetes Persistent Volume. In the case of AWS, it may look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-my-pv
  annotations:
    volume.beta.kubernetes.io/storage-class: "default"
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0dc1fcf80ac20300a
    fsType: ext4

Note that in the example above, we deliberately set the volumeID so that the Kubernetes PV is bound to a specific AWS Elastic Block Store volume. We also set volume.beta.kubernetes.io/storage-class so that the same AWS EBS volume is used when the PV is created again. This is important if you want to be bound to a specific EBS volume. By default, Kubernetes uses dynamic EBS creation: if you don't set the volumeID and volume.beta.kubernetes.io/storage-class, Kubernetes will create a new EBS volume when creating the PV, and when you delete the PV in Kubernetes, the Amazon EBS volume will be removed as well.

Now you need a PVC (Persistent Volume Claim), which you will mount into containers.

Once again, in the case of Amazon, its code may look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-my-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "default"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

You will use the example-my-pvc name to mount the volume into containers.

Now let's talk about the proper way to launch a container. We'll discuss this topic in more detail later in this article, but leaping ahead, I'd like to say that for containers with a PV, a Deployment or ReplicationController is needed.

As a rule, these entities presuppose scaling, but, as already said, in the case of a PV on a platform like AWS, horizontal scaling is out of the question. Why do we need a replication controller then? Because if a container fails (if, say, a worker node shuts down), the container will be restored automatically, and everything will continue running as before. Obviously, you can't scale such an RC or Deployment.

A controller that uses a PV may look like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: example-my-controller
  labels:
    name: example-my-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: example-my-controller
    spec:
      containers:
        - name: example-my-controller
          image: example-my-controller:latest
          volumeMounts:
            - mountPath: /srv/data
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: example-my-pvc

No big secret here. We simply mount our PVC to /srv/data, with write access by default.

Shipping a container into a network

Before getting down to explanations about setting up a Kubernetes cluster, I'd like to tell you how networking is set up in Kubernetes. A Controller and a Service are two independent entities in Kubernetes. A Controller may have listening ports, but accessing them inside a cluster through a Kubernetes Service is recommended.


A Kubernetes Service announces a static IP address and a DNS name in the Kubernetes network. With the help of selectors, Kubernetes connects service ports with controller ports. Thus, by referring to the service address, the software reaches the controllers' open ports. If you are using several replicas (as with a Deployment or ReplicationController), the Kubernetes Service balances the traffic between them, while the containers can be on different cluster nodes. When you scale, Kubernetes automatically includes the new replicas in the load balancing. It's extremely convenient!
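
To make this concrete, here is a minimal sketch of such a Service (the name and ports are hypothetical) that selects the pods of the controller from the earlier example:

apiVersion: v1
kind: Service
metadata:
  name: example-my-service
spec:
  selector:
    name: example-my-controller  # matches the pod labels of the controller above
  ports:
    - port: 80          # port announced inside the cluster
      targetPort: 8080  # port the container listens on (an assumption)

Inside the cluster, other pods can now reach the application at example-my-service:80.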

We could talk about how networking functions in Kubernetes for many more pages, but within this article, I'd like to highlight the most important points:

  • Expose only those ports that really need to be accessed; you can always adjust the Kubernetes Service later.
  • Avoid using HostPort. Why? First, ports may conflict on the host if several replicas of the controller run on the same instance. Second, it doesn't look neat and clear.
  • Use a load balancer at the underlying infrastructure level (for example, an AWS Internal Load Balancer) for critical services. No matter what they say, it's more stable. The downside is that it costs a bit more.
  • If you need to expose a service to the world, use type: LoadBalancer (see the sketch after this list).
  • Think twice before using Ingress. In most cases, a Kubernetes Service with type: LoadBalancer is enough, and you can terminate SSL on it as well. Ingress is useful only when you have lots of domains that you want to put behind a single LoadBalancer and/or you need to automatically issue and renew Let's Encrypt SSL certificates.
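
As an illustration of the last points, here is a minimal sketch of exposing a service through type: LoadBalancer. The annotation shown is an assumption that applies only on AWS, where it makes the balancer internal; drop it to get an internet-facing one:

apiVersion: v1
kind: Service
metadata:
  name: example-public-service
  annotations:
    # assumption: AWS-specific annotation for an internal load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    name: example-my-controller
  ports:
    - port: 443
      targetPort: 8443  # hypothetical container port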

Logging

Everybody knows how important Kubernetes logging is. When dealing with container architecture, there is one requirement: write your logs to stdout/stderr. These are the two streams Kubernetes collects logs from.

This is done so that you can use the standard log-handling interfaces and have full integration.

You can view Kubernetes logs in several ways:

  • kubectl logs – the easiest and clearest way, similar to docker logs. The command simply outputs the log of a specific Kubernetes pod.
  • stern – a more advanced variation of the previous command. It does the same thing as kubectl logs, but outputs the logs for an entire group of containers at once, using the name of the application. It is very convenient when you have more than one container.
  • fluentd + Elasticsearch + Kibana – log aggregation in NoSQL storage with web-interface access, search, and filtering.
  • Obviously, no one keeps you from going to the server and running docker logs, but we don't recommend such low-level operations; stay at a higher level if possible.

Cherry-picked advice:

  • If you want to pull out the most recent logs only, use the -f and --tail flags.
    Example:
    kubectl logs -f --tail=10 <pod-id>

This avoids flooding the terminal with the log's long history while still showing updates in real time. stern behaves by default as if you'd already added the -f flag.

  • If a pod doesn't launch, you don't always have to go straight to the logs. If a container failed to launch at all, it's worth looking at kubectl describe pod <pod-id>;
  • In any case, try not to rely on containers' logs only; additionally send important events to Sentry or a similar error aggregator;
  • If you are planning to use fluentd and a log aggregator, write one-line logs. If you use Elasticsearch as your storage, write your logs in JSON (see the example after this list). Multi-line logs are difficult to analyze in aggregators.
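
For instance, a one-line JSON log entry that aggregates cleanly might look like this (the field names are just an example):

    {"time": "2021-03-01T12:00:00Z", "level": "error", "module": "billing", "msg": "connection refused"}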

Kubernetes replication

The first rule of the Dysnix team is to replicate everything that requires high availability and use a replication controller in Kubernetes (or the newer Kubernetes ReplicaSet), starting from the moment the Kubernetes cluster is created.


We recommend configuring a minimum of three replicas for all services that listen on the network, even for single-zone clusters. Why is that? If a single node fails in your container cluster, users shouldn't notice it. The Kubernetes replication controller is responsible for keeping your pods' symphony on air nonstop.

The second rule of the Dysnix team goes: set explicit resource limits (CPU, memory) for all services that you configure. What does this have to do with replication? Everything. It is the configured CPU resources that horizontal pod autoscaling bases its decisions on, and autoscaling is directly connected with replication. That was our smooth transition to the topic of limits when setting up a Kubernetes cluster.

Resource limits control

If top and bottom resource limits are not set explicitly, an application runs the risk of being scheduled onto a node that lacks resources, or, even worse, of consuming all of a server's resources and destabilizing the node. That is why we strongly recommend setting the limits explicitly.

If you don't know how much an application consumes, set up a test environment without any limits and run some stress tests. Performance graphs will show the consumption range within which the application functions without failure. Add 10-30% to those figures and set them as strict limits. Here's an example of a controller description section with top and bottom resource limits:

resources:
  requests:
    cpu: 0.1
    memory: 128Mi
  limits:
    cpu: 1
    memory: 1024Mi

Here we state that the application needs at least one tenth of a core and 128 MiB of RAM; the top limit is an entire core and 1 GiB of RAM.

Keep in mind that autoscaling depends on the CPU only. Memory leaks or mistakenly set limits won't trigger autoscaling; they will only cause OOM kills.
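
For completeness, here is a minimal sketch of a HorizontalPodAutoscaler that scales on CPU utilization (all names are hypothetical):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-my-deployment
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70  # scale out when average CPU exceeds 70%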


Selecting a controller

There are several ways to launch a container in a Kubernetes cluster. Here are the most popular ones, available in the current stable Kubernetes version:

  • Pod
  • Job
  • Deployment
  • Replication Controller
  • DaemonSet
  • StatefulSet
  • Cron Job

"Why so many?!" one might ask.


Actually, once you use Kubernetes in different projects, it might start to seem there are too few of them.

Let's start with a simple one — Pod.

A Pod is the simplest unit, typically a single container, that can be launched in a cluster much as you would launch a container via docker run. You describe a YAML file, execute its creation in the cluster, and there you go, a pod has been launched. You can look at its logs with kubectl logs or log in with the help of kubectl exec. However, if the pod fails or is deleted by someone, your service won't relaunch by itself.
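
Such a YAML file can be as small as this (a minimal sketch with a hypothetical image):

apiVersion: v1
kind: Pod
metadata:
  name: example-my-pod
spec:
  containers:
    - name: app
      image: example-my-app:latest  # hypothetical image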

It's difficult for me to name a realistic use of an independent Pod in production. Usually, it is a building block of other entities.

Job is a rarely used tool, but used nevertheless. It is similar to a Pod in the sense that it is launched once and stops after the process finishes. The difference is that after a Job stops, its logs and execution status remain recorded. It's convenient when you need to manually perform a command within a cluster and save the details of its execution in the history.
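
A minimal sketch of such a one-off Job (the image and command are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-db-migrate
spec:
  backoffLimit: 2          # retry at most twice on failure
  template:
    spec:
      restartPolicy: Never # don't restart the container once it finishes
      containers:
        - name: migrate
          image: example-my-app:latest
          command: ["./migrate.sh"]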

The rest of the entity types fall under the Kubernetes Controllers group.

Deployment suits most applications. It dramatically differs from Pods and Jobs in that when the pods that belong to the Deployment stop, fail, or are deleted, they restart immediately. It's convenient to roll out a Deployment fast, scale it, and even configure autoscaling. The only drawback might be connection breakups during a rollout.
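
A minimal Deployment sketch (names are hypothetical) that keeps three replicas running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-my-app
  template:
    metadata:
      labels:
        app: example-my-app
    spec:
      containers:
        - name: app
          image: example-my-app:latest
          ports:
            - containerPort: 8080  # assumed listening port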

ReplicationController is an excellent choice for HTTP services and other applications for which downtime during a rollout is critical. An RC can execute a gradual, smooth rollout with the help of the rolling update operation. This makes it possible to roll out code without breaking client connections, which I believe is its main difference from Deployment.
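
The rolling update is driven by the legacy kubectl rolling-update command (deprecated and removed in later Kubernetes versions); the image tag here is hypothetical.
    Example:
    kubectl rolling-update example-my-controller --image=example-my-controller:v2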

Nowadays, there is a new-generation entity named ReplicaSet.

DaemonSet – a sly entity that makes it possible to place a container on each worker node. We use a DS for deploying monitoring systems, as well as for some peculiar software. We also know of deployments where a front-end web server is implemented with the help of a DS.
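
A minimal DaemonSet sketch for a per-node agent such as a log collector (names are hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-log-agent
spec:
  selector:
    matchLabels:
      app: example-log-agent
  template:
    metadata:
      labels:
        app: example-log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluentd:latest  # one pod of this runs on every node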

StatefulSet – an even layer of pods that enables assigning unique, predictable hostnames, having more stable Persistent Storage, and executing graceful deployment, scaling, and removal. You can't do without a StatefulSet in a High Availability RabbitMQ cluster. Other than that, we rarely use StatefulSets.

CronJob – the name speaks for itself. When you need a command to launch periodically within a cluster, you can use a CronJob. Where there isn't such a separate entity in Kubernetes, we use a separately launched container with a regular cron.
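
A minimal CronJob sketch (the schedule and command are hypothetical; on older clusters the apiVersion may be batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-nightly-task
spec:
  schedule: "0 3 * * *"  # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: example-my-app:latest
              command: ["./cleanup.sh"]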

Conclusion: Be ready to "migrate" your mindset first

I hope I helped you choose a cluster entity type and avoid falling into the traps we fell into while discovering the Kubernetes world.

We realize that these are not all the subtleties that can be described and foreseen in such a short manual. That is why I must mention that our team always pays attention to the peculiarities of each project when choosing the technologies and approaches that suit it best.

Kind regards,

Daniel Yavorovych, Dysnix Co-Founder
