Changing server architecture is always a big step for any project. Here at Dysnix, we lead most of our clients through this process by providing Kubernetes services. And each time, it has been a different experience.
If your company is choosing a Kubernetes migration strategy, there is likely a good reason for it: increased stability, environment unification, and quick autoscaling.
Kubernetes best fits microservice architectures. As a matter of fact, the more distinct the cluster entities are, the better. This allows:
Setting precise limits to each service
Establishing only necessary connections
Choosing each service's unique Kubernetes entity type (Deployment, ReplicationController, DaemonSet, etc.)
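To illustrate the first point, here is a minimal sketch of a Deployment that sets precise resource limits for a single service. The module name, image, and limit values are hypothetical placeholders, not part of any real setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service        # hypothetical module name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders:1.0   # hypothetical image
          resources:
            requests:          # guaranteed minimum used for scheduling
              cpu: 100m
              memory: 128Mi
            limits:            # hard cap enforced at runtime
              cpu: 500m
              memory: 256Mi
```

With per-service requests and limits like these, a misbehaving module cannot starve its neighbors on the same node.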
Before I get closer to explaining the process, I'd like you to pin down the real goal of your migration.
Why are you planning to migrate to Kubernetes?
If there is a need to change the logic of your app, will you have enough resources to modify it? Can your app's design be called "container-native"?
What Kubernetes benefits do you expect to get afterwards?
These and other questions should be considered before you start to migrate to Kubernetes, as this process is no joke for developers and businesses in general. Below the tip of the iceberg, a whole mountain of rework may await your team, so your staff definitely has to be ready for it and know exactly what these actions are for. On the other hand, even if the Kubernetes migration goes smoothly, your app design or business logic may derive zero benefit from Kubernetes in your particular case.
In this article, I'll share Dysnix's expertise on the Kubernetes migration process and tools, describe common mistakes, and explain how to avoid them.
Ground zero: decomposing an app and reinventing it as Kubernetes-native
As a logical continuation of the previous point, let's move from strategy to tactics and see what you have to consider and what to do with the answers to those “goal” questions.
Check and visualize your current architecture
Your documentation, visualization tools, and thorough planning will help you estimate the time and resources you'll need for migration. Describe each part of your app and the connections between them, and mark them on a schematic. Use a deployment or hexagonal view for convenience; even a simple data flow chart will do the job. After this step is done, you'll have a full map of your modules and how they are connected, which gives you a complete picture of what exactly will be migrated to Kubernetes.
Rethink the current app architecture
Despite the enthusiasm you might feel at this stage ("Let's rewrite everything!"), you have to stay cool and order your modules' migration from simplest to hardest. With this approach, you'll be able to train your team and prepare for the most "Herculean" tasks. Alternatively, build your plan around another concept: choose the most essential modules, for example those responsible for the business logic, and treat them as the highest priority. Other modules can be marked as secondary and worked on after the core of the app is migrated to k8s.
The tasks you'll need to solve here will be:
Find an applicable logging method for your app;
Choose how your sessions will be stored (in shared memory, for example);
Decide how you'll implement file storage for your future k8s app;
Consider new challenges of testing and troubleshooting for your app.
I should mention that depending on the type of your app, some stages might shrink or expand in time, and that's okay. Moreover, you might need to hire some additional staff and multiply your team's expertise. Each business experiences migration individually.
But let's get back to the whole following process in a brief description:
Containerization stage. Here, you'll prepare a Docker image and launch configuration for your app: it describes the environment, programming languages, and other settings for the Docker image. Later on, I'll describe how to launch a container in a practical use case.
Take your app modules schematic and choose a Kubernetes object for each module. This stage typically goes smoothly, as there is a great variety of types and options for your app's components. Afterwards, write YAML files to create the mapped Kubernetes objects.
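For instance, a node-level module such as a log collector maps naturally to a DaemonSet rather than a Deployment, since a DaemonSet runs exactly one pod per node. A minimal sketch, with a hypothetical image and names:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector          # hypothetical node-level agent
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: log-collector
          image: registry.example.com/log-collector:1.0   # hypothetical image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log     # each node's logs are covered by its own pod
```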
Database adaptation. The most common practice is to leave the database as it is but connect it to the new Kubernetes-based application. Containerizing the database as well is possible, but that's an executive decision best made after the rest of the app already runs in containers.
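One simple way to wire a Kubernetes app to a database that stays outside the cluster is an ExternalName Service, which gives pods a stable in-cluster DNS name that resolves to the external host. The hostname below is a hypothetical placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-postgres        # in-cluster DNS name the app will use
spec:
  type: ExternalName
  externalName: db.example.internal   # hypothetical hostname of the existing DB server
```

Because the Service resolves as a DNS alias, the app can later be pointed at an in-cluster database by swapping this one object, without changing application config.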
Now you have a general understanding of Kubernetes adoption. Let's dive deeper into the topic with technical peculiarities, application migration best practices, and Kubernetes use cases.
Migrate an application to Kubernetes step by step
Storing persistent data
When developing a project's architecture, we try to completely avoid storing data in files. Why?
Most platforms (AWS, Google) only allow attaching block-level storage to a single node at a time. This limits the horizontal scaling of containers that use a Persistent Volume.
When a file system contains a lot of files, access to it slows down, which significantly impedes the general responsiveness of the resource.
Ways to avoid this are as follows:
We store static content in object storage. If it's Amazon we are dealing with, we use S3. If it's a hardware cluster: Ceph RBD for persistent block storage and the Ceph RADOS Gateway for an S3-compatible interface.
We try to store most of the data in a database and/or NoSQL storage (such as Elasticsearch).
We store sessions and cache in in-memory databases (Redis/Memcached).
Nevertheless, if it is Persistent Volume that is required, it should be prepared properly.
First, collect a list of ALL the directories that store persistent data. If you fail to do this, the data will be written without any errors, but after a container restart or migration to a different node, ALL of it will be LOST.
Try to arrange the directories so that all your data ends up under a single parent directory, because ideally only one Persistent Volume should be used per container. This rule is not always applicable, and sometimes it's simply necessary to distribute data among several PVs. Only the application's architect, who knows the purpose of a persistent storage and the intended volume of data stored there, can give the ultimate answer.
Select a suitable file system. Ext4 is good for most tasks, but sometimes choosing a more suitable file system can benefit performance.
Select an optimal PV size. Don't worry, you will be able to extend it easily if necessary. However, if the file system is already overloaded, resizing will take even more resources and can affect performance.
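To keep that resize option open, the StorageClass backing your volumes should allow expansion. A minimal sketch for AWS EBS; the class name is a hypothetical placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-expandable         # hypothetical class name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
allowVolumeExpansion: true     # lets you grow a PVC later by editing spec.resources.requests.storage
```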
When all requirements are met, make a YAML-file for Kubernetes Persistent Volume. In the case of AWS, it may look like this:
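A minimal sketch of such a manifest, using the in-tree AWS EBS volume plugin; the volume ID is a hypothetical placeholder for a pre-created EBS volume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce            # an EBS volume attaches to one node at a time
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # hypothetical ID of an existing EBS volume
    fsType: ext4               # file system chosen in the previous step
```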