AI-based predictive Kubernetes autoscaling tool

Get the most out of our AI-based Kubernetes autoscaler

Quick start
Our AI model needs only two weeks of traffic data to provide you with a reliable prediction for Kubernetes node autoscaling.
Proactive scaling
With PredictKube, you can complete autoscaling in Kubernetes before the load rises, thanks to forecasts made by our AI model incorporated in the cluster autoscaling tool.
Scaling automation
The predictive Kubernetes autoscaling tool optimizes the number of active nodes preventively, so when traffic increases, all your nodes are ready.

Problems PredictKube solves

Overprovisioning and high cloud bills

To avoid losing traffic, you overpay for capacity that covers any load you might have. That’s inefficient.

Downtime and high latency

Infrastructure gets overloaded, and your users can’t connect to your product/service. You lose traffic.

Problematic project growth

PredictKube makes your infrastructure transparent and visible to you, helps you manage it efficiently, and prevents errors as your project grows.

It’s easy to start using the Kubernetes cluster autoscaler right now

Install PredictKube and solve the overprovisioning problem. Get your smartest Kubernetes cluster autoscaler in a few steps:
1. Add the KEDA Helm repo
helm repo add kedacore https://kedacore.github.io/charts
2. Update the Helm repo
helm repo update
3. Install the KEDA Helm chart
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
4. Get an API key
To let our AI model access your data and make a prediction based on it, request an API key; we'll send it to your e-mail.
5. Create the PredictKube credentials secret
kubectl create secret generic predictkube-secrets --from-literal=apiKey=${API_KEY}
6. Configure predictive autoscaling
tee scaleobject.yaml << EOF
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-predictkube-secret
spec:
  secretTargetRef:
    - parameter: apiKey
      name: predictkube-secrets
      key: apiKey
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-app-scaler
spec:
  scaleTargetRef:
    name: example-app
  triggers:
    - type: predictkube
      metadata:
        predictHorizon: "2h"  # Forecast horizon; PredictKube supports up to 6 hours
        historyTimeWindow: "7d"  # We recommend using a minimum of a 7-14 day time window as historical data
        prometheusAddress: http://kube-prometheus-stack-prometheus.monitoring:9090
        query: sum(irate(http_requests_total{pod=~"example-app-.*"}[2m]))
        queryStep: "2m"  # Query step duration for range Prometheus queries
        threshold: '2000'  # Value to start scaling for
      authenticationRef:
        name: keda-trigger-auth-predictkube-secret
EOF
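Once the manifest is written, apply it and check that KEDA has picked it up. A minimal sketch, assuming the resource names above and the default namespace:

```shell
# Apply the TriggerAuthentication and ScaledObject
kubectl apply -f scaleobject.yaml

# Verify the scaler was created and is active
kubectl get scaledobject example-app-scaler
kubectl get hpa  # KEDA manages a HorizontalPodAutoscaler under the hood
```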

Under the hood: Tools inside

PredictKube is officially recognized as a KEDA scaler
View the KEDA article

Feed in 1+ week of data and get proactive autoscaling in Kubernetes with up to a 6-hour prediction horizon, based on AI forecasts

Get the most efficient AI-based autoscaling tool by the Dysnix team

Try now

Our colleagues, clients, partners

We’re grateful for the support and interest in PredictKube, Kubernetes autoscaler by Dysnix. Our team has the pleasure of being good friends and problem-solvers for the following projects:

FAQ: All you need to know about Kubernetes autoscaling and our autoscaler

What is a Horizontal Pod Autoscaler?

A horizontal pod autoscaler is the autoscaler built into Kubernetes itself. It scales pods horizontally, meaning the HorizontalPodAutoscaler simply adds more pods to the current set. It’s not the best solution for every case, but for smaller projects with a predictable traffic pattern it can be sufficient. Remember, though, that there’s no predictive option here: scaling kicks in only after the load has significantly increased. PredictKube works in advance thanks to an AI model that analyzes and forecasts the traffic trend, so your pods are deployed in time. In the Kubernetes cluster autoscaler by Dysnix, horizontal scaling is applied automatically based on AI prediction models.
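For comparison, a minimal built-in HPA manifest looks like this. This is a sketch: the example-app Deployment, replica bounds, and the 70% CPU target are illustrative assumptions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that this is purely reactive: new pods appear only after average CPU utilization crosses the target.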

Does GKE auto scale?

GKE autoscaling is a standard option for Google Cloud customers that lets you configure up- and down-scaling of your cluster’s node population. In a nutshell, you set the lower and upper bounds for your cluster; depending on the workload, your infrastructure grows or shrinks within those limits. In practice it doesn’t fully solve the overprovisioning problem, but it is one of the most helpful tools for Google Kubernetes Engine users, at least until they try PredictKube.
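As a sketch, node-pool autoscaling can be enabled on an existing GKE cluster like this (the cluster name, pool name, zone, and node bounds are placeholders):

```shell
gcloud container clusters update my-cluster \
  --enable-autoscaling \
  --node-pool default-pool \
  --min-nodes 1 --max-nodes 10 \
  --zone us-central1-a
```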

How do you scale a microservice?

To build auto-scaling microservices, you must apply the principles of partitioning and concurrency from the beginning of development. To make your microservices-based infrastructure capable of scaling, ensure all processes can be parallelized and atomized. With this approach, your app can handle massive workloads with ease, distributing tasks across the most productive parts. Another way to add scaling is to containerize each microservice and use Kubernetes to manage and scale those containers.
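Once a microservice runs as a Kubernetes Deployment, a quick way to give it horizontal scaling is kubectl autoscale, which creates an HPA for it. The service name and thresholds below are illustrative:

```shell
# Create an HPA for the my-service Deployment: 2-10 replicas, target 70% CPU
kubectl autoscale deployment my-service --min=2 --max=10 --cpu-percent=70

# Inspect the resulting autoscaler
kubectl get hpa my-service
```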

How do I enable EKS cluster Autoscaler?

Cluster Autoscaler is the tool responsible for scaling cloud providers' compute resources up and down for managed Kubernetes cluster users. Specifically for AWS users there is another solution, Karpenter: it covers the same functionality as Cluster Autoscaler but can increase scaling speed dramatically thanks to direct communication with the AWS EC2 API. With PredictKube, you can make autoscaling even faster with the help of its predictive AI models.
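A common way to enable Cluster Autoscaler on EKS is the community Helm chart. This is a sketch: the cluster name and region are placeholders, and the node group's IAM role needs permissions to modify its Auto Scaling group:

```shell
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-eks-cluster \
  --set awsRegion=us-east-1
```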

What is AWS EKS autoscaling?

Kubernetes AWS autoscaling is used in Elastic Kubernetes Service (EKS), the managed Kubernetes offering of the AWS cloud provider. For a fee, AWS handles management of your Kubernetes cluster control plane and compute nodes. Regarding autoscaling, you can configure the min-max number of nodes and create managed or self-managed node groups; the node groups are backed by EC2 Auto Scaling groups, with everything coordinated through the control plane.
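With eksctl, the min-max node bounds can be set when creating a managed node group. A sketch with placeholder names and counts:

```shell
eksctl create nodegroup \
  --cluster my-eks-cluster \
  --name workers \
  --nodes 3 --nodes-min 2 --nodes-max 8 \
  --managed
```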

Does DigitalOcean autoscale?

Yes, a project located in the DigitalOcean environment can be scaled both manually and automatically. DigitalOcean Kubernetes autoscaling is based on CA, the Cluster Autoscaler. It’s used for the automated addition or removal of Kubernetes nodes to fit cluster capacity to current needs.
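With the doctl CLI, autoscaling can be turned on for an existing node pool. The cluster and pool names and the node bounds below are placeholders:

```shell
doctl kubernetes cluster node-pool update my-cluster my-pool \
  --auto-scale \
  --min-nodes 1 --max-nodes 5
```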

How do I scale my Prometheus server?

The best way to scale a Prometheus server is to run multiple Prometheus instances scraping different sets of metrics from different nodes, instead of a single instance that scrapes everything: a single instance is easily overloaded, and data will be lost. Prometheus-based autoscaling is efficient only if the metrics that drive scaling decisions are scraped correctly and in full.
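One standard pattern for splitting targets across instances is hashmod relabeling: each Prometheus hashes every target address into a shard number and keeps only its own shard. A sketch of the scrape config for shard 0 of 2:

```yaml
scrape_configs:
  - job_name: node
    # ... your service discovery config here ...
    relabel_configs:
      # Hash each target address into one of 2 buckets
      - source_labels: [__address__]
        modulus: 2
        target_label: __tmp_shard
        action: hashmod
      # This instance keeps only bucket 0; the second instance would keep "1"
      - source_labels: [__tmp_shard]
        regex: "0"
        action: keep
```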

How to use the cluster autoscaler in Azure Kubernetes Service?

It’s pretty straightforward: create an AKS cluster and enable the autoscaling feature. Under the hood, the Cluster Autoscaler watches capacity requests from your workloads and increases the node count to fit compute resources to the requested capacity. A Horizontal Pod Autoscaler (HPA) does the same, but at the workload level. You can specify several options, such as min/max pod counts, up- and down-scale behavior, and which metrics to watch for autoscaling. When the load decreases, the HPA reduces the number of pods, and the AKS cluster autoscaler removes nodes that are underutilized.
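For example, the autoscaler can be enabled at cluster creation time with the Azure CLI (the resource group, cluster name, and node bounds are placeholders):

```shell
az aks create \
  --resource-group my-rg \
  --name my-aks-cluster \
  --node-count 3 \
  --enable-cluster-autoscaler \
  --min-count 1 --max-count 5
```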
