From AWS chaos to 99.99% uptime: The MIRA Network infrastructure evolution

6 min read
Olha Diachuk
February 5, 2026

Our team met MIRA Network during the OVHcloud Startup Program's Fast Forward Blockchain and Web3 Accelerator. They asked us to upgrade their project to Seed A grade with tailored, highly available blockchain infrastructure built by senior DevOps engineers.

About the client

MIRA Network is building a Web3 crowdfunding and Real-World Asset (RWA) tokenisation ecosystem operating at production scale. When your platform handles millions of users and processes 19 million queries weekly, infrastructure isn't just a cost center—it is the foundation of user trust. 

MIRA faced an infrastructure challenge: an expensive, manually configured setup that lacked visibility and drained the runway without delivering the required reliability.

About our team

The project was led by a dedicated Lead Infrastructure Architect and a Senior DevOps Engineer, who translated MIRA’s business requirements into a resilient technical blueprint. 

This success was amplified by our close cooperation with the OVHcloud team, whose direct support and technical offers ensured a seamless transition to their ecosystem.

The problem of architectural debt: Why AWS was inefficient

The initial infrastructure was a "black box" of manual configurations. Without Infrastructure as Code (IaC), every change was a risk, and troubleshooting felt like guesswork. The primary pain points included:

  • Prohibitive monthly AWS bills due to oversized, unoptimized instances.
  • Zero visibility into system health due to a lack of centralized logging and metrics.
  • Fragile deployment cycles that relied on manual intervention rather than automated CI/CD.
  • A database architecture spread across three zones, creating unnecessary complexity and latency for their specific load profile.

MIRA needed a partner to migrate to a more cost-effective, high-performance environment while implementing modern DevOps standards from the ground up.

Another constraint was the timeframe: two engineers had less than two months to design and implement the solution.

The solution: A Kubernetes-first migration to OVHcloud

Dysnix planned, engineered, and executed a rapid transition to OVHcloud, leveraging its high-performance cloud and managed Kubernetes services. The strategy rested on three principles: right-sizing, automation first, and full visibility.

The roadmap for the MIRA Network’s infra migration project

We reduced the database footprint through rightsizing and instance type changes. Although the legacy infrastructure had been created via the UI and its early-stage autoscaling was tuned toward overprovisioning, we decided to fix the core problem of infrastructure design first and deal with scalability later. Monitoring, alerting, and logging were installed from day zero of the new OVHcloud-based infrastructure.

Feature         Legacy AWS Setup           New OVHcloud Infrastructure
Provisioning    Manual / click-ops         Terraform & Terragrunt (IaC)
Orchestration   Unmanaged / manual K8s     OVHcloud Managed Kubernetes Service
Database        Oversized RDS (3 zones)    Optimized Managed PostgreSQL (2 zones)
Scalability     None                       Predictive / horizontal pod autoscaling (planned)
Monitoring      None                       VictoriaMetrics, Grafana, Loki

We moved the core backend and data layers into a streamlined, two-zone high-availability setup in the OVHcloud Germany (Limburg) region that worked best for the client. We chose optimal instance types and sizes based on traffic needs. The whole architecture schema looked as follows:

MIRA Network’s after-migration architecture

High-level overview:

  • Dev environment: single-region K8s cluster plus S3 storage to serve static content;
  • OVHcloud load balancers;
  • CI: GitHub Actions;
  • CD: FluxCD for backend apps, GitHub Actions for frontend apps.
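The GitOps half of this setup can be sketched as a FluxCD source plus reconciliation pair. This is a minimal illustration, not MIRA's actual configuration; the repository URL, path, and resource names are placeholders:

```yaml
# Hypothetical FluxCD wiring: watch the backend repo, reconcile its manifests.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: mira-backend
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/mira-backend   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: mira-backend
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: mira-backend
  path: ./deploy/production
  prune: true   # delete cluster resources that were removed from Git
```

With this split, merging to `main` is all an engineer needs to do: Flux pulls the change and applies it, so the cluster state always mirrors the repository.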

Deploy backend applications

The MIRA backend application was containerized and deployed into the managed Kubernetes cluster with zero manual steps. 

  • We configured health checks (liveness and readiness probes) to detect and restart unhealthy pods automatically. 
  • Resource quotas were set per namespace to prevent runaway consumption. 
  • The deployment pipeline integrated with the new CI/CD system, allowing engineers to trigger production releases by merging code to the main branch. 
  • Rolling updates ensured that traffic was never dropped during deployments, and automatic rollback was triggered if error rates spiked post-deployment.
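The bullets above map directly onto standard Kubernetes primitives. Below is a minimal sketch under assumed names and ports (the image, paths, and resource figures are illustrative, not MIRA's real manifests):

```yaml
# Illustrative Deployment: probes, zero-downtime rolling updates, resource limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-backend
  namespace: backend
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop serving capacity during a rollout
      maxSurge: 1
  selector:
    matchLabels:
      app: mira-backend
  template:
    metadata:
      labels:
        app: mira-backend
    spec:
      containers:
        - name: api
          image: registry.example.com/mira-backend:latest   # placeholder
          ports:
            - containerPort: 8080
          livenessProbe:            # restart the pod if the process hangs
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:           # withhold traffic until the app is ready
            httpGet:
              path: /ready
              port: 8080
            periodSeconds: 5
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
---
# Per-namespace quota to prevent runaway consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: backend-quota
  namespace: backend
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```

`maxUnavailable: 0` with `maxSurge: 1` is what guarantees traffic is never dropped: a new pod must pass its readiness probe before an old one is terminated.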

Deploy frontend applications

The frontend assets were migrated from AWS S3 and CloudFront to OVHcloud's object storage and CDN, reducing latency for end users globally. We configured CI/CD pipelines for the frontend, ensuring new versions are seamlessly deployed to object storage.

Static assets were cached aggressively at the edge, while HTML was served with short TTLs to allow rapid updates. This unified deployment model eliminated the operational friction of managing separate frontend and backend release cycles.
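The split caching policy can be expressed in the deploy pipeline itself. This is a hedged sketch of such a GitHub Actions workflow, assuming hashed assets live under `dist/assets`; the bucket name, secrets, and the OVHcloud S3 endpoint URL are placeholders:

```yaml
# Hypothetical workflow: build the frontend, sync it to S3-compatible storage
# with long-lived caching for hashed assets and a short TTL for HTML.
name: deploy-frontend
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_KEY }}
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - name: Upload hashed static assets with aggressive edge caching
        run: >
          aws s3 sync dist/assets s3://mira-frontend/assets
          --endpoint-url https://s3.example.ovh.net
          --cache-control "public, max-age=31536000, immutable"
      - name: Upload HTML with a short TTL for rapid updates
        run: >
          aws s3 sync dist s3://mira-frontend --exclude "assets/*"
          --endpoint-url https://s3.example.ovh.net
          --cache-control "public, max-age=60"
```

Because asset filenames are content-hashed, the year-long `immutable` cache is safe; only the small HTML files need revalidation.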

Configure CI/CD: From manual to fully automated

The previous setup relied on manual deployments and ad-hoc scripts. We implemented a GitHub-based CI/CD pipeline using industry-standard tools that automated every step from code commit to production release. The pipeline included:

  • Automated testing: Unit tests, integration tests, and security scanning on every pull request.
  • Container image building: Docker images were built, tagged, and pushed to a private registry automatically.
  • Manifest validation: Kubernetes manifests were validated automatically by the CI/CD pipeline before deployment to the staging environment, then promoted to production via manual approval.
  • Audit logging: Every deployment was logged with who triggered it, what changed, and when.

This shift from manual to automated deployment reduced deployment time from hours to minutes and eliminated human error as a source of outages.

Monitoring and metrics-based alerting: From blind spot to full visibility

The previous AWS setup had no centralized observability. When issues occurred, the team was reactive—discovering problems only after users reported them. We implemented a three-layer monitoring stack to transform MIRA into a self-healing, self-aware system.

Prometheus scrapes metrics from every Kubernetes node, pod, and application endpoint every 15 seconds. This captures CPU, memory, disk I/O, network throughput, and custom application metrics, such as query latency and blockchain transaction confirmation times.
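A 15-second scrape of every opted-in pod is a few lines of Prometheus configuration. The fragment below is a generic sketch using the standard pod-annotation convention, not MIRA's exact config:

```yaml
# Sketch of a 15-second Kubernetes scrape config.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods that opt in via the prometheus.io/scrape annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```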

Grafana stack (Loki, Alloy, Grafana) dashboards provide real-time visualization of system health. The team now sees query patterns, database connection pools, and pod restart rates at a glance. Custom dashboards track MIRA-specific KPIs: user login rates, tokenization transaction volume, and AI verification accuracy.

Requests by status stats from the MIRA Network’s Grafana dashboard

Alert rules are defined in code and version-controlled alongside the infrastructure. When a threshold is breached, automated responses are triggered and an alert is routed to the on-call engineer.
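A version-controlled alert rule of this kind might look as follows; the metric name and 5% threshold are illustrative placeholders:

```yaml
# Illustrative alerting rule, stored in Git next to the infrastructure code.
groups:
  - name: mira-api
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "API 5xx error rate above 5% for 5 minutes"
```

The `for: 5m` clause is what separates a transient blip from a real incident: the condition must hold continuously before the on-call engineer is paged.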

The result is a system that no longer surprises. MIRA's engineering team now spends time optimizing, not firefighting.

Technical execution: The blue/green cutover during migration

To ensure MIRA’s four million users experienced no disruption, we utilized a blue/green deployment strategy. We built the entire target environment in parallel using Terraform and Helm. Data replication was established between AWS and OVHcloud to keep the state in sync until the final moment of transition.

Latency stats from the MIRA Network’s dashboard

The actual cutover was completed in under 10 minutes. We maintained the AWS environment as a "warm standby" for seven days to provide a guaranteed rollback path, though it was never needed. The result was a clean break from legacy debt into a documented, version-controlled infrastructure.

Quantifiable outcomes

Before:
  • Infra built via AWS UI, limited repeatability
  • Oversized DB resources, high spend pressure
  • Low visibility: missing metrics/logs/alerts
  • Early autoscaling with overprovisioning bias

After:
  • Production migrated to OVHcloud with a blue/green cutover and rollback window
  • IaC and deploy-as-code workflow defined
  • Monitoring/logging/alerting put in place
  • Downtime held to 5–10 minutes during cutover

By moving to a managed Kubernetes environment on OVHcloud and rightsizing the database clusters, MIRA Network achieved:

  • 99.99% uptime: Eliminating the "silent failures" of the previous manual setup.
API availability stats from the MIRA Network’s dashboard
  • Full observability: Real-time dashboards now track every query and system bottleneck.
  • Cost efficiency: Significant reduction in monthly burn by utilizing OVHcloud’s competitive pricing and commercial incentives.
  • Deployment speed: CI/CD pipelines now allow the team to ship updates in minutes rather than hours.

The roadmap ahead

The migration represents the first phase of MIRA’s infrastructure maturity. As the ecosystem continues to scale in line with its 2026 roadmap, the following enhancements are planned:

  • Comprehensive documentation: Creating manuals for each component of the refreshed environment.
  • Frontend migration: Moving the remaining web assets from AWS to the new unified OVHcloud environment.
  • Advanced autoscaling: Fine-tuning kernel-level parameters and predictive scaling to handle the next 10x spike in user activity.

MIRA Network now possesses the technical foundation to scale from millions to tens of millions of users without the fear of infrastructure collapse or runaway costs.

Olha Diachuk
Writer at Dysnix
10+ years in tech writing. Trained researcher and tech enthusiast.