Cloud Underlayer Agnostic Architecture Case: Elegant Solution for Wand.ai

Daniel Yavorovych
February 17, 2023

As you might have guessed, our cooperation with Wand.ai didn’t stop at the consulting stage. We continued into the practical dimension because Dysnix offered the most comprehensive working solution among all the companies Wand hired. We won this “competition” with our expertise: our concept was better described, more technically grounded, and more comprehensive overall.

So the team that handled the consulting moved on to the practical realization of the project. As before, it consisted of one Solution Architect and two Senior DevOps Engineers. The workload varied by implementation stage, but the time dedicated to the project was enough to cover everything we had planned.

Solution implementation background

Let’s step back a little to see the complete picture of the case. The business function of the future product, which would run on the Dysnix architecture, is to create separate multitenant environments in which users can build multiple custom business data pipelines on the low-code principle and make use of the collected data pool by applying AI/ML models to it. From this description, we can extract the main requirements for the architecture:

  • The end users are representatives of fintech, medtech, and Web3 industries and businesses of various sizes that may have a vital requirement to launch such services inside closed perimeters. This means the architecture we create has to be underlayer agnostic: it must run on-prem or in the cloud with equal success.

For us as Architects, this means selecting a single set of tools that works as, let’s say, a common denominator that makes such an architecture possible. The requirements for this toolbox were still relatively high: it had to consist of reliable, standardized, well-supported, and complementary tools. So we couldn’t rely on anything too novel or exotic.

  • The ETL functions of the future architecture have to be designed flawlessly, meeting all multitenancy requirements: each end user’s data should be isolated from the others’, and everything should be secured and 100% under the owner’s control. The product should work equally well in separate environments under a load of any volume, whether the end user wants to upload only data from Google Analytics plus a single-page spreadsheet of clients to analyze, or ten business data sources covering the last 10 years.

From the technical side, we understood that we had to provide data transfer, storage, and flexible scaling options for any ETL process to run smoothly (a minimal sketch of such a pipeline appears after this list). Without an infrastructure ready for such challenges, the product won’t work.

  • We also had to take care of serving the ML models that power the product. The connection with this part is essential for delivering the product’s value to end users, so we kept it in mind while building the architecture.

The tech stack we selected had to include AI/ML-compatible technologies that work without lots of additional code or complex integrations.

  • Another point we always care about is time. Our client needed enough time to develop, write code following our architecture guides, and deliver the product to market as fast as possible.

Tight deadlines are a typical requirement for us. We can deal with them thanks to the expertise accumulated over dozens of other projects, ready-made in-house solutions, wide open-source practice, and the ability to make the right technical decisions on short notice.
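Returning to the ETL requirement above: here is a minimal sketch of what a tenant’s pipeline could look like in Prefect, one of the tools that made it into the stack described below. The source names, schema, and destinations are our illustrative assumptions, not the product’s actual pipelines.

from prefect import flow, task

@task(retries=3, retry_delay_seconds=10)
def extract(source: str) -> list[dict]:
    # In a real pipeline, this would pull rows from Google Analytics,
    # a client spreadsheet, or any other connected data source.
    return [{"source": source, "value": 42}]

@task
def transform(rows: list[dict]) -> list[dict]:
    # Normalize records into the tenant's common schema.
    return [{**row, "normalized": True} for row in rows]

@task
def load(rows: list[dict], tenant_id: str) -> None:
    # Write into the tenant's isolated storage; stubbed out here.
    print(f"Loaded {len(rows)} rows for tenant {tenant_id}")

@flow(name="tenant-etl")
def tenant_etl(tenant_id: str, sources: list[str]) -> None:
    for source in sources:
        load(transform(extract(source)), tenant_id)

if __name__ == "__main__":
    tenant_etl("tenant-a", ["google-analytics", "clients-spreadsheet"])

Because each flow runs inside the tenant’s own environment, a heavy pipeline for one user never competes with another’s.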

The requirements were precise, and we proceeded with our work. We created a plan to make things happen.

Implementation process stages

Here is a helicopter view of the whole implementation process, from the beginning to the final moment of our cooperation.

  • Consulting stage: 2 weeks. As a result, the client got a technical concept of the architecture.
  • SOW + R&D stage: 2 months.
During this stage, we conducted technology benchmarking to select the best fit for the client. Each part of the architecture was analyzed in the search for best-tailored solutions, and the feasibility of their implementation was estimated. For example, in selecting the tools for serving ML models, we settled on Seldon Core after reviewing over 7 other products.
  • Architecture implementation: 2 months.
  1. Compatibility testing: all selected technologies were checked for efficient, coherent operation (a sketch of such a check follows this list).
  2. Preparation of documentation and guidance for developers for each component and process.
  • Code development: ~12 months.
  1. From our side, we provided support services and development curation.
  2. The project was successfully completed.
  • Project launch
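As promised above, here is a sketch of what a compatibility check from the architecture implementation stage could look like: a script probing each component’s health endpoint. The service addresses and ports below are hypothetical placeholders, not the project’s real endpoints.

import sys
import urllib.request

# Hypothetical in-cluster addresses of the stack's components.
COMPONENTS = {
    "prefect-orion": "http://prefect.example.internal:4200/api/health",
    "elasticsearch": "http://elastic.example.internal:9200/_cluster/health",
    "seldon-deployment": "http://seldon.example.internal:8000/health/ping",
}

def check(name: str, url: str) -> bool:
    # Probe one component and report whether it answered with HTTP 200.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    print(f"{name}: {'OK' if ok else 'FAILED'}")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in COMPONENTS.items()]
    sys.exit(0 if all(results) else 1)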

The project tech stack and architecture we’ve built

To comply with the project’s requirements, we created a cloud/underlayer-agnostic architecture based on the universal toolset we had already proven on similar tasks.

We hand-picked a Kubernetes-native set of tools and technologies applicable to our goals. We built the solution on k8s because of its availability, standardization, and predictability, and because of our experience with these tools. We had to minimize missing integrations and the need to write additional code, so we reused as much of the existing solutions as possible, which was quite an “ecological” approach, if we may say so. We used Kubernetes-based tools not only for functionality but also for orchestration: everything was driven through the Kubernetes API and its declarative configuration.
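To make “everything through Kubernetes” a bit more tangible in a multitenant context, here is a minimal sketch of provisioning an isolated, resource-capped namespace per tenant with the official kubernetes Python client. The label key and quota values are assumptions for illustration, not the project’s actual configuration.

from kubernetes import client, config

def provision_tenant(tenant_id: str) -> None:
    config.load_kube_config()  # use config.load_incluster_config() inside a pod

    core = client.CoreV1Api()
    namespace = f"tenant-{tenant_id}"

    # A dedicated namespace keeps the tenant's workloads and data apart.
    core.create_namespace(
        client.V1Namespace(
            metadata=client.V1ObjectMeta(
                name=namespace,
                labels={"example.com/tenant": tenant_id},  # hypothetical label key
            )
        )
    )

    # A quota caps one tenant's load so it can't starve the others.
    core.create_namespaced_resource_quota(
        namespace,
        client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name="tenant-quota"),
            spec=client.V1ResourceQuotaSpec(
                hard={"requests.cpu": "4", "requests.memory": "8Gi"}
            ),
        ),
    )

if __name__ == "__main__":
    provision_tenant("acme")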

Nevertheless, we can’t say the set we assembled turned out too common. The tools list includes, but is not limited to:

  • Terraform
  • Kubernetes
  • FluxCD
  • Prometheus Stack
  • Istio
  • Elasticsearch cluster
  • Seldon Core
  • Prefect Orion
  • Jaeger 
  • Ory stack

The architecture that grew from these tools was modern, stable, predictable, scalable, automated, and well-supported, so nothing conflicted with the multitenancy or security requirements.
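As an example of why Seldon Core fit the “minimum of additional code” requirement: to serve a model, you roughly write a plain Python class with a predict method, and Seldon’s wrapper turns it into a microservice. The model path and loading logic below are hypothetical.

import joblib

class TenantModel:
    def __init__(self):
        # Load the tenant's trained model; the path is illustrative.
        self.model = joblib.load("/mnt/models/model.joblib")

    def predict(self, X, features_names=None):
        # Seldon Core calls predict() for every inference request.
        return self.model.predict(X)

A class like this gets containerized and exposed by Seldon’s runtime, so putting a new model behind an API is mostly packaging work rather than coding.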

Benefits we brought to the project and lessons we’ve learned

As the development phase showed, the issues we had tried to foresee during the consulting and R&D stages weren’t only in our imagination. Thanks to our preparation, the development team avoided multiple dead ends and detours in tool selection and integration, bug tracking and fixing, and quick delivery and deployment.

Two points should also be mentioned regarding the benefits we’ve brought to the table.

  1. Infrastructure as Code: with this approach applied, the client can add any modifications to the project’s infrastructure in minutes and upgrade it seamlessly. The whole setup is not a black box for the internal team but a well-explained guide to their product.
  2. A distributed tracing system based on Istio/Jaeger. This solution helps developers locate problems anywhere in the infrastructure within minutes, should they appear (a brief sketch follows below).
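Istio’s sidecars report most spans automatically, but application code can emit its own as well. As a hedged illustration (not the project’s actual instrumentation), here is a minimal snippet exporting a span to a Jaeger agent via OpenTelemetry; the service name and agent address are assumptions.

from opentelemetry import trace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Report spans to the Jaeger agent; host and port here are illustrative.
provider = TracerProvider(resource=Resource.create({"service.name": "etl-worker"}))
provider.add_span_processor(
    BatchSpanProcessor(JaegerExporter(agent_host_name="jaeger-agent", agent_port=6831))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("load-tenant-data"):
    pass  # the traced work would happen here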

As for what our engineers learned from this project, we were happy to use our favorite tool stack and build another genuine solution for the client on a Kubernetes basis. We believe this technology will remain irreplaceable for any project that needs a clear and elegant solution.

If you’re wondering whether your project can have an efficient solution, meet our Architects personally.
Contact us
Daniel Yavorovych
CTO and Co-founder at Dysnix
Brainpower and problem-solver, meditating and mountain hiking.