
Building a Cloud-Native Bring Your Own Cloud (BYOC) Platform to Support Enterprise Customers


About the Client

Ditto is a distributed database platform designed to let developers focus on building great user interfaces without worrying about complex data infrastructure. Unlike traditional solutions that rely on centralized servers, Ditto enables seamless access, synchronization, and replication of data across devices through peer-to-peer database synchronization, even without an internet connection. Its architecture combines a horizontally scalable, Kubernetes-based server, which handles large-scale centralized deployments and provides native connectors, with a core engine that allows data to be defined, queried, and instantly replicated wherever it is needed, delivering secure, reliable, and highly available data experiences for app users.

The Ditto Server is a horizontally scalable, Kubernetes-based, clustered database designed as a centralized datastore for Ditto data, capable of handling large volumes of data and large numbers of devices. It includes native connectors and change data capture (CDC) capability, and is offered both as a managed cloud service (SaaS) and as Bring Your Own Cloud (BYOC).

The Challenge of Modern Enterprise Deployment Demands

Enterprise software deployment has reached an inflection point where traditional approaches fail to meet the diverse, often conflicting requirements of modern organizations. Today's enterprises demand the flexibility to deploy applications across public clouds, private data centers, and hybrid environments while maintaining centralized control, consistent security postures, and operational simplicity. Meeting these demands requires a modern enterprise software deployment strategy.

This case study examines how to build a BYOC SaaS product: transforming manual customer deployments that took days into an automated BYOC platform that enables self-service customer onboarding in hours. The solution combines cloud-native technologies, a management cluster architecture, and Kubernetes platform engineering best practices to create a scalable foundation that serves hundreds of customer environments across diverse infrastructure targets.

To address these challenges, the organization needed a modern platform stack that could deliver:

  • Best-in-class automation and operational excellence through declarative infrastructure management
  • Support for multiple deployment targets: public cloud providers, bare-metal clusters, and private/on-premises environments
  • Reduced operational overhead and toil by eliminating manual deployment processes
  • Self-onboarding capabilities that enable customers to provision environments independently
  • Dramatically reduced onboarding time, from days to hours, while maintaining security standards

The BYOC Platform

The initiative created a self-service, fully managed "bring-your-own-cloud" solution that enables customers to provision applications in their own cloud accounts through an automated, managed approach. This transformation reduced customer onboarding from 3+ days to 2-4 hours by fully automating previously manual processes, demonstrating how well-designed self-service onboarding can transform enterprise software deployment.

Management-Oriented Architecture

Traditional approaches to multi-cluster management suffer from tight coupling, limited fault isolation, and operational complexity.

The management cluster-workload cluster pattern emerges from this need, representing a mature architectural approach that separates the "thinking" from the "doing" in complex distributed architectures.

  • Management Control Plane: Carries administrative traffic and provides the interface for human operators and external systems. It handles policy definition, resource provisioning requests, and consolidated monitoring across all clusters. Importantly, it stays out of the mission-critical path: it operates passively, and workload clusters do not depend on it to serve traffic.

  • Data Plane: Carries user traffic and executes the actual application workloads. This is where business value is delivered, processing customer requests and running production applications with minimal operational complexity.

Each workload cluster's control plane operates with local intelligence, making decisions based on local conditions while adhering to globally defined policies. This creates a system that can tolerate network partitions and management plane outages while maintaining operational continuity.

Why Cloud-Native

To meet the requirement of supporting highly diverse deployment environments, the organization embraced a cloud-native architecture with Kubernetes as the foundation of the management-workload plane design. Kubernetes itself follows similar control-plane/data-plane principles, making it a natural fit for such an implementation.

It provides:

  • Universal Standardization: Kubernetes provided a consistent abstraction layer across public clouds and on-premises bare metal, enabling a single deployment model across all targets. The platform could consume block storage, object storage, secret management solutions, and ingress uniformly across different infrastructure providers, hiding cloud-specific implementation complexity.

  • DevOps excellence: Kubernetes offered natural integration with automation pipelines, GitOps workflows, and multi-environment rollout strategies. This integration enabled fully automated deployment pipelines that maintained consistency across diverse infrastructure targets.

  • Control plane-based orchestration: Application components are deployed as custom resources backed by Custom Resource Definitions (CRDs), providing operational consistency across diverse deployment environments. This approach abstracts the multi-component architecture into version-controlled resource definitions that integrate seamlessly with GitOps workflows and Kubernetes tooling. The application becomes a self-healing, cloud-native system in which operators automatically handle lifecycle management, configuration updates, and inter-component dependencies based on declarative specifications. This is what implements the workload cluster's "local intelligence."
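As a concrete sketch of this CRD-based model, an entire customer deployment can be captured in a single custom resource. The example below is purely illustrative: the `DittoApp` kind, API group, and fields are hypothetical, not Ditto's actual API.

```yaml
# Hypothetical custom resource that an in-cluster operator reconciles
# into deployments, services, storage, and connector configuration.
apiVersion: platform.example.com/v1alpha1
kind: DittoApp
metadata:
  name: customer-prod
  namespace: ditto-system
spec:
  version: "4.8.0"        # desired application version; the operator drives upgrades
  replicas: 3             # horizontally scaled server instances
  storage:
    storageClass: gp3     # block storage consumed uniformly via the CSI abstraction
    size: 500Gi
  connectors:
    cdc:
      enabled: true       # change data capture stream to external systems
```

Because the entire desired state lives in one version-controlled document, GitOps tooling can diff, roll out, and roll back application changes the same way it handles infrastructure.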

Management Control Plane Design

Cloud-native cluster lifecycle management with CAPI: To manage and orchestrate Kubernetes clusters consistently across different environments, Cluster API (CAPI) was selected as the cluster provisioning engine. CAPI's declarative model aligned perfectly with the cloud-native philosophy, enabling automated cluster lifecycle management.

  • Deployment Parity: By standardizing on kubeadm-based self-managed control planes, the platform ensured deployment parity across cloud, bare-metal, and private environments. This consistency eliminated environment-specific cluster management procedures and reduced operational complexity.

  • Scalability and Maintainability: CAPI enabled declarative, repeatable cluster lifecycle management that reduced operational overhead across different environments. Cluster provisioning, upgrades, and decommissioning became automated, version-controlled processes rather than manual procedures.

  • Multi-Provider Support: CAPI's provider ecosystem supported consistent cluster management across AWS, Azure, GCP, and bare-metal environments through standardized APIs and workflows.
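Under this model, a workload cluster is itself just a set of declarative resources. The sketch below uses real Cluster API kinds (`Cluster`, `KubeadmControlPlane`, and an infrastructure reference such as `AWSCluster`), but the names, namespace, and network CIDR are illustrative:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: customer-a              # one CAPI Cluster object per workload cluster
  namespace: customers
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:              # kubeadm-based control plane for deployment parity
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: customer-a-control-plane
  infrastructureRef:            # swap the provider kind per target environment
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: customer-a
```

Provisioning on bare metal or another cloud means swapping the `infrastructureRef` provider, while the surrounding lifecycle tooling stays identical.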

  • Self-service onboarding with ArgoCD: The management cluster hosts ArgoCD configured with ApplicationSets that template applications across customer environments. ApplicationSets use cluster generators to automatically deploy applications to newly provisioned workload clusters while handling environment-specific configurations and policies, enabling truly self-service customer onboarding.

  • Crossplane for multi-cloud resource provisioning: Crossplane extends the Kubernetes API to manage cloud resources declaratively. The management cluster uses Crossplane compositions to provision EKS/GKE/AKS clusters with consistent networking and security configurations, standardizing cloud-specific provisioning procedures across multiple cloud providers.
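For clusters provisioned through Crossplane, onboarding automation creates a claim against a composition defined by the platform team. The example below is a hypothetical sketch: the API group, claim kind, and parameters assume a composite resource type the platform team would define, with one composition per provider.

```yaml
apiVersion: platform.example.com/v1alpha1   # hypothetical API group
kind: ManagedClusterClaim                   # hypothetical claim kind backed by an XRD
metadata:
  name: customer-b
  namespace: customers
spec:
  compositionSelector:
    matchLabels:
      provider: gcp       # selects the GKE composition; aws/azure variants are analogous
  parameters:
    region: europe-west1
    nodeCount: 3
  writeConnectionSecretToRef:
    name: customer-b-kubeconfig   # kubeconfig surfaced as a Secret for later steps
```

The claim hides provider-specific details behind a single schema, which is how the platform standardizes provisioning across clouds.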

  • Orchestration Operators: Custom operators handle organization-specific orchestration that CAPI and Crossplane don't provide:
      • License enforcement for air-gapped environments using cryptographic validation
      • Compliance validation ensuring workload clusters meet regulatory requirements before application deployment
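The ApplicationSet-driven onboarding described above can be sketched as follows. The generator and template fields are real ArgoCD ApplicationSet API; the repository URL, labels, and paths are illustrative.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: customer-workloads
  namespace: argocd
spec:
  generators:
    - clusters:             # fires once per cluster registered with ArgoCD
        selector:
          matchLabels:
            env: customer   # only workload clusters carrying this label
  template:
    metadata:
      name: "ditto-{{name}}"    # one Application per workload cluster
    spec:
      project: customers
      source:
        repoURL: https://github.com/example-org/platform-manifests  # illustrative
        targetRevision: main
        path: "environments/{{name}}"   # per-customer overrides live here
      destination:
        server: "{{server}}"
        namespace: ditto-system
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

When provisioning registers a new cluster with ArgoCD, the cluster generator picks it up and the application rollout happens with no human in the loop.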

Data Plane Design

  • Operational Simplicity: Workload clusters are designed for easy operation with minimal ongoing management requirements. After initial setup, clusters operate autonomously without requiring continuous connectivity to the management cluster.

  • Fault Isolation: Management cluster outages don't impact workload cluster operations, providing natural fault boundaries that maintain customer application availability even during control plane maintenance.

  • Local Intelligence: A set of operators orchestrates the setup and operation of the complex applications. Their responsibilities differ from those of the control plane operators: they are critical to the application's setup and operation, independent of infrastructure and other non-functional concerns.

Business Impact and Results

  • Market Expansion: The BYOC platform increased the addressable market by enabling customers on different public clouds and providers to adopt the solution. Previously inaccessible regulated industries became viable market segments through on-premises deployment capabilities.

  • Onboarding Excellence: Dramatically faster and easier onboarding increased trial conversion rates, since more prospects could quickly experience value with reduced activation energy. Self-service capabilities eliminated the need for dedicated support resources during customer onboarding.

  • Competitive Differentiation: The platform transformed infrastructure from an operational bottleneck into a competitive advantage that accelerates market expansion while maintaining enterprise-grade security and compliance standards.

Enterprise Platform Evolution

This implementation demonstrates how cloud-native technologies, properly architected around management-workload cluster separation, solve the fundamental challenges of building a BYOC SaaS product for multi-environment enterprise deployment. The combination of Kubernetes standardization, cloud-native cluster lifecycle management with CAPI, and platform engineering best practices creates a foundation that scales operationally while meeting diverse customer deployment requirements.

The success of this deployment strategy relies on treating infrastructure as a product, embracing declarative infrastructure management, and implementing a thoughtful separation of concerns that enables both centralized control and distributed execution. Organizations adopting similar patterns can achieve significant reductions in operational overhead while dramatically expanding their addressable market through automated multi-cloud provisioning.

Ready to modernize your platform engineering approach? Explore how CloudRaft’s proven expertise in automation, Kubernetes architectures, and multi-cloud enablement can drive the next phase of your enterprise transformation. Contact CloudRaft today to unlock your potential!