
Production-ready Kubernetes clusters managed by Sigilhosting. We handle the control plane, etcd, version upgrades, and certificate rotation. You deploy your workloads and scale your node pools.
When you create a cluster, we provision three control plane nodes across isolated infrastructure. The API server, scheduler, controller manager, and etcd are fully managed — we handle version upgrades, certificate rotation, etcd backups, and monitoring.
The control plane is not billed separately. You pay only for your worker nodes. API server endpoints are exposed via a load-balanced address that you add to your kubeconfig. RBAC is enabled by default.
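Once the cluster is up, the load-balanced endpoint goes into a standard kubeconfig entry. A minimal sketch is below — the server address, CA data, and token are placeholders; the real values come from your cluster's credentials:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: sigil-prod
  cluster:
    server: https://<cluster-endpoint>:6443      # placeholder: the load-balanced API address
    certificate-authority-data: <base64-ca-cert> # placeholder
contexts:
- name: sigil-prod
  context:
    cluster: sigil-prod
    user: sigil-admin
users:
- name: sigil-admin
  user:
    token: <admin-token>                         # placeholder
current-context: sigil-prod
```

With this in place, `kubectl --context sigil-prod get nodes` talks to the managed API server directly.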
Kubernetes versions are supported for 14 months after initial release. We test upgrades against common workload patterns before making them available. You choose when to upgrade — we never force a version change without notice.

A cluster can have multiple node pools, each with its own instance type, size, and scaling configuration. This lets you run different workload types on different hardware — general-purpose nodes for web services, high-memory nodes for caches, GPU nodes for inference.
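As an illustration, a three-pool layout for the workload split above might be declared like this. The schema is hypothetical — field names are illustrative, not the actual Sigilhosting API — but it shows the shape of the configuration:

```yaml
# Hypothetical cluster definition; field names are illustrative only.
name: prod-cluster
nodePools:
- name: web              # general-purpose nodes for web services
  instanceType: general-4vcpu-8gb
  autoscaling: {min: 3, max: 10}
- name: cache            # high-memory nodes for caches
  instanceType: highmem-4vcpu-32gb
  autoscaling: {min: 2, max: 4}
- name: inference        # GPU nodes for inference
  instanceType: gpu-large
  autoscaling: {min: 0, max: 2}
```

Workloads are then steered onto the right pool with standard node selectors or taints and tolerations.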
Auto-scaling monitors pod resource requests and adds nodes when pods can't be scheduled. When demand drops, nodes are drained and removed. You set minimum and maximum node counts per pool — the autoscaler operates within those bounds.
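Because the autoscaler sizes pools from pod resource *requests* (not observed usage), accurate requests matter. A sketch of a Deployment whose pending replicas would trigger a scale-up — the image name is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: registry.example/web:1.4.2   # placeholder image
        resources:
          requests:          # the autoscaler reasons about these values
            cpu: "500m"
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi
```

If six replicas at 500m CPU each can't fit on the current nodes, new nodes are added (up to the pool's maximum); when replicas scale down, surplus nodes are drained and removed.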
Nodes are provisioned from our VPS fleet, which means they benefit from the same dedicated vCPU, NVMe storage, and 10 Gbps networking as standalone instances. No oversubscribed control plane nodes, no shared resources.

The control plane is free on all tiers; you pay only for worker nodes at standard VPS pricing.
We use Cilium as the default CNI plugin. Cilium implements networking, load balancing, and network policies using eBPF programs that run in the Linux kernel — replacing the traditional iptables-based kube-proxy with a more efficient, observable, and scalable data plane.
Benefits include kernel-native packet processing (no iptables chain traversal), built-in network policies with L3/L4/L7 filtering, transparent encryption between pods using WireGuard, and Hubble observability for flow-level visibility into pod traffic.
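L7 filtering is where Cilium goes beyond stock NetworkPolicy. A sketch of a CiliumNetworkPolicy that allows only GET requests under `/v1/` from frontend pods to API pods — the `app` labels are illustrative:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  endpointSelector:
    matchLabels:
      app: api            # illustrative label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend     # illustrative label
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:             # L7 rule: only matching HTTP requests are allowed
        - method: GET
          path: "/v1/.*"
```

Any other traffic to the `api` pods — different ports, methods, or paths — is dropped by the eBPF data plane.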
For Services of type LoadBalancer, Cilium integrates with our managed load balancers. External traffic enters through the load balancer, and Cilium handles internal routing to the correct pods.
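Exposing a workload this way is a standard Service manifest; creating it provisions a managed load balancer automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # provisions a managed load balancer for this Service
  selector:
    app: web           # routes to pods with this label
  ports:
  - port: 80           # external port on the load balancer
    targetPort: 8080   # container port on the pods
```

The assigned external address appears in the Service's `status.loadBalancer` field once provisioning completes.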

NVMe-backed persistent volumes with CSI driver integration.
Persistent volumes are backed by our NVMe block storage. The Sigilhosting CSI driver handles dynamic provisioning — when a PersistentVolumeClaim (PVC) is created, the driver automatically provisions a volume and attaches it to the node running the pod.
ReadWriteOnce volumes attach to a single node. ReadWriteMany volumes (for shared file systems) are backed by NFS servers running on dedicated storage nodes. Volume snapshots are supported via the Kubernetes VolumeSnapshot API and can be restored to new PVCs.
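A sketch of the two objects involved — a PVC that triggers dynamic provisioning, and a VolumeSnapshot taken from it. The storage class and snapshot class names are placeholders; use the class names your cluster ships with:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]   # single-node attach, NVMe block storage
  storageClassName: sigil-nvme     # placeholder class name
  resources:
    requests:
      storage: 50Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  volumeSnapshotClassName: sigil-nvme-snap   # placeholder snapshot class
  source:
    persistentVolumeClaimName: data          # snapshot the PVC above
```

Restoring is the reverse: a new PVC with `spec.dataSource` pointing at `data-snap` provisions a volume pre-populated with the snapshot's contents.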

Every cluster includes access to a private container registry hosted in the same region. Push images from your CI pipeline and reference them in pod specs without configuring image pull secrets — authentication is handled automatically between the cluster and registry.
The registry supports Docker and OCI image formats. Garbage collection runs automatically to reclaim storage from untagged and unreferenced images. Vulnerability scanning is available via integration with Trivy.
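In practice this means your CI pushes to the regional registry hostname and pod specs reference the image directly, with no `imagePullSecrets`. The hostname below is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    # Placeholder registry hostname. No imagePullSecrets are needed —
    # authentication between the cluster and its paired registry is automatic.
    image: registry.region1.sigilhosting.example/team/api:2.1.0
```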
A pre-configured monitoring stack based on Prometheus and Grafana is deployed into a dedicated namespace. Cluster-level metrics (node CPU, memory, disk, network) and Kubernetes metrics (pod counts, restart rates, scheduling latency) are collected automatically.
Application metrics can be scraped via standard Prometheus annotations on your pods. Pre-built Grafana dashboards cover cluster overview, node health, pod resource usage, and Cilium network flows. Alerts can be configured via Alertmanager with webhook, email, and Slack integrations.
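Opting a workload into scraping is a matter of annotating its pod template with the conventional Prometheus annotations; the port and path below are illustrative and should match wherever your application serves metrics:

```yaml
# Pod template metadata (e.g. inside a Deployment's spec.template):
metadata:
  annotations:
    prometheus.io/scrape: "true"     # opt this pod into scraping
    prometheus.io/port: "9090"       # illustrative: port serving metrics
    prometheus.io/path: "/metrics"   # illustrative: metrics endpoint path
```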

Security defaults that don't require a dedicated platform team to configure.
RBAC is enabled on every cluster. The cluster creator gets cluster-admin, and additional users can be granted scoped permissions via standard Kubernetes RBAC roles and bindings. We recommend using service accounts with minimal permissions for CI/CD pipelines.
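A minimal-permission service account for a CI/CD pipeline, as recommended above, can be sketched with a namespaced Role and RoleBinding. The namespace and verb list are illustrative — grant only what your pipeline actually does:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: apps            # illustrative namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: apps
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]               # only what the pipeline touches
  verbs: ["get", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: apps
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: apps
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

The pipeline authenticates as `ci-deployer` and can update Deployments in `apps` — nothing else, in no other namespace.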
Pod Security Admission is configured in "warn" mode by default — pods that violate the "restricted" profile will log warnings but still be admitted. You can escalate to "enforce" mode per namespace when ready. Secrets stored in etcd are encrypted at rest using AES-256.
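Escalating a namespace from "warn" to "enforce" is done with the standard Pod Security Admission labels; the namespace name is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # illustrative namespace
  labels:
    # Reject pods that violate the "restricted" profile, instead of
    # merely logging a warning (the cluster-wide default).
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```

After applying this, pods that violate the restricted profile in `payments` are rejected at admission rather than admitted with a warning.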
Cilium provides transparent WireGuard encryption for all pod-to-pod traffic within the cluster. This can be enabled per cluster and adds encryption without requiring application-level TLS between services.