Kubernetes. Managed clusters.

Production-ready Kubernetes clusters managed by Sigilhosting. We handle the control plane, etcd, version upgrades, and certificate rotation. You deploy your workloads and scale your node pools.

Managed control plane · HA multi-master · Cilium CNI · Automatic node scaling
Architecture

Managed control plane

When you create a cluster, we provision its control plane nodes across isolated infrastructure — one node on the Development tier, three on Production, five on Enterprise. The API server, scheduler, controller manager, and etcd are fully managed — we handle version upgrades, certificate rotation, etcd backups, and monitoring.

The control plane is not billed separately. You pay only for your worker nodes. API server endpoints are exposed via a load-balanced address that you add to your kubeconfig. RBAC is enabled by default.

Kubernetes versions are supported for 14 months after initial release. We test upgrades against common workload patterns before making them available. You choose when to upgrade — we never force a version change without notice.

[Architecture diagram: Sigilhosting manages a multi-AZ, auto-healing control plane (API server, scheduler, etcd, controller manager, CoreDNS, Cilium CNI) and platform services (monitoring, log collection, certificate management, ingress controller) — auto-patched, auto-scaled, 99.99% SLA, free. You manage the worker node pools, e.g. a general pool (4c/8G nodes, auto-scaling 2–10), a fixed 2-node A100 80GB GPU pool for ML inference and training, and a high-memory pool (8c/64G). Pools mix CPU, GPU, and high-mem hardware with per-pool min/max auto-scaling.]

Worker node pools

A cluster can have multiple node pools, each with its own instance type, size, and scaling configuration. This lets you run different workload types on different hardware — general-purpose nodes for web services, high-memory nodes for caches, GPU nodes for inference.

Auto-scaling monitors pod resource requests and adds nodes when pods can't be scheduled. When demand drops, nodes are drained and removed. You set minimum and maximum node counts per pool — the autoscaler operates within those bounds.
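Because the autoscaler keys off pod resource requests, workloads should declare them explicitly — a pod without requests gives the scheduler nothing to plan against. A minimal sketch (the image name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2  # hypothetical image
          resources:
            requests:          # the autoscaler adds nodes when these can't be placed
              cpu: "500m"
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```

If six replicas at 500m CPU each exceed the pool's current capacity, the autoscaler adds nodes up to the pool's configured maximum.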

Nodes are provisioned from our VPS fleet, which means they benefit from the same dedicated vCPU, NVMe storage, and 10 Gbps networking as standalone instances. No oversubscribed control plane nodes, no shared resources.

Cilium CNI. eBPF networking. No iptables.

Pricing

Cluster tiers

Control plane is free on all tiers. You only pay for worker nodes at standard VPS pricing.

Development
Free control plane
1 control plane node
1–10 worker nodes
1 node pool
Manual scaling
Community support
Daily etcd backups

Production
Free control plane
3 control plane nodes (HA)
1–100 worker nodes
10 node pools
Auto-scaling
Priority support
Hourly etcd backups

Enterprise
Free control plane
5 control plane nodes (HA)
1–500 worker nodes
Unlimited node pools
Auto-scaling + GPU scheduling
Dedicated support engineer
Continuous etcd backups + custom retention

Networking

Cilium CNI with eBPF

We use Cilium as the default CNI plugin. Cilium implements networking, load balancing, and network policies using eBPF programs that run in the Linux kernel — replacing the traditional iptables-based kube-proxy with a more efficient, observable, and scalable data plane.

Benefits include kernel-native packet processing (no iptables chain traversal), built-in network policies with L3/L4/L7 filtering, transparent encryption between pods using WireGuard, and Hubble observability for flow-level visibility into pod traffic.
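As an illustration of the L7 filtering, a standard CiliumNetworkPolicy can restrict ingress to specific HTTP methods and paths rather than just ports (the app labels here are hypothetical):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend-get-only
spec:
  endpointSelector:
    matchLabels:
      app: backend          # policy applies to these pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend   # only frontend pods may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET       # L7 rule: GET on /api/* only
                path: "/api/.*"
```

Any other method or path from frontend pods, and any traffic from other pods, is dropped in the kernel by eBPF rather than by userspace proxying.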

For Services of type LoadBalancer, Cilium integrates with our managed load balancers. External traffic enters through the load balancer, and Cilium routes it internally to the correct backend pods.
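Exposing a workload externally is then a plain Service manifest — the external load balancer is provisioned when the Service is created (a sketch, assuming pods labeled app: web listening on 8080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # provisions a managed load balancer
  selector:
    app: web
  ports:
    - port: 80         # external port on the load balancer
      targetPort: 8080 # container port on the pods
```

Once the load balancer is ready, its address appears in the Service's status.loadBalancer field (`kubectl get svc web`).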

Storage

Persistent storage

NVMe-backed persistent volumes with CSI driver integration.

Persistent volumes are backed by our NVMe block storage. The Sigilhosting CSI driver handles dynamic provisioning — when a PVC is created, the driver automatically provisions a volume and attaches it to the node running the pod.

ReadWriteOnce volumes attach to a single node. ReadWriteMany volumes (for shared file systems) are backed by NFS servers running on dedicated storage nodes. Volume snapshots are supported via the Kubernetes VolumeSnapshot API and can be restored to new PVCs.
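Dynamic provisioning and snapshots both go through standard Kubernetes objects. A sketch of a PVC and a snapshot of it — the storage class and snapshot class names are assumptions, not confirmed identifiers; check `kubectl get storageclass` on your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce               # single-node attach, NVMe-backed
  storageClassName: sigilhosting-nvme   # hypothetical class name
  resources:
    requests:
      storage: 50Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  volumeSnapshotClassName: sigilhosting-nvme-snap  # hypothetical class name
  source:
    persistentVolumeClaimName: data   # snapshot the PVC above
```

Restoring is a new PVC whose spec.dataSource references the VolumeSnapshot.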

Deployment

Quick start

Create a cluster and deploy a workload in under 5 minutes.

Terminal (bash)

```bash
# Create a production cluster
sigilhosting k8s create \
  --name production \
  --region us-east-1 \
  --version 1.29 \
  --tier production

# Add a worker node pool
sigilhosting k8s pool create \
  --cluster production \
  --name web \
  --size 4vcpu-8gb \
  --min 2 --max 10 \
  --auto-scale

# Get kubeconfig
sigilhosting k8s kubeconfig production > ~/.kube/config

# Verify
kubectl get nodes
# NAME          STATUS   ROLES    AGE   VERSION
# pool-web-01   Ready    <none>   42s   v1.29.2
# pool-web-02   Ready    <none>   44s   v1.29.2
```
Terraform (main.tf)

```hcl
resource "sigilhosting_k8s_cluster" "prod" {
  name    = "production"
  region  = "us-east-1"
  version = "1.29"
  tier    = "production"
}

resource "sigilhosting_k8s_pool" "web" {
  cluster_id = sigilhosting_k8s_cluster.prod.id
  name       = "web"
  size       = "4vcpu-8gb"
  min_nodes  = 2
  max_nodes  = 10
  auto_scale = true
}

resource "sigilhosting_k8s_pool" "gpu" {
  cluster_id = sigilhosting_k8s_cluster.prod.id
  name       = "inference"
  size       = "gpu-a100-1x"
  min_nodes  = 0
  max_nodes  = 4
  auto_scale = true
}
```
Integrations

Container registry

Every cluster includes access to a private container registry hosted in the same region. Push images from your CI pipeline and reference them in pod specs without configuring image pull secrets — authentication is handled automatically between the cluster and registry.

The registry supports Docker and OCI image formats. Garbage collection runs automatically to reclaim storage from untagged and unreferenced images. Vulnerability scanning is available via integration with Trivy.
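In a pod spec this means referencing the regional registry directly, with no imagePullSecrets. The registry hostname and repository path below are illustrative assumptions, not a documented endpoint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      # hypothetical registry hostname — use the one shown for your cluster's region
      image: registry.us-east-1.sigilhosting.com/myteam/api:2.1.0
  # no imagePullSecrets block needed: cluster-to-registry auth is automatic
```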

Monitoring and logging

A pre-configured monitoring stack based on Prometheus and Grafana is deployed into a dedicated namespace. Cluster-level metrics (node CPU, memory, disk, network) and Kubernetes metrics (pod counts, restart rates, scheduling latency) are collected automatically.

Application metrics can be scraped via standard Prometheus annotations on your pods. Pre-built Grafana dashboards cover cluster overview, node health, pod resource usage, and Cilium network flows. Alerts can be configured via Alertmanager with webhook, email, and Slack integrations.
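Opting a workload into scraping is a matter of pod template annotations. This assumes the bundled Prometheus honors the common prometheus.io annotation convention, which the source implies but does not spell out:

```yaml
# Fragment of a Deployment's pod template
template:
  metadata:
    labels:
      app: api
    annotations:
      prometheus.io/scrape: "true"    # opt this pod into scraping
      prometheus.io/port: "9090"      # port serving metrics
      prometheus.io/path: "/metrics"  # metrics endpoint path
```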

Security

Cluster security

Security defaults that don't require a dedicated platform team to configure.

RBAC is enabled on every cluster. The cluster creator gets cluster-admin, and additional users can be granted scoped permissions via standard Kubernetes RBAC roles and bindings. We recommend using service accounts with minimal permissions for CI/CD pipelines.
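A minimally-scoped CI/CD identity along those lines is standard Kubernetes RBAC — a ServiceAccount bound to a namespaced Role that can update Deployments and nothing else (names and namespace are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update", "patch"]  # enough to roll out new images
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: apps
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: apps
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```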

Pod Security Admission is configured in "warn" mode by default — pods that violate the "restricted" profile will log warnings but still be admitted. You can escalate to "enforce" mode per namespace when ready. Secrets stored in etcd are encrypted at rest using AES-256.
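Escalating a namespace to enforce mode uses the standard Pod Security Admission labels (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: restricted  # violating pods are rejected
    pod-security.kubernetes.io/warn: restricted     # and warnings are still logged
```

Pods in this namespace that violate the restricted profile are rejected at admission; other namespaces keep the cluster-wide warn-only default.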

Cilium provides transparent WireGuard encryption for all pod-to-pod traffic within the cluster. This can be enabled per cluster and adds encryption without requiring application-level TLS between services.

Features

Free Control Plane
The managed control plane (API server, scheduler, etcd) is included at no cost. You pay only for your worker nodes at standard VPS pricing. No hidden management fees.

Auto-Scaling
Node pools scale automatically based on pod resource requests. Configurable minimum and maximum node counts. Scale to zero during off-peak for dev/staging pools.

GPU Scheduling
Request GPUs in your pod spec and the NVIDIA device plugin handles placement. Multi-GPU pods supported. GPU node pools can auto-scale independently.

Managed Upgrades
Kubernetes version upgrades tested and rolled out with zero downtime. Worker nodes are drained and replaced one at a time. You choose when to initiate.

Private Registry
Container registry included with every cluster. Automatic auth between cluster and registry. Garbage collection and vulnerability scanning built in.

Monitoring Stack
Prometheus, Grafana, and Alertmanager pre-configured. Cluster and workload dashboards out of the box. Custom alerts via webhook, email, or Slack.
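Requesting a GPU, as the GPU Scheduling feature describes, is a standard resource limit in the pod spec — the device plugin handles placement. The image here is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
    - name: server
      image: registry.example.com/inference-server:latest  # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 2   # multi-GPU pod; scheduled onto a GPU pool node
```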

Launch a cluster in minutes.

Free control plane. Pay only for worker nodes.