
How We Built a Redundant BGP Network Across 28 Regions

Our network carries 4.2 Tbps of customer traffic through four transit providers and seven IXPs. Here's how we designed it to survive any single failure — and most double failures.

Tunde Adebayo · Network Engineer

Design principles

Our network design follows three principles that we refuse to compromise on. First, no single point of failure at any layer — not in hardware, not in software, not in vendor relationships. Second, every failure mode must be automatically recoverable without human intervention within 60 seconds. Third, the control plane must survive independently of the data plane, so we can always reach and manage our infrastructure even during catastrophic events.

These principles sound obvious when stated plainly, but they have expensive consequences. They mean we maintain at least two transit providers in every region, even regions where traffic volume doesn't justify the cost. They mean we run our own route servers rather than depending on IXP route servers alone. They mean we have out-of-band management networks on entirely separate physical infrastructure from customer traffic.

Autonomous System architecture

Sigilhosting operates AS398330. We announce approximately 2,400 IPv4 prefixes and 800 IPv6 prefixes across our 28 regions. Each region has a minimum of two border routers, each connected to at least two different transit providers.

[Diagram: Sigilhosting AS398330 — BGP transit and peering architecture. Four transit providers (NTT AS2914, Cogent AS174, Lumen AS3356, Telia AS1299) and seven IXPs feed 28 regions across five continents: 8 in the Americas, 8 in Europe, 8 in Asia-Pacific, and 4 in MEA, each with at least two transit connections. 45% of traffic is peered; 30-day average BGP convergence is 4.2 seconds.]

Our transit providers are NTT (AS2914), Cogent (AS174), Lumen (AS3356), and Telia (AS1299). We chose these four specifically because their backbone networks have minimal physical path overlap — a fiber cut affecting NTT's path between two cities is unlikely to simultaneously affect Cogent's path between the same cities.

router bgp 398330
  bgp router-id 10.255.0.1
  ! Allow ECMP across equal-length paths even when the AS paths differ
  bgp bestpath as-path multipath-relax
  maximum-paths 16

  ! Transit sessions: full table in, with a prefix-limit safety net set
  ! above the current IPv4 full-table size (now past 900k routes)
  neighbor TRANSIT_NTT peer-group
  neighbor TRANSIT_NTT remote-as 2914
  neighbor TRANSIT_NTT route-map TRANSIT-IN in
  neighbor TRANSIT_NTT route-map TRANSIT-OUT out
  neighbor TRANSIT_NTT maximum-prefix 1000000

  ! IXP peers: filtered both directions; remote-as is set per neighbor
  neighbor PEERING_IXP peer-group
  neighbor PEERING_IXP route-map PEER-IN in
  neighbor PEERING_IXP route-map PEER-OUT out

We use BGP communities extensively to control traffic engineering. Every prefix is tagged with communities indicating region of origin, traffic type (anycast DNS vs unicast customer IP), and intended traffic policy. This allows us to drain all traffic from a specific region for maintenance by simply modifying the communities we announce from that region's border routers.
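To make the community scheme concrete, here is a minimal sketch of how a region-drain might be scripted. The community values, region codes, and the prepend-based drain mechanism are illustrative assumptions for this post, not Sigilhosting's actual scheme:

```python
# Hypothetical community layout and drain generator. The specific community
# values (<ASN>:1xx for region, <ASN>:2xx for traffic type) and the use of
# AS-path prepending as the drain mechanism are assumptions, not the real scheme.

ASN = 398330

REGION_COMMUNITIES = {          # assumed values: <ASN>:1xx = region of origin
    "frankfurt": f"{ASN}:101",
    "amsterdam": f"{ASN}:102",
    "london":    f"{ASN}:103",
}

TRAFFIC_TYPE_COMMUNITIES = {    # assumed values: <ASN>:2xx = traffic type
    "anycast-dns":      f"{ASN}:201",
    "unicast-customer": f"{ASN}:202",
}

def drain_route_map(region: str) -> str:
    """Emit an FRR-style route-map fragment that AS-path-prepends every
    prefix tagged with the given region's community, shifting inbound
    traffic toward other regions before maintenance."""
    community = REGION_COMMUNITIES[region]
    name = f"DRAIN-{region.upper()}"
    return "\n".join([
        f"bgp community-list standard {name} permit {community}",
        "route-map TRANSIT-OUT permit 5",
        f" match community {name}",
        f" set as-path prepend {ASN} {ASN} {ASN}",
    ])

print(drain_route_map("frankfurt"))
```

Because every prefix already carries its region community, draining is a policy change on the border routers rather than a per-prefix edit.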

Peering strategy

We maintain peering sessions at seven Internet Exchange Points: AMS-IX (Amsterdam), DE-CIX (Frankfurt), LINX (London), Equinix IX (Ashburn), NYIIX (New York), Equinix IX (Singapore), and JPNAP (Tokyo). Peering traffic currently accounts for approximately 45% of our total traffic volume.

IXP          Location    Port Speed   Peak Traffic   Peers
AMS-IX       Amsterdam   100 GbE      42 Gbps        847
DE-CIX       Frankfurt   100 GbE      38 Gbps        1,241
LINX         London      100 GbE      31 Gbps        592
Equinix IX   Ashburn     100 GbE      56 Gbps        438
NYIIX        New York    10 GbE       8 Gbps         214
Equinix IX   Singapore   100 GbE      22 Gbps        316
JPNAP        Tokyo       100 GbE      19 Gbps        271

Peering is important because it reduces both latency and cost. Traffic exchanged at an IXP travels a shorter path (fewer network hops) and doesn't incur transit fees. Our blended transit cost is approximately $0.40/Mbps committed, while peering is a flat port fee regardless of volume.
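The economics are easy to check back-of-the-envelope. Using the $0.40/Mbps transit figure from above and an assumed $1,500/month for a 100 GbE IXP port (an illustrative number, not a real quote), the port pays for itself at modest traffic levels:

```python
# Transit vs. peering breakeven. TRANSIT_PER_MBPS comes from the text;
# PORT_FEE is an assumed illustrative monthly cost for a 100 GbE IXP port.

TRANSIT_PER_MBPS = 0.40   # $/Mbps/month, blended committed rate (from the text)
PORT_FEE = 1500.0         # $/month, 100 GbE port incl. cross-connect (assumption)

def breakeven_mbps(port_fee: float, transit_per_mbps: float) -> float:
    """Traffic level above which a flat-fee IXP port beats transit pricing."""
    return port_fee / transit_per_mbps

mbps = breakeven_mbps(PORT_FEE, TRANSIT_PER_MBPS)
print(f"Port pays for itself above {mbps:,.0f} Mbps ({mbps / 1000:.2f} Gbps)")

# For scale: moving AMS-IX's 42 Gbps peak over transit instead would cost
print(f"Equivalent transit cost at 42 Gbps: ${42_000 * TRANSIT_PER_MBPS:,.0f}/month")
```

Under these assumptions the breakeven is under 4 Gbps, which is why every IXP in the table above is comfortably in the black.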

Failure testing

Every quarter, we run "chaos days" — deliberately failing transit links, border routers, and IXP connections. In our most recent test, we simultaneously failed both transit providers in Frankfurt plus our DE-CIX session. Traffic automatically rerouted: 60% through Amsterdam, 40% through London. Total reconvergence: 8 seconds for transit, 3 seconds for anycast DNS.

The best time to discover a failover bug is during a planned test, not during an actual outage at 3 AM.

We monitor BGP convergence time continuously. Our internal tooling measures the time between a route withdrawal and the last border router updating its forwarding table. Current 30-day average: 4.2 seconds, well within our 60-second target.
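The core of that measurement can be sketched in a few lines. The data structures and event feed here are simplifying assumptions; real tooling would consume BMP or streaming-telemetry feeds from the routers:

```python
# Simplified convergence measurement: given the time a route withdrawal was
# injected and per-router timestamps for the corresponding FIB update,
# convergence is defined by the SLOWEST router. Event collection is assumed
# to happen elsewhere (e.g. via BMP); this only shows the calculation.

from dataclasses import dataclass

@dataclass
class FibUpdate:
    router: str
    fib_updated_at: float  # epoch seconds when the forwarding table changed

def convergence_seconds(withdrawal_at: float, updates: list[FibUpdate]) -> float:
    """Time from route withdrawal to the last border router converging."""
    return max(u.fib_updated_at for u in updates) - withdrawal_at

updates = [
    FibUpdate("fra-br1", 1000.8),
    FibUpdate("fra-br2", 1001.9),
    FibUpdate("ams-br1", 1004.2),  # slowest router defines convergence
]
print(f"Converged in {convergence_seconds(1000.0, updates):.1f} s")
```

Tracking the last router rather than the first matters: a region is only safe to drain or fail once every border router has stopped forwarding toward the withdrawn path.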

What's next

We're evaluating Segment Routing v6 (SRv6) to replace MPLS-based traffic engineering within our backbone. SRv6 would give us more granular traffic steering and eliminate the operational complexity of maintaining a separate label distribution protocol. Phased rollout expected Q3 2026, starting with Americas regions.

We're also expanding peering to IX.br (São Paulo) and HKIX (Hong Kong), both expected live by end of Q2 2026.