AI data centers in space? It's closer than you think. 🚀 Last week, Google unveiled Project Suncatcher, their roadmap to orbital compute. But how do you actually connect space-based AI clusters to Earth? Our CTO Brian Barritt breaks down the math: these systems need 1–5 Tbps of connectivity through unpredictable atmospheric conditions and constantly shifting satellites. That's where Aalyria comes in. Our Tightbeam (free space optics) and Spacetime (network orchestration) technologies are built for exactly this challenge. Read Brian's full breakdown 👇
Space-based data centers are no longer a thought experiment. Huge congratulations to our friends at Google on Project Suncatcher, a serious, systems-first exploration of on-orbit compute. 👏 Let me break down the requirements of deploying such a system and how I think Aalyria can be a valuable asset in making it happen.

How much ground ↔︎ space I/O will an orbital cluster need? Recent measurements from Sandvine put total global Internet traffic at roughly 33 exabytes per day, with a small set of hyperscaler platforms (including Google/YouTube, Amazon, Meta, Microsoft) together accounting for around half of that volume. Spread that multi-exabyte load across the hundred-plus Google-scale facilities serving it, and a single modern data center works out to be engineered for multi-terabit-per-second external ingress + egress, with tens of petabytes per day moving in and out of the facility (rough arithmetic in the first sketch below).

In Project Suncatcher, Bloom et al. describe an 81-satellite TPU cluster whose performance is “roughly comparable to a terrestrial datacenter,” linked to Earth via optical inter-satellite links and dedicated feeder links to ground. So, if we assume external I/O scales with compute, a conservative estimate is that each Suncatcher-class on-orbit cluster would require on the order of 1–5 Tb/s of aggregate Earth–space feeder-link capacity.

Where Aalyria fits:

Tightbeam (optical ground terminals): designed to make those feeder links possible. Our early units support 100–400 Gbps full duplex per terminal (range and conditions dependent) at 1550 nm. You would deploy several per site, and across multiple sites for weather/site diversity. That's how you support Tbps-class clusters on the feeder-link side (see the sizing sketch below).

Spacetime (temporospatial SDN): once you run optical feeder links through air, weather will impact the network. Spacetime proactively retasks steerable terminals, on the ground and in space. If Spacetime forecasts clouds obscuring one site, it pre-switches to a clear site and reconciles the on-orbit inter-satellite link graph, because a different ground look angle often means a different or adjacent satellite should carry the hop (a toy version of this logic is sketched below). Spacetime is built to optimize link topology, spectrum resource allocation, and path computation across L1/L2/L3 concurrently, anticipating fades and evolving paths in sub-second timeframes; we've demonstrated sending the updates that reconstitute a new topology after a satellite failure in less than 200 ms. This is the control plane you need when the physical fabric itself is dynamic.

💡 Put simply: if Suncatcher makes the compute and power story compelling, Tightbeam and Spacetime can make the networking story real. Early systems can start with a few hundred Gbps of optical feeder capacity and scale out to multi-site, multi-terabit aggregates, while Spacetime keeps the whole thing stitched together as weather and orbital geometry evolve.

https://lnkd.in/et2pnNei
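A quick unit check on the sizing above, as a minimal Python sketch. The conversion between daily volume and sustained rate is pure arithmetic; the specific PB/day and Tb/s values fed in are illustrative, not measurements.

```python
# Unit check: how a daily data volume maps to a sustained link rate, and back.
SECONDS_PER_DAY = 86_400

def pb_per_day_to_tbps(pb_per_day: float) -> float:
    """Convert petabytes/day of traffic into a sustained terabits/second rate."""
    bits_per_day = pb_per_day * 1e15 * 8
    return bits_per_day / SECONDS_PER_DAY / 1e12

def tbps_to_pb_per_day(tbps: float) -> float:
    """Convert a sustained terabits/second rate into petabytes/day."""
    bytes_per_day = tbps * 1e12 / 8 * SECONDS_PER_DAY
    return bytes_per_day / 1e15

# "Tens of petabytes per day" of external I/O is a multi-Tb/s sustained rate...
for volume in (10, 30, 50):                      # PB/day, illustrative values
    print(f"{volume} PB/day ≈ {pb_per_day_to_tbps(volume):.1f} Tb/s sustained")

# ...and a ~1–5 Tb/s Suncatcher-class feeder estimate maps back to tens of PB/day.
for rate in (1, 5):                              # Tb/s
    print(f"{rate} Tb/s ≈ {tbps_to_pb_per_day(rate):.0f} PB/day")
```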
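Next, a rough feeder-link sizing sketch. The 100–400 Gbps per-terminal figures come from the post above; the site counts, terminal counts, and the 70% clear-sky availability (treated as a simple de-rating factor) are illustrative assumptions, not Aalyria specifications.

```python
# Feeder-link sizing sketch: terminals per site x sites, de-rated by an assumed
# clear-sky availability, compared against a Tb/s-scale aggregate target.

def expected_aggregate_gbps(sites: int, terminals_per_site: int,
                            per_terminal_gbps: float,
                            site_availability: float) -> float:
    """Expected usable aggregate capacity across weather-diverse sites,
    de-rating each site by its clear-sky availability (a simplification)."""
    return sites * terminals_per_site * per_terminal_gbps * site_availability

TARGET_TBPS = 3.0          # mid-range of the ~1–5 Tb/s estimate above
AVAILABILITY = 0.7         # assumed per-site clear-sky fraction (illustrative)

for sites, terminals, per_terminal_gbps in [(2, 4, 200), (3, 4, 400), (4, 6, 400)]:
    usable_gbps = expected_aggregate_gbps(sites, terminals, per_terminal_gbps, AVAILABILITY)
    verdict = "meets" if usable_gbps >= TARGET_TBPS * 1000 else "falls short of"
    print(f"{sites} sites x {terminals} terminals @ {per_terminal_gbps} Gb/s "
          f"≈ {usable_gbps / 1000:.1f} Tb/s usable, {verdict} the {TARGET_TBPS} Tb/s target")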
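Finally, a toy version of the proactive retasking idea: choose a clear ground site ahead of a forecast fade, and note which satellite should carry the on-orbit hop for that site. This is a simplified illustration of the concept only, not Spacetime's actual model or API; every name, value, and threshold below is made up.

```python
# Toy "proactive retasking" logic: pick the ground site for the next interval
# from a cloud forecast, and note which satellite should carry the hop.

from dataclasses import dataclass

@dataclass
class GroundSite:
    name: str
    forecast_cloud_fraction: float   # 0.0 = clear, 1.0 = fully obscured (forecast input)
    best_visible_satellite: str      # satellite with the best look angle from this site
    capacity_gbps: float             # installed optical feeder capacity at the site

def plan_feeder_link(sites: list[GroundSite], max_cloud: float = 0.3) -> GroundSite:
    """Drop sites whose forecast exceeds the cloud threshold, then prefer the
    highest-capacity remaining site."""
    usable = [s for s in sites if s.forecast_cloud_fraction <= max_cloud]
    if not usable:
        raise RuntimeError("no site forecast to be clear enough; widen site diversity")
    return max(usable, key=lambda s: s.capacity_gbps)

# All values below are invented for illustration.
sites = [
    GroundSite("site-A", forecast_cloud_fraction=0.8, best_visible_satellite="sat-17", capacity_gbps=1600),
    GroundSite("site-B", forecast_cloud_fraction=0.1, best_visible_satellite="sat-22", capacity_gbps=1200),
    GroundSite("site-C", forecast_cloud_fraction=0.2, best_visible_satellite="sat-09", capacity_gbps=800),
]

choice = plan_feeder_link(sites)
print(f"Pre-switch the feeder link to {choice.name}; "
      f"carry the on-orbit hop via {choice.best_visible_satellite}")
```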
Comment from a Global Head of Industry, Telecom, Google Cloud:
Who would have thought Loon technology would be extremely valuable to Google once again. 🙂