As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need in order to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic servicing system, e.g. some sort of manipulator that can move along rails and reach every card in every server, which usually means relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harnesses, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable not only of replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
1. Satellites are mostly run at room temperature. It doesn't have to be that way but it simplifies a lot of things.
2. Every satellite is a delicately balanced system where heat generation and actively radiating surfaces need to be in harmony during the whole mission.
Preventing the vehicle from getting too hot is usually a much bigger problem than preventing it from getting too cold. This might be surprising because laypeople usually associate space with cold. In reality you can always heat if you have energy but cooling is hard if all you have is radiation and you are operating at a fixed and relatively low temperature level.
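The radiation-only constraint can be made concrete with the Stefan-Boltzmann law. Here is a rough sketch; the 1 MW load, 300 K radiator temperature, and 0.9 emissivity are assumed illustrative values, not figures from the article, and real radiators also absorb sunlight and Earth IR, so this is a best case:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# All values are illustrative assumptions, not figures from the article.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """Area needed to reject power_w by radiation alone at temp_k,
    ignoring sunlight, Earth albedo, and other environmental loads."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Rejecting 1 MW at roughly room temperature (300 K):
print(f"{radiator_area(1e6, 300.0):.0f} m^2")  # about 2400 m^2, a best case
```

Note how steeply the T^4 term punishes operating near room temperature: the same megawatt at 400 K would need less than a third of the area, but the chips can't run that hot.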
The bottom line is that running a datacenter in space doesn't make much sense from a thermal standpoint, and there must be other compelling reasons for a decision to do so.
Free cooling?
Doesn't make much sense to me. As the article points out, the radiators need to be massive.
Access to solar energy?
Solar is more efficient in space, I'll give them that, but does that really outweigh the whole hassle of putting the panels in space in the first place?
Physical isolation and security?
Against manipulation, maybe, but not against denial of service. A willfully damaged satellite is something I expect to see in the news in the foreseeable future.
Low latency comms?
Latency is limited by distance and the speed of light. Everyone with a satellite internet connection knows that low latency is not one of its strengths.
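The speed-of-light floor is easy to estimate. A sketch with assumed, typical altitudes (these are illustrative numbers, and real links add processing, routing, and slant-range overhead on top):

```python
# Light-travel-time floor for a satellite hop, ignoring processing and
# routing delays. Altitudes are typical assumed values, not measurements.
C_KM_S = 299_792.458  # speed of light, km/s

def min_round_trip_ms(altitude_km):
    """Ground -> satellite -> ground, straight up and back down."""
    return 2 * altitude_km / C_KM_S * 1000

print(f"LEO (550 km):    {min_round_trip_ms(550):.1f} ms")     # ~3.7 ms
print(f"GEO (35,786 km): {min_round_trip_ms(35_786):.1f} ms")  # ~238.7 ms
```

So LEO is not hopeless on paper, but the satellite is only ever briefly over any given ground station, which is why real constellations still see tens of milliseconds.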
Marketing and PR?
That, probably.
EDIT:
Thought of another one:
Environmental impact?
No land use and no thermal stress on rivers on the one hand, but the huge overhead of a space launch on the other.
1. YOLO. Yeet big data into orbit!
2. People will pay big bucks to keep their data all the way up there!
3. Profit!
It could make sense if the entire DC was designed as a completely modular system. Think ISS without the humans. Every module needs to have a guaranteed lifetime, and then needs to be safely yet destructively deorbited after its replacement (shiny new module) docks and mirrors the data.
Doesn't this massive surface area also mean a proportionately large risk of getting damaged by orbital debris?
Cooling is one of the main challenges in designing data centers.
Reliable energy? Possible, but difficult -- need plenty of batteries
Cooling? Very difficult. Where does the heat transfer to?
Latency? Highly variable.
Equipment upgrades and maintenance? Impossible.
Radiation shielding? Not free.
Decommissioning? Potentially dangerous!
Orbital maintenance? Gotta install engines on your datacenter and keep them fueled.
There's no upside, it's only downsides as far as I can tell.
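The "keep them fueled" point above can be roughed out with the Tsiolkovsky rocket equation. The delta-v budget and specific impulse below are assumed illustrative figures (drag makeup in low LEO with electric thrusters), not numbers from the thread:

```python
import math

# Rough station-keeping propellant budget via the Tsiolkovsky rocket
# equation. Delta-v and Isp values are assumed for illustration only.
G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_m_s, isp_s):
    """Fraction of total vehicle mass burned to deliver delta_v_m_s."""
    return 1 - math.exp(-delta_v_m_s / (isp_s * G0))

# ~50 m/s/year of drag makeup in low LEO, electric thrusters (Isp ~1500 s):
frac = propellant_fraction(50, 1500)
print(f"{frac:.2%} of vehicle mass per year")  # well under 1% per year
```

The mass fraction itself is small; the harder part is that the propellant, thrusters, and their power draw all have to be launched, serviced, and eventually resupplied.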
This premise is basically false. Most datacenter hardware, once it has completed testing and burn-in, will last for years in constant use.
There are definitely failures, but failure rates are very low unless something is wrong, like bad cooling, vibration, or just a bad batch of hardware.
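Even a low failure rate adds up across a large fleet. A rough sketch, where the 2% annualized failure rate (AFR) and fleet size are assumed illustrative numbers:

```python
# Expected replacement events per year for a fleet, assuming a simple
# annualized failure rate (AFR); the 2% AFR is an illustrative figure.
def expected_failures(num_servers, afr=0.02, years=1.0):
    """Expected failures = fleet size x AFR x duration."""
    return num_servers * afr * years

# A 10,000-server fleet at 2% AFR still sees failures every couple of
# days on average, and in orbit each one needs robotic servicing or a
# resupply launch rather than a tech with a cart:
print(expected_failures(10_000))  # 200.0 expected failures per year
```

On the ground, 200 swaps a year is routine; the argument above is about what each swap costs when nobody can walk down the aisle.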
- lots of cheap power: deploy 100s of ASICs and let each of them fail as they go
Paradoxically, a datacenter in LEO is cheaper than one on the ground, and it has a bunch of other benefits, for example physical security.
A Falcon Heavy launch is already under $100M, and in the $1400/kg range; Starship’s main purpose is to massively reduce launch costs, so $1000/kg is not optimistic at all and would be a failure. Their current target is $250/kg eventually once full reusability is in place.
Still far from the dream of $30/kg but not that far.
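Those $/kg figures translate directly into a per-server launch surcharge. A sketch, where the 20 kg server mass is an assumed illustrative value (and radiators, panels, and structure would add to it):

```python
# Launch cost per server at the $/kg figures mentioned above.
# The 20 kg server mass is an assumed illustrative value; it excludes
# the matching solar panel and radiator mass.
def launch_cost_usd(server_mass_kg, usd_per_kg):
    return server_mass_kg * usd_per_kg

for rate in (1400, 1000, 250, 30):  # current, "failure", target, dream
    print(f"${rate}/kg -> ${launch_cost_usd(20, rate):,.0f} per 20 kg server")
```

At $250/kg the launch surcharge starts to look like a rounding error next to the hardware itself; at $1400/kg it can exceed the cost of the server.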
The original “white paper” [1] also acknowledges that a separate launch is needed for the solar panels and radiators, at a 1:1 ratio with the server launches, which is ignored here. I think the author leaned a bit too much on their deep-research AI assistant's output.