Cloud Trust Is Still Trust
When federal reviewers can't verify how Microsoft encrypts data in transit, it's a reminder that cloud adoption means accepting someone else's risk profile.
ProPublica published a piece today about the Federal Risk and Authorization Management Program (FedRAMP) review of Microsoft’s Government Community Cloud High - the cloud platform used by the Justice Department, Energy Department, and defense contractors to handle some of the most sensitive government data in the country. Federal reviewers spent five years trying to get Microsoft to explain how data is encrypted as it moves between servers. They never got a satisfactory answer. The product was authorized anyway, largely because it was already too embedded to reject.
This isn’t a “Microsoft is bad” story. It’s a cloud story. And it goes deeper than most people realize.
The Encryption You Can’t See
When you run your own servers in your own datacenter, your encryption might not be perfect. But it’s your imperfect encryption on your hardware. You can audit it. You can fix it. You control the architecture.
When you move to someone else’s infrastructure, you inherit their architecture - including the parts they can’t easily explain. Microsoft’s challenge here wasn’t about willingness. Their cloud is built on decades of legacy code, and mapping exactly how data is encrypted as it hops between services turned out to be genuinely difficult. One reviewer compared the architecture to “a pile of spaghetti pies.” Other major providers built their cloud platforms from the ground up - Microsoft was retrofitting something that already existed.
That complexity isn’t unique to Microsoft. Any large cloud platform has layers that even the provider struggles to fully document. The difference between how Signal encrypts your messages - end-to-end, you hold the keys, nobody else can read them - and how a major cloud platform encrypts your data in transit is enormous. One is verifiable by design. The other requires trust.
It’s Not Just the Cloud Layer
And it’s not always the provider’s fault. Risk lives at every layer of the stack - sometimes in places nobody’s looking.
Spectre and Meltdown were CPU-level vulnerabilities. The actual processors - Intel, AMD, ARM - had design flaws that could let attackers read memory they shouldn’t have access to, potentially undermining encryption at the hardware level. It didn’t matter how good your cloud provider’s software was. The silicon underneath had a problem.
Heartbleed was a critical flaw in OpenSSL, the cryptographic library that practically everything on the internet relied on for encryption. One bug in one library, and suddenly the encryption protecting millions of servers was compromised. Not a cloud issue. Not a vendor issue. A foundational infrastructure issue.
The point is: trust isn’t just about your cloud provider. It’s about the entire stack - hardware, firmware, libraries, hypervisors, application code. Every layer introduces risk. The cloud just makes it harder to see.
Tolerated Risk vs. Unknown Risk
For most organizations, this is a tolerated risk - and that’s fine. You’re not the Department of Defense. Your data probably isn’t classified. The benefits of cloud (scale, availability, managed updates, not running your own datacenter) outweigh the fact that you can’t independently verify every encryption hop.
That’s a legitimate business decision. But it should be a conscious one.
The problem is when organizations never classify this risk at all. They moved to the cloud because it was time, and now they assume the provider handles security end to end. That’s not risk tolerance - that’s risk ignorance. And there’s a real difference between the two.
You Have Options
Depending on your risk profile, there are ways to reduce your exposure without abandoning the cloud entirely:
- Hybrid deployments - Keep your most sensitive data on infrastructure you control. Use the cloud for what it’s good at (collaboration, scale, availability) and keep the crown jewels closer to home. Not everything has to live in the same place.
- Encryption outside the provider’s loop - Solutions like customer-managed encryption keys, hardware security modules, and confidential computing environments let you encrypt data in a way that even your cloud provider can’t access. The provider hosts it, but they can’t read it. That changes the trust equation significantly.
- Limit the blast radius - This is the big one. Segment your environments. Don’t put everything in one tenant, one provider, one basket. If something goes wrong - a vulnerability, a breach, a provider misconfiguration - the damage should be contained, not catastrophic. The organizations that survive security incidents are the ones that designed for containment from the start.
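To make the second option concrete, here is a minimal sketch of encrypting data client-side with a key you hold, so the provider only ever stores opaque ciphertext. It assumes Python’s third-party `cryptography` package; a real deployment would keep the key in an HSM or key-management service rather than in process memory, and would handle rotation and access control.

```python
# Sketch: client-side encryption with a customer-managed key.
# The cloud provider stores only ciphertext it cannot decrypt.
from cryptography.fernet import Fernet

# Generated and held by you - never sent to the provider.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: account 1142")
# Upload `ciphertext`; the provider sees only opaque bytes.
assert b"account" not in ciphertext

# Only a holder of the key can recover the plaintext.
plaintext = f.decrypt(ciphertext)
assert plaintext == b"customer record: account 1142"
```

The point of the sketch is the trust boundary, not the library: because encryption happens before the data leaves your control, you no longer depend on the provider’s in-transit or at-rest encryption being documented correctly.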
None of these are free, and none are simple. But if your risk tolerance says “we can’t just trust the provider,” these are real options that exist today.
The Bigger Picture
When even a company the size of Microsoft can’t produce encryption documentation that satisfies federal reviewers after five years, it’s a signal. Not that the cloud is broken - but that the trust model requires more deliberate thought than most organizations give it.
The question isn’t whether to use the cloud. For most businesses, that ship sailed years ago. The question is: do you know what you’re trusting, what layers that trust depends on, and what you’d do if any of them turned out to be weaker than you assumed?
Source: ProPublica/Ars Technica, March 2026.

Ed Brownlee
CTO | N2CON