IndieStack: Rethink the Cloud

What the AWS Outage Reveals About Cloud Dependency

Published October 21, 2025

On October 20, 2025, Amazon Web Services experienced a major outage that disrupted apps and websites around the world. Reports listed impacts across social, finance, gaming, media, and even parts of Amazon’s retail operations. See coverage from Reuters, Associated Press, and the Guardian.

AWS later said services had recovered and published an explanation pointing to DNS resolution issues affecting DynamoDB endpoints in the US-EAST-1 region, with knock-on effects on other services. See the official update on About Amazon (“AWS service disruptions update”) and the independent analysis from ThousandEyes (“AWS Outage Analysis, October 20, 2025”), which traced the incident timeline and blast radius.
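To make the failure mode concrete, here is a minimal sketch (not AWS’s implementation, and not a real failover strategy on its own) of how a client might check that a regional endpoint still resolves in DNS and prefer a fallback region when it does not. The hostnames follow AWS’s public endpoint naming but are used here purely for illustration.

```python
import socket

# Illustrative endpoint names; a real deployment would use its own config.
PRIMARY = "dynamodb.us-east-1.amazonaws.com"
FALLBACK = "dynamodb.us-west-2.amazonaws.com"

def resolves(host: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(host, 443)) > 0
    except socket.gaierror:
        return False

def pick_endpoint() -> str:
    """Prefer the primary region; fall back if its DNS fails to resolve."""
    return PRIMARY if resolves(PRIMARY) else FALLBACK
```

A real client would also need retries, health checks beyond DNS, and data already replicated to the fallback region: DNS resolution alone does not make a secondary endpoint usable.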

This was not just a technical story. It was a business story about control, cost, and resilience.


Why this matters for business leaders

It is easy to get lost in service names and status pages. The core business impact is simpler: when a single provider fails, revenue, customer support, and internal tooling can all stall at once, with no lever you control to restart them.


The hidden cost of over-reliance

Cloud convenience is real. So are the tradeoffs that do not show up in a demo: opaque bills, data-transfer fees that penalize leaving, proprietary APIs that resist migration, and workloads concentrated in a handful of regions such as US-EAST-1.

None of this means abandon the cloud. It means own your risk and your fallback plan.


When AWS is still a good choice

Using AWS can be the right call. The point is to be deliberate about where and how you depend on it.

The question is not “cloud or not.” It is “which parts must we control, and what is our plan when a provider fails for several hours?”


Independence does not mean isolation

Independent infrastructure is not about rejecting cloud services. It is about clear ownership and simple, portable building blocks.

This is not nostalgia. It is choosing clarity over guesswork so incidents are fixable.


A 30-day audit any company can run

You do not need a replatform to improve your position. Start with visibility and drills.

  1. Map your dependencies. List every third-party service that sits on the critical path for customer value. Note which ones you cannot operate without vendor action.
  2. Rehearse failure. Choose one service and simulate a four-hour loss. What breaks for customers, and what continues to work? How would your team respond?
  3. Document recovery. Write a short restore runbook for your database and files. Perform a test restore into a clean environment.
  4. Reduce unknown costs. Break your bill into compute, storage, transfer, and services. Remove anything you do not need in the next 90 days.
  5. Pick one fallback. Identify one component you can own directly, such as static assets or a secondary status page on a different provider, and implement it.
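Step 1 can start as something as simple as a table in code. Here is a minimal sketch, with made-up example services, of a dependency map that flags the critical-path services you cannot restore without vendor action:

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    critical_path: bool      # does customer value stop without it?
    self_recoverable: bool   # can we restore service without vendor action?

# Example entries only; a real inventory comes from your own architecture.
DEPENDENCIES = [
    Dependency("managed-database", critical_path=True,  self_recoverable=False),
    Dependency("email-provider",   critical_path=False, self_recoverable=False),
    Dependency("static-assets",    critical_path=True,  self_recoverable=True),
]

def vendor_locked(deps):
    """Critical-path services you cannot operate without vendor action."""
    return [d.name for d in deps if d.critical_path and not d.self_recoverable]

print(vendor_locked(DEPENDENCIES))  # → ['managed-database']
```

The output of this list is the shortlist for the failure drill in step 2: each vendor-locked service is a candidate for a simulated four-hour loss.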

Small, focused steps are better than a big plan that never ships.


Closing thought

Outages fade from headlines, but the lesson remains. Technology control equals business control. The cloud is a powerful tool. It should not be your only one. Own the parts that matter, keep options open, and make recovery a habit.



As we outlined in Why the Cloud Is Failing Us, the problem isn’t the technology itself but how dependence has replaced design. The AWS outage is a clear example of that dependency playing out in real time.

IndieStack - Helping companies cut costs and complexity by owning the technology they rely on.

© 2025 IndieStack. Built for the independent internet.