The AI Briefing

In this brief on-the-go episode, Tom discusses the risks of building businesses on centralized AI infrastructure. Sparked by Cloudflare's recent major outage, he explores what happens when AI vendors go down and how companies should think about their risk appetite when depending on services like OpenAI, Anthropic, or other AI providers. From wrapping entire business strategies around AI APIs to considering self-hosted alternatives, Tom breaks down the strategic considerations for both startups and established businesses looking to integrate AI into their core operations.

Key Topics
  • The Cloudflare outage and its implications for internet infrastructure
  • Risk management when building on third-party AI vendors
  • Different deployment options: OpenAI direct, Azure AI playground, or self-hosted models
  • How risk appetite should differ between startups and established businesses
  • Strategic considerations for making AI a core part of your business
  • The AI bubble discussion and vendor dependency concerns
Need help navigating AI infrastructure decisions for your business? Get in touch at https://www.concepttocloud.com
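The "what's the backup plan" question Tom raises in this episode can be sketched in code. The following is a hypothetical failover wrapper, not anything from the episode: the two provider functions are stand-ins for whichever hosted or self-hosted models a business might choose, and the first one deliberately raises an error to simulate a vendor outage.

```python
# Minimal sketch of the vendor-failover idea: try providers in order,
# fall back when one is down. Provider functions are hypothetical stand-ins.
from typing import Callable, Optional


def primary_provider(prompt: str) -> str:
    # Stand-in for a call to a hosted AI API; raising here
    # simulates the vendor being unavailable during an outage.
    raise ConnectionError("primary vendor unavailable")


def fallback_provider(prompt: str) -> str:
    # Stand-in for a backup, e.g. a self-hosted open-source model.
    return f"[fallback] response to: {prompt}"


def complete_with_failover(prompt: str,
                           providers: list[Callable[[str], str]]) -> str:
    """Try each provider in order and return the first successful response."""
    last_error: Optional[Exception] = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # in production, catch narrower error types
            last_error = exc
    raise RuntimeError("all providers failed") from last_error


print(complete_with_failover("hello", [primary_provider, fallback_provider]))
# → [fallback] response to: hello
```

The ordering of the `providers` list is where the risk-appetite trade-off lives: a startup might list only one vendor, while an established business might put a self-hosted model last as a degraded-but-available option.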

What is The AI Briefing?

The AI Briefing is your 5-minute daily intelligence report on AI in the workplace. Designed for busy corporate leaders, we distill the latest news, emerging agentic tools, and strategic insights into a quick, actionable briefing. No fluff, no jargon overload—just the AI knowledge you need to lead confidently in an automated world.

Tom Barber (00:00)
Hi folks, just a quick one today as I'm very much on the move, but welcome to the AI Briefing podcast. I was having a think, on the fly, about the issues and pitfalls that can come with the way the modern internet is architected. For anyone who didn't see the news today, well, yesterday now, Cloudflare suffered a large outage, caused by some suspicious or erroneous traffic going through their system, that took out a large portion of the internet.

And of course, this makes me think from an AI perspective: when you've got such a reliance on cloud infrastructure, just as so many customers do, including many of the AI organizations, what's the backup plan when it doesn't work? When all the systems go down and you've built a product, or you have a large reliance on a specific AI vendor to provide, you know, GPT support or API support or image generation support, what happens? What's the risk, and what's your appetite for risk when it comes to working in that space?

The boss of Google came out today, similarly talking about the AI bubble and the potential for it bursting. As you build these companies that wrap so many AI services, you also have to weigh up your appetite for what is acceptable risk when it comes to deploying your entire business strategy around such a hot commodity.

Now, for a lot of people it's fine: if you're a super-early startup and you basically want to create something in the AI space, then sure, go ahead and wrap your business strategy around OpenAI or Anthropic or whoever's model you want to use. But if you're a more stable, long-term business that's been around for a while and you're considering dabbling in AI, especially if you want to make it a core part of your business, you then have to weigh up the risk and the way you want to deal with it. Do you use OpenAI directly? Do you use the models hosted in Microsoft's Azure AI playground? Do you self-host one of the open source models? It depends on what features and functionality you require as to what you can do when it comes to actually deploying that stuff.

So anyway, just a quick one that was a bit more food for thought than anything groundbreaking, but let me know what you think in the comments, and I will see you again tomorrow.