Welcome to Crashnews, a daily bite-sized news podcast for tech enthusiasts!
To grow together, join our community crsh.link/discord
To support our work and get special perks, join us at crsh.link/patreon
Welcome to Daily AI Crash News.
Today is May 6th, 2025.
- Hello.
- In this deep dive, we're digging into some
really key headlines making waves in AI right now.
- Yeah, lots happening.
- We'll be talking about Apple teaming up with Anthropic
on something called vibe coding.
Very mysterious.
- Intriguing name.
- Also, Google's plan to let kids under 13 use Gemini,
with some controls of course.
- It's a big one.
- And then there's the whole ongoing thing about Google
using web content for training its search AI,
even if people opt out.
- Still a hot topic.
- And finally, Visa getting into AI agents for,
well, spending your money.
- Right, AI powered shopping.
So, quite a range today, from coding tools
to kids' safety, data rights, and finance.
- Exactly, a lot to unpack.
Let's dive in.
- Okay.
- So first up, Apple and Anthropic,
this vibe coding platform.
What exactly is it?
- Well, the name's a bit catchy,
but essentially it's about using AI agents
to actually help write software code.
- Like an AI coding assistant.
- Pretty much, yeah.
A very advanced one integrated directly
into Apple's Xcode development environment.
- Okay, so developers building apps for iPhone, Mac,
all that stuff.
- Exactly.
And it's powered by Anthropic's Claude Sonnet model.
- Which is known to be pretty decent, right?
- Oh yeah, it's a capable large language model.
The idea is to automate parts of the coding process.
- Like what specifically?
Generating code snippets?
- Generating code, yeah, helping with editing,
maybe even testing and finding bugs, speeding things up.
- But it's internal for now.
Apple's just using it themselves.
- That seems to be the case.
No word on a public release.
It makes you think, especially after those reported delays
with their Swift Assist AI tool earlier.
- Right, so maybe this partnership with Anthropic
is a way to kind of jumpstart their efforts.
- It certainly looks like a strategic move.
You know, we're seeing more of this: big tech giants
partnering with specialized AI labs.
- Because building this stuff is hard.
- Incredibly complex, yes.
And Anthropic has that focused expertise.
It says a lot about where software development
might be headed.
- Faster development maybe.
- Faster cycles potentially, yeah.
But also maybe it changes the developer's role:
more focus on high-level design, overseeing the AI.
- Interesting.
And separately, Anthropic's been in the news too, right?
Something about a share buyback?
- That's right, offering employees a chance
to sell some shares.
The valuation mentioned was around $61.5 billion.
So they're doing okay.
- Clearly.
And they upgraded Claude's research mode too.
- Yeah, longer runtime, up to 45 minutes,
and a new Integrations feature connecting Claude
to other apps like Jira or PayPal.
So they're pushing forward on multiple fronts.
- Okay, so a lot going on there.
Let's shift gears now to Google and Gemini for children,
under 13s.
- Yes, quite a development.
The plan is to make Gemini accessible on Android devices
that are managed using Google's Family Link
parental controls.
- So parents have to okay it.
- Right, parents get an email notification
and presumably have to enable it.
The idea, Google says, is for things like homework help
or maybe interactive storytelling.
- Okay, sounds potentially useful,
but there are warnings too.
- Oh, definitely.
Google's being upfront that Gemini can make mistakes,
hallucinations, inaccuracies, and importantly,
that kids might stumble upon inappropriate content.
- Yeah, that's the worry, isn't it?
- It is.
So Google's advising parents to talk to their kids
about AI's limitations, you know, critical thinking
and warning them not to share sensitive personal info.
- And Family Link has controls,
but how effective can they really be with a chatbot
that can generate almost anything?
- That's the million dollar question.
What are the ethical implications here?
The potential developmental impacts on young kids.
Parental controls are a tool,
but they might not be foolproof against a powerful,
sometimes unpredictable AI.
- It feels like a space where we need a lot more discussion,
maybe more guard rails.
- Absolutely, responsible deployment is critical,
especially with younger users.
- Okay, let's move to the next Google story.
This one's about data,
specifically how Google trained its search AI.
- Right, this came out
during the Department of Justice antitrust trial,
confirmation that Google can use web content
to train its search AI models,
even if the website publisher has tried to opt out
using certain methods.
- Wait, even if they opted out, how does that work?
- Well, it seems the specific opt-out protocols,
like Google's Google-Extended crawler token,
mainly restrict Google's DeepMind division.
- Ah, not the main search division.
- Apparently not to the same extent.
The search team seems to have,
let's say broader permissions
to use publicly available web data
for training its own systems.
- So if you run a website,
the only real way to stop Google search
using your content for AI training is,
what, blocking Google entirely?
- Pretty much, using the robots.txt file
to block Googlebot from crawling your site.
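For listeners who run a site, here's what that all-or-nothing choice looks like in practice. This is a minimal sketch of a robots.txt, assuming the standard Robots Exclusion Protocol directives; Googlebot and Google-Extended are Google's documented crawler tokens:

```
# Blocking Google's main crawler removes the site from
# Google Search results entirely -- the "nuclear option".
User-agent: Googlebot
Disallow: /

# By contrast, this token opts out of Gemini/DeepMind training
# only -- per the trial testimony, it does not stop the search
# team from using the content for its own AI training.
User-agent: Google-Extended
Disallow: /
```

The asymmetry between those two blocks is exactly the bind being described: the narrow opt-out doesn't cover search AI training, and the broad one costs you your traffic.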
- But that's huge.
That means you disappear from Google search results.
You lose all that traffic.
- Exactly, it puts content creators in a really tough spot.
You potentially lose your audience,
but your content might still be ingested and used
to make Google's AI better, which competes with you.
- That seems fundamentally unfair to many creators.
- It highlights a huge tension, doesn't it?
The AI needs vast amounts of data,
often publicly accessible data,
but the creators of that data
feel like they're losing control
and maybe not being compensated.
- You'd expect more pressure now
to revisit those opt-out standards, right?
- You would think so.
This whole situation really brings the complexity
of AI data usage and creator rights right to the forefront.
What's fair use?
What are the economic impacts?
Big questions for the future of the web.
- Definitely ones we'll be watching.
Okay, finally, let's talk about Visa
and AI stepping into our shopping carts, essentially.
- Yeah, Visa's looking ahead
with something they're calling Visa Intelligent Commerce.
- Sounds futuristic.
- It does.
The idea is to enable AI agents, think
your personal AI assistant,
to securely make purchases for you online.
- How secure?
- Well, they're partnering with big names
like OpenAI and Microsoft,
and the plan involves using tokenized credentials.
So instead of sharing your actual credit card number,
the AI uses a secure one-time use token.
- Ah, like Apple Pay or Google Pay uses tokens.
- Exactly, similar principle,
but applied to AI agents making the purchase autonomously.
- So what could this be used for?
Like automatically reordering groceries?
- That's the kind of thing, yeah.
Automating routine, everyday purchases,
paying recurring bills, perhaps.
It could really change how we handle, you know,
those mundane shopping tasks.
- Convenience is the big selling point, I guess.
- Convenience, efficiency, sure.
But it immediately brings up some pretty serious questions.
- Like what?
Trust.
- Trust, definitely.
Privacy, what data is the AI accessing?
Control, how much oversight do you have?
And reliability, can you really trust an AI
not to mess up your finances?
- Yeah, letting an AI spend your money
requires a huge leap of faith.
- It really does.
What are the safeguards against errors or fraud
or the AI just misunderstanding your instructions?
The potential benefits are clear,
but the risks around security, privacy,
and user control need very careful handling.
- So a lot of potential, but also a lot to figure out
before AI becomes our personal shopper.
- Precisely.
Robust security and clear user controls
will be absolutely essential if this is gonna take off.
- Okay, so let's just quickly recap.
A really busy landscape today.
- For sure.
- We saw Apple partnering with Anthropic,
pushing into AI assisted coding,
potentially changing software development.
- We do like that name, vibe coding.
- Google planning to bring Gemini to kids,
which comes with a whole host of opportunities
and serious concerns.
- Big ethical questions there.
- The confirmation about Google's broad use of web data
for training search AI,
putting content creators in a bind.
- The data rights debate continues.
- And Visa exploring AI agents
for automated secure commerce, hinting at a future
where AI manages more of our daily transactions.
- Yeah, AI moving into finance in a new way.
- It really shows how AI is weaving itself
into, well, almost everything.
From how software gets made, to how kids learn,
how information is used, and even how we might shop.
- It's becoming deeply integrated, often behind the scenes.
- So as you, our listener, hear about all this,
it's worth thinking.
How are these trends gonna shape your interactions
with technology down the line?
- Yeah, and maybe the bigger question
that emerges from all this is,
as AI gets embedded more deeply,
what are those fundamental issues of trust,
control, and maybe even responsibility
that we really need to grapple with?
Where is this all pointing?
- Definitely something to mull over.
Thanks for joining us for this deep dive
into today's AI news.
- Always fascinating, thanks for listening.
- Until next time.