Welcome to Crashnews, a daily bite-sized news podcast for tech enthusiasts!
To grow together, join our community crsh.link/discord
To support our work and have special perks, support us on crsh.link/patreon
- Welcome to Crash News.
Today is May 14th, 2025.
Ever feel like the tech world is just moving way too fast?
Like you're trying to keep up,
but it's leaving you behind.
- Yeah, it's a lot sometimes.
- Well, today we're kind of hitting pause.
We're doing something a bit different.
We're not just, you know, listing a bunch of headlines.
- Right.
- We're carefully pulling out the really interesting bits,
the stuff that actually matters and connecting them
so it just, well, clicks and hopefully sticks.
- Think of it as your shortcut maybe
to feeling totally in the loop.
- Exactly.
Explain simply with a bit of fun too.
We'll start kind of big picture,
then zoom in, adding more detail,
but always keeping it clear, engaging.
- And if we hit any techie terms that sound like,
I don't know, complete jargon.
- We'll pause.
We'll take a quick step back, explain it, no problem.
- Precisely.
So our goal today is really to dive
into the most significant AI news and related tech stuff
from the past day or so.
- We're using news reports analysis
to give you the essential understanding,
but you know, without the information overload.
- And hey, if you find this helpful,
if you enjoy these deep dives,
there are a few really easy free ways you can support us.
- Please do.
- You can give us a rating, tap that like button,
subscribe to Crash News on whatever platform
you're listening on right now, simple stuff.
- We also have our Discord community.
It's growing, lots of good chat there.
- And if you're feeling extra supportive,
you can become a patron.
We always appreciate that.
- Definitely.
- Okay, so to get us thinking,
with all this AI stuff happening, I mean, it's incredible.
Where do you think we'll actually feel the biggest impact,
like in our daily lives over the next year or so?
- That's a great question to kick off with.
Okay, let's jump in.
Because it really does feel like AI is just,
well, everywhere you look now, doesn't it?
- It absolutely does.
- Let's start with something loads of people probably use,
TikTok.
They've just rolled out this new feature called AI Alive.
- Oh, yes, I saw it.
- It's pretty neat, actually.
It uses smart AI editing tools.
Basically, it takes your still photos and animates them,
turns them into short videos right in your stories.
- So like your photos suddenly get movement, dynamic effects.
- Exactly, imagine that.
It's fascinating how these, you know,
quite sophisticated AI creative tools
are just being baked into platforms
billions of people use daily.
- Right, TikTok's reach is huge.
They're essentially putting advanced animation,
well, maybe not advanced advanced,
but pretty cool animation capabilities
into everyone's pocket.
- It really could change how we think about creating
and sharing those short videos, couldn't it?
- Absolutely.
And speaking of media, let's talk audio.
Amazon's Audible, they're significantly expanding
their library of AI narrated audio books.
- Okay, so more computer voices reading books.
- Essentially, yeah, they're partnering with publishers
and they now offer, get this,
over a hundred different AI voices
and not just English, Spanish, French, Italian too.
- Wow, a hundred voices, that's quite a range.
Makes you remember Spotify did something similar,
didn't they, earlier this year with ElevenLabs, I think.
- That's right, so it really seems like AI narration
is quickly becoming, maybe not the default,
but a standard option in the audio book world.
- And if you connect that to the bigger picture,
AI moving beyond just text and images, right,
into more complex media.
- Exactly, more books can become audio books faster.
That could mean more accessibility potentially,
making literature available to more people.
- Yeah, that's a good point.
Though it does raise questions, doesn't it,
about the future for human narrators,
the unique sort of quality they bring.
- It does, that's a whole other conversation,
but definitely something to watch.
- Okay, let's shift gears a bit.
Talk about something happening more behind the scenes
in the sort of AI engine room.
The team behind WizardLM, you might remember them,
they were part of Microsoft's AI group.
- Yeah, I recall the name.
- Well, they've now joined Tencent's AI division,
it's called Hunyuan, and they haven't wasted any time.
- Oh, what have they done?
- They quickly launched a new AI model.
It's called Hunyuan-TurboS-0416, quite a name.
And they're claiming it performs better
than some of Google's current AI offerings.
- Wow, okay, straight out of the gate with fighting talk.
- Right, that's a pretty bold claim
in such a competitive space.
- It really just highlights the intense competition between the big tech companies, doesn't it? They're all trying to snap up the top AI talent and push out these cutting-edge models super fast.
- And comparing it directly to Google.
- Yeah, it shows how quickly things are moving and how determined they are to lead.
- Definitely. Now, here's something I found really interesting. Researchers have been looking into how ChatGPT is being used for scientific research, but specifically in countries where OpenAI has actually, you know, prohibited its use.
- Ah, so where it's officially banned. How do they even track that?
- Well, they use these tools called AI word-choice classifiers. Think of them like detectors that spot writing patterns suggesting AI involvement, like maybe using certain fancy words really often.
- Okay, I see. So what did they find?
- They found that, yeah, it looks like ChatGPT use for research is actually higher in some of these prohibited countries. By August 2023, a much bigger chunk of academic papers shared online in China showed signs of AI content compared to countries where access was legal.
- Huh. That really makes you wonder how effective these geographical blocks actually are, right? In our super-connected world.
- Exactly. Companies might try to limit access, but the reality is, if people want to use these tools, especially researchers, they often find ways: VPNs, things like that.
- Yeah, makes sense. Did the research find anything else?
- It did, and this is quite revealing. Using ChatGPT seemed to correlate with more people looking at and downloading these papers.
- Okay, so more views, more downloads.
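For the curious, the word-choice classifier idea described a moment ago can be caricatured in a few lines of Python. To be clear, the flagged words and the threshold below are invented purely for illustration; they are not the actual study's features or method:

```python
# Toy word-choice "classifier": flag text whose rate of telltale words
# exceeds a threshold. The word list and threshold are invented for
# illustration only, not taken from the actual research.
SUSPECT_WORDS = {"delve", "intricate", "showcasing", "pivotal", "realm"}
THRESHOLD = 0.02  # suspect words per total word

def looks_ai_written(text: str) -> bool:
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return False
    hits = sum(w in SUSPECT_WORDS for w in words)
    return hits / len(words) > THRESHOLD

print(looks_ai_written("We delve into the intricate realm of pivotal results."))  # True
print(looks_ai_written("We measured the results and report them below."))         # False
```

Real classifiers are far more statistical than this, of course, but the core intuition is the same: certain words show up at suspiciously different rates in AI-assisted writing.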
- Right. But it didn't actually change how often other scientists cited those papers or where they ended up getting published in journals.
- So more eyeballs didn't necessarily mean more scientific impact or recognition?
- Precisely. It kind of suggests that while AI might help with the writing process, maybe make things look slicker, get noticed initially, the actual core quality and importance of the research itself is still what really counts in the scientific community.
- That's a crucial point. Okay, moving on to how we actually use these AI systems.
There's been some analysis looking at the chat interfaces we mostly use now, like talking to ChatGPT.
Uh-huh. The text box approach.
The main one seemed to be that these interfaces basically force us, the users, to adapt to
how the AI model works.
Instead of the other way around.
Exactly.
Instead of the AI adapting to us.
The expectation, or maybe the hope, is that the next generation of AI assistants will
have much more flexible UIs, user interfaces.
And they'll be better at just telling us upfront what they can and can't do.
Yeah.
That's a really important point.
These interfaces were great for getting started, right?
Making large language models accessible.
But maybe they're not the perfect long-term solution.
Seems like it.
We should probably expect future AI assistants to be more intuitive, maybe more personalized,
learning our needs over time.
Yeah, adapting to us.
And it's not just the interface, it's how companies are thinking about AI fundamentally.
Look at Duolingo.
The language learning app.
Yeah.
They've announced this major strategic shift.
They want to be an AI-first company.
AI-first.
What does that actually mean for them?
Well, they're integrating AI across everything.
Their products, their internal workflows, they've got these new core principles like
start every task with AI, dedicate time for teams to learn about AI, experiment carefully,
keep up technical standards.
Wow.
That sounds like a huge commitment.
A big bet on AI's transformative power.
It really does.
And Duolingo's move probably reflects a broader trend, right?
Companies realizing AI isn't just an add-on feature.
It's foundational tech that could reshape their whole business.
Totally.
And their focus on learning and careful experimentation seems smart.
You're going to have to tread carefully in this fast-changing space.
Absolutely essential.
Now, this next one is pretty fascinating, digs a bit deeper into AI architecture.
A Japanese AI company, Sakana AI.
Sakana AI.
Okay.
They've introduced a new AI model.
They're calling it the continuous thought machine.
Continuous thought machine.
Intriguing name.
What's different about it?
Well, it's inspired by how the brain works.
Specifically this idea of neural timing.
Basically, the individual sort of neurons in this AI can remember their past actions
and coordinate based on the timing of the activity.
Okay.
That sounds complex.
Brain inspired.
Does it work better?
Right now, maybe not across the board compared to traditional models, but it offers a potentially
big advantage.
Researchers seem to get a better understanding of how it reaches its conclusions.
It's more transparent.
Oh, understandability.
That's huge in AI.
Exactly.
And interestingly, like some other recent AI models focused on reasoning, it seems to
give better answers if you give it more time to think.
Let it mull things over.
Kind of, yeah.
What's really exciting about Sakana's work is this push towards more biologically inspired
AI, moving away from purely statistical models.
It could lead to AI that's not just capable, but also more understandable, and that's crucial
for trust.
Definitely a space to watch.
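As a loose illustration of that "neurons that remember their own past activity" idea, here is a toy sketch. The weights and history length are arbitrary, and this is emphatically not Sakana's actual architecture, just a feel for the concept:

```python
import numpy as np

# Toy "neuron with a memory of its own activity": its output depends on
# a short history of past activations, not just the current input.
# Purely illustrative -- NOT Sakana's continuous thought machine.
class TimingNeuron:
    def __init__(self, history_len=5):
        self.history = np.zeros(history_len)
        self.w_in = 0.8                                    # input weight (arbitrary)
        self.w_hist = np.linspace(0.1, 0.5, history_len)   # history weights (arbitrary)

    def step(self, x):
        # Activation mixes the new input with the neuron's own past activity.
        a = np.tanh(self.w_in * x + self.w_hist @ self.history)
        self.history = np.append(self.history[1:], a)
        return float(a)

neuron = TimingNeuron()
outputs = [neuron.step(1.0) for _ in range(8)]
# The input never changes, yet the output drifts as history builds up.
print([round(o, 3) for o in outputs])
```

The point of the sketch: even with a constant input, the neuron's response keeps changing as its own activity history accumulates, which is a crude analogue of the timing-dependent behavior described above.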
Okay.
Quickly on the more technical research side, a couple of things.
First, something called EAR.
EAR.
Like the thing you hear with-
No.
EAR.
It stands for visual autoregression without quantization.
Okay.
That sounds very technical.
Break it down.
Okay.
Simpler terms.
Yeah.
It's a new way for AI to generate images directly.
But it avoids a step called quantization.
Think of quantization like rounding numbers.
It simplifies data, but you can lose fine detail.
Ah, okay.
So EAR skips that simplification step.
Right.
The idea is it could lead to more nuanced, more detailed AI generated pictures, smoother
transitions, finer details.
Got it.
Subtle, but potentially significant for image generation quality.
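To make the rounding analogy concrete, here is a tiny sketch of how quantization discards fine detail. The number of levels is arbitrary and purely illustrative:

```python
import numpy as np

# A smooth ramp of values standing in for fine visual detail.
fine = np.linspace(0.0, 1.0, 9)

# Quantize to 4 discrete levels -- the "rounding" step being avoided.
levels = 4
coarse = np.round(fine * (levels - 1)) / (levels - 1)

print(np.unique(fine).size)    # 9 distinct values before
print(np.unique(coarse).size)  # 4 distinct values after: detail is lost
```

Nine distinct values collapse into four, and that lost nuance is exactly what skipping the quantization step is meant to preserve.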
Then there's UCGM, short for unified training and sampling for generative models.
Another acronym, UCGM, what's that about?
This is basically a common framework, like a shared set of tools.
You can use it for training different kinds of generative AI models, the ones that create
images, text, audio, all that stuff, and also for getting those models to actually produce
the content.
So like a standard toolkit for building and using these creative AIs.
Pretty much.
It could help standardize and maybe streamline how researchers develop these generative models,
make things more efficient.
Makes sense.
A shared foundation.
Okay.
Moving into maybe more practical stuff, the community around AI.
Yeah.
Meta just held its first LlamaCon hackathon.
Llama, their family of AI tools.
Oh, right.
LlamaCon.
How did that go?
Big success, apparently.
Hundreds of developers took part, built projects, lots of submissions.
The winners were picked based on innovation and, you know, how well they were actually
built, the technical execution.
Hackathons like that are so valuable, aren't they?
For driving innovation, building a community, people get to play with the latest tech, come
up with cool stuff.
Definitely.
And speaking of useful tools for anyone who deals with audio transcription interviews,
podcasts, whatever, Hugging Face just released a new endpoint for the Whisper model.
That's the open source speech-to-text AI, right?
Pretty popular.
Exactly.
And this new endpoint is apparently significantly faster.
They're saying up to eight times faster at transcribing audio.
Eight times.
Wow.
That could be a massive time saver for a lot of people.
Huge.
Yeah.
The speed and efficiency of tools like that whisper endpoint are really key.
Making AI tech more practical, more accessible for real world uses, transcription, better
voice interfaces, all that.
Totally.
And the next one is an interesting strategic move in the big tech world.
Microsoft.
Okay.
What are they up to?
They're planning to host Elon Musk's Grok AI model on their Azure AI Foundry platform.
Grok.
On Azure.
But isn't Microsoft super tight with OpenAI, with ChatGPT?
That's what makes it noteworthy.
It suggests Microsoft is looking to maybe broaden its AI offerings, not put all its
eggs in the OpenAI basket.
Solidify their position as the platform for AI development maybe?
Give users more choice?
Access different models with different strengths?
Seems like it.
They might even announce this Grok hosting at their Build Developer Conference, which
is coming up very soon.
Maybe even this week.
Interesting.
It really shows how competitive that cloud AI platform market is.
Offering more model diversity could attract more customers with different needs, setting
themselves up as the central AI hub.
Makes sense strategically.
Okay.
Let's turn to startups and investment.
Y Combinator, the big startup accelerator.
Yeah.
Always influential.
What are they focused on?
They've outlined their themes for the summer 2025 batch.
And a major one is full stack AI companies with a big emphasis on AI agents.
AI agents.
So AI systems that can actually do tasks autonomously.
Exactly. Potentially replacing, or maybe more likely enhancing, traditional roles across industries. Think virtual assistants, healthcare automation, personalized education, lots of possibilities.
YC focusing heavily on agents signals a growing belief in AI moving beyond just assisting us, right, to operating more independently.
Yeah, it could spark a whole new wave of AI-powered services and businesses soon.
Definitely one to watch. Anything else on the model front?
Just quickly: AI2, the Allen Institute for AI, released a new, relatively small AI model called OLMo 2 1B.
OLMo 2 1B. Small, you say?
Yeah, only one billion parameters, which sounds like a lot, but it's small compared to the giant models. And here's the thing: it's reportedly outperforming similar-sized models from Google and Meta on several key benchmarks.
Huh, so smaller can still be mighty sometimes.
Seems so. You don't always need those massive, super complex models to get impressive results in certain areas.
That's great news for efficiency, right? And maybe for deploying AI where resources are tight. It makes the tech more accessible too.
Good point. Okay, finally, a sign the importance of AI is really hitting the mainstream, especially in education.
How so?
Over 250 CEOs, including leaders from huge companies like Microsoft and Uber, have signed an open letter urging for AI and computer science education to be integrated right into K-12 schooling.
Wow, 250 CEOs, that's significant backing.
Yeah, they see it as crucial for future US competitiveness, preparing the next generation for an AI-driven world.
That widespread support from business leaders really underlines it, doesn't it? AI and computer science are becoming foundational skills, almost like reading and writing.
Pretty much. Integrating them early is vital for equipping students for future jobs and just navigating a world that's going to be increasingly shaped by AI.
No doubt about it.
Okay, so bringing it all back home, back to what we try to do here at Crash News,
it's fascinating how some of this stuff, like the TikTok AI Alive, the Audible AI narration,
it's all about making complex creative things more accessible, more engaging for everyday folks,
which is kind of...
exactly what we aim for here, right?
Breaking down this dense tech news for you
in a clear, understandable way.
- That's a great connection.
And it's interesting too, how we started our chat today
with those more user-friendly applications,
photo animation, audiobooks,
and then kind of gradually moved into the deeper tech stuff.
- Like the neural timing models,
the visual auto regression.
- Exactly, it mirrors how the field itself evolves,
doesn't it?
Starts with more intuitive uses,
then dives into the heavier research and development.
- Yeah, it really does.
So it makes you think, how do we find that sweet spot?
Balancing the desire for these incredibly powerful AI tools
with the fundamental need for them to be easy
for everyone to grasp and use.
- That's the million dollar question, isn't it?
As this tech keeps racing ahead,
how do you design these powerful tools
so they empower people,
but don't just overwhelm them with complexity?
- This huge challenge, absolutely.
Okay, so let's wrap up this deep dive.
The key takeaways today, I think they're pretty clear.
- Go for it.
- AI is advancing incredibly fast.
We're seeing really exciting new creative uses
popping up constantly.
- Right.
- We're also seeing major progress
in the underlying AI models themselves.
And there's this growing focus on making AI more accessible,
more impactful in our actual lives.
- And don't forget the big players making strategic moves,
the Tencents, the Microsofts.
The landscape is definitely shifting.
- Absolutely.
Ultimately, our goal here was just to give you
a concise, clear understanding
of these important developments,
save you some time, keep you informed
on what really matters.
- Hope we achieve that.
- We really hope so too.
And we encourage you, think about how this stuff
might affect your own life, your work.
AI isn't some sci-fi future thing anymore.
- It's here, and it's changing fast.
- Exactly.
So maybe a final thought to leave you with.
- Yeah.
- With AI becoming so deeply embedded now
in platforms we use daily,
even in how science gets done,
what new ethical questions, or maybe societal questions,
do you think we'll need to tackle soon?
Stuff that maybe wasn't even on our radar
before all this rapid change.
- Ooh, that's a big one.
Definitely something to mull over,
what unforeseen consequences might emerge.
- Yeah.
Something to keep in mind as we watch all this unfold.