Chaos Lever examines emerging trends and new technology for the enterprise and beyond. Hosts Ned Bellavance and Chris Hayner examine the tech landscape through a skeptical lens based on over 40 combined years in the industry. Are we all doomed? Yes. Will the apocalypse be streamed on TikTok? Probably. Does Joanie still love Chachi? Decidedly not.
Welcome to Tech News of the Week.
This is our weekly tech news podcast where Chris and I get into four interesting articles that caught our attention.
I'm going to go first,
Chris, if that's okay with you.
My mic's muted.
Good.
It should be.
The EU takes a big bite out of Apple.
The European Court of Justice has handed down a landmark ruling that forces Apple to pay 13 billion euros in back taxes, based on what it judged to be an illegal tax structure put forth by Ireland.
The case dates back to 2016 and regards a
period of almost 11 years when Apple's
effective tax burden in
Ireland was a mere 1%.
Since that time, the loophole providing such relief has been closed, and a 15% minimum corporate tax has been adopted by most of the EU's member states.
Not surprisingly, Tim Cook claims no
wrongdoing, as does the Irish government.
The case was originally decided in Apple's favor back in 2020 by a lower court, but the European Court of Justice was primed to have the final say.
And after a brisk four years, which I think is fast in the legal world, they decided that Ireland had granted Apple unlawful aid.
You might be wondering about the money.
Is the EU going to send Apple a PayPal
invoice or a written
proclamation by a carrier pigeon?
Will swallows, laden or unladen, be involved somehow?
Fortunately, no.
The back taxes have been sitting in an
escrow account since 2018.
And with this judgment, they can finally
be released to the Irish state.
Just like with GDPR, the EU once again
shows us the way
forward on corporate taxation.
If only we could get Amazon or Walmart to pay 15% of their net income. With a lame duck president, maybe he could actually do it.
Well, more likely Biden is too busy racing his Corvette down the Delmarva Peninsula. Vroom, vroom, bitches.
Seems like a Camaro guy.
Maybe.
Quantum update: error-corrected qubit count alert.
As we've talked about a number of times on the show, creating qubits in quantum computers is getting pretty routine, relatively speaking. Creating systems that can withstand errors, however, continues to be devilishly hard.
Just so we're all on the same page, we are way over 1,000 qubits in a number of running systems. So where are we with error-corrected qubits, you might ask?
Well, Microsoft, of all people, announced an answer with what they're calling the largest current number of error-corrected qubits. And that number is 12.
The approach is interesting. Microsoft has partnered with a quantum computing organization called Atom Computing.
The approach they're taking is to spread the value of each qubit across several qubits, thus making any errors or issues that come up, quote, less catastrophic.
Hilarious language. Love it.
It looks like they're going with something between a four- and five-to-one ratio, creating 12 logical, error-corrected qubits backed by 56 physical ones.
And the approach does seem to be working, at least for certain algorithms. The test improved the error rate from 2.4% down to 0.11%, roughly a twentyfold improvement, which is substantial.
Yeah.
Now, it's important to note that error-corrected systems are helpful for a number of reasons, one of which is that in quantum computing there can be errors that can't be detected, which is different from errors that can be detected. I will leave the difference, and the challenge each one poses, as an exercise for the reader.
Long story short, though, spreading out the work and creating logical qubits the way Microsoft and Atom are doing means that even these failures, the ones that are not detected, can at least be mitigated.
Neat.
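For a rough feel for why spreading one logical value across several physical units helps, here's a minimal classical sketch. It is emphatically not Microsoft and Atom's actual qubit-virtualization scheme (real quantum error correction also has to handle phase errors and can't simply copy qubit states), just a toy three-bit repetition code with made-up noise numbers that shows how majority voting makes a single failure less catastrophic.

```python
# Toy classical analogy, NOT the actual Microsoft/Atom scheme: a 3-bit
# repetition code spreads one logical bit across three physical bits,
# so a single flipped bit gets outvoted. The flip probability and trial
# count are arbitrary illustration values.
import random

FLIP_PROBABILITY = 0.05  # chance that any one physical bit gets corrupted

def encode(logical_bit, copies=3):
    # "Spread the value" of one logical bit across several physical bits.
    return [logical_bit] * copies

def apply_noise(bits):
    # Independently flip each physical bit with probability FLIP_PROBABILITY.
    return [b ^ 1 if random.random() < FLIP_PROBABILITY else b for b in bits]

def decode(bits):
    # Majority vote: recovers the logical bit as long as fewer than half flipped.
    return int(sum(bits) > len(bits) / 2)

trials = 100_000
uncoded_errors = sum(random.random() < FLIP_PROBABILITY for _ in range(trials))
encoded_errors = sum(decode(apply_noise(encode(0))) != 0 for _ in range(trials))

print(f"uncoded error rate: ~{uncoded_errors / trials:.4f}")
print(f"encoded error rate: ~{encoded_errors / trials:.4f}")
```

Running it, the encoded error rate comes out well below the raw flip rate, which is the same basic bet the logical-qubit approach is making, just with much harder physics.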
OpenAI announces Strawberry models.
Quick, open up ChatGPT or Copilot and ask it how many R's are in the word strawberry.
Go ahead.
I'll wait.
Listen, buddy, I've got two liters of Jolt Cola, a Sudoku book, and adult diapers. I can wait it out.
You done?
Let us proceed before my heart leaps out of my body and strangles me to death.
Chances are your friend... I couldn't get through it.
Chances are your friendly LLM told you that there are two R's in strawberry, which, unless you are terrible at spelling, you know is wrong.
So what?
LLMs get stuff wrong all the time.
Even better.
If you tell it the correct answer, it
will cheerfully suggest that you are
the one counting stuff wrong.
What is happening? It's like, "I'm afraid you're mistaken. There are only two R's."
What is happening is that LLMs break text into tokens to process information, and the word strawberry is broken into two separate tokens. The best guess is that ChatGPT sees an R in each token and counts two R's.
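If you want to see the token boundaries for yourself, here's a quick sketch using OpenAI's tiktoken library. The cl100k_base encoding is my assumption for illustration; the exact splits depend on the model and tokenizer, so your output may differ, but the point stands that the model works on token chunks rather than individual letters.

```python
# Peek at how "strawberry" gets tokenized. Requires `pip install tiktoken`.
# The cl100k_base encoding is an assumption for illustration; the splits
# (and therefore what the model "sees") vary by model and tokenizer.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

token_ids = encoding.encode(word)
pieces = [encoding.decode([token_id]) for token_id in token_ids]

print(f"{word!r} becomes {len(token_ids)} token(s): {pieces}")
print(f"actual count of 'r' characters: {word.count('r')}")
```

Whatever the split turns out to be, counting letters across token boundaries is exactly the kind of task a next-token predictor fumbles.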
This thorny problem is so well known that OpenAI codenamed their new AI model line Strawberry, also known as o1, for reasons.
The new model is allegedly capable of
reasoning through an answer, much like
a person does, instead of just trying to
vomit the whole thing out at once.
o1 is the new model developed in parallel with the forthcoming GPT-5, and it makes use of reinforcement learning, aka telling the model when it gets things wrong.
The reinforcement learning and multi-step reasoning should allow o1 to arrive at the correct answer of three R's in strawberry, and also help it solve the math word problems that have so far stumped previous generations.
I got to try the o1 preview today, and it apologized to me.
Quote, "You are absolutely correct and I
apologize for the oversight earlier.
The word strawberry contains three R's."
End quote.
Absolutely amazing stuff.
AI dude bro lies
about model capabilities.
Gets caught.
Hilarity ensues.
These past two weeks have been pretty wild for OthersideAI.
The company became AI-world famous for its product HyperWrite, which is apparently a writing assistant.
Is it hyper wrong?
But of course, success in the HyperWrite realm wasn't enough for OthersideAI, and thus they started hyping up their own AI model, going under the brand name Reflection, allegedly based on Llama 3.1.
This past week, CEO Matt Shumer breathlessly announced Reflection 70B, for which he claimed insane performance. He showed tables and everything.
He even published the model and uploaded
it so other people could download it and
test it.
Supposedly, this was the first time we had ever seen a model like this.
This turned out to be a mistake, as nobody could come close to the claimed performance numbers.
In order to counter this, Matt went ahead
and claimed that the
upload was corrupted.
Sure, Matt.
OthersideAI opened access to a private API so that people could test Reflection 70B at home. Seems like a not-bad idea.
Except what the testers found was that, while there was better performance, there was plausible evidence that this private API was simply scrubbing answers pulled directly from Anthropic's Claude model.
Oh, so that's not a good look.
After this, Matt went dark, basically hanging all his supporters out to dry.
Eventually he went on Twitter apologizing, sort of, saying that he, quote, got ahead of himself.
This, as I'm sure you
know, is also not a good look.
It turns out that faking wild success on repeatable tests of known benchmarks, for a product other people can download and verify, is a bad idea.
But Chris, he was in founder mode.
Move fast and break stuff.
We're done. Go away now. Bye.