The AI Briefing is your 5-minute daily intelligence report on AI in the workplace. Designed for busy corporate leaders, we distill the latest news, emerging agentic tools, and strategic insights into a quick, actionable briefing. No fluff, no jargon overload—just the AI knowledge you need to lead confidently in an automated world.
Hi folks, welcome to The AI Briefing. My name is Tom, and this is where we go through
short snippets on a near-daily basis about AI technology: what's hot, what's not, and
some things you should think about.
And today, after yesterday's discussion about AI slop, we're going to have a quick chat
about Google's SynthID.
Now, for those of you who don't know what SynthID is, Google has released a framework
that allows AI-generated images to be detected by other AI platforms, and specifically
by Google's own platform.
The idea, of course, being that it's quite hard for users to tell whether an image was
captured organically with a camera or generated by an AI, which these days can produce
supremely realistic-looking images.
So SynthID takes a watermark and embeds it into an AI-generated image.
So if you generate an image using Google's Nano Banana model, for example, it will embed
a watermark that tells Google, or other platforms using the same technology, whether
that image was generated by AI.
You can't see it, but it's in every single one of those images.
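To make the idea of an invisible watermark concrete: SynthID's actual scheme is proprietary and far more robust, but the general concept, hiding a signal in the image data so subtly that the eye can't see it while software can still read it back, can be illustrated with a toy least-significant-bit sketch. Everything below (function names, the tiny "image") is invented for illustration and is not how SynthID works.

```python
# Toy invisible-watermark sketch (NOT SynthID's method): hide a bit
# string in the least-significant bits of grayscale pixel values,
# then recover it. Changing the lowest bit shifts a pixel by at most
# 1 out of 255, which is imperceptible to the eye.

def embed_watermark(pixels, bits):
    """Return a copy of `pixels` with `bits` hidden in the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit
    return out

def extract_watermark(pixels, n_bits):
    """Read back the first `n_bits` least-significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

# An 8-pixel grayscale "image" and a 4-bit mark.
image = [200, 201, 198, 57, 58, 120, 121, 122]
mark = [1, 0, 1, 1]

stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, 4) == mark
# No pixel moved by more than 1, so the mark is invisible.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

A naive LSB scheme like this is destroyed by compression or resizing; the point of a production system like SynthID is to survive exactly those transformations.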
At the moment, a good starting point for figuring out whether something is AI-generated
is to take that image, upload it to Google, and ask Gemini whether it was created using
AI. If there's a digital watermark in there, it will be able to tell you whether it was
AI-generated or not.
Now, of course, if it wasn't generated by something using SynthID, good luck with that.
In the short term, at least, Google is putting some thought and effort into figuring out
how to tell programmatically whether an image was created artificially or by a real
living person with a camera.
So as you start to think about leveraging AI technology, start thinking about questions
like this: how can your users tell whether or not something is AI-generated?
We've talked about AI slop and AI imagery; AI video will become more and more relevant
as time progresses. So putting some thought and effort into helping users figure out
what's real and what's not is going to be super important.
This is the AI briefing.
Today was a short one. We'll be back tomorrow with some information about Sam Altman and
his "code red" email. See you then.