Techlore Surveillance Report is your weekly deep-dive into the privacy and security news that matters for your digital freedom. Hosted by Henry Fisher, founder of Techlore and long-time digital rights educator, each episode cuts through the noise with carefully selected stories, context, analysis, and historical perspective.
Topics include: privacy tool updates and vulnerabilities, data breaches, surveillance technology and government overreach, Big Tech privacy policies, encryption standards, digital rights legislation, and corporate data accountability.
Whether you're just starting to take privacy seriously or you're a seasoned expert tracking the ecosystem, Surveillance Report delivers the weekly news you need. New episodes every Wednesday. Subscribe and join the community at techlore.tech
This week's Surveillance Report covers how using a VPN may actually subject you to more NSA spying.
It's a complicated political situation.
North Korea's weeks-long open-source supply chain hijack.
Android malware infecting millions of devices via Google Play, even as Google insists that the Play Store is free of malware.
Apple's device-level age verification expanding globally.
The EU Parliament's dramatic vote to kill chat control. Very big week, lots of fun updates,
and I'm really excited to be here to share them all with you. Welcome to the Techlore Surveillance
Report, your essential weekly tech news delivering deep analysis on the latest threats to security,
privacy, and digital freedom, and empowering you along the way and the people around you
to reclaim control and defend your rights. A couple very quick announcements before we dive
into this VPN story here. One, there is a new hosting provider behind the scenes. It has a
redirect in place, so you probably don't need to do anything, but consider this a general PSA
in case anything seems weird about fetching this feed. Second, we recently revamped our tools,
rebranding them as the Security, Privacy, and Anonymity, or SPA, tools. So if you go on our
website, you can check out all of our tools under the new SPA ecosystem. That is all. And now let's
dive into the highlight story. This one's quite fascinating. A little bit of context: virtual
private networks, or VPNs. A lot of people use these to try to get more privacy, and sometimes
better security, in certain contexts. The NSA actually operates under some very interesting restrictions,
which, I'll say up front, I don't think are followed as closely as they're supposed to be.
For context, what's on screen here is about the FBI, not the NSA, but it's a very similar
situation: these agencies aren't supposed to spy on Americans, only on people outside the US.
Yet the FBI has confirmed that it purchases Americans' data and location information,
even though it isn't supposed to. Government agencies are supposed to convince a judge to
authorize a search warrant, with reasonable cause, before searching individuals, not simply
buy the data from private entities that collect it anyway and sidestep that process.
A similar thing applies to the NSA. If you are an
American, you have a few more rights. And supposedly that means that the NSA can't just freely spy on
all Americans, though, of course, I am a bit critical of that. But that is technically how
it's supposed to work. Now, six lawmakers are pressing intelligence officials to
disclose whether Americans who use commercial VPN services risk being treated as foreigners under U.S.
surveillance law. The reason is simpler than you might think: if you connect to a VPN server
in Canada, Germany, Denmark, whichever country, your traffic is now going through a foreign
server. And if the NSA is able to spy on that traffic, it's not going to try to distinguish
between American and non-American citizens passing through that server.
The really funny thing here, too, is that federal agencies, including the FBI, the NSA
itself, and the FTC, have actually recommended that consumers use VPNs to protect their privacy.
But following that advice may inadvertently cost Americans the very protections they're seeking,
because now their traffic could be open to NSA snooping that might not otherwise happen if they
were treated as American citizens. This all has to do with a warrantless surveillance program
authorized under Section 702 of the Foreign Intelligence Surveillance Act, which is set to expire
next month, so the timing of these questions is no coincidence. Now, what is quite fascinating:
you might be wondering, is the NSA collecting VPN traffic without everybody's knowledge? We don't
know. That is apparently classified information. The letter does not actually make that claim directly; it's
just asking about it. But, as The Wire puts it very nicely, Senator Ron Wyden has a history
of putting forward carefully worded public statements, without declassifying information,
to draw attention to surveillance practices he's unable to discuss openly. So the idea here
is that it's quite possible this is something that is happening,
but they're not really allowed to say it is happening.
But again, we don't know.
This is all speculation.
So this is quite fascinating,
because it calls into question:
are you actually, in some ways, less safe if you use a VPN?
And how should you use a VPN?
Do you stick with the US servers,
and are you then guaranteed protection, and so on?
So I actually, yes, this is my shameless plug
that we have our newsletter each week.
It's free, you can join,
you can just use an alias email
or you can use a real email, whatever you want to do.
But I already kind of put my thoughts in this.
I'm just going to read those out.
So here's my list from the newsletter. First: no individual action, I think, eliminates the risk.
This is a structural legal issue, which is quite fascinating to cover because this makes no sense, right?
This really is a systemic problem that we need to address in a systemic way.
So I do really encourage you all to get involved politically in this problem as well.
So organizations like the EFF are actively campaigning against this kind of stuff.
So if you can support the EFF, contact your politicians and make sure to also communicate
Section 702 reform and how that matters to you.
My second point is that this story exposes a deeper flaw in how governments are approaching
digital rights.
People think that the borders we have in the real world
work the same way in the digital realm, that we can draw lines around data that flows globally
by design.
The reality is, if you are an American citizen connecting to a website, you
don't know whether that website is hosted in a different country. So is your data being handled
differently because of that? I don't think the real fix here is to keep carving out exceptions
only for Americans. That probably sounds good in theory, but there's no way to actually guarantee
that Americans are Americans unless they KYC themselves. And the ones who want privacy,
which is the whole reason we should care about this,
aren't going to be identifiable.
So this is always going to be a problem
for anybody who wants to take their privacy
a little bit seriously.
So I think we need to recognize
that mass surveillance is kind of wrong,
regardless of whose citizens it targets.
I think the TikTok saga back in the day
really spoke to this because we had the situation
where we wanted to ban TikTok
because of the data collection, surveillance,
privacy concerns, et cetera.
But we never actually took those issues seriously.
All we did was transfer ownership of TikTok to the US.
And now the platform goes mostly unchanged
in the way that it exploits users.
It's just a power shift to a more U.S.-aligned ownership.
But the surveillance apparatus underneath it all didn't really change.
You're still no better off.
And even then, even if you trust, let's say, US ownership more,
this data is being shared globally anyway, right?
There are third-party data trackers on all these different websites,
they all share data with each other,
and all it takes is one of them to have a data breach.
So we really have to see this through a global lens
and try to tackle it, I think, in more of a global way.
In terms of practical advice and my views on what you should or shouldn't do,
I don't think this is any reason at all to abandon your VPN if you are currently using one.
Even if the NSA were collecting all of your traffic in plain text and had compromised every
server, which I don't think is necessarily the case,
you still aren't worse off than leaving all of your traffic exposed to your internet service provider,
which will log and sell it freely.
That's still more overall exposure of your data than NSA collection alone.
If you are curious about VPNs, we have a VPN comparison chart.
It compares different providers across various criteria.
It's called our VPN Finder.
You can go ahead and compare different VPNs.
Some of our favorites are at the top,
and those VPN providers are all open source as well.
And ultimately, kind of my take-home message that I have here is that the more of us who
are using privacy tools, the less targeted any specific person is going to be for using
them.
I do believe that this is how we can advocate for others who might even need the tools more
than ourselves.
And so if you want to use a VPN and it overall works with your workflow and you see some of
the benefits in your workflow, that's actually helping someone who might actually really need
that technology a little bit more.
Same thing applies to tools like Tor, which are a little bit more trustless than the VPN
if you want something more trustless.
So those are overall my thoughts on this story.
I will definitely keep you all updated as the weeks go on and we hear more about Section 702.
In the meantime, we're going to talk about a pretty crazy compromise of open-source software.
This comes from a North Korean cyber attack, which briefly hijacked one of the most widely used
open-source projects on the web, and it actually took weeks to carry out, because the attackers
were pretty methodical about it. If you're wondering what the project is, it's called Axios,
a library that lets developers connect their apps to the internet. What happened was that
these attackers pretty much socially engineered the project's maintainer.
They tried to get close with him
over the course of several weeks.
They created a realistic-looking Slack workspace.
They posed as a real company.
They used fake profiles.
And then pretty much they invited him to a web meeting
that prompted him to download malware,
masquerading as an update necessary to access the call.
They were able to compromise his machine,
which is how they were able to hijack the project.
The malicious version of Axios was spotted
and stopped in about three hours.
So there was a brief window where,
if you installed it,
you should assume your system was compromised,
according to a security company.
So that is the story there.
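If you're a developer wondering whether a project of yours could have pulled a package during a narrow window like this, one rough first step is to check exactly which versions your npm lockfile actually resolved. This is my own sketch, not advice from the story, and I'm deliberately not guessing at the specific compromised version numbers; compare whatever you find against the project's security advisory.

```python
import json


def resolved_versions(lockfile_path: str, package: str) -> set[str]:
    """Collect every version of `package` resolved in an npm package-lock.json
    (lockfileVersion 2/3 layout, where entries are keyed "node_modules/<name>")."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    versions = set()
    for path, info in lock.get("packages", {}).items():
        # Matches both the top-level copy and nested copies of the package.
        if path.endswith(f"node_modules/{package}") and "version" in info:
            versions.add(info["version"])
    return versions


# e.g. resolved_versions("package-lock.json", "axios")
```

If any resolved version matches one named in an advisory, treat the machines that installed it as compromised, which is the same assumption the security researchers recommend here.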
The best advice I can give
really depends, I think,
on the sensitivity of your industry.
It's always better to be safe than sorry
in this situation,
but there has been a dramatic, dramatic uptick
in these really sophisticated schemes
where it seems like you're hopping on a call.
It's very well socially engineered.
The social engineering
is just getting so much better over time.
And we're also seeing AI social engineering attacks
with all these fake meetings
that look completely legit
and can respond to you in real time.
It's pretty crazy stuff.
This is just a good reminder
that social engineering is always going to be a thing,
and it's only going to get harder
and harder to protect against.
The typical Nigerian-prince red flags aren't really a thing anymore in
2026.
It's stuff like this now.
So be careful out there, everybody.
This is a pretty common problem.
All right.
So before we get into Apple expanding device level age verification, the EU killing chat
control and the Defense Bulletin, of course, with all the big updates from the last week,
I wanted to talk about this story, which is Android malware called NoVoice on the Google Play Store.
Now, before I get into the story itself, I want to remind people that there is this movement right now called Keep Android Open.
This is because Google is trying to lock down sideloading on Android,
a.k.a. installing your own apps away from the Google Play Store.
A huge part of the reason they're doing this is that they say getting apps outside the Play Store isn't safe.
Projects like F-Droid say this kind of change might hurt, if not kill, the project if it passes.
So, in light of this, with Google saying how safe the Play Store is, that everybody should get their software from the Play Store,
and that they should have complete control over the Android ecosystem, we see this story.
Totally unrelated.
A new Android malware dubbed NoVoice exploited known vulnerabilities to gain root access and has been distributed through more than 50 apps on the Google Play Store with at least 2.3 million downloads.
The apps carrying the malicious payload included cleaners, image galleries, and even games.
They required no suspicious permissions, and they provided the promised functionality.
They were legit applications that actually did what they promised.
So there was absolutely no immediate red flags.
When you launched one of these apps, it tried to obtain root access on the device by exploiting old Android vulnerabilities that were patched between 2016 and 2021.
In terms of the attack itself, they concealed the malicious components inside a Facebook package, a Facebook SDK,
along with a hidden PNG file that was loaded into system memory.
It's actually a very sophisticated attack.
So if you want to learn more about the attack, you can see it on screen.
Or, of course, there's the show notes in the description, as always, if any of you audio or video listeners want to look more into the story.
It's quite technical, but it's very solid.
And guys, this really gets deep into your system because there are multiple layers of persistence, including installing recovery scripts, replacing the system crash handler with a rootkit loader, and storing fallback payloads on the system partition.
Because that part of the device's storage isn't wiped during a factory reset, this malware literally persists even after an aggressive cleanup.
So this is one of those rare situations where even a factory reset of your device will not
remove this malware from your phone.
It can steal WhatsApp messages.
It has full system device information.
I mean, it is as bad as it sounds.
The thing is, there is no formal advice shared on what to do if you're already impacted by
this.
The obvious answer is to update your devices and get the latest security patches.
And if your device can no longer get security patches, see if you can update to a custom
ROM that can still give you those later security patches, or you might just need a new phone at
this point. But there is actually no advice in the article, nor is there any advice that I can think
of if you already were impacted by this other than getting a new phone. And that is my overall best
guess. I don't know and I cannot confirm if updating to a patched version after you're infected
will somehow make this benign. I don't think that is the case, so I would not assume that either.
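One rough way to reason about exposure (again, my own sketch, not formal guidance from the article) is to compare your device's security patch level, which Android reports as a YYYY-MM-DD string via `adb shell getprop ro.build.version.security_patch`, against the window of vulnerabilities this malware exploits, all of which were patched by the end of 2021:

```python
from datetime import date


def patch_level_covers(patch_level: str, patched_by: date = date(2022, 1, 1)) -> bool:
    """Return True if a device's security patch level (a 'YYYY-MM-DD' string,
    as reported by Android's ro.build.version.security_patch property) is
    recent enough to include fixes released before `patched_by`."""
    year, month, day = map(int, patch_level.split("-"))
    return date(year, month, day) >= patched_by


print(patch_level_covers("2019-05-05"))  # False: inside the exploited 2016-2021 window
print(patch_level_covers("2024-08-05"))  # True
```

A passing check here only means the known exploited vulnerabilities are patched on your device; it says nothing about whether a device was already infected beforehand.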
Crazy story.
And yes, Google is kind of saying,
well, this exploits old vulnerabilities addressed years ago
and only impacts outdated devices.
But Google facilitated that software on its store.
And I think Google is kind of glossing over that a little bit, right?
Google hosted software on its store
that is literally meant to exploit old devices
that Google also supports.
So yes, Google's right.
And I'm always, of course, going to agree with this,
that everybody should keep their software up to date,
especially when it comes to security updates.
But I think Google is being a little bit tricksy here with how they're approaching the situation,
especially given what's going on and how they're trying to prevent sideloading.
They are hosting malware on the Google Play Store and this is not the first time and it will not be
the last time. And I think it's quite ridiculous for them to claim that the only way to get
security on an Android device is through the Play Store, which is what they are, unintentionally or
intentionally, implying when they try to force people away from sideloading. Those are my thoughts
on that story. I'd love to hear what you guys have to say in the comments, apparently on Spotify too.
So leave your comments on Spotify or on YouTube and I'd love to see what you guys think.
Now we're going to have some good news out of the EU in just a second, but before we get to that,
we have a little bit of bad news here. So Apple is now going to be requiring device level age
verification in the UK. This came out with one of their recent updates and there is this overall
question that Gizmodo poses, which is: could the US be next? Right afterwards, we'll get to
an update to the story about a couple of other countries that were impacted by this.
But let's start with the first story here. This is roped in with Apple's
latest update. After downloading that, you will have to confirm that you are 18 or older to access
unrestricted features. So you have to confirm your age with a credit card or by scanning an ID.
For those who are underage or have not confirmed their age, Apple will turn on web content filter
and communication safety, which will not only restrict access to certain apps or websites,
but will also monitor messages, shared photo albums, airdrop, and FaceTime calls for nudity.
Apple didn't specify which services and features are banned for under 18 users,
but it will likely be in compliance with UK regulation.
And the question of whether the US is next comes up because new users in Utah and Louisiana
will also get age categories, which is a somewhat similar thing;
California has its Digital Age Assurances Act;
and Colorado is trying to follow what California is doing.
Let's not even talk about the privacy implications, right?
We already know these.
They've been talked about;
everybody has discussed these kinds of problems from the privacy angle.
We don't want people to have their IDs leaked,
and we especially don't want to jeopardize children who may be,
you know, uploading their IDs to verify their ages,
if there is any kind of age verification between the ages of 13 and 18.
But let's just say it's completely private.
We are now living in a situation where, when you buy a phone,
you have to unlock it by permanently attaching it to your identity.
Your phone knows your age, who you are, and now everything else you're going to download.
This, to me, just isn't a good situation, right?
Why would that be a good situation?
If I download 30 apps on a device, let's say five of them reveal something quite sensitive about me.
They share my religion.
They share who I am as a person.
They share something I care about on an advocacy basis,
an organization I belong to,
a rights group.
What if I'm a political dissident? Now I have to upload my ID to a company just to start
messaging securely over Signal or something like this. It's mind blowing to me actually how scary
this stuff is. And then all it takes is a government tapping on Apple going, hey, we need to know,
you know, what it is about this user and Apple's going to have to comply. That's what happens here.
So I have a lot of concerns with this. I haven't actually been able to find much
technical information on how this works, or whether Apple has found a way to do it where it's
not tied to your Apple account and it's done locally on your device. Again, I'm not even
touching on the privacy implications of this. I'm looking at this from a freedom of information
perspective. And I think this is quite scary from that perspective. The UK is really passing a lot
of stuff aggressively here. And I think they're doing it quite recklessly. They aren't considering
the privacy implications of what they're doing. They're not even considering the child implications
of what they're doing and what it's like for children to have to use an internet where this
is the kind of situation. I also don't think the UK realizes that it is literally
gatekeeping child safety behind companies that are all about 10 miles away from each other
in Silicon Valley. If they want more digital sovereignty, and they don't want these companies
to have more control, maybe they shouldn't make them the gatekeepers of the
technology. So there are so many angles to this that I think are just totally wrong.
I am upset that Apple hasn't fought back harder. I am upset that these politicians aren't really seeing what's going on and the side effects of what they're doing. Or maybe they do; I guess we'll never know. We're still in the fight. If you're in the U.S., get involved: contact your politicians, tell them the problems you have with device-level age verification and the overall concerns you have when it comes to freedom of information, privacy, security, etc.
These are very concerning things that are being passed without, in my opinion, the care and attention that's required to actually maybe even be able to do this properly.
I don't know if it's possible to do this properly still, but I don't think there's any kind of care when these things are being passed.
On a similar note, Apple is also continuing this rollout in Singapore and South Korea.
So that is what's going on.
South Korea's law requires Apple to re-verify your age annually.
And to see how crazy this stuff gets, moving past Apple, there's a vape company now that wants to know how old you are.
They hope that biometric age verification tech in cartridges could put flavored vapes back in business, but it's unlikely to solve the real problems.
I believe the context here, and I am not a vaper, is that flavored vapes were banned or made much harder to access because they were so addictive.
So the goal is to get this technology back out there, but with age verification.
This is all to say that I want us to think very critically about age verification.
It sounds very good, keep people off social media, keep people off of this, but we're never dealing with the damn systemic issues.
Say we think these things are harmful: vapes, pornographic content, social media algorithms, AI algorithms, et cetera.
These are things designed to trap you in and not actually give you a fair picture of what the world looks like.
And it's meant to polarize people.
It's meant to be anti-democratic.
It's meant to make you feel extreme.
It's meant to make you feel like crap about yourself and the people around you.
All of those things are harmful.
But yeah, no, apparently it's all cool if you're over 18.
We don't care about you if you're over 18.
This is a failure of politicians, I think.
I think this is a failure of companies.
This is a failure of people using those companies as well.
There's some responsibility that I think everybody has in here.
as well as myself. I could have educated people better. I think people could have avoided these
services better. I think these services could have been run by more ethical people. And I think these
politicians had so many opportunities to hold these companies accountable. And now we're trying to
deal with this problem in the most surface level BS way imaginable, which is, well, let's at least
try to keep kids off of it, but we're not going to deal with any of the other root problems.
And in the meantime, we're actually going to make kids less safe because now we have to have
all of this weird technology that has had data breaches and leaks children's data in the process.
And we're going to make the whole internet very difficult for adults to access and very frustrating
for adults. And we're also going to now enable this kind of black market of accessing things
that you're not supposed to access, going on more suspicious websites that aren't even going to try
to comply with this. And so now you're going to get people away from safer established platforms.
I put this in the Defense Bulletin, but I shouldn't have, so I'm moving it back here. Also, apologies to
video viewers: Gizmodo's website is freaking out here. There is this Gizmodo article with the
title, Group Pushing Age Verification Requirements for AI Turns Out to Be Sneakily
Backed by OpenAI. Now, I believe it was last week we talked about how there are these like
nonprofit advocacy groups, some of them completely started or funded
by companies like Meta, which are trying to push age verification. And so this is, to me, kind of an
extension of that. So the people pushing for policy changes, including like child rights groups and
safety groups, they say that a number of the people involved in the California-based Parents
and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by
OpenAI. One of the leaders for the nonprofit said, quote, it's a very grimy feeling to find out
they're trying to sneak around behind the scenes and do something like this. I don't want to say
they're outright lying, but they're sending emails that are pretty misleading. This is your reminder,
Big tech companies are not your friend.
These big tech companies are doing a ton of slimy things behind the scenes.
They are trying to destroy the legal process.
They don't want accountability.
They want laws to be passed in the way that only benefits them.
And we need to represent ourselves.
We need to represent everybody else around us who can't be represented.
It's important to contact politicians.
It's important to get involved.
It's important to stay up to date with this news so you can see when this kind of stuff gets exposed.
If you want to see a real-world example of why age verification is dangerous in the way these politicians are rolling it out willy-nilly,
there was an age verification firm that was just fined for excessive data retention and invalid consent.
This was a British age verification company called Yoti.
They were fined a total of €950,000 by Spain's Data Protection Authority for violating the GDPR.
Pretty much, the idea here is this: they have many ways of verifying your age,
but when their facial scan feature is used, they act as a data processor and actually hand off your data to another organization,
and that violates the GDPR.
On top of that, when you strip away all the legal language, the regulator says the violations come down to something very simple:
they collected too much sensitive data, gave users too little real choice, and held on to that data for far longer than they should have.
They implemented dark patterns.
They stored geolocation data
and kept it for five years.
And they can manually review your ID internally for up to 28 days, which means their
contractors and employees can get access to that information along the way.
This is also not just a nothing company.
It's one of the industry leaders.
So this is exactly the kind of company you can expect to be doing this.
They run a million age checks per day.
And when the U.S. Supreme Court ruled last summer that online age verification does not
violate the First Amendment, which is a crazy ruling, by the way, the court relied in part
on technical information provided by them.
It's all the same company, guys.
I think AdGuard's article on this is just excellent.
They say age verification is currently designed
to create centralized high-risk data environments
that privacy laws were meant to prevent.
Yoti is not an outlier, it's a blueprint.
Unless the underlying model changes,
it won't be the last company to overstep these boundaries.
This fine isn't just about one company failing compliance,
it's a warning about an entire system
that normalizes biometric surveillance,
incentivizes data hoarding,
and asks users to trust it blindly
that it will all be handled responsibly.
Stop age verification
until you're actually going to do things right,
in a way that keeps people safer,
not puts them at greater risk.
Man, the age verification section is huge today.
Pretty much, Greece has become the latest country
to announce a ban on social media access for its youth.
Another weak attempt at trying to rein in big tech companies
and prevent them from causing all the catastrophic damage
they're causing to the world,
though apparently only for people under 15.
Once you turn 16, it's all cool.
Everything's fine.
You can be exploited by these companies for the rest of your life;
the next 60, 70 years of your life are fine.
If you want to learn more and you are in Greece,
I do recommend checking out the show notes
and making sure to get personally involved in this story.
Speaking of Greece, it's worth mentioning that Austria
is also proposing something quite similar.
Show notes in the description.
Check it out to learn more, especially if you're in Austria.
Sorry, I was sharing the wrong tab there.
You can see it now in the right tab if you're watching the video.
A quick signal boost, also from AdGuard:
they highlighted an open letter
signed by over 400 privacy researchers,
calling for age verification to be reconsidered.
It's a really good letter.
Check out the show notes if you want to learn more,
and I'm glad people are speaking up about this.
Now we're going to move into more positive stuff
before I get into the defense bulletin.
But Patrick Breyer pretty much came forward
and said the end of chat control is an opportunity,
publishing a five-point action plan for genuine child protection.
So Patrick has been pretty much at the forefront of fighting chat control, which was something that tried to do this mass scanning of all messages inside of the EU.
And fortunately, it looks like it's been killed off, at least for now.
I'm sure it's going to be attempted again, but this kind of mass scanning is dead for the moment.
And it was all done in the interest of children.
So we're going to scan everybody's messages with this technology that's literally like 50% effective.
And in the meantime, we're going to expose everybody's messages.
We're going to break the whole concept of end-to-end encryption.
It's pretty ridiculous.
Same kind of reasoning that's applied to age verification in my view.
Now, the cool thing that Patrick has said is we can actually start talking about how we can keep children actually safe.
And so here's his five-point action plan that I wanted to share with you.
First, he points out that there is known child sexual abuse material, CSAM,
sitting in darknet forums that isn't removed, with authorities citing a lack of personnel.
There simply isn't enough staff.
So he says: now that we're no longer trying to do this chat control stuff,
let's use the police capacity that was freed up
for the systematic deletion of the actual child abuse material that exists online.
The second thing is security by design for applications.
Tech companies must stop shifting responsibility onto algorithms.
Apps must be designed to protect users from unwanted contact by strangers.
Profiles must not be publicly visible.
Contact from strangers must be blocked.
Nude images must be blurred.
And users must be warned before sharing personal data.
No politician in the US or the UK, that I have personally seen, has proposed these kinds of things, because that would require actually regulating a Big Tech social media company.
They are too chicken to do something like that.
And so instead, they go after something else that's easier and far more drastic.
Patrick also says a third thing, quality over quantity.
Instead of paralyzing police forces with tens of thousands of false or previously known hits from US corporations, investigations must be professionalized: invest in lawful, targeted instruments, technology, and personnel to build investigative capacity.
Pretty much, he's just asking for a more modern, methodical, quality-driven approach, because
chat control took this spray approach where pretty much we're just going to collect everything
and see what to do with it.
But Patrick's like, let's slow down.
Let's actually think about this and deal with this from a quality perspective so that we
can make the biggest impact with the least amount of effort to keep children safer.
His fourth thing is prevention in schools.
Distribution of digital self-defense materials.
This is the education piece.
It's a prevention kit, which should be distributed to high school students, teaching them in an
age-appropriate way how to recognize grooming.
Crucial tips for digital self-defense include never trusting the claimed identity of a stranger,
never sharing location or phone numbers, never meeting an online contact alone, and reporting
abusive messages instead of responding to them.
According to a poll, 43% of children said that improving media literacy and training
minors on the risks of inappropriate responses is the most effective approach to protect them from harm
on the internet. That poll is of children, so children themselves think that what they need is more
education. No politician that I've seen has proposed this,
but they do want to propose age verification to keep children safe. But they don't want to propose
educating children and making a better effort to actually just keep them inherently more safe.
And the fifth point that Patrick makes is anchor protection concepts locally in real life.
Abuse happens in the real world.
We demand the mandatory introduction of safeguarding concepts in all organizations where children spend time, including schools, daycare centers, churches, sports clubs, clinics, and youth camps.
All of this helps deal with the root problems that cause these awful things that happen to children.
None of them require any kind of additional exploitation of children,
and none of them require adults to hand over any kind of ID. All of them genuinely would help
children, and we could probably measure a beneficial impact. So I just really wanted
to share Patrick's blog; it's very well done, and if you want to read the whole thing, I do recommend it.
Of course, everything's in the show notes. I wanted to share some of my views on this because
I very much align with the way Patrick sees things, and I think the way he put it is very well done.
And I hope the EU can maybe model what this looks like. If the EU can pass something
like this, then maybe it could be a step in the right direction, with other countries going,
hey, look, the EU is trying to deal with this in a more systemic, organic way, rather than
just trying to force a solution through that sounds good but has countless problems behind
the scenes.
So that is all I have to share about that.
And I'd love to hear your guys' thoughts on this.
Okay, now we're going to get into the defense bulletin.
And I have so many stories to go through this week.
So I will say right now, we are going to rapid fire through these guys.
So we're going to have kind of the threats and the threat landscape and the data breaches.
And then we're going to go into service updates for many of the more privacy, security, digital rights focused services out there.
So you all can keep up with what's going on.
But again, I'm going to go through these pretty quickly in the interest of time.
The first thing is that the developer of VeraCrypt, which is open-source encryption software,
has said that Windows users may face boot-up issues after Microsoft literally locked
his account.
Microsoft prevented him from pushing out updates
by terminating the account he's used for years to sign Windows drivers and the bootloader.
The most unfortunate part, and I'm sure everybody who's tried to contact Microsoft has dealt with this,
is that he tried to contact them but was literally unable to reach a human, period.
We've made guides and tutorials on VeraCrypt.
I'll leave those in the show notes and in the card on YouTube if you want to check them out.
VeraCrypt is an excellent piece of software. It is probably still the best way to encrypt things
in a drive-style format. If you want to encrypt a drive with full disk encryption,
this is the way to do it. It's fully open source, and I think this is completely embarrassing for
Microsoft. Apparently, a Microsoft spokesperson did not comment when reached out to
by TechCrunch. So that's good. And at the time of recording, this is quite fresh. This actually
happened this morning and I'm recording now. And so by the time you listen to this, there could be
an update, but at the time of recording, it seems like nothing has happened. Now, Apple, just a quick
little update here, is going to push out a rare backported patch to protect iOS 18 users from a
DarkSword hacking tool. If you want to learn more about this, this was all in last week's episode.
Last week, I talked all about DarkSword on that surveillance report. And so the update here is
that if you are on iOS 18, you can actually still update to a new version of iOS 18 with a patch for
this hack, which you should very much do, even if you're not able to get iOS 26 or if you don't
want iOS 26. So at minimum, make sure to update to this security fix in iOS 18. If you use a
WordPress plugin called Ninja Forms, make sure to get it updated, because there's a critical
vulnerability in that plugin. So that's just a quick little PSA for you all. This one actually
made the rounds and it was almost a main story, but you know, there was other big stories as well,
but LinkedIn was caught secretly scanning for over 6,000 Chrome extensions,
and they are collecting data and fingerprinting users
and submitting that to, I believe, third parties as well.
They deny this and say that it's to prevent spam and prevent scraping,
but they are literally fingerprinting users.
It is the most BS response I have ever seen in my life.
There is absolutely zero reason to do this kind of tracking.
It is not in their privacy policy.
There is no opt-out.
Now, if you're curious, I did do a little bit of digging into this. The way that they do this
fingerprinting and the way that they scan for your extensions seems to be Chrome-specific. So if you
already were not using Chrome or a Chromium-based browser, you're probably not directly impacted by
the extension-scanning piece. I don't know about the fingerprinting; I would still assume that you
are affected. So if you use something like Firefox or Safari, you're probably largely going to avoid this problem.
Now, one of the engineers for the Brave browser, who I met at the Ad Filtering Dev Summit
last year, posted on his Twitter that he confirmed Brave is not impacted by this. So if
you have Brave Shields in the Brave browser, you were not impacted. And my assumption is that
something like Firefox with uBlock Origin, or any good-quality privacy-focused browser,
would protect you as well. You can always check out our SPA Essentials for all
of our latest recommendations in the browser space. The one thing I was not
able to confirm (so if you do have any kind of information, definitely let me know in the comments)
is whether or not using a privacy-focused extension in the Chrome browser itself would have prevented this.
So, using something like uBlock Origin or Ghostery or AdGuard inside of regular Chrome, would you have been safe?
I have not been able to find an answer to that question.
And so if you do have any kind of technical information you can share with me, I'd really appreciate it.
That is all I have in that story.
More LinkedIn shenanigans.
This is a quick little PSA for macOS users.
There's something called the Infinity Stealer malware, which grabs macOS data via ClickFix lures.
This happens through a fake Cloudflare CAPTCHA in the browser, which pretty much asks you to "verify" by pasting a command into your terminal, and that command does naughty things on your system.
So please do not fall for this.
And if you want to learn more, check out the show notes.
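To make the ClickFix pattern a bit more concrete, here's a toy Python heuristic. This is entirely my own illustration, not anything from the article or a real detection product, and it just flags the classic tells of these paste-this-into-your-terminal lures:

```python
import re

# Patterns commonly seen in ClickFix-style one-liners: fetch-and-pipe-to-shell,
# base64-obfuscated payloads, and scripting one-liners. Illustrative only;
# real lures vary, and a heuristic like this is trivially bypassed.
SUSPICIOUS_PATTERNS = [
    r"curl\s+[^|]*\|\s*(ba)?sh",   # curl ... | sh / bash
    r"wget\s+[^|]*\|\s*(ba)?sh",   # wget ... | sh
    r"base64\s+(-d|--decode)",     # decoding a hidden payload
    r"osascript\s+-e",             # macOS AppleScript one-liners
    r"powershell\s+-enc",          # Windows encoded commands
]

def looks_like_clickfix(command: str) -> list[str]:
    """Return the suspicious patterns a pasted command matches."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, command, re.IGNORECASE)]

# A benign command matches nothing...
assert looks_like_clickfix("ls -la ~/Documents") == []
# ...while a classic fetch-and-run lure trips the pipe-to-shell pattern.
print(looks_like_clickfix("curl -fsSL https://example.com/verify.sh | bash"))
```

The real defense is much simpler, of course: no legitimate CAPTCHA will ever ask you to open a terminal.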
There's also a backdoored Telnyx PyPI package.
I am not super familiar with these things; PyPI is the Python Package Index.
The package pushes malware hidden in a WAV audio file.
So this is a supply chain attack, and they were able to find this.
If any system imported this malicious package, it should be treated as fully compromised.
And so if you do use this Telnyx PyPI package, you definitely need to look into this, because that's a pretty urgent problem.
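If you want to check your own Python environments after a story like this, here's a minimal sketch of auditing installed packages against a denylist using only the standard library. The package names in the denylist are hypothetical placeholders I made up, not the actual malicious package from this story; check the advisory in the show notes for the real identifiers.

```python
from importlib import metadata

# Hypothetical typosquat names, stand-ins for whatever the advisory lists.
BAD_PACKAGES = {"telnyx-utils", "telnyxx"}

def find_bad_installs(denylist: set[str]) -> list[str]:
    """Return installed distribution names that appear on the denylist."""
    installed = {
        dist.metadata["Name"].lower()
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip distributions with broken metadata
    }
    return sorted(installed & {name.lower() for name in denylist})

hits = find_bad_installs(BAD_PACKAGES)
if hits:
    print(f"Denylisted packages found; treat this system as fully compromised: {hits}")
else:
    print("No denylisted packages installed.")
```

pip also supports hash pinning (`pip install --require-hashes -r requirements.txt`), which is a stronger, preventative version of the same idea.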
This is just a little tiny PSA, but Claude Code had a leak.
We're not going to talk about the industry side of that, because that's not as relevant for this podcast.
The more relevant side of it is that the leak was used to push info-stealer malware on GitHub:
pretty much, attackers used fake GitHub repos to deliver information-stealing malware
by piggybacking on the leaked Claude code.
Each one looks like a leaked-source-code repo, but it's actually malware.
So be careful if you're trying to dig into this in any meaningful way.
You can see it here on the screen if you're watching the video: if you literally just Google
"claude code" plus site:github.com,
you can see all these fake websites.
On a similar note, just another reminder,
very recently there was a top Google search result
for a Claude plugin that was also planted by hackers.
So in general, just remember
that you cannot always trust search results
even when they come from Google,
especially if they're ads from Google.
They are well-known to run malicious ads
that will literally get your system infected.
It's not good.
We've talked about that countless times on this podcast.
Very quick here, this probably isn't going to impact
any of you listening, thank goodness,
but WhatsApp is notifying hundreds of users
who installed a fake app made by government spyware maker.
And so if you do get this notification
or you want to learn more about it,
check out the show notes.
But again, it seems like it's about 200 users,
so it's probably quite targeted.
And hopefully it doesn't impact anybody listening.
Next one is a quick signal boost for this 404 Media article
on a "secure" chat app whose encryption
is so bad that it is, quote, meaningless.
The app is called TeleGuard.
First off, the key is derived from the user's password,
and the punchline here
is that the password is the public user ID.
So anyone can literally derive the keys and decrypt all the messages on the messenger, making it not really that secure at all.
I just wanted to signal boost this because if you're a cryptographer, you definitely have something fun to sink your teeth into.
But it's a good reminder:
guys, don't mess around with encryption.
There's a reason people say don't roll your own crypto.
So definitely try to stick to safer, more established options when it comes to your encrypted messaging.
Signal is a great option.
Our resources on the SPA Essentials webpage are something else you can look at if you want to see other trusted messenger options as well.
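To illustrate why that kind of design is "meaningless," here's a small sketch of the general failure mode. This is my own toy example, not TeleGuard's actual code: even a strong key-derivation function is worthless when the input it is fed is public.

```python
import hashlib

# Assume the salt is baked into the client binary, so it's effectively public too.
SALT = b"app-constant-salt"

def derive_key(user_id: str) -> bytes:
    # PBKDF2 is a perfectly good KDF; the problem is that user_id is
    # visible to everyone, so the "secret" input isn't secret at all.
    return hashlib.pbkdf2_hmac("sha256", user_id.encode(), SALT, 100_000)

# Alice's client derives her key from her public user ID...
alice_key = derive_key("alice-public-id-42")

# ...and any attacker who sees that ID derives the exact same key.
attacker_key = derive_key("alice-public-id-42")

assert alice_key == attacker_key  # encryption becomes decoration, not protection
```

This is exactly why real messengers derive keys from material only the user holds, like a random private key generated on the device.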
This one kind of pissed me off.
So Perplexity, the AI model slash agent company, whatever you want to call them.
They have an incognito mode, which is supposed to be more privacy respecting.
It's not supposed to train on your data, whatever.
But there's a lawsuit
alleging (not yet confirmed) that it is completely a sham,
because behind the scenes, Perplexity was actually sharing the entirety of those chats with Google and Meta
without users' knowledge or consent.
Pretty much what's happening is in their infrastructure,
they had these trackers for both companies in the infrastructure that included those chats as part of the data collection.
And so that is the accusation that is made.
It's a class action, and it's not in their privacy policy.
And so it's going to be quite a fascinating story to deep dive into as more details come to light.
Perplexity does reject the accusations.
So we will see what happens there.
Some of the comments are quite entertaining on Ars Technica.
The first one is, I'm shocked.
Shocked!
Well, not that shocked.
And the other one says, it's fine.
They asked ChatGPT if it was legal and got the go ahead.
So I just thought those comments were funny.
Next up, there is this very fascinating attack where adversaries are exploiting vacant homes to intercept mail in a hybrid cybercrime.
Pretty much, there's a lot more nitty-gritty to this.
So if you want to learn more about it, I do recommend checking out the show notes.
But if you are traveling or you're away from your house for a long time,
your mail would be intercepted essentially and used to start doing some fraud, identity theft, etc.
And so if you want to learn more about this and ways to prevent it and just see how the attack works,
I do recommend checking it out in the show notes.
Okay, this is a super, super quick signal boost,
but Samsung is officially discontinuing its native messages app in favor of Google Messages.
Very quick PSA: this is a slight advantage from a security perspective, as Google Messages,
if you are using it with other people on Google Messages (and potentially, maybe very soon, with iOS users as well),
can actually give you RCS end-to-end encryption, which uses Signal's encryption protocol behind the scenes.
It's not going to guarantee the same level of metadata protection and safety that Signal or another messenger might,
but it's still a step in the right direction. So, just a quick signal boost.
Another quick signal boost, in a negative way:
Apple has confirmed the Maps app will begin showing ads to users, quote, this summer.
I'm drawing attention to this because Apple overall has had a predominant business model of selling hardware,
which overall aligns its values with users a little bit more than other companies, right?
I talk about the fact that Facebook, Meta, etc. doesn't really have a business model other than exploiting users and commodifying user data.
That is its complete business model.
But when we look at something like Apple, they at least are able to sell some hardware
and they have an actual business model outside of exploiting user data.
And I think that is why Apple is genuinely a bit better on privacy than some of the other
big tech companies.
So I bring this all to your attention because if Apple starts getting more into the advertising
space, which they already have started to dive a little bit more into, then I think their
kind of priorities could change over time.
Outside of the obvious annoyance of announcing ads in Apple Maps.
So this is just kind of a signal boost to keep an eye out on the direction that Apple is going.
On a topic of big tech companies exploiting users, they also exploit children.
There was this really big trial that was a big loss for these companies: Meta lost.
They argued that child exploitation was inevitable given their product, that it wasn't intentional, and that it's just how it works.
But they actually lost this lawsuit.
They plan to appeal, saying, quote, we will work hard to keep people safe on our platform.
Yeah, I'm sure they will.
That's why they do so many important things to keep children safe on their platform,
as well as everybody else safe.
Meta is a great net good to the world.
I'm being very sarcastic if you can't tell.
But YouTube was also found to be negligent in a landmark social media addiction case.
Funny enough, TikTok and Snap both settled with the plaintiffs
for undisclosed terms before the trial started,
while these two companies decided to proceed with the trial.
And here's where they are now.
It's nice to see these companies being held somewhat accountable,
but we can't just keep fining these companies.
That's what's happening.
Like, these companies exploit users, they exploit children, they exploit data.
And we just fine them a very small amount of money relative to how much money
these companies are working with.
And then nothing ever changes.
So I would really like to see more real fundamental change.
On the data breaches section, the European Commission has confirmed a data breach after
a Europa.eu hack.
So if you want to learn more about that, check out the show notes.
The Dutch police have disclosed a security breach after a phishing
attack on their networks. And so if you want to learn more about that, show notes.
There is a telehealth giant called Hims & Hers, which says its customer support system was hacked.
And so if you use this service, check out the show notes to learn more. Hasbro has said that it was
hacked and may take several weeks to recover. This is your reminder that these data breaches aren't
just about leaking data, but they're also about operations and keeping a company going and staying
afloat. And so when companies deprioritize security, they're not just deprioritizing their users and
their user's safety and their personal information, but they're even sacrificing themselves along the
way. Crunchyroll has confirmed a data breach after a hacker claims unauthorized access.
They're still investigating things and getting more information. There's actually very little
information that's established, so we will probably learn more. On to cars: Mazda has disclosed a security
breach exposing employee and partner data, and so if you've ever purchased a Mazda, you might be roped
into this. It impacted a lot of different people, especially if you're an employee, so make sure to
check that out. This one I really felt the need to share, because we recently talked about how there's
this new surveillance law that's going to require sobriety measurements in
all new vehicles in the U.S., and this is a very good reason why those kinds of things, or at least
the implementation, should maybe be reconsidered: hackers hit an Iowa company, and cars
all around the country failed to start. Driving after a DUI conviction can be a dicey experience.
Many states require drivers, if they want to keep using their cars, to install ignition interlocks
that measure alcohol levels before allowing the vehicle to start.
So there's this Iowa-based company called Intoxalock.
Intoxalock?
It's I-N-T-O-X-A-L-O-C-K.
So, you know, you can imagine where this is going.
You actually have to pay $70 to $120 a month for this device,
which is kind of insane.
There was a cyber attack,
and people were locked out of this thing for over 10 days.
It made calibrations impossible,
which meant that some users in each state
who weren't able to calibrate on time
were in danger of having their vehicles locked.
They said that 7% to 10% of users in Connecticut had been affected.
At the time of recording, it's been a few weeks,
and they have actually said that their systems resumed.
But I think that this is just a testament to how,
when we have these internet-connected devices,
and we don't think about the privacy implications,
we don't think about the security implications,
we don't think about the digital rights implications,
we end up in these crappy situations that really do harm people.
And I just want lawmakers, companies, and individuals
to factor these things a little bit more into their decision-making processes.
I'm not asking for that much, but apparently I am, because the next story is about Infinite
Campus, an information system used for K-12 students, which is warning of a data
breach following an extortion attempt.
So again, we snap our fingers and try to pass age verification to keep children safe,
but we don't really consider literally leaking children's information to anyone who wants
to pay a few dollars for it.
Might I remind you that in 2024, PowerSchool leaked 62 million students' information.
Okay, there are so many service updates, which is actually a good thing because almost all of these are positive things.
And so I'm really hoping that you guys like this section, and that it's a way to end the week on a positive note.
So we're going to start with Proton.
They released something called Proton Meet.
We've actually used this internally when we were communicating with a couple other team members.
And I can say that it worked really well even a few months ago.
And I did recently use it for a few meetings
because Cal.com does not work with Proton Calendar,
which is a very frustrating problem.
And so I haven't been able to use Cal.com video.
There was an open GitHub issue for it.
They said they'd fix it in August of 2025.
And I am using this platform to complain about that.
But Proton Meet is now a thing.
And that is what I was using as my backup.
And it worked quite nicely.
So if you want something like a Google Meet
slash Microsoft Teams alternative,
this one worked quite nicely.
It's end-to-end encrypted
and it's integrated quite nicely in the Proton ecosystem.
So I was pretty happy with this one.
On that note, you can also find the security model
if you want to learn more about how the security works
behind the scenes and the technical design behind it.
And that's also in the show notes.
And kind of as an extension to this,
I guess Proton is now really formally
presenting themselves as a Google Workspace alternative.
They're literally calling it Proton Workspace,
an encrypted suite for team collaboration.
And so they have some pricing plans listed here.
It seems like more of a formal switch of a toggle
to really say that, yeah,
we are trying to be a Google Workspace alternative
for companies.
So I found that quite fascinating.
Firefox also had a lot of updates this week.
There is now a free VPN that's built into Firefox,
which I think is super cool, actually.
It has 50 gigabytes of bandwidth, which is really awesome.
And it seems right now, just quick looks that I've seen so far,
it seems quite well thought out.
I will probably be doing some dedicated content on this in the coming days,
but so far, this seems like a very positive direction.
On another note, there is now split view in Firefox.
You can have two tabs side by side, which is much appreciated.
So I'm really excited to see this as well.
This is a feature that I use in Brave all the time.
In fact, if you watch our weekly live streams on Friday,
you'll see that this is actually the very feature I use
to have the live chat on the left
while I show you guys something on the right.
So it's cool to have that in Firefox.
And finally, there is something called tab notes in Firefox.
So you can leave a note on any page.
I don't know if this is one that I would personally use,
but I've definitely seen people who have more permanent tabs,
or who want to be able to add a note
and keep tabs on things a little bit better,
and that is an option you now have.
Tor browser hit version 15.0.8.
There is a full changelog if you want to learn more.
It seems like mostly minor updates,
but I still wanted to showcase that update.
Tails 7.6, the operating system,
now has automatic Tor bridges, which is pretty cool.
They have GNOME Secrets to replace KeePassXC.
They think it's a simpler interface
and it's better integrated in the GNOME desktop.
So that's quite fascinating.
For those wondering: yes, it seems like Secrets also supports the KeePass database format.
So you don't have to switch over to a different database;
you can still use a KeePass file.
It's just switching the app that opens it.
And there's other changes and problems that they fixed.
Since then, they did release version 7.6.1, which just has some more minor updates.
Matrix, the protocol, has released version 1.18,
which adds policy servers, invite blocking, and safety API updates.
I would have liked this when I was still using Matrix
because anytime I did anything with Matrix,
I would open it up and I would get like 10 invites on a daily basis
for people who just randomly would send me invites to talk to me.
So this would be very nice to have, and I'm glad they finally released that.
This is pretty quick here, but Ente, the photos provider,
they have some other stuff in their suite as well,
but they did a Rust crypto audit.
And so if you want to read the audit and see what that looks like,
they did do that.
This one I was really excited about. So Cryptee is an open-source, encrypted photo storage
provider, but also docs, in the same place. It's all web-based, and I'm pretty good friends with
the person who runs it, John Ozbay. And they released photo sharing. So now if you have an
album, you can share it with a link to somebody else so they can also access that album. It is
quite a fascinating approach they took to try to balance different priorities. And it is very
beautiful and quite seamless. And I already got an album from when I visited John. And I'll show
some photos on screen if you want to see those. So John actually sent me those photos with this exact
feature from his own account. So it was pretty cool how that worked. And I can say it worked
quite nicely, if any of you are Cryptee users. This is a good positive update for any of you using
lockdown mode. If you have a higher threat model and you're using an Apple device,
they just confirmed that they still have not seen anybody who uses lockdown mode
be hacked with spyware. So that's just more confirmation and validation that it does do
what it's supposed to do. And it's a feature that they continue to actually promote and make
updates to. Lockdown mode today is not the same lockdown mode they released when it was first
announced. And so this is quite exciting. And I'm really hoping that Apple and other companies are
inspired by it. Very quick update: if you're using Quad9, the DNS provider, they have enabled DNS
over HTTP/3 and QUIC globally, so they're expanding their encrypted options. If anything
you use can support these better options, I would say just migrate. It's an easy swap; you can do
it in a few minutes. Really quick shout-out to Linux, because Steam on Linux skyrocketed above
5% in March, which is awesome. GNOME 50 has dropped Google Drive integration. So if you want
to learn more about why that was done, check out the show notes. Kali Linux, the pen testing Linux
distribution has released version 2026.1 with Linux 6.18, a theme update, backtrack mode and
eight new tools. So if you use Kali, make sure to get yourself updated. There's something called
Glass UI, which is making a comeback on Linux thanks to KDE developers. So if you do want more
of this Glass-inspired theme, apparently it's being prioritized again. I'm curious whether that's
inspired by Apple and Liquid Glass, or whether it's just unrelated. But that is an update there for you.
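Circling back to the Quad9 item for a second: if you're curious what an encrypted DNS query actually looks like, here's a rough sketch of building an RFC 8484 DNS-over-HTTPS GET request with just the Python standard library. Quad9's https://dns.quad9.net/dns-query endpoint is real; everything else here is a minimal illustration, not production resolver code.

```python
import base64
import struct

def build_dns_query(hostname: str) -> bytes:
    """Minimal DNS wire-format query for an A record (RFC 1035)."""
    header = struct.pack(
        ">HHHHHH",
        0,           # ID 0, as RFC 8484 suggests for cache friendliness
        0x0100,      # flags: standard query, recursion desired
        1, 0, 0, 0,  # QDCOUNT=1; answer/authority/additional counts are 0
    )
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"  # zero-length root label terminates the name
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def doh_get_url(resolver: str, hostname: str) -> str:
    """RFC 8484 GET form: base64url-encoded query with padding stripped."""
    query = base64.urlsafe_b64encode(build_dns_query(hostname)).rstrip(b"=")
    return f"{resolver}?dns={query.decode()}"

# Fetch this URL with any HTTPS client and you get a DNS answer back in
# wire format; over HTTP/3, the transport itself runs on QUIC.
print(doh_get_url("https://dns.quad9.net/dns-query", "example.com"))
```

The nice part is that to the network, this is just ordinary HTTPS traffic, which is the whole point of encrypted DNS.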
The PineTime Pro is coming. For context, we had the PinePhone, which is an open source Linux device,
and it's a pretty solid device. People have overall said really good things about it. I
still have one sitting in my drawer that I need to review. Then they released the PinePhone Pro,
which I also have in my drawer that I need to review. And this is all, again, from Pine64.
I should have mentioned that earlier, but now they have the PineTime Pro with AMOLED,
GPS, and more. Quite fascinating. The GPS edition is quite nice, so I would like to look more into
this because it looks like a pretty compelling device. Okay, this one caught me very much off
guard. So a bit of context, Waterfox is a fork of Firefox that's meant to be kind of open source,
privacy respecting, more private by default than typical. And this is just called 15 years of
forking. Today marks 15 years of Waterfox. And it was a really cool project update. They talk about
kind of the evolution and they go through all of their logos. It's actually quite fascinating to see
their logo updates. I'm a big fan. They talked about when it was purchased by System1, and then about
reclaiming the project.
But really, the main update here,
they are going to be releasing Waterfox
with Brave's Adblock library,
which is crazy.
That's just a crazy overlap between two projects
that I really enjoy. They say that this
isn't going to be subject to the same limitations
as uBlock Origin. They say it's faster, more
tightly integrated, and it doesn't depend on a
separate extension process or require
them to constantly pull in upstream updates.
Brave's Adblock library is also mature.
It has paid engineers working on it,
a wide filter set, and crucially, it's licensed under MPL2,
which is the same license as Waterfox, which makes it a natural fit.
uBlock Origin, as good as it is, carries a GPLv3 license
that would have created real compatibility headaches.
So I'm actually quite excited to see this.
I've never seen Brave's Adblock library adopted by anybody else,
let alone a Firefox-based browser like Waterfox.
So I'm quite excited to see what this looks like,
and maybe it will inspire other projects to follow suit.
That, my friends, is the end of this week's surveillance report.
We had a lot of stories this week,
So thank you all for being patient.
If this analysis helped you reclaim control in your life,
you can become a Techlorian by visiting the link in the show notes.
You'll gain access to our exclusive communities.
You'll get key perks.
In fact, I have my own private RSS feed that I use to pretty much curate all these stories.
What you guys see is maybe like 70% of the stories that I collect throughout the week.
And so if you want to keep those stories updated in your own RSS feed
without needing to follow all the sources that I do,
you can get just a nice condensed version of that RSS feed if you become a Techlorian, all managed by yours truly.
And if you become a Techlorian,
you're also supporting this podcast
and keeping it free for everybody going forward.
That is much appreciated,
and it really couldn't exist
without all of our community supporters.
So thank you all.
If you don't want to support financially,
totally cool, I get it.
You can take a moment to at least leave a rating,
like this video on YouTube,
share it with friends and family,
especially anything that could help
improve their digital freedom.
Thank you all for listening,
and I'll see you in the next episode of Surveillance Report.