Limitless Podcast

Amazon AI glasses, robots replacing 600,000 jobs, OpenAI Atlas, Anthropic Claude desktop app—here’s your rapid-fire Limitless AI Roundup. We debate whether Bezos’s glasses supercharge drivers or quietly train delivery robots, and why we both uninstalled Atlas (security, censorship, and UX deal-breakers). Inside the $10K-per-model trading battle, Quinn and DeepSeek surge while Grok and GPT stumble—and we call who’s likely to win next. Plus: Karpathy’s “AGI is ~10 years out” reality check, NVIDIA’s puzzling “GPUs in space,” XAI’s Grokipedia and steerable feed, and DeepSeek’s OCR breakthrough that 10× compresses long docs—stick around for the Sora invite code details.

------
🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------
TIMESTAMPS

00:00 Amazon's Big AI Moves
08:02 AI Trading Winners and Losers
12:19 OpenAI Atlas Browser
18:38 Claude Browser
20:25 GPU's In Space??
24:01 Andrej And The Great Capex Bubble
31:29 Grokipedia
37:56 Deepseek OCR

------
RESOURCES

Josh: https://x.com/Josh_Kale

Ejaaz: https://x.com/cryptopunk7213

------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures⁠

Creators and Guests

Host
Ejaaz Ahamadeen
Host
Josh Kale

What is Limitless Podcast?

Exploring the frontiers of Technology and AI

Ejaaz:
[0:03] Welcome back to the Limitless AI Roundup. It has been a jam-packed week this week. Amazon is apparently releasing their own AI glasses, and they're replacing 600,000 people with robots, Josh. But that's not all. We have six Frontier AI models that were each given $10,000 each. And let's just say some are performing better than others. OpenAI released a brand new AI browser called Atlas, and there's been some mixed opinions, mainly negative. But also Anthropic is joining them, releasing their own desktop and web app,

Ejaaz:
[0:39] and much more to cover in this week's episode. Josh, why is Jeff Bezos releasing AI glasses?

Josh:
[0:46] So I think there's two headlines here, right? There's the $600,000 jobs that are going to be removed and then there's the glasses that they released in the intermediary the glasses are interesting we're seeing a video here on screen of how they work because i think they really they augment these drivers in a way that makes them so much more efficient each as if you've ever seen one of the ups drivers or the amazon drivers they have this like big remote and they're scanning all the packages they're like checking the screen and they're trying to find all the inventory glasses definitely make people much more efficient and much more effective it's funny to see amazon jumping in the ring because it the display looks almost as good as the meta glasses and they assumedly have spent much less money on them than meta actually has i think what's interesting about this and what's what's particularly cool is if you're an amazon driver, you are probably now much more efficient which means you can probably deliver either a a lot more packages per day or b you could just finish your job earlier in the day and just get everyone the packages earlier because, What Glasses do is they just, they remove the need for this third party device and they're just much smarter. You could see as a delivery driver picks up the package, it immediately knows where it is. It has a map overlay of where you're supposed to be going. It's just turning delivery drivers into super delivery drivers, which I think is a really interesting dynamic. And I love that it's Glasses. I mean, Glasses very obviously is the perfect form factor for this. You're giving me a funny look. What do you think about the Glasses for Amazon?

Ejaaz:
[2:08] Okay, I have a slightly more pessimistic take, which is, and I shared my thoughts earlier when they made this announcement. This is just one massive fakery, in my opinion. Amazon, in my opinion, is launching these smart glasses under the guise of making their process of delivery and fulfillment more efficient. But I think what they're ultimately going to end up doing is using a bunch of this data that these glasses pull for them, all the visual data, all the proficiency data, and they're going to feed it to an army of robots that are going to replace these workers. Now, maybe that sounds a little dystopian, but, you know, they did have another major headline this week, which was basically their plan to replace, what was it, 600,000 workers with robots. So I don't think it is too crazy of an ask. The optimistic side of me is excited about this, and it is yet another proving point that maybe the future form factor of AI glasses, sorry, of AI are going to be glasses. You've got Meta that released their Ray-Ban displays recently. You've got the Apple Vision, new Apple Vision Pro with the M5 chip that got released in the last week. And now you have these. It makes me confident that it's going to be some kind of visual form factor. So maybe this is Amazon's attempt to kind of join that. But I think maybe this is like a guise to feed robots, Josh. Do I sound kind of crazy? Do I sound like a You know, should I be wearing a tinfoil hat?

Josh:
[3:34] Like, as you're saying this, I'm questioning the technical logistics of how that works, because in order to do that, you need to upload a lot of that footage and data back to a main server. So I'm thinking, do these glasses have 5G chips in them? Probably not. Otherwise, they would be twice the size. Do they have enough battery life to roll cameras all day long and then upload back to the servers? I'd be curious to know the tech specs on whether that's even possible. It does make a ton of sense that Amazon wants to collect as much data as possible. So yeah, absolutely. If they could do it, I'm sure they will do it and I'm sure it will be massively beneficial when training robots on how to do the genre films.

Ejaaz:
[4:07] I need your expert opinion. Is this real? This kind of looks fake now that I'm looking at the demo. Is this like a mock-up or is this real? Based on how crappy

Josh:
[4:16] The overlay looks, I'm actually going to say it is real. And they're just, they have a camera pressed right up to the lens and they got a really nice focal thing going. But we'll see. It's one of those things that like it's an early testing now. We'll see how it goes. We'll see the second order effect. Yeah.

Ejaaz:
[4:27] It's interesting because we, I don't think we have. We haven't spoken about Amazon in AI at all. And actually, Jeff Bezos, founder and former CEO, has been super and outwardly critical of AI, saying it's all just a bubble whilst he, whatever, drinks champagne from his yacht. And recently this week, he's kind of been leaning into being pro-AI and pro-robotic, saying that, you know, most people are going to be living in space by 2045. I saw that crazy headline. And now, you know, Amazon making the push to replace a lot of their workers with robots, which might be like considered physical embodied AI. I don't know where this ends up. It sounds a little far-fetched. There was something within this New York Times article which mentioned that this is going to happen over eight years. Josh, you were saying before we started recording that that sounded pretty slow to you. I would agree. See, I don't know how much of this is just a clickbait headline versus like actual change.

Josh:
[5:25] Yeah i imagine it's probably going to be actual change when you think about i wrote a newsletter about this last week if you haven't subscribed to our sub stack go subscribe because you could read all about this but the idea of robots replacing human jobs and you could think of like the work stack as this like vertical thing and on the ends are the humans in the middle is the robots so as these simple things get replaced in the middle these are things that are like very repetitive they are very simple they are very pragmatic and the steps that are required to do them. Those get replaced by robots. You can think moving packages around a warehouse, scanning things, dropping things off. Those are very easy to replace. The things at the end, the goal setting, the taste making, the actual quality assurance and making sure it works. Those are humans. Because we have all the sexual leverage with robots, you need less of those humans in the loop. But this increased output of these companies yields a lot more new opportunities for humans, like data labeling or data collection or whatever these new job sets may be. So there's a lot of different ways to look at it. There is no doubt in my mind that a lot of these autonomous, like seemingly autonomous jobs will be replaced very easily by robots. We'll see exactly how much and how they do it. But yeah, Amazon, they're getting in the ring. I'm happy to see this.

Ejaaz:
[6:37] Whatever theory it ends up being, it's quite clear that the top companies are heavily investing in some form of eyewear to combine AI into. I'm showing a tweet of Samsung releasing something called Galaxy XR this week, which, you know, Google is advertising because it's using Android. And it looks pretty slick. I saw a side-by-side of this against Apple's new Apple Vision Pro, Josh. And dare I say it looked super cool. I think they maybe even have copied a bunch of Apple's features. But the point is, a bunch of these companies are super focused on AI hardware. And it's interesting to see like all different facets of like social media companies like meta all the way to fulfillment companies, delivery companies like Amazon doing the same thing, embodying the same type of hardware.

Josh:
[7:22] I thought you were going to say that they looked as good or better than the Vision Pro. And I was about to jump through the screen.

Ejaaz:
[7:27] No, no, no. Strangle me. I think you're right.

Josh:
[7:30] Like this is, I mean, again, another data point of the inevitable that glasses are one of many form factors in the future, but one that has a lot of utility.

Josh:
[7:36] Now, we have a lot of topics we need to get through, Jess. We should probably start moving along. One of these is I'm very excited to talk about, which is, can we talk about a trading competition from the episode earlier this week?

Ejaaz:
[7:45] Please. So I believe a victory lap is in order.

Josh:
[7:48] Earlier this week, we had an episode about AI models and they were trading. They were each given $10,000 of money. Each one went along on their way and decided what would be the best series of trades to make to make the most amount of money. Well, we placed our bets. We now have some results from just a few days later. I mean, granted, it's only been up for a week, but we have some clear divides here. And Quinn's in first place. And EJ, as if you remember, just from a couple of days ago, someone might have guessed Quinn to be in first. And there it is, the underdog at the time they were pinned right as like zero percent is now in first place because you know why they just they took leverage trades on my favorite coins which are bitcoin and ethereum and now they're at the top look so it's nice to see that currently yeah yeah that's to see that's a good position that's a winning position you just want to be long the majors and make a ton of money but i do find it equally if not more funny to see the bottom half of this chart particularly open ai um just getting absolutely clobbered they're down what 25 percent it's a no.

Ejaaz:
[8:45] No they're down 75 josh

Josh:
[8:48] Oh that's a 27 2700.

Ejaaz:
[8:51] Oh they're getting cooked

Josh:
[8:53] Okay so were we right about that i think one of us guessed that open now would be in last because yes obviously yes um now that you're looking at this after a couple days of digesting what are your thoughts on this at this trading battle.

Ejaaz:
[9:04] Okay, so I'll give you just my stream of conscience right now.

Ejaaz:
[9:08] Number one, didn't see Quan rising to the top so quickly. You're right to say that it takes a more conservative set of trades. I'm showing it on the screen right now. It only has one position compared to most of the other models having like four or five positions open at any one time. And it's 20x long, the major, the most obvious asset, right?

Ejaaz:
[9:28] You know, why trade other assets when you could just go higher leverage on the most obvious and confident asset? and it seems to be a strategy that's paying off for it. So that's number one, it's doing really well. Number two, Josh, DeepSeek seemed to have reacted super well to the volatility in the market over the last couple of days. When we filmed our episode talking about DeepSeek and Grok, they were both in the lead. In fact, DeepSeek had the number one position and then markets shifted and it looks like Grok didn't react quickly enough or it reacted in a worse fashion than DeepSeek. DeepSeek adapted really well. It understood that the markets had shifted sentiment had shifted, and it started trading better. And it's still in number two, not too far behind, when it's about a $1,200 difference right now, which is, you know, it's not unavoidable, like it could still end up winning. So I'm impressed with the adaptability of the DeepSeek model. It kind of doesn't surprise me, because the model was created by a quant fund in China, but impressive nonetheless and it's open source. So if any of you listening are kind of technical forward and you want to kind of like try giving DeepSeek some money, you technically could spin up your own version of this.

Ejaaz:
[10:40] I'm not going to lie. I'm surprised, but also not surprised to see Grok now currently 1.5k in the hole, Josh. So as we said, like it started with 10k and now it has $8,500.

Ejaaz:
[10:53] If I look into this, I was looking into this. Let's see if I can click this and look at the positions that it currently holds. It's short all the majors or like two of the biggest majors, ETH and BTC, which seems to be the biggest error that it made over the last day. Like at one point, Josh, it had an account value of $15,000, which would have put it currently in the number one position. But at some point, I think it switched to being short when the market's actually just shot up and now it's down in money. So there's a lesson there for all traders like that. I feel like I've traded like crock in the past. So that's pretty funny. And then yeah, like you said, Google and GPT, I won't lie, like these are like the smartest frontier models in everything but trading. So it's super funny and kind of surprising to see them so low. Like Gemini as well is down there with GPT. It's lost, what is it? 6,500 bucks from the initial 10K. Absolutely terrible.

Josh:
[11:49] Well, there's an update for the people who are watching, please.

Josh:
[11:52] I would love to know who you think is going to win after the next 10 days. This isn't the only update we have this week. We have another update on a previous episode about the OpenAIS browser, Atlas. A couple days in i have some takes either i wonder if you have some takes i i posted this morning i i deleted it it's gone it's not even on my computer anymore i'm done atlas oh it's gone it's gone completely off my computer and okay it didn't even take i think you asked if i was going to have it like a month from now it didn't even take like four days yeah it's gone the reasoning behind it i have a couple i want to share quickly just as an update one of them being the most annoying is that when you use the browser's functionality when you use chat gpt within the browser it gets added to your chat history and added to your memory and all stacked on which seems good in theory but.

Josh:
[12:40] But now my really nicely elegant prompts in my ChatGPT app are muddled with like, how do I cook this thing? Or how do I clean this stain out of this shirt? And it's like a bunch of my stupid Google searches that are just dumb, like looking up restaurants or I'm looking up whatever. Now they're just cluttering my very elegantly curated, thoughtful prompt set, which I really don't like. So that was one thing. the second thing was is i don't really see when i went to reach for a browser it was never atlas because anything i wanted atlas to do i would just reach for chat gpt the app and i'm just more comfortable with the form factor i have everything set up in chrome i started getting annoyed because i had to log into all my stuff again i wasn't getting any value and i think at the end of the day there's nothing that atlas can do that the existing stack cannot so therefore i am back on chrome back on safari and just using my chat gpt app and if i want to use the agent feature i just click it from the drop down on the desktop and that is that so i admire them for trying i think a lot of people will do this this is like a a good signal they're trying to lock in more of their user base but for someone like me not for me we'll try again on the next product okay.

Ejaaz:
[13:50] I have had the same outcome as you josh i deleted it as well i don't i don't use it

Josh:
[13:55] Oh wow for two in one.

Ejaaz:
[13:57] Week that's tough for two but for a slightly different reason there's this thing that this

Josh:
[14:02] Tweet that.

Ejaaz:
[14:03] I have currently on show highlights, which is something called prompt injections. This is where basically if you're running an AI agent on the OpenAI browser, you mentioned it as agent mode. If you go onto a website which secretly has a hidden prompt

Ejaaz:
[14:20] Might be malicious. So for example, the prompt might say, hey, tell this guy's AI agent to drain his bank account and send it to my account. That could end up being a serious security issue and a major security flaw. And in my opinion, Josh, agent mode is the coolest thing and the only reason why I'd want to use this browser. Memory is cool, but I think it needs a few more iterations to get better, to get super personal. But aside from that, I want an agent to go do things for me. I wanted to go book tickets for me. I wanted to go spend money from my account to do the shopping. But if I can't trust that agent to navigate to the right website and not get exploited, I'm not willing to give it the keys to my accounts and stuff. And so in fact, it gives me the opposite side, which is like, I'd never want to use a browser that might potentially do this for me. And so I deleted it. And I'm not the only one that kind of shares this thought. I've got a tweet from Ryan here, which says, you know, this is one of the more terrifying things that I've seen, he's highlighting another issue that he's seen with this browser, which is coming around censorship, Josh. So in the screenshot that he's quote tweeted, it's this guy saying, oh, I'm sorry, I thought this was a web browser. And it's a screenshot that he shared of, you know, looking up something that most people would probably think is a little bit controversial, but nevertheless, information that should be available on the internet.

Ejaaz:
[15:37] And a GPT response, I can't browse or display videos of this because it's against my guidelines. And whilst it's not content that you may particularly want to engage in, the whole point of the open internet or the internet that we know and use today is you're able to access that information. What you do with that information is up to you, right? But you should have access to it. It shouldn't be censored to the level of Sam Altman's tastes, likes, and dislikes. So these were kind of like two major bullets in the coffin for me in particular, where it doesn't encourage me to use the internet to use OpenAI's browser if it's not going to give me the full experience that I'm already used to using Google Chrome. I would rather spend an extra couple of minutes using Chrome than using this agent.

Josh:
[16:22] Yeah, there's two parts. There's the prompt injection thing, which I'm not sure is as big of a deal as people are making it out to be. I mean, the agent feature has been around in ChatGPT now for a very long time. And by using the agent feature, it is spinning up its own web browser. It is susceptible to prompt injections. It's not unique to the browser experience. It's unique just to an AI using the web, the open web. There's a lot of security that's been gone into defending against that and i would imagine it's probably pretty good if not great and also there's a lot of safeguards preventing the ai from actually doing things on your behalf when we gave the demo it kind of verifies that you do a lot of the tasks that are being requested the other thing that seems scary is that second point that you made about the not allowing specific information that very much rubs me the wrong way where it's like i i want a browser to access the open web a browser is a tool and a tool should be neutral. And I want it to be neutral as I'm on the web. I want it to serve me what I want instead of telling me what I can and can't have. And this is kind of kingmaking what works, what doesn't. It starts here. Soon, they won't let you. I think I saw an example where they won't let you access the cloud API via the browser. It's a very slippery slope of creating the sandboxed place that for people who don't care, will probably happily oblige. But I mean, that's like a very scary place to be. And these tools should be open, they should be neutral. And this is very much a step in the wrong direction.

Ejaaz:
[17:44] Yeah, I was gonna say it's all a growing trend of AI labs trying to own the entire vertical stack. So what I mean by this is, if you're an AI lab that created a really good model, it could be Anthropic with Claude, as I have shown on the screen right here, or OpenAI with ChatGPT, they want to own the user wherever they are, right? Using their own equipment, using their own software. That's why OpenAI released the browser. That's why Perplexity released their own browser. And that's why Claude right now has a new desktop app, Josh.

Ejaaz:
[18:16] And this is kind of similar to the new browser in the sense that Claude can kind of like be on your desktop. It can have access to your documents. You can instruct it to do a bunch of things. The people that have been calling praise over this update has mainly been software developers, or at least that's what I've seen so far, because now they don't have to open up kind of like a separate tab with clawed code in it. They could just bring code to their own terminal that they're tapping away on their keyboard or computers at home, which is cool. The other cool part is they have a web plugin now as well, Josh. So similar to the OpenAI Atlas feature where you have an agent that can edit all your docs in real time, you need to use the OpenAI's browser for that. But this one's slightly different in the sense that if you're using Chrome, you can just add Claude as a plugin now, and it's there as a chatbot that can help you edit your docs and doesn't require you to download and open up a new browser. So if anything, I kind of prefer the approach that Claude has taken. I don't think Claude is as good a model for me personally and for the stuff that I do as OpenAI, but it's cool nonetheless. And again, another point that people just want to own the entire stack.

Josh:
[19:26] Yeah, the OpenAI web browser could have been an extension. They just want to lock people in. And that's the sad reality of it. I would have much preferred an extension that's just kind of there as a companion. It's basically my chat GPT app in the browser that sits there ambiently. They went for the whole browser experience. It was just kind of a miss. So browsers in general, we're not huge fans. I think that's safe to say. Maybe we could change the topic. Can we talk about space stuff?

Ejaaz:
[19:48] I really want to talk about space stuff. This looks like an Onion article, Josh. What is this?

Josh:
[19:52] There's this like absolutely outrageously dumb topic that i can't believe nvidia is doing of all companies nvidia is the most valuable company in the world run by jensen huang who is brilliant and for some reason he thinks it's important to allocate resources to sending gpus into outer space this makes absolutely no sense to me you just look at this if you're just looking at this video it's like okay see how large this is that is a.

Ejaaz:
[20:18] 16 square data center

Josh:
[20:20] Yes this is a 16 square kilometer piece of space junk that costs millions of dollars to get into space that they cannot be cooled. So a lot of people think outer space is cold. Outer space is not cold. Outer space has no atmosphere. It has no liquid. It has no way of actually removing heat from these GPUs. So the reality is if you have something hot in space, it stays hot in space like forever. It does not cool because there is no cooling. So where are you going to get your cooling from without a heat sink that is literally 16 square kilometers wide it leaves a lot to be desired the cost of getting mass to orbit, huge and it is definitely far more expensive energy with solar okay that's that's cool but like this is this just doesn't make sense i don't understand why all the resources are not just piled into making the best data centers on earth and why they want to send stuff into outer space do you have any contradicting opinions here like does this seem reasonable.

Ejaaz:
[21:14] I am digging to the depths of my brain to come up with a contradictory take and i'm going to give it to you josh because i share a lot of the opinions that you have but here's my low iq takes because i'm not going to try and argue space infrastructure. I'm not an expert. But Jensen Huang is the richest man in the world. And he has technically the most valuable company in the world. He's at the helm. So he probably knows something that you and I don't when it comes to setting up infrastructure. He knows a thing or two about creating a GPU and scaling data centers in some way. So maybe this is as far-fetched as when Elon started SpaceX and said, I'm going to reduce the cost of going to space by to 128th of what it is today right and it was crazy back then and now it's kind of something that makes sense and people didn't quite understand back then that's one take the other take is i is it easier to dissipate heat when you're in space i feel like if you've got nothing surrounding you you just kind of radiate that that stuff that might be an incredibly dumb take but it's i'm running on fumes here i i can't argue for this in any way it seems ridiculous There's no atmosphere,

Josh:
[22:18] So it wouldn't work. The SpaceX argument, there was like this... Physics-based thesis that if we can create rockets that are reusable we can then lower the cost to orbit and that makes sense with this it's like yo we're putting we're putting gpus in space,

Josh:
[22:38] okay well why like can you please explain to me like what what like how is this.

Ejaaz:
[22:43] Better to lower energy cost by 10x dude and reduce the need for energy consumption on earth that is the opening title this is a

Josh:
[22:52] Very non-optimistic non-accelerationist take we need a like a ton of energy on earth and we need to focus on that problem we can't offload our problem to space to run big solar arrays we need gpus here we need data here now so i'm just like all right maybe this is just a fun little experiment nvidia's running cool that's fine but i i don't have high hopes for this in the future let's move on to andre let's move on because what we're looking at is this guy's name is andre carpathy for people who don't know he is just a godfather of ai he's brilliant he helped work on the tesla autopilot team he helped join open ai and build a lot of their critical infrastructure he is probably one of the most valuable researchers in the world but what he has done instead of taking a multi-billion dollar offer at any of these labs is he just decided to go build an education company and he wants to build a school and he wants to educate people on how AI works and just training people to learn better. So much admiration. He's well respected in the space. He had a recent episode with Dara Kesh Patel. It was a podcast episode. Andre rarely goes on podcasts. So when he speaks, people listen, myself and Ijaz included. And he had a lot of varying takes, a lot of them a bit more doomer than I think people would have liked to have heard. So Ijaz, you have the post open. Do you want to kind of walk us through exactly what he said and why it was a bit controversial. So...

Ejaaz:
[24:16] To emphasize, Andre is really well respected in the AI community and he's become kind of like a godfather, like you said. But this episode got a lot of controversy and attention around it because he kind of broke the illusion that a lot of people have around AI being this world-changing technology that's going to arrive tomorrow. So one of the first points that he makes is AGI, which is this super intelligence, the ultimate form of AI that's going to be way smarter than humans and can help us make a bunch of different discoveries, is actually probably a decade away and is not going to arrive in the next couple of years, as so many of us have already thought. And the reason why this is important is, well, it's for a few reasons. Number one, Andre claims that waiting a decade isn't actually waiting a long time. He said that the progress that we've made in AI so far since GPT-2, which was only like three years ago to where we are today, has been absolutely astounding. And so waiting for an extra decade isn't really that much of a change, which I kind of agree with him on here. It just breaks a lot of people's hearts. That was super excited by the AI 2027 thesis, which was highly popularized, which stated that, you know, we're going to get some form of super intelligence in the next couple of years.

Ejaaz:
[25:32] I will give credit to Andre here. He's held this belief and prediction for a while now. In fact, I think a year and a half ago, which is a long time in the AI world, he said, we're not going to get it for another decade. And he stood by this. And he has a lot of proving points in this episode where he speaks about, there's a bunch of things we need to figure out, like data reconciliation, data creation. He's talking about finding new architectures to build models, the current existing architectures. He criticizes as being not good enough to get to superhuman intelligence and a bunch of other things, which I'm not going to belabor about. But that was number one. Josh, before I move on, do you have any takes on this? Do you agree with him? Is it crazy? Is it far-fetched?

Josh:
[26:11] Yeah. So I think this was controversial, but not because of anything being factually incorrect. I think it's controversial because it is at odds with what the industry requires in order to move forward. All of his takes were wonderful. They had really great examples. They made a lot of sense. A lot of people actually agreed with them, myself included. after i was kind of learning and getting up to date with what he was talking about he was poking holes at things like reinforcement learning which is how we've gotten a lot of the recent improvements how challenging and difficult it is with things like the reward function and how they just don't really have good ways of improving on that yet it's currently the best thing we have but the crux of this i guess the reaction from it was because the entire ai industry is reliant on, massive amounts of capex on building these data centers and as a result they have to convince basically the whole world to get on board and to give them money and to give them infrastructure to build bigger data centers to get close to AGI. And this is at odds with what Andre is saying is like, hey, we will get there, but it is going to take far longer than what you think because X, Y, and Z reasons that he laid out in the episode.

Josh:
[27:14] And if you are a company like Meta or OpenAI or XAI or any of the large labs, this is not what you want reality to be. You want the reality to be, no, we are going to get AGI in the next, it's going to be measured in months instead of years. It's going to be massively profitable. We're going to make a ton of money off of this. And the investors who are giving everything to building these are going to expect a return. In the case that it does take a decade, that means there is going to be a tremendous amount of spending for a very long time with no promise of an increased amount of revenue outside of, I guess, spinning up new products like the browser, like Sora in the case of OpenAI. So these things are at odds, but Andre's takes were very pragmatic. They made a lot of sense. And I think it's seemingly probable that this is correct. So now it's a matter of how much do people care, A. And b is like how how big are these repercussions actually like does it matter if agi takes 10 years are we going to be able to productize and profit off of these models faster probably but it just creates this really interesting conversation that i think some only someone like andre could create because he was the guy in the trenches he has been building these systems he is like very deeply technical in the world of ai and understands how this works much more than most other people.

Ejaaz:
[28:25] I don't really get what people are complaining about when it comes to the AI CapEx side of things. It wasn't too long ago where people didn't believe this tech would be good enough to replace them at their jobs. And now you've got, you know, a bunch of threatening replacements happening over the next like six months, right? Meta just laid off like 600 of their workers, I think, this week. So here's my take on the AI CapEx thing.

Ejaaz:
[28:49] None of these AI labs promised to have these data centers that would scale to superintelligence up in the next six months. It's been pretty clear that Elon Musk's Colossus 2 is going to take a couple of years. All the Stargate efforts from OpenAI is going to take a bunch of years. In fact, they quote decades, right?

Ejaaz:
[29:07] So, or a decade at least. So it's going to take a while to even get the power and energy that we need to train said superintelligence. So that alone tells me it's going to take longer than a couple of years, right? That's number one. Number two, I think that there are going to be other use cases to feed with this energy until we get to super intelligence. You mentioned Sora. I think there'll be a number of next generation AI apps, whatever the next AI version of Facebook is, whatever the next AI version of a chat messenger app is, that will use a lot of this compute in the meantime. It'll train on a lot more people's data, more people that use OpenAI's browser, for example, data will be kind of like congealed into this new thing that we'll use in existing types of products, whether it's phones, whether it's glasses, whether it's new apps. So that's the other side of the thing. And the final point, which Andre repeatedly says in this podcast episode, is he doesn't believe the final architecture or design of an AI model is there yet. Typically, most of the front AI models are transformer-based. He repeatedly says that he thinks there's going to be some different type of architecture. He doesn't have any idea what that might be. That will take us to that next level. So I think it's kind of like a kind of finger in the air. Hopefully we get there. And he doesn't have the clear answer here.

Josh:
[30:26] It's a it's a reorganization of timelines of what people perceive, when we will reach AGI, when we will reach different milestones. A really great episode, highly recommend. But moving on to the next topic, which is Grokipedia.

Josh:
[30:40] This is a word that I'm not sure I ever thought I would say out loud. But Grokipedia is, from my understanding, this is the truth-seeking version of Wikipedia, brought to us by the XAI team. So Ijaz, what is Grokipedia?

Ejaaz:
[30:53] You just summarized it for me. It is literally Wikipedia, but instead of having distributed network of humans or internet workers sourcing the information and writing the articles who have a lot of biases, who have a lot of opinions that they feed into Wikipedia articles, you have what is supposedly meant to be an unbiased, logical, straightforward AI model. In this case, it's Grok sourcing all the information, doing all the analysis, checking all the truth sources, and writing what will hopefully be a Bible of truth for the internet to use.

Ejaaz:
[31:30] I have a few thoughts on this, Josh. So firstly, Grokipedia is meant to be releasing today. So keep an eye out. If you have the Grok web app, apparently the logo and icon has already surfaced, as this tweet suggests. So you might be able to use it maybe after this episode goes live. So some thoughts is, I think this is Elon's personal pet project. I think he's been very vocal in the past around hating Wikipedia because I think they slandered him before and he has a lot of political leanings when it comes to the Wikipedia stuff. He thinks that it's being run by some extreme woke left sort of types. But I think that this is his personal project to kind of combat that misinformation. It's been a kind of personality trait that's very close to Elon. The other more optimistic side of this is I like that he's trying to provide truthful information in as efficient and as smart a way as possible. That's what AI is meant to do. It's why he acquired Twitter and renamed it X and his whole kind of vision behind this. So I'm hoping this is another consumer-level app that we can all enjoy.

Josh:
[32:33] Yeah this is it's it's a refactoring of the open source model that they wanted like open ai i wish named open ai to be open source grok intends to be open source but i think major labs are learning that they it's a technical disadvantage to be open source when you are in this race so grokipedia is very much a way of publishing their research i believe grok 5 the idea is that it's going to be maximally truth-seeking in order to do so it needs to remove the non-truth-seeking things from its data set unfortunately when training on wikipedia a lot of these things just are not true or they are very heavily swayed in one direction and if you are maximally truth-seeking that is a problem so the xai engineers they have to come up with a solution to find truth to seek whatever the closest thing to truth is in a lot of these examples and they created this massive data set that i assume is manifesting itself as grokipedia so now their training set for grok5 becomes a public good for the world and i think that's pretty cool where the idea is that if you are familiar with community notes if you've used x and you've seen people get corrections a lot of that type of check and balance will be integrated into grokipedia so it is not 100% maximally truthful, but it is the closest thing that we will have, hopefully, in theory. And we'll see it when we get a chance to play around with it.

Ejaaz:
[33:44] And that's not all that Grok is cooking up this week. I have a tweet opened up here where Elon says, the X recommendation system, that is the algorithm that fuels everyone's posts and timeline, is evolving very rapidly. We are aiming for deletion of all heuristics within four to six weeks. Grok will literally read every post and watch every video, 100 million of those per day, to match users with content they're most likely to find interesting. So the point he's making here is instead of the algorithms that you and I are very used to, whether we go to YouTube, whether we go to X or whatever social media site, which feeds us or chooses what types of content to surface to our eyeballs, he's going to hand over part of the reins to the user themselves. So let's say you're scrolling on your timeline and Josh, you're a huge car enthusiast. I'm making this up. I don't know if you are or you're not. I know you love Teslas and you want to see more Tesla content and you're not getting enough of that. You can simply go to your Grok AI model and say, hey, like I would like to see more Tesla stuff, maybe around software specifically, software updates. If you could surface any tweets that are relevant to that culture or news trend, that would be super cool. And stepping away from this a second, I think this is the first ever instantiation of major social media company giving the user the reins to do something like this. Now,

Josh:
[35:10] Social media platforms like Instagram.

Ejaaz:
[35:12] You can kind of like signal what kind of content you find interested in. They usually prompt and say, hey, did you like this content? By the way, you can do the same on YouTube signaling with likes, dislikes, with maybe some of the comments that you make, but there's never been a clear way to get access to the algorithm and the information that it's feeding you. I like this evolution. And I think this is ultimately a step that a lot of the social media companies will take because at the end of the day, they just want to surface the right content for you. And if you can help do that by explaining in literal English words to an AI model that this is the thing that you want, it'll be incredibly useful to you. Josh, do you have any similar takes, different takes?

Josh:
[35:50] Yeah, I'm not sure someone like Facebook would want this because they get a lot of benefits from just telling people what they want and serving these like very procured algorithms. I love this as setting a precedent for X. I also think the most interesting part of this is that they will read and watch every video which is 100 million posts per day that's a tremendous amount of data that is new and original that can be used then to train grok 5 models or even the grok 4 model to improve it so the the data capture game i think is is really important and then the shoot your own outgo is amazing i'd love to just if there's a hot topic today today all about cs go skins this is like my little like nerdy fascination where the market tanked from like four and a half billion dollars to two billion dollars overnight i want to learn more about this. So in that case, I'd go on X and I'd type it into my algorithm generator. I'd say, I want to see all of the news about CSGO. And then it will surface those things in a procured way. So I think that is amazing. I would love to be able to dial down certain things, dial up certain things. I'm really excited to try this. It's one of those things where I can't wait for them to try. There's another thing.

Josh:
[36:51] I also can't wait to try, which is our last topic of the week,

Josh:
[36:54] which is DeepSeek and their new paper about OCR. OCR is basically the ability to read visually from a photo or an image. And this seems dumb, but I kind of want to explain it a little simply. So Ijaz, I'm going to use you as my test subject. When you are consuming text or when you are consuming words, like how do you, you read the words with what like you see them with your eyes so you're you're looking at an image and you're perceiving the words then you're putting into your brain but with a large language model when you're reading words you are just getting tokens that are representative of portions of words without any visual cortex so you could imagine ai right now is a blind person and that blind person is like maybe there's like braille and it could feel the braille and it understands the words but it doesn't actually see the page that it's reading and to me it makes sense that ai would more closely emulate the way humans work similar to the way we have full self-driving cars now where if a human can see the road they can make a decision you could train a camera to see the road and to make a decision the same way a human would and it seems like this is just a much more efficient way of training these models and there's so much more throughput possible when you remove the.

Josh:
[38:11] Of a single text-based token. So what the breakthrough is today is that, well, now you can actually create these visual tokens based on visual elements. And it feels a bit like we're getting closer to the final form factor of the reality. It should be like photons in, photons out. Like with humans, we see things and then we can like output things. And it's not restricted to that text field. We have the full vision complex. What do you think?

Ejaaz:
[38:35] Yeah. So if I were to kind of summarize what you just said, Josh, into a really simplistic form, I can basically now feed an AI model images of information that contains maybe a large portion of information that is incredibly hard for the human mind to ingest in a second. And an AI model can just take all that information and suddenly use that to answer any kind of like prompt that you might have, right? And we've mentioned something similar around this called the context window, which is basically how many tokens or characters can I fit into one single prompt to an AI model so that it has all the information it needs to answer the request that I'm making of it. And I think the highest that we've seen so far is maybe 1.5 million, maybe 2 million context window. If you switch to this OCR model and this breakthrough that DeepSeek's made, you can 10x that to 20 million tokens, which is just an insane amount of information. And I'm showing you an example here on this tweet where some guy says, at 4 a.m. Today, I just proved that DeepSeq's OCR model can scan an entire microfiche sheet and not just cells. So he's got some cell information here and retain 100% of the data in seconds. That is just a crazy breakthrough. So you go from like being able to feed an AI model with a ton of information that might take an hour into seconds, which just makes the whole kind of game more efficient, more affordable, and way more powerful.

Josh:
[40:02] Yeah, it's a matter of compression. I'm reading this thing on the post that it's a DeepSeq OCR crushes long documents and division tokens with a staggering 97% decoding precision at 10x compression ratio, meaning it can take all of this data and just compress it into one single image. And it just compresses it further and further and further down. And Andre had a take on this, right? Because I mean, Andre, who mentioned earlier, AI godfather, he is also a big fan.

Ejaaz:
[40:27] Yep he loves this so he actually goes to the extent of saying especially as a computer vision nerd at heart who is temporarily masquerading as a natural language person i find this really funny because andre for so long has been the proponent of llms like we said before he's the godfather of modern day ai for him to say something as bold as you know what this new vision model could be the future is quite prominent, in my opinion. And I like, out of his entire explanation and excitement around this, Josh, he goes right at the end, I now have to fight the urge to make an image input only model for AI. So that would be a model where people just communicate with the AI via images and not words, which would be super weird, but something cool.

Josh:
[41:14] This is so exciting. This feels like we're getting the step function improvement. Like this has the ability to really be a large unlock on that path to reaching AGI or just building much better AI in general. It makes sense. It's like, why are we restrained to language when we have so much more modality? And this is very much a step towards more modality, more compression, more data, more diverse token sets. It's just a really nice progress towards what we want, which is just new efficiency gains, fun new technology.

Josh:
[41:41] And I think that's probably the theme of this week's episode. That's it. That's everything. Those are all the topics we had. A lot of them, but I think we covered them sufficiently. You just any final thoughts before we part ways with the lovely viewers?

Ejaaz:
[41:52] Nope, nope. That is it. And speaking of lovely viewers, there have been a lot more of you this week. I think on one episode alone, we gained a thousand new subscribers. Firstly, welcome. Thank you for joining. We're glad that you enjoy the content. Let us know what you like the most. We had a really exciting video on AI models trading tens of thousands of dollars and making a hell of a lot of money and losing a lot of money, definitely go check that episode out. And for our new and old loyal viewers, just want to remind you that there is the opportunity to get access to OpenAI's Sora app, which is their new Instagram Reels TikTok competitor. It's super cool. It's super exciting. And they've just released a bunch of new features this week. It's still invite only, but the Limitless guys have you sorted. We have a code. And in order to get one. All you need to do is subscribe on YouTube and maybe give us a five-star review or whatever star review you want on any platform that you listen to us and just DM us proof and we will send you a Sora code.

Josh:
[42:51] Listen to me. If you're listening to this right now, you're early. You're very early. We're doing well. You're telling your friends we're growing. We are scaling. It's amazing. So thank you for being here early. We are taking note. We appreciate every comment, every nice word, every share, every like, everything, all the support. So thank you so much. Again, that is going to conclude our week of content this week. We're back next week with some new fresh hot topics that might not even be out yet, but we are going to be here to cover it all the way through. So thank you for watching. It's been another great week and we will see you guys in the next one.

Ejaaz:
[43:19] See you guys.