Exploring the Frontiers of Technology and AI
Josh:
[0:00] Welcome back to another episode of the AI Roundup, where we're going to talk
Josh:
[0:03] about the hottest news of the week, starting with the topic that everybody's talking about, Davos. The World Economic Forum is currently being held in Switzerland, where all of the top movers and shakers, politicians, and technologists are meeting together to discuss the future, what that looks like, what problems we're facing, and the solutions we are going to come up with in order to solve those. Now, at the core of a lot of these conversations is the conversation around AI, and all of the top people are there. We have leaders from Microsoft, Anthropic, Google, NVIDIA, Jensen Huang, Elon just got off the stage as we're recording this, and we are going to dive into the specifics about what the common themes are, because there are a few themes that keep coming up as these leaders speak.
Josh:
[0:44] And Ejaaz, that's where I want to start this conversation: with the core topics that these people are focused on, what they're worried about, and the solutions they see for how we're going to work through these next couple of years of navigating the world of AI.
Ejaaz:
[0:55] The first theme is one around job automation specifically. And Dario Amodei, CEO and co-founder of Anthropic, came out with a pretty bold statement. He says software engineering will be automatable in 12 months. And rather than explain it, we should just watch the clip.
Josh:
[1:12] We are now in terms of, you know, the models that write code. I have engineers within Anthropic who say, I don't write any code anymore. I just let the model write the code. I edit it. I do the things around it. I think, I don't know, we might be six to 12 months away from when the model is doing most, maybe all, of what SWEs do end to end. And then it's a question of how fast does that loop close?
Ejaaz:
[1:38] So obviously, Dario is behind the hit coding agent called Claude Code, which has just taken the world by storm. I feel like we've been speaking about it pretty much every single day. You and I have been toying around with it, trying to make a Limitless app. The point he's making here is that probably one of the most coveted jobs in the technological age has been coding. You know, the hope was: kids, learn how to code, figure out how to code, because that's the future, right? We've been told that for a while. And now we've had this clear, defining statement from the man himself saying that this might be a commodity in the future. In fact, you may not want to go to university to learn how to code. You will just use an agent in the future. Where the role will shift to is managing a series of different agents and prompting them in the right way.
Josh:
[2:22] Yeah, so I actually watched the full conversation that was being had in that Dario clip, in which he is having a fireside chat with the CEO of Google DeepMind, Demis Hassabis. And Demis kind of outlines for us the top things to look out for in 2026, the three breakthroughs needed for AGI. And it was funny hearing the differences in opinion between these two leaders, because Dario was very much in the camp of, we are going to reach AGI next year, in 2027. And Demis is like, well, I still think it's probably a 50-50 chance we get to AGI by 2030. And the reasoning was around coding and mathematics, because Claude and Anthropic are so good at that. And I think Google, they're working on a much broader approach to AGI, which is probably why the timelines look a little bit different. The three breakthroughs Demis outlines are continual learning, world models, and robotics. And he said it is possible that we get all three in 2026, but it is unlikely that all of those things are going to happen.
Ejaaz:
[3:17] I love this take personally, because it's been two weeks of 2026 so far and we're already seeing these three things emerge. Claude Cowork, Anthropic's latest product, basically Claude Code for non-coders, was built by Claude Code; 100% of the code was written by another AI. So that's kind of an example of this continual learning, where it's kind of self-improving. And then the world model stuff, I mean, Josh, your favorite company, or one of your favorite companies, Tesla, bakes this into their entire Full Self-Driving stack. So I actually think it's important to take Demis and Dario at their word when they talk about these different kinds of things. I like your point around Google focusing on a bunch of different things, like science, LLMs, video models, audio models, all that kind of stuff, versus Anthropic, which has been dialed in on coding. What I would argue with you, though, is that I think coding models might be the most important models to exist currently, because they're just the bedrock of everything that's going to be created on top of them, right? You can't get video unless you have some kind of coded, embedded algorithm that produces it, right? So I don't know who to back here, Josh, but my money's probably overweight on Dario for his prediction.
Josh:
[4:27] Okay, cool. I will take the other side. I'll be team Demis, because I really do think the world building and understanding of physics is a second piece of AGI that we don't quite have. And the common theme that I don't think we've discussed is that Dario and Demis did actually share something in common, and that is their wish for AGI and the progress of AI to slow down. And the interviewer frequently asked them, well, why don't you then? Why don't you guys just agree to work together and slow things down? And the answer is presented in this other clip from Dario that we have, where he mentions that selling GPUs to China is like selling them nukes. And he used the word nuke, which is such a crazy take. But in a way, it's true, because the reason they're not able to slow down is that if they slow down just a little bit, then China catches up. And while Demis and Dario can agree to compete on the same playing field, they cannot convince China to do so. And this is an existential race that they do not want to lose, and therefore are unable to slow down. And this was a point of contention throughout the whole interview, which is like, hey, you're saying it's going to replace these jobs, you're saying it's going to offload all the coding to agents, you're saying all these things that maybe we want to take some time to make sure we get right. Why don't you just slow it down? The answer is: you can't. And that was the thinking behind this quote from Dario's presentation, which is like, hey, we're selling these NVIDIA GPUs to China to help them accelerate; you can't expect us to possibly slow down. And that was the general theme from this section.
Ejaaz:
[5:55] I always thought that take was naively optimistic, right? Obviously it would be great to do things cautiously and build it in the right way, but that's just never the case. And actually, in the same fireside chat, Demis also said he's really good friends with the heads of most of the AI labs, rather, not all, but most. And he goes, if it was just Dario and I, we would slow it down together, if we just held the monopoly on the entire AI model market share. But, and he doesn't name names, but I'm guessing he's alluding to Sam Altman over at OpenAI and folks like that.
Ejaaz:
[6:28] They don't want to slow down either. So it's not even just the pressure from China, it's the pressure from other American labs as well. But AI on Earth wasn't the only topic of conversation. Of course, we also have AI in space, which Josh and I are newly formed fans of. And none other than Elon, who just got off stage speaking about this kind of stuff, made it pretty clear that he thinks the lowest-cost place to put AI will be in space, and that that'll be true within three years. So obviously he's feeling incredibly optimistic about SpaceX, which is rumored to be launching an IPO at a $1 trillion-plus valuation this year, which is going to be super exciting, arguably one of the largest IPOs ever.
Ejaaz:
[7:08] I think we're reaching a point, Josh, where the cost of going to space is going to be cheap enough, or affordable enough, to start training AI models out there. How are we going to do this? Well, we joked about GPUs being in space, but I think Elon's idea is setting up a satellite constellation, fueled by Starlink or other purpose-built satellites, where they would beam data at super high speeds between each other. And that's something that is incredibly necessary to train AI models in the first place. You don't just need compute, where you would get a lot of energy, in this case, from the sun directly, but you also need to transfer data super quickly, and traditional satellite links can't really do that. So his satellite network is probably the key to doing that. But there's a bit of competition coming his way, which wasn't something he spoke about at Davos.
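For readers who want a feel for why inter-satellite link speed is the whole ballgame here, below is a minimal back-of-the-envelope sketch in Python: it estimates how long a single full gradient exchange for a large model would take over links of different speeds. Every number in it, the model size, the precision, the link rates, is an illustrative assumption, not a figure from the episode or from SpaceX.

```python
# Back-of-the-envelope: time to move one full copy of a model's gradients
# between two nodes at different link speeds. All numbers are illustrative
# assumptions for the sake of comparison, not real system specs.

def sync_time_seconds(params_billions: float, bytes_per_param: int, link_gbps: float) -> float:
    """Seconds to transfer one full gradient copy over a link of `link_gbps` gigabits/s."""
    total_bits = params_billions * 1e9 * bytes_per_param * 8
    return total_bits / (link_gbps * 1e9)

MODEL_PARAMS_B = 500   # assume a 500B-parameter model
BYTES_PER_PARAM = 2    # assume fp16/bf16 gradients

links = {
    "conventional RF satellite link (~1 Gbps, assumed)": 1,
    "optical inter-satellite laser link (~100 Gbps, assumed)": 100,
    "ground datacenter interconnect (~400 Gbps, assumed)": 400,
}

for name, gbps in links.items():
    t = sync_time_seconds(MODEL_PARAMS_B, BYTES_PER_PARAM, gbps)
    print(f"{name}: {t:,.0f} s per full gradient exchange")
```

The takeaway from the toy numbers: at conventional satellite link speeds a single gradient exchange takes hours, while optical laser links bring it down to tens of seconds, which is why the constellation's inter-satellite bandwidth, not just its compute or solar power, decides whether orbital training is plausible.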
Josh:
[7:53] Yeah, big time. So it's funny hearing Elon's timelines, because that three-to-four-year window is kind of what we were looking at in the earlier episode we published this week, where he was talking about the AI chip progress at Tesla, where AI9 is going to be the chip that is finally cost-effective and physically able to exist in outer space. So it seems like everyone's kind of converging on this timeline of 2029, 2030 as when that tide is going to flip, when the cost per kilogram to orbit gets low enough to actually make this a viable strategy. So for the people building AI on Earth, there are still a couple of years before it is no longer viable.
Josh:
[8:25] Now, Blue Origin has come out with news this week saying that they want to compete in this space. Not necessarily AI and data centers just yet, but they want to compete directly with Starlink through a new service called TerraWave. Now, TerraWave is basically Blue Origin's answer to Starlink. They want to do pretty much the same thing with a network of thousands of satellites serving enterprise, data center, and government customers with high reliability, kind of creating a giant mesh network in space. The problem is that they don't really have the ability to do this yet. They're saying the TerraWave architecture consists of 5,408 optically interconnected satellites in low Earth orbit, reaching remote, rural, and suburban areas where diverse fiber paths are costly, technically infeasible, or slow to deploy. We've heard this story before; it's the Starlink story. Now, why is this interesting? Well, if there's one company on Earth that could compete with Starlink, it's Blue Origin, because they're really the only other company so far that has been able to get rockets into orbit. Now, they're expecting speeds of six terabits per second anywhere on Earth, which, if you're familiar with Starlink's current stats, blows them out of the water. I read that and I was like, wait a second, six terabits per second? That seems like a tremendous amount of data. And that's a theoretical hope for them; there are no serious plans to launch this at scale until after the initial release in Q4 2027.
Ejaaz:
[9:47] They're like two and a half years behind, maybe even more, Josh. I'm kind of like, this is an announcement of an announcement that I'm not going to care about even a year from now.
Josh:
[9:56] Yeah, and that's the thing: space is so difficult, and therefore competition moves so slowly. So even the second best to SpaceX is, I mean, when did they launch the first Starlink satellites? They have like a full 10-year lead before Blue Origin is going to try to catch up and launch its first satellites into space. So it's nice to see someone else trying. But again, the lead is so staggering that there's no world in which SpaceX does not continue its complete dominance.
Ejaaz:
[10:23] Question for you. On a previous episode where we spoke about Elon's satellite constellation, you said that no other company is even close, that they're probably a decade behind, and that they can't accelerate because you need all the regulations and permits, and you need to advance the technology. Seeing this announcement from Blue Origin, do you still believe that's the case?
Josh:
[10:42] Yeah, I still think it's so incredibly difficult to do this at scale, because the problem isn't actually getting rockets to orbit. The problem is getting them to come back and land safely, and to do so fast enough that you can continue to launch massive amounts of payload to orbit and decrease that cost per kilogram. Blue Origin just does not have the infrastructure to drive the cost per kilogram down to the level it needs to reach in order to do this at a sustainable rate and at the large scale they are hoping for.
Ejaaz:
[11:08] Okay, well, since we're on the topic of space, I want to talk about a really important event that's happening much, much sooner. In fact, February 7th, I believe. We're going back to the moon, ladies and gentlemen. The last time astronauts flew around the moon was in 1972.
Ejaaz:
[11:27] And in a few weeks' time, Artemis II is going to be the first crew back to the moon, in 2026. Josh, can you walk us through what's going on here? We see the launch plan right in front of us. You know, is this real? Is this fake?
Josh:
[11:42] It's the first time we're going to the moon in over 50 years. This is a big deal, and it's happening on February 6th, so I encourage everyone to watch. We will be talking about it right here on the show. It's an exciting mission. They're planning to send astronauts from Earth, take a loop around the moon, and then come back home over the course of about 10 days, during which time they will travel at a new speed record of 25,000 miles per hour, and they'll also be at the furthest distance ever from Earth. Now, we've sent humans around the moon before, but this mission takes a slightly more distant path, on which they will be even further out, by about 500 kilometers, from the moon's surface. This is a remarkably exciting mission. Now, there is something important to note about this mission, which is the cost. We just mentioned cost per kilogram to orbit and reusability, and how much that matters. The Artemis II program is going to cost about $4.1 billion. For reference, a SpaceX Falcon 9 launch is about $67 million. So there is still this remarkable difference in price between these two types of missions. But I think it's awesome to have a government-funded project actually doing something novel and new. We're getting new records that haven't been broken in 50 years. So while it feels like it should be more grand, this is a start. It's a step in the right direction. And it's so exciting to see astronauts actually
Josh:
[12:57] going into space. This is going to be sick.
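To put the cost comparison above in perspective, here is a trivial calculation using only the two figures quoted in the episode. The vehicles aren't directly comparable (very different capabilities and payloads), so treat the ratio as order-of-magnitude context rather than a like-for-like comparison.

```python
# Ratio of the two per-mission costs quoted in the episode.
artemis_ii_cost = 4.1e9   # ~$4.1B per Artemis II mission (figure quoted above)
falcon_9_cost = 67e6      # ~$67M per Falcon 9 launch (figure quoted above)

print(f"Artemis II costs roughly {artemis_ii_cost / falcon_9_cost:.0f}x a Falcon 9 launch")
# -> Artemis II costs roughly 61x a Falcon 9 launch
```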
Ejaaz:
[12:59] Yesterday, I thought my eyes were deceiving me, Josh. I thought I was seeing an illusion, because we actually have some breaking news, AI news specifically, from Apple, which is a sentence I never thought I would say this year or any year in the future. But ladies and gentlemen, Apple is reportedly working on an AI-powered wearable pin the size of an AirTag that will be equipped with cameras,
Ejaaz:
microphones, and many other sensors to act as kind of a third core device. If that sounds familiar, that's because OpenAI is kind of working on the same thing. We'll get to that in a second, but details first. The rumor says it'll be the size of an AirTag; it'll be thin, flat, circular, disc-shaped, and made of an aluminum-and-glass shell. It'll have two cameras specifically, one normal lens, one wide lens, to capture pretty much everything, and a bunch of mics to hear everything happening around you. So what this sounds like to me, at least from that description,
Ejaaz:
is that this is gonna act as kind of an ambient device, right? So it's just gonna sit there, it's gonna ingest a bunch of data, it's gonna act as your optical lens, it's gonna act as your eyes, it's gonna act as your ears, but it's also gonna do the brain work for you. Presumably, Josh, they're gonna be feeding this into their own foundation model, or into Google's Gemini 3 model. So I'm excited to see Apple make such a big move. I didn't expect to see it happen so quickly. This gives me a lot of optimism around AI hardware devices, because Apple is just the king of building consumer hardware devices. It makes me kind of pessimistic on OpenAI, though, because Sam Altman has said a few times now that the company they're competing with isn't Google and it isn't Anthropic, it's Apple. And I'm starting to see what he means. I think they project 40 to 50 million consumer devices sold in 2027. He's gonna launch his consumer device, or debut it, at the end of this year. If he does do that, he's gonna be going head to head with the king.
Ejaaz:
Josh, I know you're like a massive fan of Apple. Like, what are your takes on this?
Josh:
[14:54] Yeah, I'm a massive fan of Apple, but I'm also a more massive fan of Jony, because Jony was the design culture of Apple, and Jony is the person who is responsible for tackling the creation of this device at OpenAI. And it's funny for me to see the kind of convergence of form factors as we go through this process of figuring out what the next generation of compute looks like for AI-first devices. I mean, it seems like OpenAI and Apple are both doing a suite of devices. OpenAI has something that is maybe like earbuds, probably a pin, or a small
Josh:
thing-you-could-put-on-your-desk form factor. Apple this year is projected to announce this pin as one thing, a desktop kind of media console as another, possibly a ring doorbell, and also possibly glasses this year. So the convergence on these form factors is starting to happen. We're starting to see that no matter what, it's going to be a suite. There's not going to be a new iPhone; there's going to be a new distribution of smart devices, and the new product is the AI. What's happening is we're just developing these kinds of capsules for it to be stored in. Now, the interesting thing about the OpenAI hardware is that it's basically Jony Ive's first real product without Steve Jobs hovering over it, or without Apple's immune system kind of throttling the way it's designed and built. It's this experiment, and it's kind of a good data point to see whether that magic at Apple came from Steve Jobs, the psychotic, opinionated person, or whether it's actually credited to the incredible taste and form factor of Jony Ive. And what we're going to see now is the first opportunity for this person, who has designed so many products that we love today, doing so without the bureaucracy of a large company, without the man who is a crazy genius breathing down his neck. And, I don't know, we're going to see. But what we are getting is probably the best-case scenario for the consumer, which is one of the best companies and one of the best designers competing head to head on building this next generation of AI-first hardware. And for that, I am so, so excited.
Ejaaz:
[16:52] I mean, AI pins haven't had a good track record so far, but I'm optimistic about Apple's version. I have to say, though, Josh, I think Jony Ive succeeded because he had Steve Jobs breathing down his neck. He was meticulous and an A-hole for a reason, and I don't know if Sam Altman can live up to the hype. I don't know, a lot of people don't like Sam, so... maybe.
Josh:
[17:10] Yeah, no, and that's an important part. You can design the most beautiful product, but without the actual software running it, it's nothing. Like, without iOS, the iPhone is nothing. And Jony, he designed the visuals of iOS, but the core infrastructure, the reason it feels magical, is because a very hardcore engineering team built that. And if OpenAI can't come up with that, then regardless of how beautiful this product is, it simply will not work. And we're about to see that demoed in real time. So this year is going to be so, so exciting, because finally we're getting hardware from legitimate companies. This is not a Humane AI Pin that doesn't work. This is Apple making a pin, and it's going to integrate with iOS, and it's hopefully going to be an amazing experience.
Ejaaz:
[17:48] Well, listen, I mean, this hardware war isn't going to be fought on hardware alone. You need software and a software ecosystem to compete and win, and it seems like Apple's focusing on that as well, introducing, or rumored to be introducing, Siri 2.0, a new interface that's going to be introduced in iOS 27. Josh, I'm super excited about this, because I have to admit, I've had Siri turned off on my iPhones, plural, successively, because it's just been so terrible. What was originally pitched as a smart assistant was anything but smart. But now, with the new Gemini model powering it, it's going to aim to basically replace, or compete with, the likes of Anthropic's Claude or ChatGPT. But the best part, the thing that I'm most excited about,
Ejaaz:
Josh, isn't that I'm going to get some smart answer directly on my iPhone. It's that it's going to be connected to all my apps, dude. It's going to be connected to all my notes. It's going to be connected to all my photos. So yeah, it's going to get access to some really personal data, but that's data that Apple already has and that I already consented to and gave over to them. Now, if I can get a personalized agent Siri that I can call on, talk to, and say, hey, can you bring up that photo where I'm hugging my mom and we're in this really unique restaurant that has a water fountain in the middle, in India?
Ejaaz:
[19:04] It could just find that photo and bring it up, or even make a plan, an itinerary, around that exact same trip. I'm most excited about that because it reminds me of another company I'm really bullish on: Google. The reason why Google, in my opinion, is winning so far is because it has the distribution and the data. Instead of you having to download a separate AI app and connect it to another separate app like your Gmail, you now have Gemini in Gmail. Now you're going to have, ironically, Gemini powering Siri on your Apple iPhone. And that's two billion plus devices. That's just insane.
Josh:
[19:34] Yeah, they're hopefully delivering on the promise that was made to us two years ago with Apple Intelligence. I'm not sure people really fathom the fumble that was Apple Intelligence, because if they had actually delivered on those promises, and if they had created this all-knowing intelligence that lived locally on your phone, it would have been everything, it would have been everywhere. The world would have been adept at AI, because you would not have needed to download a blank text box; you would have had a million use cases right out of the box, because you use your phone every day and AI would have just handled everything in a much more impressive and powerful way. So I am praying on everything that they can finally figure it out this year and roll this out, so that the billions of users on iPhones can start to understand the real power that lies within AI without even needing to realize that it's AI, without needing to go to ChatGPT and pursue this text
Josh:
[20:21] box. It just shows up, it feels like magic, and it makes your life better. I am praying that they can get it done. But we also have more AI news this week, particularly as it relates to Claude. Now, Claude, and Anthropic in general, have been taking over the world. We recorded an episode on them last week, and we have some small updates, Ejaaz. So what was most interesting to you this week that we didn't discuss in our previous episode, which walked through Claude Code and Claude Cowork with live demos and how you can actually use them in your day-to-day life?
Ejaaz:
[20:48] Yeah, well, what I'm showing on the screen here is the brand-new product called Claude Cowork, which is meant to basically be Claude Code, but for all the non-coders. So for anything that is not software-engineering oriented, you can now have a chatbot and ask it to do stuff for you. You can connect it to your desktop or your browser. But Josh, that's not what I want to speak about today specifically. I want to reference it because it's taken the world by storm, and, most importantly, it's put a lot of fear and second thoughts into some of the top companies in the world that are building software products. The main question they're asking themselves is, well, if I have a coding model that can create software on the fly that is personally attuned to my taste and also my company's taste, then why do I need to be paying all these SaaS companies money to get access to their software? I can just create and run my own. And I don't want to show you any kind of thesis or anything; I'm just going to show you the stock performance over the last week, Josh. I mean, we're in the double digits, and not even the low double digits here. We're looking at losses of up to 17.65%, 16%, 15%. And these are companies that had a 10 to 15x forward multiple. So it's quite obvious what's happening: people are realizing the effects of some of these coding agents and coding models, and realizing they're good enough to replace some of their best software engineers. We mentioned it earlier, Dario said at Davos that within six to 12 months,
Ejaaz:
software engineering is going to be fully automated. So in a world like that, software becomes a commodity, and then the whole structure of investing around SaaS companies, where typically the model is that you invest in a SaaS company and it burns a lot of money but acquires a lot of users, and the thought is that in a few years' time we're going to monetize those users, does not make sense anymore. It's a risk not worth taking if you can just create software for next to nothing with a prompt.
Josh:
[22:35] Yeah, the world leaders are getting on stage and telling you this is going to happen. The stock market is showing you this is going to happen. Every sign in the world says this is going to happen. So what are companies doing about it? Well, we have Macrohard, which I think is probably the solution to this problem. The problem being: if you have a SaaS product whose moat is the data, whose moat is the interface and the difficulty required to onboard people into it, then that is not a real moat, because Macrohard is planning to build this software on a per-prompt basis, hyper-customized to the end user. The idea is that if you have a task you need done and you can outline it clearly enough, you can have that software generated for you and your entire company in less than a day, with a few prompts, at a fraction of the price. And any SaaS company that revolves around a very specific, narrow slice of software will just be rendered not useful at all, because we'll be able to generate these through these advanced large language models. So that is what we're seeing with Claude Code, and I'm sure as that begins to evolve and continue, it will start to absorb more use cases, and over time these SaaS companies are just going to continue to fall apart. In fact, there's another example from Claude this week that I loved, which kind of affects us, which is video production skills. Like, dude, what the hell is this? Stay in your lane, stop coming over here. Claude Code now has an agent skill that allows it to produce videos. Ejaaz, have you seen these demos?
Ejaaz:
[23:56] Dude, I just need to show you a demo that I saw that blew my mind. This was a promotional video from Polymarket, and it is something slick. I used to work at Coinbase, and there used to be a team of 15, I'm not joking, 15 humans, that would take at least a month to create a video like the one you're seeing on your screen right now. And apparently, according to this dude, he created it in 30 minutes using this new tool that you just referenced, using four to five prompts. It's just insane. Like, how does this work? Is it just Claude Code with video?
Josh:
[24:24] Yeah, and it looks like a full commercial. It's really incredible. And all this is is Claude Code, but with an added skill. Now, a skill is a custom kind of dataset, a custom prompt that is fed into the system and explains how to generate videos in a way that is effective, useful, and in the promotional style you'd want them to look. And now, suddenly, because this one extra tool has been installed into Claude Code, you are able to generate videos that look so, so good. And this is using the same exact model stack. It's not using Opus 5.0 or anything that doesn't exist yet; it's just expanding on the existing software. And that's what we're going to see over time: not only will the models get better, but the use cases people design around them will continue to improve. And this was a really fun example, I thought, of how easy it is now for video editors to actually offload their job to an AI.
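For anyone curious what the "skill" Josh describes actually looks like in practice: Anthropic's Agent Skills are essentially folders containing a SKILL.md file, YAML name/description frontmatter plus markdown instructions, that Claude Code loads when a task matches the description. The sketch below scaffolds a hypothetical video-production skill in Python; the skill name, folder location, and instructions are invented for illustration and are not Anthropic's actual video skill.

```python
# Scaffold a hypothetical Claude Code skill directory. The SKILL.md format
# (YAML frontmatter with name/description, then markdown instructions) follows
# Anthropic's documented Agent Skills convention; the content itself is made up.
from pathlib import Path
from textwrap import dedent

skill_dir = Path.home() / ".claude" / "skills" / "promo-video"  # assumed location for personal skills
skill_dir.mkdir(parents=True, exist_ok=True)

(skill_dir / "SKILL.md").write_text(dedent("""\
    ---
    name: promo-video
    description: Produce short promotional videos from a written brief, using scripted scene templates.
    ---

    # Promo video workflow (hypothetical example)

    1. Read the brief and split it into 4-6 scenes of ~5 seconds each.
    2. For each scene, write a one-line visual description and an on-screen caption.
    3. Render the scenes with the project's video toolchain and stitch them together.
    4. Output a single MP4 plus a shot list the editor can tweak.
    """))

print(f"Wrote {skill_dir / 'SKILL.md'}")
```

The point, consistent with Josh's comment, is that nothing about the underlying model changes; a skill is just packaged context and procedure that the agent pulls in when the task calls for it.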
Josh:
[25:10] Like, it started with writers, then coders, now video editors. Who's going to be next? I don't know. But it's so serious that LM Arena, which is on screen now, has developed an entirely new arena just for testing video models. And now you can go on, try it, and see which video models are the best. And Ejaaz, we haven't really gotten a new video model in a while, so I'm sure we're due for another mega breakthrough as it relates to video. That's something to look out for in the next couple of weeks to months.
Ejaaz:
[25:33] Josh, a question as someone who helps produce this show: is this a tool that you would use now, or try out?
Josh:
[25:39] Yeah, for sponsors in particular, we do custom assets for sponsors. And a lot of times the visual asset element to it takes the longest amount of time. And if we can create small assets that look as good as they do there, then that's a huge quality of life improvement for everyone.
Josh:
[25:54] And the quality of the ad even gets better. So it's a win-win all around.
Ejaaz:
[25:57] Okay, we're moving on to probably one of my favorite companies in the AI space, Google, who just came off one of the most impressive weeks of AI software launches ever. To put this into context, Josh and I made an entire dedicated episode on three AI product launches that Google made last week. And as soon as we hit end record, they released another product, which is something that we're going to cover right now, today, called Personal Intelligence.
Ejaaz:
[26:28] This is what we've been predicting Google would make a move on ever since they created the Gemini model, and that is an AI model that is trained specifically on all the data that you give Google. Now, I'm not just talking about stuff that you type to Gemini and the conversations that you have with it. I'm talking about everything that's in your Google Drive. I'm talking about your Google Photos. I'm talking about your Google Maps geolocation. I'm talking about the Google Docs that you use for work, your Google Sheets. I'm talking about your Google search history, your YouTube history, and all the videos that you've watched. Imagine the type of model, or multimodal model rather, that can be formed around all your tastes. It understands what you want. It'll be a much more intuitive model, and that's what Google's Personal Intelligence is aiming to be. You see, them releasing Gemini for Gmail and then releasing Gemini in Google Maps were just baby steps toward building this all-encompassing model. And Josh, we've spoken to a bunch of the Google heads that are building this. We had Robbie Stein from Google Search, who spoke about this overarching thesis they have at the company, which is that it's not just one team and their AI product, it's the entire company feeding into this one unified Google model. And having something that we can use personally, on our phones, or that even traverses across to when we open YouTube or when I open Gmail, is something that I've been really yearning for. Like, how do I get ChatGPT into all the things that I use? We now have the onset of that.
Josh:
[27:54] Yeah, and there's this trend that I'm seeing with Google that I'm not really seeing at any other company, which is that they're just shipping products that the average person would want to use, that would improve their quality of life. It's like, they have all the data, they have all the information on you, so let's just dump this into the LLM and make it supercharged, so now it has all the context. This is essentially what the Apple Intelligence promise was, right? Where Apple promised to give it access to all the data on your phone, Google is doing that, except for all the info in your G Suite. And it's this really amazing thing. It's going to create hyper-customized intelligence. It's a far superior experience for those that opt in. You do need to opt in, so if you are interested in doing this, it's a manual process; it won't happen automatically. I would advise trying it out. It's very cool, so long as you're comfortable with giving your data to Google. And this was only one of two new announcements that came out of Google this week. The second one is that if you are a student, or if you have a son or a daughter or someone who is a student...
Josh:
[28:48] Gemini has something for you, and this is a testament to who Google is starting to target in the market, which is everyone. You can now create standardized practice exams in the Gemini app, and I think this is so cool. If you are studying for the SAT, that's the first version that's rolling out: you say, hey, I want to take a practice SAT exam, you hit enter, and it gives you a full test with questions, with the correct answers, with what you got wrong and why you got it wrong, and it creates this ultimate study guide for a student who is working on passing standardized tests. So Google is just firing on all cylinders. They are creating really valuable stuff for everyone across the board. And I actually took a practice SAT exam last night to test this. And oh boy, I took the math section, and I did good. Well, I don't know how good I did. I mean, I did good on the SAT back in the day; on the current SAT, I left a little bit to be desired there.
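The practice-test feature Josh describes lives inside the Gemini app, but for readers who want to prototype something similar themselves, a minimal sketch against the google-genai Python SDK might look like the following. The model id and prompt wording are assumptions, and the real in-app feature presumably layers on scoring, explanations, and a proper test format.

```python
# Rough prototype of "generate me a practice SAT math section" against the
# Gemini API (google-genai SDK). Model id and prompt wording are assumptions;
# this is not how the in-app feature is implemented.
from google import genai

client = genai.Client()  # reads the API key from the environment

prompt = (
    "Create a 10-question SAT-style math practice quiz. "
    "For each question give four answer choices, mark the correct one, "
    "and add a one-sentence explanation of why it is correct."
)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model id; swap in whatever is current
    contents=prompt,
)
print(response.text)
```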
Ejaaz:
Okay. So no going back to school anytime soon, Josh?
Josh:
I'm not taking the SAT today, I'll tell you that.
Ejaaz:
[29:44] Okay, and then one final bit from Google, which is a rumor, but I have a feeling it's going to be true: they might be spinning out their custom chip arm. For those of you who haven't been caught up yet, Google hasn't trained their models on NVIDIA's GPUs; they've had their own in-house chip called a TPU. And we've gone back and forth on this show, kind of guessing whether they might eventually do this and go head-to-head with NVIDIA. If this rumor ends up being true, this might be the most direct confirmation that they are doing exactly that, which then makes me think, okay, if Google is going to be the future of AI science, the future of AI LLMs, the future of AI video models, you know, to create on YouTube and create all these different things, and then they're going to be integrated across every single app that we use, Gmail, Google Maps, and the stuff we spoke about just now, and also sell the infrastructure, then the valuation multiple on Google right now would be incredibly underpriced. Now, I'm skeptical as to whether they'll be able to pull this off, but it's a hint that this might be the future Google is aiming for. And it seems like it's the entire pie; they don't want to share with anyone.
Josh:
[30:52] Yeah, it's been pretty amazing to see. I think my hottest take right now is
Josh:
probably that xAI and Google are just going to battle it out to the end. And those are going to be the two top dogs actually in the race for that frontier model, and both companies are heading full speed ahead, as are we, as we navigate the frontier of AI and technology and all the crazy stuff that's happening, from Davos to rumors at Apple to all the cool new Google features. If you enjoyed this episode, please do not forget to like, subscribe, and share it with a friend. That is always very much appreciated. Leave a comment, and subscribe to our newsletter, where we write about this twice a week, most times early, so you can get the early jump on the stuff that we're going to talk about in the actual episodes. But with that said, I think that's everything, so we're going to wrap up the show. Thank you so much for watching. Another amazing week on Limitless, another amazing AI Roundup, and we will see you guys in the next one.