Hola Mundo

In this episode, we tackle the mixed reactions to Microsoft's Recall feature, which continuously captures screenshots of user activity and has raised privacy concerns. We'll also explore Apple's ambitious Gray Matter project, which aims to embed generative AI tools into core apps, enhancing functionalities like text summarization and voice memo transcription.

We'll debate the limitations and advantages of Google's AI capabilities, discuss preferences for tech brands, and compare the cautious approach of Apple against the fast-paced advancements of Microsoft and Google. Mauricio will share insights into the challenges Google faces, while Sarah will defend Google's Gemini over ChatGPT.

Stay tuned as we dissect the generative AI race, speculate on potential winners, and discuss the integration of AI into everyday tech.

Chapters
00:00 Skeptical of using Google or AI search.
07:52 Google introduces AI-overviews in search results.
15:26 Opus: smart Claude model with big context.
18:49 Anticipation for Google's generative AI features grows.
25:57 Microsoft's OpenAI partnership reshapes AI landscape.
32:19 Generative AI can have infinite information.
38:34 Apple partnering with OpenAI for Siri updates.
43:13 Emoji search feature saves time, boosts efficiency.
45:37 Apple focuses on enhancing user experience, technology.
51:47 Intuitive AI integrated into iOS improves efficiency.
58:07 Microsoft, OpenAI, Apple, LLM partnership speculation.
01:02:23 Smartwatches may be key for generative AI.


Speakers

Sarah Mousa Chikhani is a digital enthusiast who keeps a keen eye on the ever-evolving landscape of search engines and AI-driven platforms. With insightful perspectives on the competition between tech giants like Google and emerging technologies like ChatGPT, Sarah brings a nuanced understanding of their respective challenges and user experiences. She highlights the reliability and extensive user base of Google, juxtaposing it with ChatGPT's innovative, yet sometimes less accurate, predictive responses. Sarah's commentary encapsulates the complexities and nuances of navigating the digital information space.

Mauricio Moreno Jure-Cejin is deeply immersed in the world of technology, working diligently across different departments with a strong focus on coding. He is particularly captivated by the rapidly evolving landscape of generative AI tools, both those that are currently making waves and those on the horizon. Mauricio sees Google as a key player striving to lead in this innovation race, and he eagerly anticipates the integration of advanced generative AI features within Google Workspace. In his daily workflow, he heavily relies on Google Drive, Google Docs, and Google Meet, which makes him especially excited about the potential enhancements that a generative AI assistant could bring to these tools. With a keen eye on technological advancements, Mauricio continues to contribute his expertise and enthusiasm towards the ever-evolving digital realm.

Alejandro Hynds is a tech enthusiast who has closely observed the shifting landscape of search engines and artificial intelligence. Though Google Search was once his go-to for solving various queries, Alejandro has noticed a gradual decline in its effectiveness. Despite the advancements in AI-driven platforms like ChatGPT, he remains cautious about relying on them entirely due to the potential for inaccurate information. His ongoing quest for reliable information often leads him back to Google, albeit with increasing frustration over its inefficiencies. Alejandro's experiences reflect the broader challenges users face in the evolving digital age, balancing traditional search methods with emerging AI technologies.

Carlos Vela Canul is a tech enthusiast known for his insightful views on the evolution of search engines. He believes that Google's supremacy as the world's leading search engine is due to its ability to provide correct answers most of the time. Carlos acknowledges the potential pitfalls of integrating artificial intelligence into search engines, such as the risk of AI 'hallucinations' leading to incorrect information. Carlos underscores the reliability of Google, contrasting it with AI models like ChatGPT, which can sometimes produce less dependable responses. His perspective highlights the ongoing challenges and opportunities at the intersection of search technology and artificial intelligence.

What is Hola Mundo?

**Podcast Description: Hola Mundo by DaCodes**

Welcome to "Hola Mundo," the official podcast from DaCodes, where we dive deep into the dynamic world of technology, disruptive innovations, startups, and software development. Hosted by industry experts and DaCodes' very own tech enthusiasts, "Hola Mundo" brings you the latest tech news, in-depth discussions on groundbreaking technologies, and insightful conversations with startup founders and tech leaders.

Each episode of "Hola Mundo" is crafted to keep you at the forefront of the ever-evolving tech landscape. We explore how emerging technologies are reshaping industries, delve into the stories behind successful startups, and provide practical insights for software development and agile methodologies. Whether you're a tech professional, a startup enthusiast, or simply passionate about innovation, "Hola Mundo" is your go-to source for staying informed and inspired.

Join us on this exciting journey as we decode the future of technology and innovation. Subscribe now and be part of the conversation that shapes tomorrow's digital world.

Tune in to "Hola Mundo" – Where Tech Meets Innovation.

Mauricio Moreno Jure-Cejin [00:00:00]:
Welcome back to another episode of the Hola Mundo podcast. This is our podcast from DaCodes. We are a software development services company, and today we are going to talk about what is happening with the generative AI race. We are your hosts. My name is Mauricio.

Alejandro Hynds [00:00:21]:
My name is Alejandro.

Sarah Mousa Chikhani [00:00:22]:
I'm Sarah.

Carlos Vela Canul [00:00:24]:
And I'm Carlos.

Mauricio Moreno Jure-Cejin [00:00:26]:
And we all work at DaCodes in different departments. We are all amazed by what is happening with the generative AI tools that we are all seeing right now and the ones we haven't seen yet. So we are going to start with Google. I think Google is the company that is trying to get ahead of the game, but everything is going wrong for them. From my point of view, what I am waiting for the most from Google is to have generative AI features in Google Workspace. In my day to day, I use Google Drive, Google Docs, Google Meet, everything from Google Workspace, a lot. But still, we don't have a cool generative AI assistant inside Google Workspace.

Mauricio Moreno Jure-Cejin [00:01:34]:
And what happened is, more than one year ago, Google announced generative AI features in a video, a really cool video. You should watch it. We will paste the link here. That announcement was in March 2023. They made a cool video where generative AI was writing emails for you, building Google Sheets for you, financial statements, and presentations with Google Slides. The Google assistant was making pictures. I mean, you could just tell the assistant to do a presentation about a coffee shop, and the AI made all these pictures of a coffee shop, of coffee, of food, everything.

Mauricio Moreno Jure-Cejin [00:02:40]:
So that happened in March 2023, but we still haven't seen that in Google Workspace. I don't know what you think about that.

Alejandro Hynds [00:02:53]:
I think it's very confusing why Google hasn't been able to win this space. Obviously, with Google Search they had a huge edge there for years, and they seem to be losing that edge in terms of search. I don't think AI has totally replaced Google. I still struggle when I'm trying to figure out how to do something. Maybe something that's, like, sensitive, I don't know, maybe changing the oil in my car. I think it's much easier to ask ChatGPT or some other AI. But I never know if they're going to hallucinate an answer and tell me to do something that can then cause damage. So I find myself going back to Google, but it's very inefficient.

Alejandro Hynds [00:03:33]:
I go there and I'm just like, why am I still going through regular Google Search to find something that I know can be there? But of course, these things where Google hallucinates and tells you to eat one rock a day or whatever still make me very wary of using Google or using AI to solve daily tasks or do searches. But it feels like Google should be winning that space and they just haven't been able to. I think there are issues with the culture internally at Google. Maybe some people would call them woke issues, things where, because they've tried to avoid biases within Google Search or Google AI, they've had tons of issues with the model hallucinating answers that are just off. Right.

Alejandro Hynds [00:04:21]:
So I'm very confused as to why Google isn't winning, but they clearly aren't.

Sarah Mousa Chikhani [00:04:25]:
I actually disagree, because I feel like Google has a bigger challenge than ChatGPT. When ChatGPT introduced itself, it was just, ask any question and it gives you, well, a predicted answer. But Google has a bigger user base. It's the go-to website for finding facts. And I feel like ChatGPT sometimes makes up information. On the other hand, with Google you have a lot of options and you need to find the answer yourself, so it's a bit harder because you need to search for it. But ChatGPT, sometimes I feel like.

Sarah Mousa Chikhani [00:05:11]:
Or Gemini or Bard. Well, Gemini is the rebrand of Bard. I feel like it makes up answers. I don't know what you think.

Carlos Vela Canul [00:05:23]:
Yes. From my point of view, I believe that one of the things that makes Google the best search engine in the world is that, not always, but most of the time it gives you the right answer for everything. So if you implement AI into the search engine right now, maybe it's going to hallucinate and you're going to get wrong answers. And I understand what Alejandro is saying. You are trying to find some answer. You know that if you look for that answer in ChatGPT, it could be a wrong answer because it's hallucinating. But from Google, you are expecting to get the right answer.

Carlos Vela Canul [00:06:00]:
So right now, I don't know. Maybe it's too early to add these kinds of features into Google, because AI is moving so fast right now that Google is trying to catch up with companies like OpenAI and Microsoft, and I think they don't have enough expertise in this kind of area with AI. So I believe they need to move more slowly, because, you know, if they rush this, they can lose credibility in the engine.

Mauricio Moreno Jure-Cejin [00:06:32]:
I have a theory. Well, I have two. One is a theory and the other one is almost a fact now. For one, from the regulations point of view, Google is a lot more in the public eye. Google cannot afford to do anything wrong. Like what happened when they announced their image generator, the competitor of DALL-E: Google was getting all these pictures wrong. People asked Google to generate a picture of the founding fathers of the US, and it made founding fathers, but as if they were Indian, African American, Chinese, and, I mean, the founding fathers were white, right? So things like that happened with Google. And then what is happening right now? Well, this week it just happened.

Mauricio Moreno Jure-Cejin [00:07:52]:
At the last Google I/O, Google announced what they call AI Overviews, which is a very risky move in my opinion. What AI Overviews does now in the US is, when you search something in the Google search box, the first segment of the answer is no longer the blue links or the sponsored blue links, as it has been for more than a decade now. The first segment of the answer is what they call an AI Overview, which is a generative-AI-generated answer to the question you asked in the Google search box. And that is also becoming a failure or a flop, because the Internet is full of these people asking Google things like, write a good diet for me, right? And Google says that you should eat stones every day. A lot of stuff like that is happening right now with AI Overviews. So, yeah, from one side, Google needs to fight a lot more with regulations, more than OpenAI and the small new AI companies.

Mauricio Moreno Jure-Cejin [00:09:32]:
And on the other side, and this is my theory, well, it's not a conspiracy theory, but my theory is about OpenAI, or the fact that ChatGPT is still the smartest model: the model that best understands what you ask it to do, that gives better answers, that seems to reason better than all the other models. My personal theory is that they discovered something when they launched GPT-3 or 3.5. Inside the black box of their models, they added something, something small or big, something different. They are doing something different that no one else is doing, and that is the reason it's still not open source, right? Everyone criticizes OpenAI because it is closed and they don't want to say how their model really works and blah, blah, blah. But I think that is because they really have something different.

Mauricio Moreno Jure-Cejin [00:11:07]:
They are doing something different. That it is. It's the secret for why GPT, it's better than the other ones, right? And so, well, that, that's my theory, but it might be, it might not be true, it might be true, but it is amazing how like Google haven't been, hasn't been able to have a good model or a good, like Bard was a failure. Gemini. I think it was a little bit better, but still, it's not like cha GPT. I want to ask here, Sara, why is she still using Gemini instead of cha GPT? Or I don't know if you are currently using now cha GPT, are you still using Gemini or are you using cha DPT now?

Sarah Mousa Chikhani [00:11:58]:
I'm still using Gemini. I prefer it over ChatGPT. Well, I work in marketing, so for the specific things that I look for in these language models, I feel like Gemini produces more interesting and surprising content. Whenever I read something, I can tell it's made with ChatGPT because it has overused vocabulary, and I don't feel that with Gemini. I feel like it gives me a certain point of view. And another big advantage: it gives me versions from the get-go, like drafts to retry and retry. With Gemini, you ask for something and you get like three versions.

Sarah Mousa Chikhani [00:12:51]:
And from that I go on, you know, so I feel like it's more friendly in a way. What do you guys think?

Mauricio Moreno Jure-Cejin [00:12:58]:
But are you paying for it? Like, are you using the free version of Gemini or the paid one?

Sarah Mousa Chikhani [00:13:03]:
That's another thing. Gemini gives you a lot of options for free. With ChatGPT, everything is like, you need to pay so you can have the big advantages. So, no, I use the free version.

Carlos Vela Canul [00:13:15]:
So Sarah, you think that Gemini has more accuracy in its responses?

Sarah Mousa Chikhani [00:13:22]:
No, I feel like both of them, as you said, hallucinate. But I like the responses from Gemini more: the tone, the words it uses. It also tells me why it chose to write something that way, and I love that. I love when it gives me an explanation of why: you should improve this, you should change this, use an action verb. I love that, and ChatGPT doesn't give me that. So that's why I use Gemini.

Carlos Vela Canul [00:13:55]:
Okay. Okay, that's cool. I believe that, for example, if we compare ChatGPT with Gemini, maybe Gemini could have more information about one specific topic. So it would be interesting to ask the same questions to ChatGPT and to Gemini and see the responses. You know, maybe if you ask something related to technology, ChatGPT will have a better answer, but if you ask something related to politics, I don't know, Gemini could have the better answer. So maybe we need to compare that.

Mauricio Moreno Jure-Cejin [00:14:33]:
Yeah. There are videos from YouTubers I follow who do that experiment on their channels, and it's interesting. The last one I saw was, I think, two or three months ago, and in that video the guy concluded that the two best models were GPT-4 and the new Claude model, which was Claude Opus, I believe, from Anthropic. We didn't include Anthropic in this generative AI race because Anthropic is also a small firm.

Mauricio Moreno Jure-Cejin [00:15:26]:
But yeah, for sure, Anthropic is one of them. Their model called Opus is really, really smart, and the Claude models have always been the ones with the biggest context window. But I don't know why they don't do a lot of marketing; Anthropic is not that mainstream. I don't know what they are trying to do or what their strategy or vision for the company is, but their tech is amazing. So the last time I saw a video of this kind of test that you are talking about, Carlos, the two models that gave the best answers were Claude Opus, the best model from Anthropic, and the best model from ChatGPT.

Alejandro Hynds [00:16:28]:
I use Claude every time OpenAI goes down, which seems to happen more and more frequently. And I'm sure their usage spikes when ChatGPT is down.

Sarah Mousa Chikhani [00:16:38]:
I actually read an article where they mentioned that it will be an interesting future where language models self-check their errors. I feel like that's where we will find something good from this. Like, they give you an answer and then they come back and say, that's wrong, this is the answer. I feel like that would be interesting to see from these models.

Mauricio Moreno Jure-Cejin [00:17:05]:
Models.

Alejandro Hynds [00:17:08]:
Going back to the Google thing. Gemini doesn't have to be amazing; it could just help with tasks. And I don't think it does this, but Mauricio, you and I were talking the other day about how hard it is to schedule a meeting with five different people, because you have to look through their calendars and find a space. How does this not happen automatically? How does it not see that there's a message or an email or a thread where people are talking about a meeting, and just propose it and say go? Simple things like that within Google, because they have access to your calendar, your Gmail, your Docs, your Sheets. It would be so easy to embed that within the Google Workspace products, and they just haven't done that yet. And it's a little bit frustrating.

Mauricio Moreno Jure-Cejin [00:17:54]:
Yeah. Well, long before generative AI, Google Calendar has had a feature, I don't know if they use AI or just basic programming, where you can select three or four people and then there's an option, I think on the left, that says something like find a time, find an empty spot for everyone. And that feature is okay, but it has no context; it doesn't have anything that generative AI could add. It doesn't know which times are better, or that you shouldn't schedule the call during lunch or whatever.
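
For a concrete sense of what that basic, non-generative slot finding amounts to, here is a minimal Python sketch with invented names, calendars, and working hours; it is purely illustrative and not how Google Calendar is actually implemented.

```python
# Toy sketch: given each person's busy intervals (hours on one day), find the
# first slot where nobody is busy. Names, hours, and calendars are invented.
WORK_START, WORK_END = 9.0, 18.0   # working hours, as fractional hours
SLOT_LENGTH = 0.5                  # we want a 30-minute meeting

busy = {
    "ana":   [(9.0, 11.0), (13.0, 14.0)],
    "bruno": [(10.0, 12.5), (15.0, 16.0)],
    "carla": [(9.5, 10.0), (14.0, 15.5)],
}

def is_free(intervals, start, end):
    # A slot is free if it overlaps none of the person's busy intervals.
    return all(end <= b_start or start >= b_end for b_start, b_end in intervals)

def find_slot(busy, length, step=0.25):
    t = WORK_START
    while t + length <= WORK_END:
        if all(is_free(iv, t, t + length) for iv in busy.values()):
            return t, t + length
        t += step
    return None

print(find_slot(busy, SLOT_LENGTH))  # (12.5, 13.0) with the sample data above
```

The generative AI version the hosts are asking for would add exactly the context this sketch lacks: lunch breaks, preferences, and the email thread where the meeting was proposed in the first place.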

Alejandro Hynds [00:18:48]:
Right.

Mauricio Moreno Jure-Cejin [00:18:49]:
So, yeah, I think everyone is waiting for Google to have generative AI features in Google Workspace, in Google Calendar, in Gmail. But I think they need to keep the regulators happy, and they cannot do anything without testing it first with a small group of users, then testing it again and again. And still, I really think ChatGPT, like OpenAI, has something about how they build their models that they are not saying and that Google doesn't have. And that is why, and this is another thing I wanted to talk about, Microsoft, which we're going to talk about now, uses OpenAI models. Microsoft said, okay, we are not going to build our own, we are just going to use OpenAI's models. And then there was a rumor in the news, I believe two or three weeks ago, that said Apple was trying to do a partnership with OpenAI to use ChatGPT in Apple products. So that also goes in line with what I'm thinking. Maybe these companies are already discovering that, hey, what OpenAI did is not as simple as it seems. Maybe they discovered something.

Mauricio Moreno Jure-Cejin [00:20:46]:
Maybe they are doing something special, something different that they will not tell the world. And, yeah, I don't know. Maybe. But it's weird that Google, being Google, hasn't been able to make a really good LLM. They have all the information of the Internet, structured. So it's really weird.

Mauricio Moreno Jure-Cejin [00:21:19]:
I think your microphone, Sarah... maybe if you switch to the microphone of your computer.

Sarah Mousa Chikhani [00:21:31]:
Okay, give me 1 second.

Carlos Vela Canul [00:21:33]:
Yes, sorry, sorry. So also regarding Google, I was listening to the Waveform podcast and they were talking about how Google needs to implement more useful use cases for people, not only a chat or an assistant, because some people were very upset that they have this Google Home system that lets you control your home, but Google hasn't integrated that with Gemini. So if you ask Gemini, can you turn off the lights in my house, that is not possible right now. So people are like, okay, give me more useful cases for AI. Not only chatting with the AI and expecting responses, but also being able to use AI in your life on a daily basis, you know? So maybe that's the goal for Google. They need to find different ways to improve AI and give people more ways to use it, you know?

Mauricio Moreno Jure-Cejin [00:22:38]:
Yeah, yeah. And that's what Apple should also do, right? Well, I think it's also important to note that I am an Apple guy. Not all my hardware is Apple, but my laptop is a MacBook Pro. My phone is an iPhone. My watch is an Apple Watch.

Mauricio Moreno Jure-Cejin [00:23:03]:
So I am kind of an Apple guy. But Alex here is a Microsoft guy. I don't know why, but Alex is a Microsoft guy. And Alex, do you use Office or do you use Google Workspace?

Alejandro Hynds [00:23:23]:
Yeah, I use Microsoft because of Microsoft Excel, even though Google Sheets has added a ton of features in the last few years and in fact has some things that Microsoft doesn't, just because it's better connected to the Internet: the ability to pull stock prices, the ability to pull currency rates, it has much more context from the Internet. But the Microsoft Excel package is still much better for financial modeling, and everything that other people in the industry send us is always in Excel. So there's still not a good replacement. Apple, I know, has Excel too, or a version of Excel that works in the Apple ecosystem, but the commands and some of the functionality aren't there. So I still use Windows. It's still a pain because obviously Microsoft doesn't have a phone, right?

Alejandro Hynds [00:24:20]:
So my phone is an iPhone and my laptop is Windows, and they don't speak to each other. So it feels like I'm living in, I don't know, 2005 or something. And Carlos is always berating me to switch to Mac. But yeah, I use Microsoft. Maybe in a couple of years I'll start telling Carlos to switch to Windows because they've embedded OpenAI and somehow Windows is now better than Mac. We'll see. But yeah, it is a pain with the Apple ecosystem. My AirPods don't connect to my computer.

Alejandro Hynds [00:24:56]:
But yeah, I think if this Microsoft copilot feature works, maybe it'll turn out that Microsoft was the best option. We'll see.

Carlos Vela Canul [00:25:07]:
Yeah.

Mauricio Moreno Jure-Cejin [00:25:07]:
And that's it. I mean, that's our next topic: Microsoft Build and what Microsoft is doing with generative AI. Of the three big tech companies, I believe Microsoft is winning the race. A lot of people say that Microsoft is basically the owner of OpenAI, that Satya Nadella is the guy who says what OpenAI can and cannot do. There are rumors and people on the Internet saying that, basically, Satya Nadella is the boss of Sam Altman. So I think that has a little bit of truth to it. I mean, it's not exactly like that.

Mauricio Moreno Jure-Cejin [00:25:57]:
But basically all OpenAI models are trained on Azure. They have a kind of partnership; I don't know if Microsoft is really paying money to OpenAI to use their models in all Microsoft products, or if they are just making a trade, like Microsoft gives Azure compute for free to OpenAI in exchange for using all their models. So I don't know exactly how their relationship works. But I think that because of that really smart partnership that Satya Nadella made with OpenAI and with Sam, Microsoft is currently winning the generative AI race. So I think it's interesting what you're saying, Alejandro, because maybe there are some people in the world who have always been Mac guys, and now Microsoft already has the generative AI Copilot on Windows, and they just announced a feature that I want us to talk about a lot, which is the Recall feature. It's basically an AI feature where, every five seconds, your Windows computer takes screenshots of what you are doing, so it has all that in memory.

Mauricio Moreno Jure-Cejin [00:27:35]:
So whenever you want to ask something of your Microsoft Copilot, your Windows Copilot, it can search all those screenshots, so it can answer things like, hey, do you remember this presentation I was doing last week about, I don't know, this client? The assistant can search all the screenshots and give you answers, right? So maybe there are people switching from MacBooks to Microsoft hardware now that they have all these generative AI features. I don't know. I think about that a little bit, but I won't switch my whole ecosystem from Apple to Microsoft just because of it. I am expecting Apple to do something interesting soon. I mean, they are behind, but they should do something soon.

Mauricio Moreno Jure-Cejin [00:28:43]:
And we will get generative AI on our Mac products. But it's getting late.

Sarah Mousa Chikhani [00:28:50]:
What? I love that it's getting late, because otherwise we'd have the same situation as with Google. You know, you feel like they're rushing it, and with Apple, they're taking their time and doing it little by little. I like that.

Alejandro Hynds [00:29:06]:
Going back to the Recall feature you mentioned, Mauricio. And I agree with you, Sarah, by the way, that the fact that Apple is much, much more careful with their products is probably for the best. Steve Jobs would be rolling in his grave, like they said, if what happened at Google happened at Apple. But going back to the Recall feature, I don't know what to think of that. On one hand, I think it'd be awesome if my computer knew exactly what I was doing and I could say, hey, you remember the presentation from last year on our financial performance? Can you tell me what revenue was three years ago? Or a PowerPoint, or even just Gmail. I think it'd be awesome if I could ask questions and it had context about what I was doing. On the other hand, I can see why everyone is concerned. Screenshots every five seconds from everyone's computer, if that gets leaked or hacked or somehow accessed, or data gets trained on it... If that feature popped up on my laptop today and asked me, do you want to opt in or opt out, I don't know what I would do. I'm right in the middle.

Alejandro Hynds [00:30:15]:
Maybe I opt in, because it's going to make me more productive, work faster, have better context.

Mauricio Moreno Jure-Cejin [00:30:23]:
I don't have Microsoft, but I think it's not opt-in, it's opt-out. By default you have it, so you need to opt out of that feature. So that's also something that's causing a lot of controversy. It's risky, right? It's not an opt-in feature.

Alejandro Hynds [00:30:44]:
I need to look into that, because I'm on a Microsoft machine, so maybe Microsoft has this podcast right now as we're recording it. So, yeah, I don't know what to think of that.

Mauricio Moreno Jure-Cejin [00:30:57]:
Yeah, there are mixed sentiments. A lot of people are writing about this right now because the Microsoft Build event just passed. A lot of people are saying that it's really creepy and that it's also a hacker's dream.

Carlos Vela Canul [00:31:13]:
Right?

Mauricio Moreno Jure-Cejin [00:31:13]:
Wired, the magazine, wrote an article saying that it is like the perfect hacker's dream, because you can just hack the admin password and get all those screenshots and all that information. And it's also creepy, because I heard someone asking, what are the things you use your computer for? Sometimes on your computer you have your credit card number, you have your passwords, even for everything else. You have private information, private conversations going on on your computer screen. So it's really weird. You wouldn't want any robot or any company to have all these screenshots of your desktop, right? So let's see what happens with that.

Mauricio Moreno Jure-Cejin [00:32:19]:
And this is similar to what we talked about on a trip that Carlos, Alejandro, and I did, where we were talking about generative AI and information and stuff like that. I remember I had the idea that the information that generative AI and all these models can have is infinite, because you could get to a point where you have cameras and sensors all around the world, in all places, and all these cameras and sensors are capturing information as video, audio, movement, temperature, and all that. But you would need to fill the world with those sensors, cameras, and microphones, and all that information would feed into an LLM. Only at that point could you say, okay, now LLMs have all the information available in the world. I don't think we will get anywhere close to that; we won't be able to put a camera, a microphone, and a temperature sensor in every part of the world. But that's why I think the information an LLM can have, to train on, to have more context, and to give better answers, is really infinite.

Mauricio Moreno Jure-Cejin [00:34:21]:
And this thing Microsoft is doing with the Recall feature goes in line with that. They are seeing that, for an LLM or an assistant to work better and give better answers, it needs to know more and more: more about you, more about your company, your goals, your dreams, your friends. It's not enough to have just the context that you can write in a prompt, right? The model or the assistant needs to know more about you, your environment, where you live, what you see every day, what you hear. So I think that's what Microsoft is trying to do with this Recall feature. But I don't think this is the path to that. Screenshots of your computer are not a very cool thing to do.
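
To make the mechanics being debated a bit more concrete, here is a toy sketch of the capture-and-search loop that a Recall-style feature implies. It is purely illustrative: the captured text is hard-coded instead of coming from real screenshots and OCR, and it says nothing about how Microsoft actually stores or protects the data.

```python
# Toy sketch of the Recall idea: keep a timestamped log of "what was on screen"
# and answer questions by searching that log. The captured text is faked so the
# sketch stays self-contained; a real system would screenshot and OCR.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Snapshot:
    taken_at: datetime
    text: str  # in the real feature this would come from OCR on a screenshot

class RecallIndex:
    def __init__(self):
        self.snapshots: list[Snapshot] = []

    def capture(self, taken_at: datetime, text: str) -> None:
        # Called every few seconds in the real feature; here we just append.
        self.snapshots.append(Snapshot(taken_at, text))

    def search(self, query: str) -> list[Snapshot]:
        # Naive keyword match: return snapshots containing every query word.
        words = query.lower().split()
        return [s for s in self.snapshots
                if all(w in s.text.lower() for w in words)]

index = RecallIndex()
t0 = datetime(2024, 5, 27, 10, 0)
index.capture(t0, "PowerPoint - Q1 revenue presentation for Acme client")
index.capture(t0 + timedelta(seconds=5), "Gmail - lunch plans thread")

for hit in index.search("presentation acme"):
    print(hit.taken_at, "->", hit.text)
```

Even this toy version makes the privacy worry obvious: whatever was on screen, passwords or credit card numbers included, ends up sitting in a searchable log.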

Alejandro Hynds [00:35:27]:
Yeah, that's a good point. I hadn't thought about that: if they're running out of content to scrape, then everything on your computer becoming part of the model is a great way to add information, right? Because maybe you're reading an article in a newspaper, maybe the Financial Times, and they don't have access to the Financial Times because it's paywalled and protected. But now suddenly they can have your screenshots, and it's like a back doorway to train their model on all kinds of things. Maybe you're on the dark web; they don't have access to that either. It's a way to train their models on information that is otherwise not publicly available.

Alejandro Hynds [00:36:13]:
So yeah, I think taking a screenshot of your computer every five seconds adds a ton of context. Maybe not the most ethical thing, but there's a huge incentive to do that.

Mauricio Moreno Jure-Cejin [00:36:28]:
Yeah, I hadn't thought about that. Well, I don't know how the laws work, but maybe if Microsoft has all these screenshots that you already gave permission to use, and in those screenshots Microsoft can get, I don't know, a book or an article from the New York Times, maybe getting that information from your screenshots instead of scraping the article directly from the New York Times is legal. Maybe there's no way the New York Times can say to Microsoft, hey, you are training on my information. So, yeah, that's another rabbit hole, but I think that's also really interesting.

Mauricio Moreno Jure-Cejin [00:37:29]:
Maybe that is one of the reasons why Microsoft built this feature, but I think they will have to pull it back. I'm not seeing people comfortable with that. Okay, so we can go to our next topic: Apple. So Apple, as of now, hasn't done anything about generative AI. Siri is still useless, in my opinion. Apple said that Siri was going to get a lot better with generative AI and blah, blah, blah. But that's when the rumors came out that Apple wasn't able to do it on its own.

Mauricio Moreno Jure-Cejin [00:38:34]:
So they were trying to partner with OpenAI to basically use ChatGPT models inside Siri. I don't know what happened with that, but a lot of people are waiting for June 10, for the WWDC event. Here is a fragment from an article, from Wired I believe, that says something interesting: Gurman reported that the updates are all part of a new strategy called Project Gray Matter. So that's the name of the project that Apple has for generative AI. And this Project Gray Matter includes AI tools that Apple will inject into core apps like Photos, Safari, Messages, and Notes, including a smart recap feature that gives you summaries of things like individual texts, emails, notes, documents, and web pages. Additional features could include transcribing voice memos (Apple already introduced the ability to transcribe voice notes with iOS 17), an enhanced Spotlight feature that makes searches faster and more reliable, and improvements to Safari web search. So yeah, I think this is really smart.

Mauricio Moreno Jure-Cejin [00:40:03]:
I think if I were the owner of Apple or Microsoft or Google or whatever, this is what I would do. Generative AI is not this thing that, I don't know, everyone imagines should become like the robot from the movie I, Robot, this assistant that's over there helping you with everything. That's not really what I think should happen. I think this Project Gray Matter is the right path to take. We already have all these tools, all these computers, all these phones, all these things with a lot of apps. What needs to happen is that companies need to embed the generative AI models into all these apps that we already have and already know how to use, because that's really important.

Mauricio Moreno Jure-Cejin [00:41:11]:
We already know how to use, I don't know, Safari search or the Photos app on the iPhone. We already have all our photos on our iPhone. If you pay for iCloud, as I do, I think I pay for the two-terabyte plan, you have basically unlimited photos in your Photos app, on your iPhone and on your desktop. And all that information, it's like my whole life is there, right? Because I don't only have pictures of places and people, I also have screenshots of conversations, screenshots of credit cards, screenshots of everything. So just the information in my Photos app is like gold.

Mauricio Moreno Jure-Cejin [00:42:14]:
So, yeah, we just need a really good generative AI model in that app, and really cool features to exploit that information. Right?

Sarah Mousa Chikhani [00:42:31]:
Yeah. I actually read that Apple is trying to appeal to a user's practical side, and they are thinking of a feature, I think this is fun, listen to it: a feature where you can create custom emojis in real time based on the content of your text message. I feel like that's a better way to use it. This is just an example, of course, but focusing on the tools that ordinary consumers use in their daily lives is a different approach to artificial intelligence that, like I said before, is on the practical side for users.

Sarah Mousa Chikhani [00:43:12]:
I think that's better.

Mauricio Moreno Jure-Cejin [00:43:13]:
I don't know about you, but when I'm looking for emojis, there are times I spend maybe five minutes just looking for the precise emoji I need for a text or a message. I think everyone has been there. So yeah, I think that feature would be cool, and it would make you more efficient in your day to day too. But I think companies do not need to invent or create new platforms, new chats, new software with generative AI. I think they need to integrate the best models they can into the apps people already know how to use, because making people learn a new app is hard. I mean, that is what is happening with ChatGPT. I use ChatGPT every day, but still, we are inside a bubble. ChatGPT is not a mainstream product.

Mauricio Moreno Jure-Cejin [00:44:28]:
So it's really hard to create a new product and then make that product become a mainstream tool.
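
Going back to the emoji example Mauricio and Sarah describe, here is a tiny Python sketch of descriptive emoji lookup that matches a free-text description against a hand-picked keyword list. The emoji set, keywords, and ranking are invented for illustration and have nothing to do with Apple's actual implementation.

```python
# Toy sketch of descriptive emoji search: score each emoji by how many words of
# the description appear in its keyword set, then return the best matches.
EMOJI_KEYWORDS = {
    "💃": {"dancer", "dancing", "dress", "flamenco", "spain", "lady"},
    "☕": {"coffee", "cup", "cafe", "drink"},
    "🎉": {"party", "celebration", "confetti", "congrats"},
    "📈": {"chart", "growth", "finance", "revenue", "up"},
}

def suggest_emoji(description: str, top_k: int = 3):
    words = set(description.lower().split())
    scored = [(len(words & keywords), emoji)
              for emoji, keywords in EMOJI_KEYWORDS.items()]
    scored = [(score, emoji) for score, emoji in scored if score > 0]
    return [emoji for score, emoji in sorted(scored, reverse=True)[:top_k]]

print(suggest_emoji("lady dancing in a dress from spain"))  # ['💃']
```

A real version would live inside the keyboard and likely use a model's embeddings rather than a keyword list, but the user-facing behavior, type a description and get the emoji, is the same.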

Carlos Vela Canul [00:44:41]:
So, yeah, and also, Mauricio, it is important to mention that Apple has just released this new M4 chip for the iPad. I believe this chip will be so powerful that Apple could run AI on that chip, so you wouldn't need the Internet to do a search or to ask something of Siri, right? That would be a very powerful tool. And also, for example, I understand that Apple is going to release a new feature for Spotlight search where you can search for a specific word, and Spotlight will be able to find that word across all your apps. Right now, the only thing you can search is the name of the app, or you can search the Internet with Spotlight, but maybe you have some specific text, for example, in the Pages app.

Carlos Vela Canul [00:45:37]:
If you could search for that word, it would be amazing, because it would save you a lot of time, you know. So I believe this M4 chip in the iPad will be great for Apple, because they will be better able to release new features. And also, from my understanding, this will be the best WWDC in a long time, because they're going to show, I think, a new face for iOS 18. I heard they are going to release new features where you'll be able to move your icons anywhere on your screen; right now you have this fixed grid and you cannot freely move your apps. I believe Apple is going to try to make their environments more like Apple Vision: all the designs and new features of iOS, iPadOS, and macOS are going to be more similar to the Vision Pro. And they're trying to implement AI into Apple Vision, but right now I think that is still complicated. But I believe Apple is going to do great at this WWDC.
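
And for the Spotlight-across-apps idea Carlos describes, here is a minimal inverted index in Python over a few invented documents from different apps; it is illustrative only, not how Spotlight is actually built.

```python
# Toy sketch of cross-app search: map each word to the (app, document) pairs
# that contain it, then look a word up across everything at once.
from collections import defaultdict

documents = {
    ("Pages", "Q3 report"): "quarterly revenue grew in the enterprise segment",
    ("Notes", "pie recipe"): "apple pie recipe with cinnamon and butter",
    ("Mail", "client follow-up"): "follow up with the enterprise client on pricing",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def spotlight(word: str):
    return sorted(index.get(word.lower(), set()))

print(spotlight("enterprise"))  # [('Mail', 'client follow-up'), ('Pages', 'Q3 report')]
```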

Mauricio Moreno Jure-Cejin [00:46:54]:
Yeah. And that thing about the chips is interesting. We will talk about that in the next episode and talk a lot about Nvidia, but I think that move is the right way to go. Because of what is happening now, I mean, we talked about the screenshots Microsoft is taking with the Recall feature, and the big problem with that is that all those screenshots are not getting stored on your computer; they are getting stored in the cloud, I believe. So what everyone has figured out is the right way to go with generative AI is this: okay, if the future of generative AI is to have a lot more information, a lot more context, a lot more data, to get better answers and more intelligent assistants, okay.

Mauricio Moreno Jure-Cejin [00:47:59]:
The thing about everyone's personal data is that it's private, right? And it's creepy; no one wants to give all their data, all their goals, all their life's data to one tech company. So maybe we do that today, but in separate parts. Maybe Amazon knows some stuff about me because I buy a lot on Amazon. Apple knows a lot about me because I have an iPhone and a MacBook. Google knows stuff about me because I use YouTube. Facebook knows stuff about me because of Instagram. But there's no one company that knows everything about me, right? So the problem is that, for an assistant to work smarter and give better answers, it needs to have as much information about someone as it can.

Mauricio Moreno Jure-Cejin [00:49:10]:
It needs to know what I buy on Amazon, what I search on Instagram, what I watch on YouTube, what I search on Google. It needs to know everything. And to have all that information in someone's cloud, maybe on Azure or Google Cloud or whatever, these big companies know that's not going to happen. So they are jumping onto these chips. They know that what needs to happen is that our hardware needs to have these chips; the hardware we have in our hands needs to be the one that processes all this information. That way we don't get the problem of who has the information and who is processing it, and also the lag, right? ChatGPT now has some lag and gets saturated with a lot of people.

Mauricio Moreno Jure-Cejin [00:50:15]:
But once we get ChatGPT, or the LLM, inside a chip, inside our hardware, our phone, our laptop, the lag disappears, right? I mean, the chip needs to be really powerful. But what do you think about that?

Alejandro Hynds [00:50:36]:
Well, I think particularly with Apple, in that article you mentioned from Wired on Project Gray Matter and how Apple is trying to embed AI into its system, I think that would work really well with that approach. Apple describes this, and I hadn't heard this term before, as intuitive AI instead of generative AI. I saw it in that article, and then I googled intuitive AI, and it's a term that Apple invented, right? But the idea of intuitive AI, rather than generative, is that generative is all about creating content, right? Literally generating text, videos, images that answer a question. Whereas intuitive AI is less about the core output and its immediate format and more, like you said, I need an emoji, but, I forget, there's an emoji of that lady dancing in her Spanish-style Valenciana dress, and I don't know what that emoji is called.

Alejandro Hynds [00:51:47]:
I write dress, Valenciana, Spain, until it pops up. So these kinds of things where it kind of predicts what I'm trying to do and makes me work faster. Maybe I can search within my notes instead of looking for that specific note I wrote three years ago with the recipe to make pie; I just write recipe pie somewhere in the iOS app and it pulls it up. So just embedding what they call intuitive AI into the iOS ecosystem, where I don't necessarily see it but it's always working in the background, and, like you said, obviously powered by the chips within the phone rather than the cloud, that probably is the future, at least within mobile AI, because you can't always rely on users having good Wi-Fi, good 4G or 5G. And if it happens on your phone, I agree, it's going to have less lag, it's going to be more useful.

Alejandro Hynds [00:52:50]:
You don't have to depend on bringing data in from the cloud. So I do think that's the future as well.
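
To put rough numbers on the lag argument, here is a back-of-the-envelope sketch; every figure is an assumption invented for illustration, not a measurement of ChatGPT, Azure, or any Apple chip. The point is only that a cloud assistant pays a network and queueing cost before the first word appears, while an on-device model does not (though a smaller on-device model may generate the rest of the answer more slowly).

```python
# Back-of-the-envelope sketch of where the "lag" in a cloud assistant comes from.
# Every number below is an assumption picked for illustration, not a measurement.
def first_token_latency(network_round_trip_s, queue_s, prompt_processing_s):
    # The delay before the answer starts appearing (this is what feels like lag).
    return network_round_trip_s + queue_s + prompt_processing_s

cloud_lag = first_token_latency(network_round_trip_s=0.3, queue_s=1.5, prompt_processing_s=0.2)
local_lag = first_token_latency(network_round_trip_s=0.0, queue_s=0.0, prompt_processing_s=0.5)

print(f"cloud, saturated service: ~{cloud_lag:.1f} s before the first word")  # ~2.0 s
print(f"on-device chip:           ~{local_lag:.1f} s before the first word")  # ~0.5 s
```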

Mauricio Moreno Jure-Cejin [00:52:57]:
Yeah. So we are running out of time, so let's wrap up with two questions, and everyone please give an answer. First, who do you think is currently winning the generative AI race? And then, who do you think will win the generative AI race? And let's say we can only talk about Google, Microsoft, and Apple. I'm not including Amazon because, I mean, Amazon is also a big tech company, but Amazon is more, I don't know, AWS is more like B2B.

Mauricio Moreno Jure-Cejin [00:53:41]:
Or maybe it's. It's not that consumer facing. So. So, yeah, we can keep it between Google, Microsoft and Apple.

Carlos Vela Canul [00:53:51]:
Yes. To answer the first question, I believe that currently Microsoft with OpenAI are winning this race, because OpenAI, I believe, is the most effective generative AI tool that exists. But I believe Google is getting closer and closer to them. Apple, I don't know; until we see the WWDC I can't tell you much about that. But I believe that Microsoft and OpenAI are winning this race. But who is going to win this race in the end? I don't know, because there is this new company Elon Musk is launching, xAI. We need to take that new company into account, because he has experience from working with OpenAI.

Carlos Vela Canul [00:54:41]:
And maybe. Maybe, I don't know, he could create something bigger. So I don't know. Let's see how it goes. But I believe right now Microsoft and OpenAI are winning the race.

Mauricio Moreno Jure-Cejin [00:54:53]:
Yeah. Okay. Sarah?

Sarah Mousa Chikhani [00:54:55]:
I know I've been talking about Gemini because I usually use Gemini, but I also agree with Carlos. I think, as of now, OpenAI and Microsoft are winning. I feel like Google will eventually get to OpenAI's capability or more. But I like the approach of Apple. We'll see. In three days we'll have the conference. But I like the way Apple is approaching this technology.

Mauricio Moreno Jure-Cejin [00:55:31]:
And who do you think will win the race?

Sarah Mousa Chikhani [00:55:36]:
I don't know. I think maybe Apple, I would like to think, but I don't know, because we still have time to see. I have my hopes on Apple.

Carlos Vela Canul [00:55:47]:
Yeah, I believe it's too early to answer that question.

Sarah Mousa Chikhani [00:55:51]:
Exactly.

Carlos Vela Canul [00:55:52]:
We need at least five more years to have a better understanding of how AI works. So, yeah, it's only guessing right now.

Sarah Mousa Chikhani [00:56:04]:
Let's do this podcast in two years and we should ask the same question again.

Carlos Vela Canul [00:56:08]:
Question in three years, listeners.

Sarah Mousa Chikhani [00:56:11]:
See you in five years.

Alejandro Hynds [00:56:15]:
I think, so that'll be June 7, 2029, so just put it on your calendar. I think that OpenAI is obviously winning right now; I think it's hard to argue otherwise. In terms of who will win the race, I think it comes down to what the most successful form of AI turns out to be. If AI is going to be used mostly by enterprises, large organizations that embed AI into their systems, think about Salesforce or HubSpot or these large software companies that use someone else's model to power AI functions within their own systems.

Alejandro Hynds [00:57:01]:
I think if that happens, Microsoft probably wins, because they already have this huge Azure service, which isn't as big as AWS, but they have the better AI system. So if it's mostly enterprises embedding AI into products, OpenAI wins and Microsoft wins. If it turns out that AI is mostly used by consumers, people who use their iPhones, their Macs, Google Workspace, if it's a virtual assistant that wins the race, then I think maybe it's either Google or Apple. I think Google has the better case because of Google Workspace, but they clearly have a culture problem and a making-things-work problem at the moment, and if they can't get that solved, it's a huge loss for them. Microsoft wins if you access AI through other software providers; Google or Apple should win if you're accessing it through your computer, your workspace. That's my opinion.

Mauricio Moreno Jure-Cejin [00:58:07]:
Yeah. I also think currently Microsoft is winning with OpenAI. But, I don't know, if Apple is trying to build their own LLM, I don't see how they are going to get ahead. I think that if Apple partners with OpenAI, as the rumor said, maybe they have a chance. But I also don't know how that would work, because if OpenAI is already partnering with Microsoft, how is OpenAI going to partner with Apple? So, I don't know. Maybe it all depends on the LLM and its capability for reasoning. I think that's really, really important, because if the AI doesn't give you a good answer on the first try, you stop using it, right? That is what happens with Alexa and Siri. They are really, really stupid, so you are just wasting your time by talking to them.

Mauricio Moreno Jure-Cejin [00:59:35]:
Well, that said, I think if Apple gets a really smart model in their phones, in the iPhone, it's not the phone that most of the world uses, but it's for sure the phone that most of the world's leaders use. So if you get a really smart model in the hardware, in the iPhone, and you can have it right in the iOS keyboard and basically use ChatGPT from the keyboard of your iPhone, and generate pictures just by typing, anywhere you are on the iPhone, in any app, generate a picture by typing or by talking to your iPhone, ask it to generate a presentation for work, generate an Excel document... If Apple gets to a point where you can do that, I think that would be interesting.

Mauricio Moreno Jure-Cejin [01:00:46]:
And that's an advantage Apple has over Microsoft, because Microsoft doesn't have a phone. And the thing about the phone is that you have a piece of hardware with you 24/7, right? Microsoft also, I don't know if Microsoft has a watch, but still, the Apple Watch is not something mainstream, I believe. But I think people are getting more and more used to having an Apple Watch. So if a lot more people buy an iPhone and get an Apple Watch, then everyone has an iPhone and an Apple Watch

Mauricio Moreno Jure-Cejin [01:01:32]:
24/7. The amount of information and data that Apple will have will be amazing. But yeah, if Apple executes on this well, they can win. But if they don't execute, if they don't get the right LLM, and what is happening at Google starts happening at Apple too, then it will be a flop and Microsoft can win. I think Microsoft should start thinking about doing a phone. I don't know if they unveiled or tried to do a phone before, but I think phones, or watches, will be a big part of this.

Mauricio Moreno Jure-Cejin [01:02:23]:
Smartwatches will be a big part of generative AI assistants, because, I mean, this pin thing that Humane made was a flop. No one is going to put a pin on, right? And glasses, glasses might be something cool, but people that need to wear glasses pay for really expensive eye operations just to stop wearing them. So I don't see people who don't need glasses wearing them all the time just because they have AI. So maybe the smartwatch will be the hardware for generative AI. Well, yeah, that's my conclusion.

Mauricio Moreno Jure-Cejin [01:03:20]:
So let's finish now. Sara, Carlos, Alex, do you want to add something more?

Alejandro Hynds [01:03:30]:
I think my only point, on the last thing you said: I bet if there were glasses that gave you a ton of additional information about what's around you, you would wear glasses. Especially if you think about something like when we went to Phoenix and we all did the Vision Pro demo. Yeah, something like that, but that doesn't weigh one and a half kilos, you know, something lighter.

Mauricio Moreno Jure-Cejin [01:03:54]:
I don't know if you watched Dragon Ball.

Alejandro Hynds [01:03:57]:
Yes, I did.

Mauricio Moreno Jure-Cejin [01:03:58]:
Yeah, maybe like the ones they have there. Yeah.

Alejandro Hynds [01:04:01]:
The thing that shows the ki in Dragon Ball. I bet you would use that, just because the amount of information you could get is awesome. Sometimes I find myself wearing my AirPods even if I'm not listening to anything, just in case I need to ask my phone something or turn on a timer or answer a call; I'll literally have them in my ears just in case. And that's not particularly comfortable. So I think, yeah, something that simulates that but also gives you visual information.

Alejandro Hynds [01:04:33]:
I bet that people would use that. So that's my parting thoughts.

Carlos Vela Canul [01:04:38]:
Yeah. My conclusion is that right now, AI is still a growing technology. Three or four years ago, we didn't know anything about generative AI. So right now we need to stay close to AI to see how it's evolving. But I believe in the next couple of years we are going to start seeing very interesting and very spectacular things regarding AI. So the idea is to continue using these kinds of tools to see how we can implement them in our lives. But, yeah, I believe that AI is still growing, still emerging. We're going to see new startups.

Carlos Vela Canul [01:05:23]:
I believe most of the new startups right now that are getting funded are AI startups. So, yeah, I believe that AI is going to be the future in a couple of years.

Mauricio Moreno Jure-Cejin [01:05:36]:
Yeah. So, great. This is it for this episode. Thank you, listeners. See you in our next episode, where we will go deep into Nvidia, the company that's making all these chips for AI, and which now has a bigger valuation than Apple. So thank you very much.

Mauricio Moreno Jure-Cejin [01:06:01]:
Bye bye. Bye, guys. Bye.