Venture Step

Summary

In this episode, Dalton discusses two major topics: AlphaFold 3 and GPT-4o. AlphaFold 3 is a protein structure prediction model that uses machine learning to outperform physics-based models in drug and DNA research. It has significant improvements over its predecessor, AlphaFold 2, and has the potential to accelerate drug discovery and testing. GPT-4o is a multimodal AI model that allows users to interact with ChatGPT using voice, vision, and screen-sharing capabilities. Dalton highlights the importance of integrating AI into the human world and the commercialization opportunities for these models.

Keywords

AlphaFold 3, GPT-4o, protein structure prediction, drug discovery, DNA research, machine learning, AI model, multimodal, commercialization

Takeaways

AlphaFold 3 is a protein structure prediction model that outperforms physics-based models in drug and DNA research.
GPT-4o is a multimodal AI model that enables users to interact with ChatGPT using voice, vision, and screen-sharing capabilities.
Integrating AI into the human world allows for more natural and interactive experiences.
These models have significant commercialization opportunities in drug discovery, pharmaceutical licensing, and strategic partnerships.

Sound Bites

"Alpha Fold 3 becomes the first model that has beat a physics-based model, which is huge."
"Alpha Fold 3 can predict dynamics that would normally take weeks or even months to do testing on in a couple hours."
"Google has solidified itself as the leader in drug discovery using AI."

Chapters

00:00 Introduction and Overview
01:24 AlphaFold 3: Revolutionizing Drug Discovery
15:06 GPT-4o: The Future of AI Interaction

Show Links
https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/
https://golgi.sandbox.google.com/
https://openai.com/index/hello-gpt-4o/

Creators & Guests

Host
Dalton Anderson
I like to explore and build stuff.

What is Venture Step?

Venture Step Podcast: Dive into the boundless journey of entrepreneurship and the richness of life with "Venture Step Podcast," where we unravel the essence of creating, innovating, and living freely. This show is your gateway to exploring the multifaceted world of entrepreneurship, not just as a career path but as a lifestyle that embraces life's full spectrum of experiences. Each episode of "Venture Step Podcast" invites you to explore new horizons, challenge conventional wisdom, and discover the unlimited potential within and around you.

Dalton (00:00)
Welcome to the Venture Step Podcast, where we discuss entrepreneurship, industry trends, and the occasional book review. Today we'll be discussing AlphaFold 3, a protein structure prediction model that will help accelerate drug discovery and assist in scientific breakthroughs. Then we'll be discussing OpenAI's GPT-4o, announced just today, which is a multimodal

AI model that will allow the user to interact with ChatGPT using vision, voice, screen sharing like a Teams meeting, and the normal text capabilities. Before we dive in, I'm Dalton. I've got a bit of a mixed background in programming, data science, and insurance. Offline, you can find me running, building my side business, or lost in a good book.

You can listen to the podcast in video or audio format on YouTube. If audio is more your thing, you can find the podcast on Apple Podcasts, Spotify, or wherever else you get your podcasts. Today we'll be discussing AlphaFold 3 and GPT-4o. AlphaFold 3 is an iteration of AlphaFold 2. AlphaFold 2 was submitted in 2021 as a research paper.

It was a notable paper, to say the least, with over 15,000 citations, which, if you're not familiar with research papers, over a couple hundred is a lot. So 15,000 is huge. It was a very popular paper, and it was a breakthrough in the approach to drug research, DNA research, and these

kinds of drug interaction and drug testing problems: using machine learning instead of a physics-based model. At the time, physics-based models were the standard, and AlphaFold 2 kind of questioned that standard with its results. AlphaFold 2 had good results but hadn't outperformed a physics-based model,

but AlphaFold 2 was good at many different areas of this research.

The physics-based models were good at one thing, like, okay, this lipid interacts with X things, and this is how this is done.

Whereas AlphaFold 2 could do, you know, lipids, it could do proteins, it could do DNA structures. So it was able to do many things, just not as well as a physics-based model. With AlphaFold 3, that has changed. And so AlphaFold 3 becomes the first model that has beaten a physics-based model, which is huge,

especially with this whole AI thing going on. This is something else. This is cool. Honestly, I think it's pretty cool because the information is available for public use, though not necessarily for commercial use. So it's openly available. You can check it out, and if you're curious and you want to do your own studies, you can. Where you'd get that knowledge, I don't know, but it's there

for you. AlphaFold 3 has significant improvements over AlphaFold 2. From AlphaFold 2 to AlphaFold 3, many of the improvements were 50% or above across areas, and now it can interact with, or predict interactions with, lipids. How that all looks and feels on their website, I have no idea.

I did a pre-demo before recording this podcast, and you're supposed to be able to see all these things and how they interact on the 3D structure that's generated. I have no idea what I'm looking at, but it looks really cool, and it's cool that I can click around, spin the thing around, look at it, and zoom in and out. Other than that, I don't know,

but it's able to handle more complex systems. It has a new architecture, which I thought was interesting. They switched to a diffusion-based architecture, which is different from their previous transformer-based approach. The previous model would map out the points. The way it was explained, the previous model worked like this:

think about a social network, where you map out everyone who's friends with everyone, and you end up with this huge web diagram. From that huge web diagram, all your friends and your friends of friends and their friends of friends all linked together in this social network web, you would create your 3D structure graph. Now it uses diffusion, so it kind of slowly iterates upon itself, like a neural network,

and progressively gets better with each round of the layers, and then generates this

3D structure using pairs. Instead of mapping out all the interactions, it just pairs them up. And this approach is supposed to be, one, more accurate and, two, faster. With it being more accurate and faster, it's supposed to be able to handle more complex interactions.
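
A rough way to picture that "iterate and refine" idea, purely as an illustration and not AlphaFold 3's actual architecture, is a diffusion-style loop that starts from random noise and nudges a set of 3D points toward a target structure over several rounds. The target coordinates, step size, and round count below are made up for the sketch.

```python
import random

# Illustrative diffusion-style refinement: start from noisy 3D coordinates and
# denoise toward a target structure over several rounds. This is NOT AlphaFold 3's
# real model; the target, step size, and round count are made-up assumptions.

target = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2), (2.0, 1.0, 0.4)]   # pretend "true" structure

# Round 0: pure noise.
coords = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in target]

for step in range(10):  # each round removes a bit more "noise"
    coords = [
        tuple(c + 0.3 * (t - c) for c, t in zip(point, goal))
        for point, goal in zip(coords, target)
    ]

print(coords)  # after 10 rounds the points sit close to the target structure
```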

With that comes, you know, increased scalability and faster processing of the structures. And so apparently it can predict dynamics that would normally take weeks or even months of testing in a couple of hours. The testing that I did was simple, because I just did what they gave me as examples. Like when you open up one of those AI systems, it kind of gives you example problems,

like, what if a zebra was wearing a clown costume, generate this image. That's kind of what I did. I forget which one I did; I did a protein-RNA one, and it generated the structure for me. But

I just did those examples, like I was saying, and the example generated in, I think, about 10 minutes. So it takes some time. I don't know how long it would take for a really long, complex structure.

That's the high-level stuff.

As for the improvements to AlphaFold 3 and this release, the only concern is that it's owned solely by Isomorphic Labs. Google is placing a lot of value on this product, so they spun up a company called Isomorphic Labs, which has sole ownership of AlphaFold 3. Google is known as one of the key advocates, along with Meta,

for open sourcing their algorithms and their products. A lot of their stuff is open source, including the languages they use; same thing with Meta.

And so when they don't open source something,

it's because they find high value in that product, and they're going to go commercial with it. And if it's that valuable, then I think it's worth upwards of billions of dollars. Because I understand they have kind of an internal dialogue at the company, with the product managers and developers, about the products

they kill off. A lot of people were getting upset with Google, and Google said, listen, if we can't predict that a product will be worth over 10 billion, we don't necessarily have interest. It might be a great startup company or some other company, but it's not a company for Google. And so Google works on these big problems,

and they spread out their resources and kind of place these bets. And for every bet that hits, like AlphaFold, there are hundreds that don't hit at all. But with AlphaFold they're swinging for the fences, and they're hitting it big time, especially with AlphaFold 2 and then AlphaFold 3. These things will help accelerate new drug discoveries. It will help

expedite drug testing, and it will enhance discovery in biotechnology, where they're designing, you know, new molecules and materials for certain projects. And if we just talk about the value creation, you know, for drug companies, I don't know if Google starts making drugs itself, or if

they just start licensing AlphaFold 3, or if they plan on having partnerships with leading drug companies. I don't know what they're going to do; they haven't announced any plans yet. Well, we do know that Isomorphic Labs, which was spun out of DeepMind, owns AlphaFold 3.

This product will give accelerated timelines, improve drug safety, aid drug discovery, and give an enhanced understanding of biological systems.

It shifts the dynamic from physics-based models being the leading industry standard for drug discovery and testing to machine learning models.

It positions Google and DeepMind very favorably to become the leaders in AI for drug discovery. So Google has solidified itself as the leader in drug discovery using AI.

They have very good commercialization opportunities with Isomorphic Labs. They can license it to pharmaceutical companies, research institutions, and biotechs, and generate revenue there. They could also do strategic partnerships, partnering with key pharmaceutical companies to further drug discovery and development.

There's a lot of different routes that they could take.

But from the paper and from what other people were saying, they're predicting, and these are just predictions, not solid information, that it could reduce drug discovery costs by 50-plus percent and accelerate drug development timelines by two to five years.

And so I think that would translate into saving billions of dollars consistently for these large companies. If you multiply the savings per drug they discover, and then factor in how much the normal drug development timeline gets reduced, I think that's worth a lot of money. But who knows? So.
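
Just to make that back-of-the-envelope reasoning concrete, here's a tiny hypothetical calculation; the per-drug cost and pipeline numbers are illustrative assumptions, not figures from the paper or this episode.

```python
# Hypothetical back-of-envelope: what a 50% cut in discovery costs could mean.
# All numbers below are illustrative assumptions, not sourced figures.

avg_discovery_cost = 1.0e9   # assumed discovery-stage cost per drug, in USD
drugs_per_year = 10          # assumed number of candidates a large pharma advances yearly
cost_reduction = 0.50        # the "50-plus percent" reduction mentioned above

annual_savings = avg_discovery_cost * drugs_per_year * cost_reduction
print(f"Illustrative annual savings: ${annual_savings / 1e9:.1f}B")  # -> $5.0B
```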

Let's get into it. So the next thing I'm going to do is share, I keep saying it wrong, share my screen. I'm all caught up in this AlphaFold, AlphaFold, I'll fold this, I'll fold that. AlphaFold Server. Let's go over here. Sharing my screen now. Okay. So I already have a pre-entered, already submitted, DNA sequence here. And so we're going to

save it, continue, preview our job, and submit our job here. So this is going to run in the background. We're on this website, which is in beta right now. I just ran a job for everyone, so I've got nine jobs remaining. And while that job's running... I think it already ran. Open results. Wow, that was really fast, much faster than the other one. Okay, so this one's less complex than the other one; that's why.

Okay, so whatever this is, I asked the internet for a DNA example, and this is what I have. So if you guys are listening to the podcast, we're in the live demo section. I went on AlphaFold Server. AlphaFold Server allows you to input these sequences, and so I put in the DNA sequence for, I guess,

this piece of DNA, and it generates this 3D model that you can scroll around on and zoom in and out of. And it tells you the confidence of the generation. Right now I've got... no, I've just got high confidence on two pieces, and then low confidence on the majority of it. But this one was really small, so that's why it was quick.
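
If you want to try the same thing, the server basically just takes raw sequence text. As a purely illustrative aside, and not part of AlphaFold Server's own tooling, here's a small hypothetical Python helper that checks a pasted DNA string contains only the four bases before you submit it:

```python
# Hypothetical pre-submission check for a DNA sequence string.
# This is illustrative only, not AlphaFold Server code or its API.

VALID_BASES = set("ACGT")

def clean_dna(raw: str) -> str:
    """Strip whitespace, uppercase, and verify only A/C/G/T characters remain."""
    seq = "".join(raw.split()).upper()
    invalid = set(seq) - VALID_BASES
    if invalid:
        raise ValueError(f"Unexpected characters in DNA sequence: {sorted(invalid)}")
    return seq

example = "atgc gtta\ncagg"   # e.g. a sequence copied from the internet
print(clean_dna(example))     # -> ATGCGTTACAGG
```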

But this other job I have is the protein, RNA, and ion one, PD8 AW3. Open this up. This is really cool. This thing is crazy. It's something else. There's a lot going on here, and apparently you're supposed to be able to tell what is what. I have no idea; I just think it looks cool. If you can't see it, it kind of looks like this long strand of continuous,

folding, springy stuff in different colors, and the different colors indicate the confidence AlphaFold has in the 3D structure it generated. And then it generates the sequences of the RNA, the protein, and the ion. So it's pretty cool. I'm going to stop sharing. Okay, so now we're going to transition over to ChatGPT.

So GPT-4o is this new model that was announced today in a live demo. And I think it was kind of foreshadowed by an interview that Sam Altman had with

the All-In podcast group, which I think was on Friday or Saturday of last week, something like that. It was recent. And they were asking him, okay, when is ChatGPT-5 coming out? When is this coming out? And he was like, well, honestly, I think we need to reevaluate how we're doing these models. I think that instead of having these large releases, we should just take an iterative approach to

these releases. So it's not ChatGPT 4, ChatGPT 5, ChatGPT 6. It's just ChatGPT, that's our model, and we slowly upgrade it. And so then he was asked, okay, well, what about

AI interaction: when AI gets super advanced, how would you want it to work? There's kind of this independent-AI approach. There's this

AI agent idea, where the agent would just do whatever you want, go off on its own, and you'd have no visibility into what it's doing. Maybe it interacts with companies via APIs or orders stuff on its own; it goes on the website, logs in, does its thing. And then there's this other approach that Sam was talking about, where

the AI is integrated with you on your phone or your computer and can see what you see and can navigate the human world, because the world is built for humans, and it makes more sense for you to be able to see what the AI is doing. It makes the user more comfortable and adds more interactivity.

And there's a lot of information that's lost if you can't visualize it. The example that Sam gave was, okay, what if you asked ChatGPT to order you an Uber, and it does it via API? How do you know how far away the driver is when you're ordering it? Because that changes things: do I order it now, or do I order it in 20 minutes?

What are the options for pricing? What car should I get? Do I need an XL, or is the price similar enough that I'll just get an XL anyway?

Where is the driver on the map when you order the Uber? There's all this information that would be a pain to send back and forth when the user, the human in this case, can just look at their phone and see everything. All that information is visual: you can see the driver on the map, the different routes, all the different prices. And so instead of building

an additional world for this AI to interact with, separate from the human world, Sam was suggesting, okay, just make the AI able to live and interact in the human world, rather than building a separate structure for it to interact through.

So I think that makes sense, and it kind of follows what was demoed today with GPT-4o. GPT-4o comes with an app enhancement where you can talk to ChatGPT, and then you

can do things like live translation. You can talk about your feelings, or, in the demo, they had a video capability, well, not quite video, but you could flip your camera and show ChatGPT in real time what you're doing. And they solved a linear math equation together, and ChatGPT helped

the user through solving the equation step by step. I think it was like,

something like 3x + 1 = 4. That was the equation. And then he showed it to the AI, to ChatGPT: hey, I need help solving this equation, how can I go about doing that? And then it would say, you know, have you thought about moving the constants all to one side? And then he's like, okay, well, I think I have to subtract

one from each side. And then she's like, yeah, of course, that's the right idea. So they do that. Then he's like, okay, now I've just got three x and three, what do I do now? And she's like, okay, well, you know, what's the opposite of multiplication? And he's like, adding. And she's like, close, not really, you want to do division. And so they divide and get x equals one.

Once he does that, she's like, wow, you got it. You're awesome.
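
Written out, the steps from that exchange look like this (assuming the 3x + 1 = 4 form mentioned above; the exact numbers in the demo may have differed):

```latex
\begin{align*}
3x + 1 &= 4 \\
3x + 1 - 1 &= 4 - 1 && \text{subtract the constant from both sides} \\
3x &= 3 \\
\tfrac{3x}{3} &= \tfrac{3}{3} && \text{divide (not add) to undo the multiplication} \\
x &= 1
\end{align*}
```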

Being able to do that on the fly is huge. Huge. It's crazy. And then they also did something where they reversed the camera, put it on his face, and said, hey, can you tell me my emotions today, how I'm feeling? And the guy's smiling. I mean, it's pretty easy, but it has to read the face, know that it's a human face, know that when your eyes are kind of closed and you've got a big smile on your face you're really happy, and do all that in real time.

And it did make some mistakes. Like, at one point it was trying to say, wow, you look great, when it was a piece of wood he had shown the camera and it had caught a glimpse of that. It started talking like, wow, you look great, Charles, or whoever. And he's like, no, that's not me, that's just a piece of wood. And then it's like, I got too excited there, now I see you. Stuff like that,

which is really cool. I think it's great. And then they demoed a

desktop app where, in real time with your voice, you could interact with this AI agent while sharing your screen. And so he was asking it programming questions, troubleshooting what to do with a script, and how they could change the shape of the graph.

They used, I think, a "foo" method for the smoothing, a running-average smoothing of the temperatures, which is normal. They normally do something like a five-week running average, and then you smooth it out so it's not as jumpy.
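
If you haven't seen a running (rolling) average before, here's a small illustrative snippet; the window size and temperature values are made up and aren't taken from the demo's actual script:

```python
# Illustrative running-average smoothing, similar in spirit to what the demo showed.
# The five-point window and the temperature data are made-up assumptions.

def running_average(values, window=5):
    """Each output point is the mean of the last `window` input values."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

temps = [71, 75, 68, 80, 77, 73, 79, 82, 76, 74]   # hypothetical daily temperatures
print(running_average(temps))                       # less "jumpy" than the raw series
```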

Anyways, he was demoing it with this script, asking the AI this and that about the script, and it was answering. He'd scroll to a new part of the script and it would explain it, like, verbatim, really good stuff on the fly. It has a human-like voice,

the ability to interact with vision and your voice. I think that having voice capabilities for these AI agents is huge. When I was using Google's Gemini online, it was able to use voice, but not on my phone. It's really nice; it feels more natural.

But to be able to do that on your phone or your computer, and not only that, you can also be like, hey, what is this? You turn on your camera and it'll tell you what's going on, or how you should solve this thing. That's huge. It's great that this is the way it's going, because that's how it's going right now with wearables and what Meta is doing, and

I think there are some conversations with Apple about OpenAI or Google integrating one of their models. I think they're both fighting for that position; who knows who gets it, I'm not sure. But it has this natural voice and this approachable feeling of being able to interact with something while sharing your screen. People do this nonstop, all day every day, with people they work with remotely or when they're in school.

It's something that I think is very big for

OpenAI, and it kind of,

how do I say this, it reveals where they want to go. They want to be

a piece of your phone; they want a piece of your computer. They want to be everywhere, and they want to be accessible wherever you are. But I don't necessarily think they want to make their own device. Instead, they can be this extension of the device that you're already paying for. But I think that if they're going to get more advanced models, they'll have to figure out

how to reduce the models, like Meta does, where they have a reduced model for their wearables or a reduced model for, you know, some of the stuff they do on WhatsApp, things like that.

They will have to figure out the model sizes, because GPT-4 is huge, and I don't know if it's efficient for them to run that constantly on their hardware, because it gets really expensive. That's the main reason they didn't have a free GPT-4: the expense of constantly running requests from users,

as the models are much more complex. It's, you know, I think,

like over a trillion parameters, or maybe 400 to 500 billion or a bit more. It's not very clear, because they don't announce it, but from the difference between GPT-3 and GPT-4, people have assumed, and AI researchers have estimated, that it's around that range, which is too much to put on your phone, too much space.

They also talked about allowing custom chatbots for free users for the first time,

which is something that was only available to paid users, and paid users pay 20 bucks a month. I think if you're questioning whether you should pay for it or not, you should consider how much you're spending on things that don't save you time. It depends on what you're doing, but if you have something in your life that you're constantly doing and it needs to be in the same format, you can use an AI bot to do it.

I use Meta AI daily to answer questions in my personal life. I also use it to help me write IT tickets and

format my notes. It does a lot, and I can ask it coding questions. Meta AI is free. OpenAI is good too; I just appreciate Meta, and I think Meta's is the best free version right now. But I also have a paid subscription to OpenAI, because I have a lot of historical stuff I've worked on for a long time and I don't want to lose it. So I'm kind of holding on to the paid version. So.

We'll see if it's worth the paid version in late summer. I think that's when they're going to announce ChatGPT-5. So who knows?

In conclusion: we discussed AlphaFold 3 and GPT-4o. We talked about the potential of AlphaFold 3 and how it's going to change and expedite drug discovery and testing, and then we briefly talked about GPT-4o and what that will be doing for

users, and what's coming to users in a couple of weeks. I don't have that much information about it; as stated, this was announced and demoed only five or six hours ago. So there isn't much information besides what they demoed, and I haven't been able to try it out myself, but we will be doing that on the podcast. A hundred percent, no question. So I encourage everyone to go try AlphaFold Server.

I'll put the link in the show notes. It's going to come up as something like golgi.sandbox.google.com; I think they changed the domain name. It was normally alphafoldserver.com. If you type alphafoldserver.com into Google, the first link is the one, or you can use the link in the show links in the show notes that I'll have in there. There's also a link to alphafold3.com,

like, the blog of what was discussed, and a link to the research paper if you want to read it. There's also a link to OpenAI's GPT-4o page and their demo. They've got a lot of videos demoing the capabilities they're talking about, like live translation and all these other things. There's also a comparison chart for GPT-4o

that I'll try to get in the show notes. I don't know if Spotify or YouTube allow you to put a JPEG in the show notes; I'm not sure. But I encourage everyone to stay involved, stay learning, and interact with the show. And if you enjoy this content, give us a subscribe or a like; anything like that helps the algorithm. And if you really, really enjoy it, you can

go on and support it on Facebook, Instagram, or YouTube. That is something I've recently been working on as I'm trying to learn, and that's one of the things we'll be talking about in coming episodes:

building a community on Facebook and Instagram. So I'm doing a course by Facebook, I guess Meta now, a Meta course on Facebook Blueprint. There's a whole bunch of different courses you can do, and it talks about how to do proper marketing and, you know, grow a community on Facebook. And I think it'd be a good thing to do,

because I want to learn it, and it'd be beneficial for me to share that with the group. And so that will be coming; I'll check it out. I think that's what I want to talk about in the coming weeks, so I'm going to do some discovery, and if you hear about it in an episode, it's because it was useful. That being said, have a great day or night, wherever you are in the world. Have a great day. Awesome. Goodbye.

Bye.