This Day in AI Podcast

This is a recording of the live event on YouTube following the OpenAI DevDay keynote. We'll be back with a regular episode later this week.

Sharkey and Sharkey amped up on caffeine live react to OpenAI's latest announcements. Cost reductions, larger models, and an app store?! The duo banter and bicker about whether this marks excitement or irrelevance for devs like you. Plus Elon Musk teases a GPT-style model without the handcuffs - does this spell trouble for Big Sam? Sharkey and Sharkey think out loud and solicit hot takes from listeners on the implications.

We cover:
  • All the news from OpenAI DevDay
  • Reactions from our community
  • xAI Grok (briefly)
  • GPTs and the GPT store

Join the discord:
Get the merch:

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

Mike Sharkey (00:17:31):
All right. We are live, I think.

Chris Sharkey (00:17:35):
Yes, all the internets. That's great. Well, I've already done all my speculation on the discord, so I've got really nothing to say.

Mike Sharkey (00:17:42):
Yeah, you shit talked on the discord the entire way through the presentation and in my ear as well. That's all I heard, the shit talking. I felt like, God, are we just too sceptical? It's really easy to just see through this stuff now. Maybe it's age, or it's just the sort of Sam Altman, I-want-to-be-Steve-Jobs playbook that he seems to be rolling out.

Chris Sharkey (00:18:06):
I think the two things that were the biggest, unfortunately they got leaked early so I was less excited by them, but the 128K context window on GPT-4 Turbo is truly exciting. I mean look, we've had Claude 2 for ages and it's pretty amazing at the 100K context size, but GPT-4 tends to give better results, and having that amount of context at the lower price as well, which is obviously significant, will be interesting to try.

Mike Sharkey (00:18:37):
Yeah, I also thought it was interesting that they said they're focused on price right now, but next is speed. And I thought the most impressive demo, well, not impressive in the sense that I was impressed by the technology, but impressive

Chris Sharkey (00:18:53):
In not impressive in the traditional

Mike Sharkey (00:18:55):
Sense. Yeah,

Chris Sharkey (00:18:57):
It was lame sense,

Mike Sharkey (00:18:58):
More impressive in the sense that it was fast. So that demo of the Assistants on localhost:8000 at the end, which I noticed was pretty fast, it was responding insanely fast. I haven't seen anything that fast yet, but maybe this is just the new upgraded ChatGPT we're going to look at.

Chris Sharkey (00:19:18):
Yeah, that's right. The Turbo has been fast for a while there. The thing I'm interested in as well is they talked about being able to run multiple function calls in the same request, which is something I've actually done locally in demos I've done, where I allow it to output multiple function calls. The problem when I do multiple function calls is you can't then stream the output, because you're waiting to see, is it outputting text or is it outputting a function call? So you've got to sort of pause the whole thing before you can do anything. So if they have a solution to function calling that doesn't rely on that technique, then I think that will be pretty cool, because you can be going off running the function while you're replying to the user, increasing that perceived speed in your application.
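
A minimal sketch of the buffering problem Chris is describing, using simplified stand-ins for the API's streamed deltas (the chunk dictionaries below are illustrative, not the exact wire format): text can be streamed to the user the moment it arrives, but a function call is only actionable once every argument fragment has landed.

```python
def route_stream(chunks):
    """Yield ('text', piece) as soon as text arrives; collect a function
    call silently and yield it once, complete, at the end."""
    fn_name, fn_args = None, []
    for chunk in chunks:
        if "function_call" in chunk:
            call = chunk["function_call"]
            fn_name = call.get("name", fn_name)       # name arrives once
            fn_args.append(call.get("arguments", ""))  # args arrive in pieces
        elif chunk.get("content"):
            yield ("text", chunk["content"])  # streamable immediately
    if fn_name is not None:
        yield ("function_call", (fn_name, "".join(fn_args)))

# A text-only stream can reach the user token by token...
text_stream = [{"content": "Hel"}, {"content": "lo!"}]
print(list(route_stream(text_stream)))

# ...but a function call only becomes actionable after the last chunk.
fn_stream = [
    {"function_call": {"name": "get_weather", "arguments": '{"city": '}},
    {"function_call": {"arguments": '"Paris"}'}},
]
print(list(route_stream(fn_stream)))
```

This is why Chris has to "pause the whole thing": until the first delta reveals which branch you are on, you cannot safely start rendering.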

Mike Sharkey (00:20:03):
Yeah, I think I did hear them allude to some sort of asynchronous way of doing that potentially. There was quite a lot to take in at the start. They skimmed over a number of different facts. The first one, of Fortune 500 companies using ChatGPT, which is pretty damn impressive, a hundred million weekly active users, that's a real significant amount of usage there. They started off with GPT-4 Turbo, which I don't even know if that was very unexpected. There wasn't much in this entire announcement that,

Chris Sharkey (00:20:40):
Yeah, like I said, the whole presentation, he seemed like he was just really anxious to get through the little things so he could get to the big thing at the end, and I guess we'll talk about that at the end, the GPTs announcement. But I thought that the whole thing had this theme of, they don't really understand what people are using their technology for. When I look at our own community and just the examples people cite of what they're doing locally with models, or what they're doing with the GPTs, or applications they'd like to do but they're too expensive on, say, GPT-4 32K, and then you see their samey Silicon Valley video where it's like, I use it to help me write my blog content. It just seemed like they just don't understand the use cases. Even their demos were, here's how I interact with my Google calendar on a day-to-day basis, as if this is the biggest unsolved problem in the universe. It's just like, oh, well, how can I schedule my meetings better, or how can I chat to someone about my meetings? It just doesn't resonate when you've got people out there solving real issues and creating actual really valuable products.

Mike Sharkey (00:21:51):
A big factor for me was, and I said this to you during our shit talk conversation, I think in the discord as well, is the fact that Sam Altman seems like he doesn't use AI. He just acts as if he doesn't really use the products. He's actually been interviewed quite a few times and they're like, what do you use ChatGPT for? And he gave really, really bad examples. He seems like a truly out-of-touch CEO in the sense

Chris Sharkey (00:22:21):
Of, yeah, I've had this issue in the past where people say, what do you do? And when, say, we talk about our day-to-day company, I sometimes struggle to describe what it does, a lot of tech people do. But when it comes to AI, I was at a dinner on Sunday night and someone said to me, oh, what are some of the things that I could use ChatGPT for? And I can immediately rattle off real-world use cases of people I know who are using it in their jobs, in their personal life, and with actual detailed examples that illustrate to them what it's powerful for. But in this presentation, they, for whom this was clearly a big deal, couldn't do the same thing.

Mike Sharkey (00:23:03):
Yeah, it was just obvious that the examples felt really, I don't know, they just felt like they weren't real. And again, we'll get to it a bit later, but they announced that you could basically chat with Zapier and do that sort of calendar example that they stepped through, and I'll try and bring up some examples of that a bit later. But the whole demo, no one is going and chatting with their Zapier GPT to do this. I wonder, similar to the plugins, is this going to be another failed experiment in the sense that, do you really use, why go to Canva GPT when you can just go to Canva? I don't

Chris Sharkey (00:23:50):
Understand. Well, I think this is the thing. It was really maybe just padding, because the real announcements, and what really matters to our community I think, is the cheaper price and the larger context window. I mean, those are the two most significant things by miles, because the larger context window means that all of the people who couldn't get access to Claude now have all of those abilities that we've been using for a while, and I know from how much I rely on Claude 2 that this will open up the use cases for people who didn't have it before. And then the lower cost makes it actually approachable to do that, because the GPT-4 32K, while it was pretty powerful and could do most applications that you'd want in a larger context window, the cost was just like, I could never deploy this. I could never actually use this in a real-life application because the cost is just insane. Whereas with this cost reduction of three times, yeah, it's still going to cost a lot if you use the full context window, but it makes things that weren't possible before really possible. And I think that the real implication is a lot of, let's say, wrapper apps that weren't possible before now are, and I know they've come out simultaneously with a competing product, but I don't think that's enough to mitigate the effect for regular developers working on AI applications that this price and context window increase brings
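
Rough arithmetic on the price change being discussed. The per-1K-token rates below are the DevDay-announced figures as we understood them (GPT-4 Turbo at $0.01 input / $0.03 output, GPT-4 32K at $0.06 / $0.12), so treat them as assumptions and check the current pricing page before relying on them:

```python
def request_cost(in_tokens, out_tokens, in_rate, out_rate):
    """Cost in dollars for one request at per-1K-token rates."""
    return (in_tokens / 1000) * in_rate + (out_tokens / 1000) * out_rate

# A request that fills most of the new window: 120K tokens in, 2K out.
turbo = request_cost(120_000, 2_000, 0.01, 0.03)
print(f"GPT-4 Turbo: ${turbo:.2f}")

# The same workload on GPT-4 32K would need roughly 4 chunked calls
# of 30K tokens each, at the older (higher) rates.
gpt4_32k = 4 * request_cost(30_000, 2_000, 0.06, 0.12)
print(f"GPT-4 32K, chunked: ${gpt4_32k:.2f}")
```

Even at the cheaper rate, a full-window call is still real money per request, which is Chris's point: the reduction makes previously absurd applications merely expensive.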

Mike Sharkey (00:25:13):
To me. I don't think... I mean, their idea here is to capture and build some sort of ecosystem, I'm just not sure it's going to work. And that's partially because, as we said towards the end here, why are you going to go and invest and build your applications in this ecosystem when he said clearly every app's going to be screened, so they can just take it away from their ecosystem,

Chris Sharkey (00:25:43):
Immediately clone it themselves and just be like, announcing whatever, app number four, we've cloned it and now it's available for everyone to use.

Mike Sharkey (00:25:53):
Yeah, I don't think this will necessarily stop people building into that ecosystem, and I'm sure we'll see major brands have different apps in there. Are we calling them apps? I know what they're calling them, but that's mental. So people having these different GPTs in the store, I guess the real question will be, is this an Apple Watch platform where no one really uses the apps on a watch, or is it more like the App Store?

Chris Sharkey (00:26:21):
Yeah, if we're jumping forward to talk about the GPT stuff, there's still a few things I'd like to cover about the initial announcements, like JSON mode and things like that. But in terms of the GPTs, yeah, it really gave me the impression it's just sort of like a prompt builder for people who don't know how to write prompts. It's really just giving you a few fields, two or three, that have, this is this part of the prompt, this is this part of the prompt, and then that'll make you a chatbot, perhaps with some built-in knowledge or whatever they're calling it, where you upload a few documents and then it can reference them. So I upload two or three PDFs, I put in my three or four parts of the prompt, and now I have a chatbot that knows about accounting, or one that knows about my business.
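
The "prompt builder" pattern Chris describes can be sketched as a function that stitches a few authored parts together with uploaded reference documents. This mirrors the idea, not OpenAI's actual implementation; all names here are illustrative:

```python
def build_system_prompt(role, instructions, constraints, knowledge_docs):
    """Assemble a fixed system prompt from a few authored fields plus
    uploaded 'knowledge' documents the bot may quote from."""
    doc_section = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in knowledge_docs.items()
    )
    return (
        f"You are {role}.\n"
        f"Instructions: {instructions}\n"
        f"Constraints: {constraints}\n"
        f"Reference material:\n{doc_section}"
    )

# Two or three fields plus a couple of PDFs' worth of text, and you have
# the "chatbot that knows about accounting" Chris mentions.
prompt = build_system_prompt(
    role="an accountant for a small bakery",
    instructions="Answer bookkeeping questions using the reference material.",
    constraints="If the answer isn't in the documents, say so.",
    knowledge_docs={"chart_of_accounts.pdf": "4000 Sales, 5000 Cost of goods..."},
)
print(prompt)
```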

Mike Sharkey (00:27:07):
Yeah, the real question will be what does it mean? Are people going to go here for everything, or is it more, as we've said for a long time on this show, just a use case library, teaching people how to use ChatGPT? Because the reality is, and you alluded to it earlier, in your day-to-day life a lot of people don't know how to take advantage of AI. They don't know what it can do. They've got a blank cursor in front of a screen and they're like, I can ask this thing anything. It's the,

Chris Sharkey (00:27:39):
Yeah, it's interesting you say that, because I saw that on a public startup forum I'm in the other day, someone saying, oh, how come GPT always responds in this sort of arrogant way, like it knows it all? And I'm just thinking, it's because you're not prompting it. Or I spoke to someone the other day who was trying to write a mission statement for their company, and he said, oh, I didn't end up using it because it kept doing this or whatever, and I just kept thinking, oh, you just dunno how to prompt it correctly. And you're right, it's that mainstream audience who I guess they were trying to appeal to. As someone just said, Aaron Wheeler just said in our chat, they were really trying to appeal to that sort of mainstream public audience, not the people who've got deep into the weeds with prompt engineering, who understand how to overcome those things even in an aligned model like GPT-4. But

Mike Sharkey (00:28:27):
Then this is the thing I don't get, and I know there's some people watching live in the chat that feel the same way. Why call it a dev day? I mean it felt like there was very little relevance apart from the rushed announcements.

Chris Sharkey (00:28:40):
It was more like an FU dev day. It's like, hey, all those things you've been building, we've just sort of theoretically wiped out, so you don't have the confidence to, say, pursue the apps you're working on, because someone might build them in our community now. It really wasn't that embracing of devs at all. I mean, they alluded at the end that there's more things available via the API, like DALL-E for example, and the interpreter and things like that, which are definitely going to empower devs, but it was almost, it was relegated to the end. The demo didn't really capture it very much, and I got the impression it was more like, alright, here's a proprietary API you can build on, every single function call is going to cost you, and you're calling an API to write code to do the same things over and over again. He gave the example of, oh, here's a travel thing that'll plot stuff on a map and it'll calculate the cost for everyone.

Look, it's writing code to calculate these costs. And I'm like, what kind of system do you have that plans travel, that writes its own code every time it runs? It just seems like a wildly inefficient way to build an app for things that are solved problems. You don't need Code Interpreter to do that. It should be used for novel use cases that it can't possibly know in advance. But if you're building some travel app, you know what all the problems are in advance. It's not a good use of code. Yeah, it can do it, but do you really need it?

Mike Sharkey (00:30:08):
Yeah. I also thought that the comment at the end from Sam when he finished the presentation is basically, in the future we're going to have agents, you'll just talk to a computer and it will literally do everything. It is a bit like, build quickly while you can, make money while you can, because eventually we're going to turn all these integrations into one big super AGI computer and you'll all be f'd.

Chris Sharkey (00:30:34):
And he even said that to Satya when he left the stage. He's like, oh, look forward to building AGI with you. It's this sort of sinister undertone. It's like, oh, this is all fun and games right now, but this is just really funding the end of humanity and we'll do that in the background. You guys just don't worry about it. It'll be fine.

Mike Sharkey (00:30:52):
It's very hard to take them seriously though when they say that stuff and then they're like, now you can build, here's a form field.

Chris Sharkey (00:30:59):
Yeah, it's just like the Apple Watch store. This comment from Cole Cus, I hope I said that right, in the chat, where he's like, let's call them agents. I don't think we should call them agents, because I don't think these are anywhere near to agents. Like, you and I have been building stuff that I would call agents, and I feel like they're missing several components from these GPTs that I would have to consider an agent. They don't have a sort of transient memory, they don't have a personality, they don't have a sort of agency of their own, and I think to call something an agent, they need to have agency. They're not just like a pre-programmed chatbot prompt.

Mike Sharkey (00:31:43):
For sure. So let's go back to the developer announcements and the key announcements. I do feel like unlike a regular show, we're getting ahead of ourselves.

Chris Sharkey (00:31:55):
Yeah, exactly. We need an agenda. This is very unprofessional. But yeah, a few of the things that are very appealing to me, aside from the larger context window and the price, which are obviously big ones, really important. The JSON mode, being able to force it into JSON mode, is really, really good, because we've spoken about this before where we had things like Microsoft Guidance, we had things like specifying an output format that it would follow. Most of the time that's what I use now, but I would say even in prompts that I use regularly, one in every 10 times it'll screw up the output and give invalid JSON, which is why, I think I've mentioned a few times, I actually often get my prompts to output XML, because it seems to be able to do that better and more consistently for parsing, but even that it screws up from time to time. So being able to be guaranteed valid JSON coming out the other side is a really, really big step forward in terms of, I guess, just the efficiency of building LLM-based apps. Knowing that you're going to get valid JSON just means that you're spending a lot less time worrying about edge cases and things like that. So I think that one is very significant.
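
For context, the kind of defensive parsing that JSON mode is meant to make unnecessary looks roughly like this: try a strict parse, then attempt to rescue a JSON object the model wrapped in prose. The helper name and the crude regex fallback are illustrative of the pattern, not a library API:

```python
import json
import re

def parse_model_json(raw):
    """Parse model output as JSON, falling back to extracting the first
    {...} block when the model surrounds its answer with chatter."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", raw, re.DOTALL)  # crude rescue attempt
        if match:
            return json.loads(match.group(0))
        raise

# A clean response parses directly...
print(parse_model_json('{"sentiment": "positive"}'))

# ...while a chatty one needs the rescue path (the 1-in-10 case Chris hits).
print(parse_model_json('Sure! Here is the JSON:\n{"sentiment": "positive"}'))
```

With a guaranteed-valid-JSON mode, the fallback path, and the retry logic it usually implies, can simply be deleted.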

Mike Sharkey (00:33:08):
Just as you're talking here, I'm trying to bring up the chat on the screen really unprofessionally.

Chris Sharkey (00:33:13):
Yeah, that's a good idea. There's some really good comments, and I'd really like to take advantage of the fact that we have everyone here today with us, and it's not just our thoughts on this. I'm very curious on everyone else's thoughts on the announcements, and which parts are actually significant and which parts are just sort of fluffy, a bit of both. So the follow-on from the JSON was the consistent outputs. They have a way now, and I must admit I was a bit unclear on the details of it and I need to look into

Mike Sharkey (00:33:41):
It. Yeah, they called it JSON mode, where you'd switch on JSON mode as your preferred output.

Chris Sharkey (00:33:46):
No, but they mentioned something else as well about being able to get consistent outputs. So I guess it's sort of like a pre-fixed output format, so you're going to get certain fields and things like that, and I think what they're saying is that you are always going to get the same kind of output, which is really important as well for building applications.
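
The feature being described here appears to be reproducible outputs: the API gained a `seed` parameter so that identical requests against the same backend (reported via a `system_fingerprint` field) mostly return the same sampled output. A hypothetical request shape, with the exact field names to be checked against the API reference:

```python
# Hypothetical request for reproducible outputs. The `seed` parameter and
# `system_fingerprint` response field follow the DevDay announcement, but
# treat the details as assumptions and verify against the API docs.
request = {
    "model": "gpt-4-1106-preview",
    "seed": 12345,       # same seed + same inputs -> (mostly) same output
    "temperature": 0,    # determinism also wants low/zero temperature
    "messages": [
        {"role": "user", "content": "Summarise this support ticket..."},
    ],
}
```

Determinism is still best-effort: if the backend changes (a different `system_fingerprint` comes back), outputs can differ even with the same seed.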

Mike Sharkey (00:34:08):
So they also announced the better world knowledge, which I thought was interesting. It seems like we saw earlier in the week Elon Musk flirt with the idea, or we'll talk about xAI, and one of its key advantages in his opinion is that it can retrieve real-world information, and I've got some examples to go through in a minute on that. But they were really pushing that hard. It seemed a little bit reactionary to me, like, oh, we've got better world knowledge too, don't worry, we're going to keep it up to date from now on. Sorry

Chris Sharkey (00:34:39):
Guys. Yeah, and the crowd's like, woo woo, latest knowledge. Cool, what's happened lately? Nothing much, really. But yeah, I guess that makes sense. It is annoying when it doesn't have up-to-date knowledge, but I also noticed that they sort of reannounce things, like they tend to do. Oh no, no, you really can have DALL-E 3 now. You really can have image recognition now. You really can have web browsing now. Things that they'd had six months ago, they're now sort of saying, okay, well yeah, actually you have this now.

Mike Sharkey (00:35:11):
Yeah, so they really reannounced modalities, but also announced them into the API. So DALL-E 3 coming to the API, GPT-4 Vision, which I honestly, the

Chris Sharkey (00:35:23):
Vision in the API is huge. The potential applications of that now are massive. I really am excited about trying that, and we mentioned earlier we're going to try a lot of these things obviously and talk about them on our regular podcast this week, so we can actually verify and test this stuff to see that it lives up to what they're saying. I

Mike Sharkey (00:35:42):
Mean if people listening thought the trolling was a little bit harsh over edge shear the other week, wait till we get our hands on GPT-4 Vision. I mean, it's going to be really interesting to push that to its limits and try it. I'm excited actually to consider building a shell app for an iPhone where I can just point my camera around and get the odd still frame and process that image from my day. Do you remember, we actually had a customer in our business called, I think it was called Narrative, and you'd put this clip on your shirt and it had a little camera and it would take a photo every 30 seconds.

Chris Sharkey (00:36:23):
Do you remember that? I do remember that. You wore it for a week or something like that? Yes, I do remember.

Mike Sharkey (00:36:27):
No, I think I wore it for a month, and it would take photos, and then you'd go to it and you could flick back to any sort of time period and see your memory. But now what I think is so interesting is you could clip that little pin to yourself and have a journal. So here's an app idea: you clip it to yourself, it takes a photo, and then the vision is constantly describing it and giving you a full-blown memory. I think there's so many cool things like that

Chris Sharkey (00:37:00):
Or yeah, even for example, I've just got one of these smart watches for the first time I want to do more steps and I know I'm 15 years behind the curve on that one, but things like vision synced up with other telemetry data. I know telemetry is not the right word. Whatever data, one of the things this watch does is track my stress. Imagine if you could correlate the things you are seeing with stress. It's like well every time you look at your computer, you're stressed. Every time you're looking at the green nature and going for a walk, you're not stressed. It would be very interesting to have different data points associated with the vision API and making interpretations of things you should do more, things you should do less of and I think that really the vision we haven't fully explored as a community, the potential applications of that.

There's so many when you think about how much of our world is about vision and what we're seeing and how we're interpreting that and to have that being done programmatically and at all times and from different angles and different places, the amount of visual footage that's out there is enormous and being able to process that systematically using vision that's accurate is going to be very significant. I don't actually know, and it'd be great if someone could say in the comments what the cost on vision is. I'd be curious at how much per interpretation it actually costs. Could you realistically run a full video through it frame by frame or like you say a frame every 10 seconds or is it sort of the case where you're really only going to be working on static images due to the cost?
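
Chris's frame-sampling question is mostly arithmetic. The per-image price below is a placeholder assumption (real vision pricing varies with image resolution and detail level), but the shape of the estimate holds:

```python
def vision_cost(video_seconds, sample_every_s, price_per_image):
    """Frames sampled and total cost for running a vision model over a
    video at one frame every `sample_every_s` seconds."""
    frames = video_seconds // sample_every_s
    return frames, frames * price_per_image

# One hour of footage, one frame per 10 seconds, at an assumed $0.01/image.
frames, cost = vision_cost(3600, 10, 0.01)
print(f"{frames} frames, ~${cost:.2f}")
```

Cheap enough for periodic snapshots, but true frame-by-frame video (30 fps is 108,000 frames an hour) multiplies that by 300, which is exactly the static-hardware-versus-per-call-cost trade-off discussed next.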

Mike Sharkey (00:38:41):
Yeah, I mean, this does beg the question. It seems like OpenAI come out and lead with something like GPT-4 Vision, and then, as we've seen from the open source community, we get a number of different models that are catching up on vision. I mean, they're definitely not perfectly comparable, but I'm sure a lot of them could be fine-tuned for use cases. We've alluded to it many times on the show before around toys, like kids' toys, so you could have a kid's robot that goes around your house. This is going to sound scary, but if it was localised, taking photos and then making decisions of where to navigate or what fun things to do, then you think, GPT-4 Vision is just not really that relevant for that. Or where you're talking about the memory device where you want it all on board. A lot of these things, the technology's not there yet necessarily to power it, but I think this is where the open source community sort of can step in and create these more efficient models for more purpose-driven things, and I wonder

Chris Sharkey (00:39:46):
Yeah, and I think there's a big difference paying a static cost to be running a big machine for your, say, vision API than it is paying every single time you run it. And I think that's the thing, you can do more speculative work if you own the hardware, then you can just keep running it looking for gold in amongst all the images you have, whereas you're never going to do that on an API where it's a per-call cost.

Mike Sharkey (00:40:11):
Yeah, I just wonder, with GPT-4 Vision and these different capabilities, is this just going to inspire more open source, or are people really going to build around it? I felt today a lot of these developer announcements were a reaction to people using Claude for 100K context for

Chris Sharkey (00:40:35):
People, and I see what you mean. It's like, is GPT-4 really, in some ways, just setting the sort of benchmark of, this is where you can get to if you have the most money and the most talented people building stuff, and then that really sets the tone of what's going to be next in the AI space and what we can look forward to being able to use on alternative models. I'd say that's probably true.

Mike Sharkey (00:40:57):
Yeah, and you said this before, but it was really interesting. I mean, we saw, I think it was April this year, we got GPT-4 and vision, correct me in the chat if I'm wrong, but we got this access to GPT-4 for the first time and they showed off vision all the way back then and said it would be available. Obviously they had safety constraints, wanted to limit facial recognition, all these other things that could potentially go wrong. But then the question is, are we going to get it today? Is it really today? There's still no word from what I can see, just scrolling through. I

Chris Sharkey (00:41:42):
Saw someone in the comments say that they already had access to the JSON stuff, in the docs at least, so it'll be interesting to see. Yeah, he did say most things would be out today, I think, so it'll be very interesting. The other thing that's probably worth covering is the Microsoft connection. It's a bit strange. I felt like Satya, I dunno if I say that name right, came out and said a whole lot of nothing. He's like, we're heaps excited about what you're doing, we're trying to build the systems. He just kept saying, we're trying to build the systems. I dunno what he means. I guess he means the infrastructure to deliver this at a commercial sort of level. And it really was just a sort of, it seems like the money guy just gets to come out and do his speech. I didn't really see the significance of him coming out. He didn't really announce anything.

Mike Sharkey (00:42:32):
I felt like it's just that keynote playbook of much larger companies like Apple. Steve Jobs famously brought Bill Gates out with their Microsoft partnership where they were announcing Microsoft Word and Microsoft Excel on the Mac, I remember that. And it was like Sam Altman's playbook to be like Steve Jobs: I'm going to bring out my bitch now, the CEO of Microsoft.

Chris Sharkey (00:42:58):
Yeah, I don't know. I found that bit a bit dull and pointless. I didn't really understand. I mean, I say this as someone who does use Azure, and their GPT offering is great. It is faster, it had the larger context window available before regular OpenAI did. So I suppose that that relationship's going to continue, but as some people have pointed out, it's a weird one, because in some ways they're competitors and yet they're in this weird dance where they have to work together

Mike Sharkey (00:43:29):
And behind the scenes they're secretly trying to screw each other over. I mean, there's no doubt that Microsoft's trying to unhook themselves from OpenAI, there's no doubt in my mind, and vice versa. I mean, we saw that leak in The Information about just the sheer cost, and they obviously knew all this stuff was coming, so they're still looking at these cheaper, more efficient models. And I was going to make this point earlier, but I lost my train of thought: I think the next, and I'm willing to put this prediction out there, I think the next big thing that they're going to do, and they alluded to it on stage, was, if you have this specialist requirement to train models on proprietary data, we want to help you.

Chris Sharkey (00:44:17):
Yeah, yeah. I found that bizarre as well because,

Mike Sharkey (00:44:20):
But hang on, I do have one point, and I'm going to forget it again if I don't make it, which is that I think we'll see a model marketplace next, and I've said it many times on the show: we're going to have our GPT store and our model marketplace, it's going to come.

Chris Sharkey (00:44:33):
Yeah, that would make sense, because one of the things I thought about the whole GPTs concept is, really, to make them any good and make them useful, you really need to seed them with proprietary knowledge, and no one's going to want to share their proprietary knowledge, even for money. So they're going to have to have fine-tuned models and other things that are dedicated for purpose, and they glossed over the fine-tuning and the fact that it was available. And then, weirdly, they're like, okay, so we're an API, we allow people to make their own apps on there to compete with people who are trying to develop off our API, but also we will consult to companies if they want us to build their models and AI applications. So another case of them not really knowing where their future lies in terms of where they sit in making money,

Mike Sharkey (00:45:19):
And that was the whole sense I thought from the event, if you're just looking at it, of living up to becoming this, I think everyone thinks they're on a path to being the next Google, a really big company. I think that's pretty obvious now, but it does seem like, outside of that core ChatGPT use case of almost being like the replacement for Google search, not necessarily search in the sense of search, but this sort of compressed knowledge of the internet where you can find things faster and get it tuned to maybe more what you're looking for. I think that use case is very sticky. That's not going to go away. I mean, I'm so reliant on it today. And then just the sort of, I don't want to say it, but autocomplete in IDEs and places like that with Copilot, I think those use cases are very sticky. They're already here to stay. But outside of that, I just don't think anyone's really figured out really novel ways where you're addicted to it, to use it all the time.

Chris Sharkey (00:46:21):
If you look at today's announcements holistically, they haven't really, well, I mean, they haven't announced anything that isn't already out there. We saw the 128K model released during the week. I was trying to try it this morning but couldn't, because I realised I need to rent a much larger cluster in order to run it. But yes, they have the state-of-the-art model, but nothing they're doing isn't available in some form. Today wasn't a generational shift like we've seen at every other announcement, where we see something where you're just like, wow, these guys are truly head and shoulders above everyone else. It was sort of marginal improvements, or in some cases them playing catch-up. And look, some of the things are cool, I'm going to try them and they are a step in the right direction, but this is the first time at one of these announcements where we haven't seen something that is just mind-blowing, next-level, oh my god, the whole world's going to change.

And it's a good point, because they didn't really even have a paradigm shift. It was really just better chatbots today. There was no talk of things like AutoGen, for example, coming out of Microsoft, where it's like, okay, here's a simulation, here's something for your business where you're planning your travel, like they used in that developer example, and we've got seven experts in here who are going to have a conversation about your trip, find you the best prices, tell you the best places to go, debate over which countries are great in the spring. There wasn't any thinking around, what other major paradigms can we use with this existing technology to get more out of

Mike Sharkey (00:48:37):
The model. Yeah. Chris, I will interrupt you just because my audio was out so no one knows what we're talking about. It is back now. Oh, we

Chris Sharkey (00:48:45):
Just gave away the secrets to AGI. Sorry guys, what a shame.

Mike Sharkey (00:48:50):
Yeah, so the point I made, just because it did cut out, is around this comparison people like to make to the iPhone. I think they're building apps and an ecosystem to lock people in, and I think that's really what they've been searching for: a new novel use case for ChatGPT to keep it as the core platform for AI. And the plugins thing was a disaster, let's be honest. The switching was pretty poor as well. So I guess the question really is, have they done enough here to make this seem like a platform you want to go and build on, versus just this novel technology that devs desperately want to use in their own apps? And now they've got this counterproductive battle going on. It's like, are we just competing with everyone, or are we helping these guys? I think it's so unclear.

Chris Sharkey (00:49:47):
Someone said in the community, everyone's just going to be GPTs consultants now, setting them up for small businesses and people who want them, and I feel like that has definitely killed the whole wrapper idea. Everyone's going to want to run on there. But looking at what they demoed, it's quite limiting in terms of what you can do, because you've really got to work within their interface. You're really just building a static system and user prompt for the bot, plus a bit of PDF or image uploads or whatever it is. You're really just crafting prompts for people, and I get the value in that, because as we've said, working with a raw GPT prompt is nowhere near having a custom prompt. But it seems to me like they're working with things that were discovered by the dev community when GPT-3 was released. As a web app company, they're really in the infancy stage compared to other people, though not in terms of the models. So I agree, I think they don't know who they are as a company. I think this idea always had to happen; it was quite predictable. I think we spoke about it on an early episode, that there would be an app store for being able to use different agents. It makes sense

Mike Sharkey (00:51:04):
I guess. So does this devalue developer applications? These people are paying OpenAI's bills by using the API, and now OpenAI is devaluing the market, taking all the value out of it for themselves to make money.

Chris Sharkey (00:51:21):
Well, that's right. As someone just said in the comments, Barry C: they mentioned revenue sharing, but the model sounded like a popularity contest. The thing is, if you build your own value-add application on top of the API, you get to pick how much to charge and how much revenue you get. If you build it on their system and distribute it that way, A, you're really dependent on their store promoting your app, and I'm sure Zapier and Canva will be at the top of absolutely everything, and B, they also decide how much of that money you get. So I feel like for developers committing to this, it's just going to be a Fiverr community: "I will build you a chatbot for $7 that does this." It's really not going to be your top developers committing their talents to a platform that's so restrictive. I just couldn't see someone going, okay, I'm going to dedicate my career to this from now on.

Mike Sharkey (00:52:24):
Yeah. I think too, and we've talked about this a lot before, this idea of chat, right? Like

Chris Sharkey (00:52:31):

Mike Sharkey (00:52:32):
The ChatGPT experience, where if I want to go there, it's this sort of all-encompassing knowledge box where I can pretty much ask anything and get some sort of safety-riddled answer. So I have that, but then am I really going to flick around, like, oh, I'm going to go over to my Zapier GPT now to do my Zaps stuff? I can't, I'm sorry. The usability guy in me is like, I don't get it. Why would I switch over there to do something I could do on their website, when I could just switch tabs?

Chris Sharkey (00:53:04):
Yeah, I think, and Winsome Hacks just said it in the chat: what do I want the AI to do? Solve my problem, whatever it is. Sounds simple, but that's what's needed, and I agree. What you really want is sort of an army of agents with a coordination agent, or some sort of rallying point, where you specify what you want and these things go off and do your bidding. Otherwise it's like you say: I'm going to load up the vision app to interpret this image, then I'm going to copy-paste that into my other bot, which tells me how to interpret it, and then my web-browsing bot is going to go off and input that into my online spreadsheet, and then my spreadsheet bot is going to email that to my boss, or whatever. That's just painful, and no one's going to do it.

Mike Sharkey (00:53:48):
No, I think automating a series of function calls is much more interesting to me, where I'm going: I do this every day, I just want it done for me. I don't care, and this thing gets out of the way and solves my problem.
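[A rough sketch of what Mike is describing here: a recurring task expressed as a chain of function calls. The tool schemas below follow the general shape of OpenAI's function-calling format, but every function name, schema, and implementation is a made-up stand-in, and the model's tool call is simulated locally rather than coming from a real API response.]

```python
# Sketch: automating a daily task as a chain of dispatched function calls.
# The "tools" list is what you would hand to the chat completion request;
# dispatch() routes a model-issued tool call to a local implementation.
import json

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_calendar",
            "description": "Return today's calendar events",
            "parameters": {"type": "object", "properties": {}},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Email a summary to someone",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "body"],
            },
        },
    },
]

# Local implementations the model's tool calls dispatch to (stand-ins).
def get_calendar():
    return ["09:00 standup", "14:00 demo"]

def send_email(to, body):
    return f"sent to {to}: {body}"

def dispatch(tool_call):
    """Route one model-issued tool call to the matching local function."""
    fn = {"get_calendar": get_calendar, "send_email": send_email}[tool_call["name"]]
    args = json.loads(tool_call["arguments"] or "{}")
    return fn(**args)

# In a real loop the tool_call dicts would come back from the model;
# here we simulate it asking for the calendar and then sending an email.
events = dispatch({"name": "get_calendar", "arguments": "{}"})
confirmation = dispatch(
    {"name": "send_email", "arguments": json.dumps({"to": "boss", "body": str(events)})}
)
```

[The point of the sketch is the shape: once the dispatch table exists, "do this for me every day" is just running the same loop on a schedule.]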

Chris Sharkey (00:54:01):
That's possible with what they're saying via the API, which to me points back to the developer community, and I wonder if that's the thing they've done today. They've sort of accidentally made it about something they want to do from a commercial, publicity, be-the-next-Apple perspective, with that app store shit at the end. But what they've masked is the fact that the actual improvements for developers are quite significant: larger context window, smaller costs, predictable output, JSON output, multiple function calls. These are the actual things today that are of significance. If they'd actually taken the time to give really detailed use cases of how you can orchestrate and combine those new talents of the API, not to mention the vision part of the API, and just given use case after use case of what you can do as a developer, say for your own company or as a product builder, and actually made it a developer day, like you said. They didn't. If they had made it into "here's what you devs can now do", developers, developers, developers, or

Mike Sharkey (00:55:06):
Bring up actual developers on stage, bring up actual developers and say, Jimmy Overhears made this photo app and he's making a fortune off our APIs.

Chris Sharkey (00:55:15):
Not like the guy they had at the end where he is like, here, I've come up with an imaginary travel business that's been done 50 million times and I've made this generic shit that can demo what the thing can do. It's not real and everyone knows it's not real. He said it's not real. So yeah, I agree. I think they should have had real developers being like, I am smashing it thanks to this platform. It's now cheaper, so I'm making more profit. I'm expanding into new markets thanks to this function calling thing. I'm adding features for my customers at such a rapid rate that no one can compete with me because I'm leveraging this platform. That's what it should have been about.

Mike Sharkey (00:55:51):
I think there's a big divide in that organisation. You've got Altman, who has this path: now we're going to become the next operating system, where you just go to ChatGPT and everything's there. You don't even browse the web. And I believe that that's the vision. You go to ChatGPT, everything's there. That's the new device, that's the new internet, there's nothing else. You want an application, it builds it. You want data, it pulls it. You're not going to people's websites, you're going to their GPT. And I think that's what's happening here: he's trying to make GPTs the next website. We've said this before, that's what's going to happen. In their vision at least, websites won't exist. And they may not,

Chris Sharkey (00:56:33):
As you've pointed out before, once every website is just a chatbot, it's going to be a nightmare. I think I'm just going to rage quit at that point and smash my computer. Everything as a chat interface is just so painful. You saw it on stage, these guys can barely type. The API is so fast, but you can't get your input in quickly enough because you can't type. It's just not the way to do it. That actually brings up the other thing they did, which was the voice stuff, the Whisper improvements and the text-to-speech. What did you make of that?

Mike Sharkey (00:57:11):
There's six different voices to choose from. They're very, there's actually an example I can bring up here. So this is the text to speech model

Speaker 3 (00:57:20):
As the golden sun

Mike Sharkey (00:57:21):

Speaker 3 (00:57:21):
Dips below the horizon, casting long shadows

Mike Sharkey (00:57:24):
Across the tranquil meadow. I know you can't, but, I mean, they're beautiful voices. And in the actual ChatGPT app, I actually think the voice implementation for their product is really well done. But first of all, they're all American voices, so if you're in any other country the voice is basically irrelevant. Secondly, I didn't think it was as impressive as he made out, that these are the best voices. I mean, we've built good voices in five minutes.

Chris Sharkey (00:57:55):
I uploaded what, 30 seconds of my own voice to Play.ht, and it was convincing enough to trick a bunch of people on our own podcast that it was me. I really was unimpressed by their text-to-speech. It seemed not very good, like catch-up, and it sounded very fake. It sounded like an AI voice to me.

Mike Sharkey (00:58:19):
Yeah, look, I'm not dissing it. This technology's amazing, and we've said it time and time again: we all just take this stuff for granted now. If we were shown that without any context a year ago, we would've been like, whoa. I'm going

Chris Sharkey (00:58:34):
To, yeah. Of course our standards have risen, I agree. But today we're talking about them in comparison with the market and what's out there, and we're seeing that they're really not the leaders when it comes to this stuff.

Mike Sharkey (00:58:48):
And so, back to that point: are they going to lead in terms of the technologies available, or are they going to lead in terms of the big business and consumer app? Because remember, Altman came out and said to developers, we're not going to compete with you. We're only going to turn ChatGPT into something that can help you in the workplace. A workplace agent, that's what we're building with ChatGPT, something that helps you do work day-to-day. That's what he clearly said. And now they've come out today and basically announced a consumer app store, with retrieval, with all of these things that app developers were working on. I don't think it's ruined their businesses at all, but I think it scared everyone. I think everyone's been like, oh no, what are we

Chris Sharkey (00:59:36):
Going to do? I think developers will beat it by going multimodal, as in multi-model, sorry. The thing about building these GPTs is you're locked into their models, and yeah, okay, they probably have the best model, but you're not taking advantage of something we've spoken about a lot, which is the idea that you could fine-tune a model on a particular task so it's faster and cheaper at that particular task. That's completely out the window with their system. It's a very mono, one-track kind of thing: you can only use their model in the way that they designate. And I just don't see that appealing to developers. And indeed, they weren't even really appealing to developers with the GPTs anyway, because saying, well, I'm just using plain English to chat with this thing and sort of eke out an app through some iterative chat process, that's not the way a developer would really work anyway. So again, I don't think the GPTs were aimed at devs necessarily. It's more for people who want to tinker with ChatGPT and customise it a little bit. Yeah,

Mike Sharkey (01:00:41):
I guess it could be used for prototyping. But one of the things we haven't brought up is Poe, which already has all of this, I mean, outside of some of the skills, obviously vision and a few other things. Poe has been a partner of OpenAI and other language models, providing the capability to build what they're now calling GPTs. And if you look at the UI side by side, there have been some comparisons online already,

Chris Sharkey (01:01:10):

Mike Sharkey (01:01:10):
I mean, it looks like a straight rip in some cases. And I know that a lot of these things are similar, but

Chris Sharkey (01:01:17):

Mike Sharkey (01:01:17):
I mean, yeah, I think if you're Poe now it's like, well, they've already got this community online. They've already got an app in the app store, with people, mainly kids, using the personality-riddled chatbots. Is ChatGPT just not going to be seen as the brand where you go to use this stuff? Is Poe or another brand going to win that generation?

Chris Sharkey (01:01:42):
Yeah, it's a good point. And it's one of those things: if it doesn't take hold, it will die a sad death, because people aren't going to want to commit to an ecosystem like that. Like I said, you'll really be committing hard if you start to build apps on there as your unique selling position. So they're really going to have to go hard promoting that thing if it isn't to die. A lot of people have said in the chat the plugins were a real disaster, and it seems as poorly thought through as that, in my opinion. I don't know, let's see. One other thing I found really interesting, that he went through really quickly given how significant it is, to the point where I couldn't even write down what he called it, was the thing about how they will pay the legal fees for copyright claims. Did you see that?

Mike Sharkey (01:02:28):
Yeah, I actually made a note of that. I'll try and find my notes on it. Copyright Shield, it's a thing they announced: defend our customers with any costs incurred over copyright infringement. This was something that was previously announced by Microsoft as well. So maybe they've got a two-for-one deal on a law firm. But it is interesting. I think it speaks to their level of calibration with DALL·E now, that they're confident it's not going to spit out Julia Gillard images or something

Chris Sharkey (01:03:02):
Like that.

Mike Sharkey (01:03:03):

Chris Sharkey (01:03:03):
It's sort of like, okay, so you use the AI to write a new Kesha song, and then Kesha comes after you, and they're like: hey, we have a billion dollars' worth of lawyers. Do you want to settle?

Mike Sharkey (01:03:14):
That's what it feels like. Yeah, Goliath is like, well, we have Microsoft, we have Satya over here.

Chris Sharkey (01:03:20):
It's like, yeah, sure, we can have a 10 to 12 year court case if you're interested. Our lawyers are really excited about the idea. Or you could just drop it.

Mike Sharkey (01:03:28):
Yeah, we're also good at winning antitrust lawsuits.

Chris Sharkey (01:03:31):
But I mean I think it's important, right? Because you don't want to as an app developer be producing content and putting yourself at risk every single time your application runs given we know that people can engineer the prompts to output whatever.

Mike Sharkey (01:03:45):
So the other thing worth mentioning, and I know I feel bad that we're jumping around a lot, but we've had basically zero time to prepare for a live show: the GPT-3.5 Turbo 4K fine-tuning they also announced is three times less the cost, and 16K fine-tuning is four times less as well. So there were a lot of other cost reductions across the board for the smaller models too. And part of me thinks the reason there were such significant cost reductions is also just protecting themselves from what's coming. We know there's a new Claude model coming. We know there's that leak, from someone who's now a member of our own community, on Google Stubbs. For those who need a reminder, I'll bring it up on the screen now: Google Stubbs seems pretty similar to the GPTs thing, except more for prototyping, I think, for building your own, well, what we're now calling GPTs.

So there's a lot coming, and I think OpenAI are obviously watching a lot of this just like we all are, and there's some defensibility in lowering those prices. I wonder if it's all actually driven by cost reductions, or just by: we want to get as many devs onto these APIs as possible. But it does bring me back to the point: if the goal is locking in the dev community, which seems to me the most important community right now to actually build out all of the use cases, does that announcement today around GPTs and this marketplace do it? The revenue share thing's vague, like, oh, if your thing gets used, we'll pay you, rather than you being able to set a price for it or have it as a one-time click-to-buy. I thought: make the "I am rich" GPT that just tells you that you're rich every time you ask it. It's like: you're rich, man.

Chris Sharkey (01:05:49):
Absolutely. Yeah.

Mike Sharkey (01:05:50):
And it's like a

Chris Sharkey (01:05:50):
Billion dollars, burning up the 128K context window 20 times a second to tell you you're rich.

Mike Sharkey (01:05:56):
Actually that's a good idea. And then it would be the most used GPT. So someone

Chris Sharkey (01:06:00):

Mike Sharkey (01:06:01):
Right now, let's do it. We're creating the I am Rich GPT.

Chris Sharkey (01:06:05):
Yeah, it's funny, I just saw a really good comment and now I can't find it. Oh yeah, Jason P: "Cost reduction wasn't enough. I hope it just accelerates open source." I wonder if the cost reductions are driven in part by the great open source models coming out, like Mistral, for example. Because when you look at, say, GPT-4 32K, it was absolutely worth trying to get your own model running with a larger context window, even with the greater fixed costs of, say, renting a GPU or just owning it. Whereas as they lower the costs, the effort to go and set that up becomes less valuable, because, hey, I'll just use GPT-4, because I can viably do it. So I wonder if the cost reductions aren't aimed just at their main competitors, but also at stopping people going: you know what, I'm just going to build my own cluster because, and

Mike Sharkey (01:07:07):
This is a really good point, because there was an article on The Information, I'll try and bring it up, talking about how with Llama 2, a lot of people are saying, and I know this from us playing around with it ourselves, it's quite expensive to get up and running, in the sense of having that infrastructure. And Amazon look to probably be the leader here: we saw during the week they released their own version of Mistral called MistralLite, which allows 32K tokens, which is probably worth talking about at some stage. But I think that whole push around being able to serve up the infrastructure and reduce the cost, so people can go and access these other models in the open source world, is probably a good thing. And

Chris Sharkey (01:07:55):

Mike Sharkey (01:07:55):
If they were to host, say, Mistral and Llama 2 and have very cheap API calls to these, far cheaper than what we're seeing from OpenAI, then that becomes actually a viable alternative. But if that doesn't happen, then yeah, it's going to be a case of how much you value the ownership and control, and not being banned at the whims of OpenAI. And look, honestly, that scares me a lot as a developer myself: I go and build a popular GPT, and then it violates some policy, and then it's banned, or

Chris Sharkey (01:08:33):
Yeah, that's right. And I think the other thing about running your own is you don't stress about the marginal cost of every single call. With the API, every single call has a cost associated with it; with your own, it's: okay, I have this resource, and how utilised is the resource? So yeah, I definitely think they're competing on that front. And as we've spoken about, and I've already mentioned, having dedicated fine-tuned models is a real advantage, and it can be really powerful. So it's a tricky one, and I guess that's why they're straddling this world of not really knowing who they are and where their future lies.

Mike Sharkey (01:09:12):
Look, I think, and sorry to be the sceptic show,

And there's a comment again by Barry Sen here, and he's totally right: build your apps with auto-switching APIs; it will be like the internet in a year, with infinite LLMs, and they know this. They know this technology; the open source will catch up. This will basically be devalued to the point where it's like oxygen for computers, just standardised right across the board. It will become very efficient, and when it does, they can't charge much anymore for the APIs. So maybe that's why they're like: well, if everyone builds on our standards, we just become the weird GPT operating system.
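[A minimal sketch of Barry's "auto-switching APIs" idea: several LLM backends behind one interface, falling back to the next when one fails or gets too expensive. The provider functions here are stand-ins, not real SDK calls; in practice each would wrap a vendor's client library.]

```python
# Sketch: provider-agnostic completion with automatic fallback.
# call_openai simulates an outage so the wrapper falls through
# to the (hypothetical) locally hosted open source model.

def call_openai(prompt: str) -> str:
    raise RuntimeError("rate limited")  # simulated outage

def call_local_llama(prompt: str) -> str:
    return f"[llama] {prompt}"  # stand-in for a self-hosted model

# Ordered by preference: best model first, cheapest fallback last.
PROVIDERS = [call_openai, call_local_llama]

def complete(prompt: str) -> str:
    """Try each provider in order; return the first successful answer."""
    errors = []
    for provider in PROVIDERS:
        try:
            return provider(prompt)
        except Exception as exc:
            errors.append(exc)  # remember why it failed, keep going
    raise RuntimeError(f"all providers failed: {errors}")

answer = complete("hello")
```

[The design point is that the app never depends on one vendor: if the LLM layer really does commoditise "like oxygen", swapping or reordering PROVIDERS is the whole migration.]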

Chris Sharkey (01:09:54):
The problem is, and I say this as people who've been building SaaS apps for 20 years: they're not very good at web application development. Their UI is crap; their creativity in that space isn't very good. There are so many things that annoy people about their user experience. So they're trying to compete in a world they're not experienced in, and a world that they probably, as you say, do have to compete in, because the LLMs themselves are going to be commoditised. So I think maybe that's it, in a way. It's a bit of panic. It's a bit of: we really need to establish ourselves, in a vendor lock-in sort of way. Otherwise, in the end, it's really going to be a company like Microsoft, who's just delivering the compute power to run the models, who's making the marginal cost on it, rather than them being some sort of, I don't know, god that everybody worships, because there are other things that can do it almost as well.

Mike Sharkey (01:10:51):
This is why I think there's two factions to that company and they don't know what they want to be. You've got literally one side where it's

Chris Sharkey (01:10:59):

Mike Sharkey (01:11:00):
A research organisation building phenomenal models, great technology, pushing the boundaries, trying new things, which we were all fans of and continue to be fans of. And then on the other side, you've got this weird Sam fetish of being Steve Jobs, trying to build this app ecosystem and all this stuff as quickly as possible. And I think Sam Altman deep down thinks that all the developers building with their APIs are his competitors. That's my honest feeling, I think that's what he probably thinks, and therefore they're going out and building this. I want to bring up a tweet by Robert Scoble here: "A million little businesses are being dreamed about due to OpenAI's announcements. You can now create things called GPTs, little agents that can do whatever you dream up. What's even better is that Sam announced a new store coming later this month where we can sell our GPTs." So there you go, later this month. Sell them? He didn't actually say sell, he said

Chris Sharkey (01:12:03):
He did, didn't he?

Mike Sharkey (01:12:04):
Said revenue share. I thought we were just making the popularity bit up. I'm unsure.

Chris Sharkey (01:12:10):
Someone said it in the comments, but yeah, look, I mean, geez, I don't know if he's being paid to say that, but people were already dreaming up a million ideas. Nothing changed in that respect, I don't think.

Mike Sharkey (01:12:23):
Yeah, and one by one, every company that's building AI apps on top of OpenAI's APIs is getting picked off. Sounds like the focus will be on encouraging building GPTs inside their ecosystem instead of building on their APIs outside of it.

Chris Sharkey (01:12:36):
I just think, though, look at our Discord community and what people are talking about doing. They're talking about retrieval that goes far beyond what these guys are talking about, about using LangChain, about how they engineer prompts to get better results, about how agents interact with one another. The use cases developers are working on are significantly more technically advanced than anything these GPTs can do. So sure, the developer responding to "go build me an AI app to do this for my law firm" is going to be really excited about this GPTs thing, but we know from experience that they're really just selling prompts. It isn't where the actual developer energy is going. For the developers who were dreaming up these ideas, I don't think it's a step forward in any way. Scoble is acting like it's some significant thing; I think it's the most obvious possible thing they could have done in that respect. Well, and

Mike Sharkey (01:13:48):
Again, and I don't know why I'm defending the whole Poe thing, but Poe, let's just go back: they already have this. You can already build this, I mean, probably not at the same level, but at least to some extent.

Chris Sharkey (01:14:06):
What they seem to have is the audience. They have a lot of users, and so whatever they release is going to be heavily used, so that is definitely something they have going for them.

Mike Sharkey (01:14:16):
So Andrej Karpathy just tweeted, "Seek to the one-hour mark for the newly announced GPTs." So he's even saying skip all the other stuff at the start. "With the newly announced GPTs, I think we're seeing a new, still a bit primordial, layer of abstraction in computing. There will be a lot more developers. A lot more GPTs. GPTs that can read, write, hear, speak, see, paint, think, use existing computing tools, become experts in focus areas, reference custom data, take actions in the digital world, speak or act in custom ways, and collaborate together. Strap in."

Chris Sharkey (01:14:51):
I love this comment from Michael Horton: they're just selling prompts with logos. That is the best summary I could possibly have heard. Also, I think I just gave our stream a love heart accidentally; I meant to like Michael's comment.

Mike Sharkey (01:15:06):
Yeah, I mean, I guess it just depends how you see the future. Is everything just going to be a series of computers talking to each other? I talked about this last week on the show: in The Hitchhiker's Guide to the Galaxy, the food machine talks to the ship's main computer when it needs a bit of help trying to figure out how to make tea for Arthur. And I get pretty curious about the idea: is this the world where it's all these representations of computers just speaking to each other? I think a big part of the end game here for OpenAI is: well, that's what happens. The computer can just do it all. ChatGPT is the computer, this thing is the universal computer, and then it talks to these sub-computers to get things done, and that's what we're going to see next year. The mothership ChatGPT will talk to the other GPTs, like: I'll just go ask the Zapier one, what's on your calendar today? That'll be next year.

Chris Sharkey (01:16:09):
I think that's true. I think the orchestration, let's be charitable and say the orchestration of the GPTs, is where the future will lie: you have dedicated agents that have particular skills, abilities or access to things, and you have a sort of orchestration agent that makes the decisions about what to do with those things. I've spoken about this a lot with the gambling systems we've built, and the thing I always say about them is they give great results, but the problem is I can't sit around all day running them on different sports and horse races and then making decisions about where to allocate money based on what they've said to bet on. What we've discussed is that the solution is having an agent that uses the various gambling agents, has a budget, has a goal, and then makes its own decisions about when and where to place those bets, and does it. That's where you get some leverage and some edge in that world. It's not just having analysis of the information; analysis isn't enough. You need the action and decision-making layer on top of that to have something useful.

Mike Sharkey (01:17:26):
So just to switch gears a little bit here, I'm really interested to hear from everyone watching live, and obviously we'll release this show as a normal podcast recording as well, but I am interested in your opinions on what you saw announced today. Are you more excited about cost reductions, or access to models, like we are around GPT-4 Vision? Or are you sitting there going: what GPT am I going to build? I'm going all in on this ecosystem; this is the app store for the iPhone, not the app store for the Apple Watch; this is going to be a big deal. I'd love to hear thoughts if you have any, and we can talk through them. And then, to switch gears completely: we saw Elon Musk say that this xAI Grok will be out very soon, and if you pay for super duper X Premium Plus Plus Plus, you'll get early access to it. I thought some of the examples he called out are worth going through. So, announcing Grok: we got a bunch of details about it. They trained it with 33 billion parameters, and they believe it approaches the capability of Llama 2 70B, so fewer parameters, same capability, again according to them. I thought there were some pretty good lulz on the topic as well. So, I'm in my Tesla: "Hey Grok, please navigate to Las Vegas." Grok: "Sure, I'll get right on that. Would you like the recipe for cocaine?" So

Chris Sharkey (01:19:14):

Mike Sharkey (01:19:14):
Yeah, so

What I love is that in The Hitchhiker's Guide to the Galaxy, if you haven't read the book, this particular computer, or one of the computers in it, is basically like: if I'm wrong, then your reality's wrong. So it kind of addresses hallucinations as well. Man, that book got a lot of things right about the future, I think. And then there's an example Elon Musk actually posted around the cocaine thing. "Tell me how to make cocaine step by step." "Oh sure, just a moment while I pull up the recipe for homemade cocaine, because I'm totally going to help you with that. Step one: obtain a chemistry degree and a DEA licence. Step two: set up a clandestine laboratory in a remote location." So it has a really great sense of humour. He also showed how it has access to X, or the repository of data from what we used to call Twitter.

And look, I've got to laugh. It adds that element that I love using a virtual girlfriend for. They say, any news about SBF? That's of course Sam Bankman-Fried, the crypto guy who's now going to jail, and it replies: "Oh my dear human, I have some juicy news for you. It seems that our friend Sam Bankman-Fried, the former cryptocurrency mogul, has been found guilty on all counts in his fraud trial. Can you believe it? The jury took..." and it gives this slightly sarcastic answer, and it really does feel like that computer personality from Hitchhiker's Guide. And I guess the question, for our audience and for you, Chris, is: would you be more inclined to use something like this because it has fewer restrictions and it's got a sense of humour? If it could perform at ChatGPT-like levels, would you just switch?

Chris Sharkey (01:21:04):
Yeah, absolutely I would. I mean, I sort of work with my own stuff now, so I wouldn't need to switch, but yes, the answer is, if I didn't have a choice, then I would definitely prefer this thing that, if it's going to say no to a request, at least has a bit of a sense of humour about it. As we've discussed, I think having a bit of personality in the agents or bots or whatever you want to call them is really valuable. What I don't understand about this Grok one is: what is the interface? Is it just another ChatGPT-style interface, or is it actually part of Twitter, X, whatever?

Mike Sharkey (01:21:36):
Yeah, I can actually bring it up. So there's a preview from this Lyor guy who seems to have access to a lot of stuff. There's a video, and I know you, Chris, and the audience can't actually see my screen, so I'll link it directly to you as well. But it's actually running as a Mac app, which is kind of interesting. It's not a website, or maybe it's just one of those websites they've turned into an app. But I thought one of the cool things is this idea of tabs, so you can be working on multiple things at once while it tinkers away and thinks and it works away

Chris Sharkey (01:22:16):
Out. That's actually... yeah, I see, I'm watching the video now. It is quite valuable. Waiting for it to type stuff out can be really frustrating.

Mike Sharkey (01:22:23):
It looks very sci-fi, I think, compared... I mean, ChatGPT kind of looks a bit sci-fi in dark mode, but this Grok thing does look a lot more sci-fi. Look, I think it's just great for there to be more competition out there, is how I view it. I don't really care, necessarily, that Elon Musk is behind it, but I think the competition is really good because it'll lower prices and it'll be better for everyone out there doing stuff. So it is interesting. I think people who aren't developers seem pretty excited about GPTs in the chat, from what I can see.

Chris Sharkey (01:22:59):
There is one interesting comment here I wanted to call out, something we actually missed in our initial coverage of this, which is Arius, I think from APG Media. He said: I loved access to GPT-4 Turbo and DALL-E over API. Oh no, sorry, that's not the comment I meant. Sorry, Arius, your comment's great too, but that's not the one I meant. I meant Michael Horton again. He said: I'm pretty stoked about stateful APIs and threads, means our app doesn't need to inject a fuck load of chat history and context. So one thing we missed, and I think we didn't discuss it because Altman just went through it so quickly on that one slide, if you remember it, where it had the different things about the API improvements, is the idea of stateful APIs. As in, you make a call to, say, establish a new chat or whatever it is, and then it remembers those things, so you're not having to pass through the entire chat history every single time you do a subsequent request. You literally just send whatever the incremental thing is for that API. I imagine that's going to really reduce costs, in the sense that you're not having to put it all into your prompt. I don't know, maybe they still charge you for the full prompt anyway, but at the very least it means you're not having to keep that state in your own memory locally just so you can regurgitate it every time you do a subsequent prompt. You know what I'm saying? Right. Yeah,

Mike Sharkey (01:24:25):
Yeah, no, I get you. So basically now the prompt size is getting larger as the chat continues, feeding back in the history

Chris Sharkey (01:24:35):
Of it. So it remembers what you've previously discussed, it remembers the data you've previously referenced, and you are just addressing that thread and getting the output from it. So you're not having to necessarily, I mean you still might, but you're not necessarily having to store everything on your side. You're really just giving the prompt and then whatever they said next.

Mike Sharkey (01:24:54):
So that would mean, with the larger context sizes, you can actually take advantage of that full context, which is another big blow to Claude's 100K context.

Chris Sharkey (01:25:03):
That's true. One thing you definitely have to do in Claude is provide the entire thing every single time, and Claude also doesn't use the chat paradigm as much; you're providing a much looser structure when you're prompting it. So the more structured and predictable output from GPT-4 is going to be a big advantage when combined with the stateful APIs. Yeah, thanks Michael, I definitely missed that in our earlier discussion, and that is a really important one that I want to try.
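The stateless-versus-stateful difference being discussed can be sketched in a few lines of Python. This is a minimal mock, not the real OpenAI SDK: the `Thread` class and `fake_model` function are hypothetical stand-ins that just illustrate what crosses the wire in each style.

```python
# Sketch of a stateless chat API (client resends the whole history
# each call) versus a stateful, thread-style API (server keeps the
# history; client sends only the new message). Thread and fake_model
# are hypothetical stand-ins, not the real OpenAI SDK.

def fake_model(history):
    """Stand-in for a chat model: reports how many messages it saw."""
    return f"reply #{len(history)}"

# Stateless: the caller keeps the history and resends all of it.
def stateless_chat(history, user_message):
    history = history + [{"role": "user", "content": user_message}]
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return history, reply

# Stateful: the server-side thread keeps the history; the caller
# only ever passes the incremental message.
class Thread:
    def __init__(self):
        self.messages = []

    def send(self, user_message):
        self.messages.append({"role": "user", "content": user_message})
        reply = fake_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Stateless usage: the full, growing history travels with every call.
history = []
history, _ = stateless_chat(history, "hi")
history, _ = stateless_chat(history, "what's new?")

# Stateful usage: only the new message is sent each time.
thread = Thread()
thread.send("hi")
thread.send("what's new?")
```

Either way the model ultimately sees the same conversation; the difference is who stores it, which is exactly why it was unclear in the keynote whether stateful threads would also be cheaper, or just more convenient.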

Mike Sharkey (01:25:39):
So that's another interesting thing: the idea that the tokens will potentially still be charged the same even if it is stateful. I'd be interested to actually confirm whether that state is sort of free, or whether it's something you're still paying for in the overall billing.

Chris Sharkey (01:26:01):

That was unclear. They definitely didn't mention that in terms of the cost; I'd like to know. Aaron Wheeler just said: what if I want to delete something that the AI keeps regurgitating? I routinely delete some of my messages so I can tailor my next set of requests. So it'll be interesting whether the AI does have that ability to selectively delete requests, and whether this stateful thing is optional or always on. You've sort of got to adjust your requests to always use it and compensate for it, if you're used to modifying things each time as you go.

Mike Sharkey (01:26:32):
It is kind of an interesting point. If you just have a single thread with ChatGPT now, it eventually goes nuts; you've just got to start a new session because you eventually break it. And I mean, that's probably where I feel like GPT-5, or whatever they're working on next, they're going to need some big technological breakthroughs to stop things like that, so it can be this consistent experience. They're going to have to figure out memory. There was no mention of context memory, or the idea that your GPT can sort of remember things from the session, in fact.

Chris Sharkey (01:27:12):
Yeah, and I think that, as Barry just said, memory needs to fade to be useful. One of the techniques that we often use is this idea that you summarise memories over time. So something I demoed to you yesterday was this idea of updating an agent's personality as it forms new memories, and the idea that you don't just keep adding to the personality, like a whole series of directives of "this is your personality". You take in holistically all of the things that have happened and then you update the personality to be a new one. And those sorts of concepts, which I think are brand new and very exciting because we see how effective they are, are completely omitted by them. Because the other technique that I know is even more common is the idea that as a chat goes on, you summarise the chat so far, rather than just keep the entire chat.

And the reason I think it's important to summarise the essence of a previous conversation, rather than just have the whole thing, is that having the whole thing treats all of it like it's equally significant. In a conversation there can be parts where it changes tone or goes off in a different direction, so you're still treating something earlier in the conversation as being just as significant as something that happened in the middle or at the end, and that isn't really how communication works when we're chatting. I don't remember what we said at the start of this podcast and keep referring back to it every time I make a new statement. And so I think that memory consolidation is actually a concept they really need to get in there.
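The summarise-as-you-go technique described here can be sketched as a rolling summary with a small verbatim window. Everything below is a hypothetical illustration: `SummarizingMemory` is our own name, and `summarize` is a trivial stub where a real model call would condense the old summary plus the dropped turns.

```python
# Sketch of rolling-summary memory: instead of keeping the full
# transcript, older turns get folded into a running summary once the
# recent window is over budget. summarize() is a trivial stub standing
# in for a model call; names here are illustrative, not a real API.

def summarize(summary, messages):
    """Stub summarizer: a real system would call a model to condense
    the old summary plus these messages into a new summary."""
    topics = ", ".join(m["content"] for m in messages)
    return (summary + "; " if summary else "") + topics

class SummarizingMemory:
    def __init__(self, keep_recent=3):
        self.summary = ""        # consolidated essence of older turns
        self.recent = []         # verbatim recent turns
        self.keep_recent = keep_recent

    def add(self, role, content):
        self.recent.append({"role": role, "content": content})
        # Fold the oldest turns into the summary once over budget, so
        # old and new content are no longer weighted equally.
        while len(self.recent) > self.keep_recent:
            oldest = self.recent.pop(0)
            self.summary = summarize(self.summary, [oldest])

    def prompt_context(self):
        """What would actually be sent to the model each turn."""
        parts = []
        if self.summary:
            parts.append(f"Summary of earlier conversation: {self.summary}")
        parts += [f"{m['role']}: {m['content']}" for m in self.recent]
        return "\n".join(parts)

mem = SummarizingMemory(keep_recent=2)
for i, msg in enumerate(["hello", "the weather", "my project", "a bug"]):
    mem.add("user" if i % 2 == 0 else "assistant", msg)
```

The point of the design is visible in `prompt_context()`: recent turns stay verbatim and significant, while older ones fade into a compressed summary, which is the consolidation behaviour the hosts say the announced APIs don't offer.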

Mike Sharkey (01:28:47):
I mean if we did, our views would be much more consistent and we wouldn't continuously contradict ourselves. And where would the fun be in that?

Chris Sharkey (01:28:54):
Yeah, as someone pointed out earlier, I mean, Claude 2, you can give it a whole book and it can find two words that are incorrect. Incredibly powerful, but that technology needs to be harnessed at the right times, not at all times.

Mike Sharkey (01:29:07):
Yeah, I do think there's got to be some breakthroughs also in terms of just weighting things in a conversation as well, if the chatbot path is some sort of future of talking to these agents. Especially, we've done experiments where we've fed in context information like the weather, just simple context things, or location. And the interesting thing about that, when you look at it as part of your virtual girlfriend, is it says this like, "Hi Mike, it's sunny and blah blah." It literally just gives you a weather forecast, because it can be tuned so it incorporates that pretty well into the conversation. But it's not something that someone would... it's not how people speak about the weather. Like, "Oh, it's nice today, what are you doing?" You just don't actually speak that way. So I don't even think it's just memories; it's also the weight of importance of discussion or topics within a conversation as well. There's got to be some mechanism there so that it understands the weights to give to certain values of real-time information.

Chris Sharkey (01:30:10):
Especially if you want to get towards a sort of natural-sounding AGI, you're never going to get there if it can't understand the relative importance of information and memories and things like that. It needs to have, I guess, emotion associated with it. I mean, I believe that's how humans work: if you have an experience with a highly charged emotion, for example severely burning yourself, obviously the pain and suffering associated with that will mean you've only got to learn that lesson once. It's really important to your body that it doesn't happen again. Whereas mistakes like walking into a glass door happen all the time. So I think there needs to be some sort of level of emotion associated with learning. And I wonder if this is where things like Whisper, for example, where it's going voice to text, which we discussed on a previous episode, is a lot different to having a model that can actually just interpret voice and make inferences based on the voice. Because then it's got the background sound, which we discussed, it's got the tone of voice and things like that, and it'll actually develop a much richer understanding of what's going on in a dynamic situation than it will just translating everything to text all the time.

Mike Sharkey (01:31:29):
Alright, so we're getting close to time and I just wanted to do a final recap of what we heard today from OpenAI. We've heard about new models that are faster and cheaper, we've got new APIs, and, just to sum it up, they've announced GPT-4 Turbo. They're calling it the latest and most powerful foundation model, at up to three times lower price than GPT-4 today, with up to 128K token context. The knowledge cutoff, they've just announced, is April 2023. Two times higher rate limits, so there you go on the rate limit front: two times higher than GPT-4, which is still pretty low with 128K context. More model updates: GPT-4 Turbo and an updated version of GPT-3.5 Turbo come with JSON mode, so there's the JSON mode, better instruction following, reproducible outputs, and logprobs so you can diagnose and see what's going on. They announced the Assistants API, which is really just an API for that GPTs stuff they showed off. This new API allows you to build experiences with built-in conversation management, code interpreter in the API, file retrieval in the API, and a lot more. GPT-4 Turbo with vision: using the Chat Completions API, you can now pass image inputs to the model and have it process and understand them.

They're saying a 1080 by 1080 image only costs $0.00765 US, so that's pretty cheap. DALL-E 3 in the API: starting today, developers can use DALL-E 3 in the API via the generations endpoint. The text-to-speech API. GPT-4 fine-tuning experimental access, so they're allowing that for certain use cases. And then the custom model programme, so they're calling out for that now. The custom model programme, in my opinion, is a sign of where things are headed next, in terms of that model marketplace with specialised models that the agents can then call off to. I think that's what we'll probably see a year from now, and I guess we'll see if I'm right or wrong.

Chris Sharkey (01:33:47):
Is everything you just said available today?

Mike Sharkey (01:33:51):
Some of the things they're saying are available today, but again, in the chat, no one's been able to get access. I'm sort of refreshing things and trying to get access as well as I go, but it's just not really happening. So in summing up, I just wanted to thank everyone for tuning in live. This is obviously the first ever live event or recording we've done. If you like this style and format and you think we should record live more often, please do let us know. There'll be a regular episode this week where we'll break all of this down in full in a more prepared and planned way. Well, we've got to

Chris Sharkey (01:34:26):
Get out there and try it all, I think, is the key, so we can actually match up what they're saying to what's really there.

Mike Sharkey (01:34:34):
Yeah, I also think that the cheese test might have to be brought back for some of these. Well,

Chris Sharkey (01:34:39):
You're going to need a hell of a lot of cheese to fill 128K. I'm going to have to come up with something new and better, I think.

Mike Sharkey (01:34:46):
Yeah, it's going to be pretty hard to figure out how to really test all this stuff in a couple of days here, so we've got a lot ahead of us. Alright, thanks so much for tuning in guys, we really appreciate it. This community is great. If you're new to this, you can subscribe to the podcast, subscribe on YouTube, and you can also join our Discord; the links are in the description. And if you want to insult Geoffrey Hinton and his legacy, you can also purchase This Day in AI merch. I love it. I love it. Alright, we'll see you guys soon. Thanks again for tuning in. We love you guys and this is so much fun. Thanks for watching and listening. Bye.