What if we could? A podcast exploring this question that drives us.

Join David DeVore, Kevin Nuest, and Calvin Hoenes in this insightful episode of MeshMesh's 'What If We Could' podcast, where they delve into the rapidly evolving world of AI, with a special focus on ChatGPT and its implications in autonomous innovation, Web3, and emerging tech. In this episode, they explore the launch of the new ChatGPT store and the proliferation of AI-generated content, discussing how this technology is shaping our approach to software development, cybersecurity, and the broader societal impacts.

From the challenges of creating quality GPTs and the art of prompt engineering to the intriguing possibilities of linking multiple AI systems, the conversation navigates the complex terrain of AI's current state and future potential. The discussion also touches on the recent rise in AI-related scams, emphasizing the need for vigilance in this innovative yet vulnerable landscape.

With Calvin Hones sharing his fresh insights from his recent trip to India, the trio examines the nuances of AI in different cultural and technological contexts. They also speculate on the future of software development, pondering over whether traditional coding will give way to more intuitive, natural language-based methods. The podcast concludes with a thought-provoking discussion on the role of AI in education and the potential shift from institutional learning to AI-assisted, self-directed learning models.

This episode offers a rich blend of technical expertise, forward-thinking ideas, and a dash of humor, making it a must-listen for anyone interested in the intersection of AI, technology, and society.

What is What if we could? A podcast exploring this question that drives us.?

"What if we Could?" A podcast exploring this question that drive us. We explore the practical application of artificial intelligence, product design, blockchain & AR/VR, and tech alpha in the service of humans.

David DeVore (00:02.049)
Hey, how's it going? This is Dave DeVore with MeshMesh. This is the, what if we could show where every week we are asking that burning question around autonomous innovation and AI, artificial intelligence, potentially web three, going deep on emerging tech. And so with me today, I've got Kevin Neust, co-founder at MeshMesh, as well as Calvin Hones, who is newly back from India after a number of weeks, which is super exciting.

And I think what we're going to go after today is the new chat GPT store, which launched, um, I heard that there were like 300,000 GPTs that have been deployed in the store. Um, the other thing that I heard that I thought was really funny. So w you know, we've been hanging out in web three and in the world of blockchain. So we're very, very familiar with.

Uh, look-alikes, right? So in this case, like if there's a NFT project or otherwise somebody might make a whole website that looks like that project with a slightly different domain. And then, uh, Hey, here we are. And they get people to come, you know, connect their, uh, their wallet, and then all of a sudden your stuff is gone. And so I just thought that it was, I thought that the irony that wherever innovation is happening,

here are, uh, here are scammers, uh, ready to take advantage of it was, was interesting. So what I read was, uh, and it was basically there are lookalike GPTs being created, same logo, same name and so forth. Um, but that they're actually trained, uh, to try and get information from you. Um, and so we thought maybe today we would just sort of unpack, like, what does it take?

to build a quality GPT. What does that look like? What are best practices? What does prompt engineering look like to make them great? And so I wanted to just sort of ask that question. We've built a number of them along the way. I think Bit has automated like, a hundred or so GPTs that have been created as well. And so...

David DeVore (02:24.297)
You know, it's, it's a fun process. It's pretty easy for almost anybody to jump into. Um, you know, so what I'll just kick it over to you, Calvin. Like if you were to sort of think about like, what's like one tip or trick to get started with like, uh, you know, building a GPT, what might it be?

Calvin Hoenes (02:45.365)
Well, I think that, I mean, there's a great question. I mean, there's many ways to go about building a GPT in your approach. Like you, I mean, you obviously start out with.

the GPT builder, right? So you're in the interface because that's currently the only way where you can actually build it. And one of the best ways to get started, we've seen this a lot, like in terms of attacks, is you can prevent the fact that it gives out your instructions, right? So that's the very first disclaimer that I would give everybody is to try. And there's multiple ways to do that is to prevent actually...

while in conversation with the GPT to extract your original instructions. So they're basically speaking of what you said with the lookalikes, right? There are certain measures you can do to prevent that somebody else just steals your instructions and does the exact same way. But in terms of the best.

Kevin Nuest (03:33.11)
Like what if I just ask it to, what if I just ask the GPT, I say, give me your detailed instructions step by step so I can create a GPT from you and what you do. How many posts have you seen on Twitter so far of GPTs putting out, printing out their instructions, uh, leaking exactly how they do what they do, right? It's happening very, very quickly.

Calvin Hoenes (03:42.739)
Right.

Calvin Hoenes (03:57.597)
Yeah, exactly. It's not, it's not super trivial to prevent it, right? I mean, obviously like this is all a chat interface. And I think what we're still, or what everybody's still getting used to is the fact that you're now basically programming a natural language. So it really matters as to how, what kind of language you're using to build your GPT matters just as much as when you're talking to your actual GPT, but when you, or when you're talking to chat GPT in general is the way you're putting in language matters.

And what you find when you build a GPT over time, because you're basically conversing with it, right? So while you're talking building and building your GPT, it tries to make sense of it by building an instruction set from everything that you tell it. So when you go to the settings tab, the little settings icon, the gear icon on your GPT, it'll show you the instructions that it has actually put out and created for your GPT, right? And...

While you're talking, similar issues happen that happen when you're in a conversation with GPT, you hit a certain context limit or it starts to forget things or some of your instructions get lost. So the longer you're interacting with your GPT, the more it can happen that it loses certain parts of your instruction. So from time to time, it's just very helpful to go back into your instruction set and look at what is it actually putting into my GPT and what is the actual content that is being produced as the instruction set and basically prune it. So you can go back into it

or is the stuff that is in there after my 10 minutes of conversations with it still relevant or are some of the parts lost in the context window madness and do I actually need to remove some of these things?

Kevin Nuest (05:33.23)
And you ask the GPT to summarize that for you and do that, that work and, uh, provide back the double check of the context window effectively. Have it repeat back the instructions that you told it. And, and, uh, is that, is that double check? What does that look like?

Calvin Hoenes (05:49.873)
It would be fun if you ask it back, can I give you the instructions back and hopefully you're hopefully a mechanism of preventing that has worked at this point. But you can definitely do that, right? You can actually ask a GPT since it's all that natural language coding, you can go back and clean it up. But still like most of the things mistakes happen, so it's always good to kind of double check your actual instruction set.

Kevin Nuest (05:54.987)
Right?

Calvin Hoenes (06:13.865)
And it's also funny, right? Like what you said, Dave, in terms of all the lookalike GPT is sure these things happen and also like your instructions can be put back. What's interesting when you're creating a GPT and when you think about the actual value that you're providing, the GPT instruction set is just the first layer. Right? Like everything after that, the actual actions that you put in and the actual APIs that you put after it are where, in my opinion, the real meat of the value creation happens.

Because the initial GPT is a good first set to make it easier to give you that initial instruction set that you want and get you to your results faster. I mean, we've all been there when you're in a GPT and you start from fresh again every time you open a chat and you have to kind of bring the GPT back or bring chat GPT back to where you actually want it to be and then you can start the conversation. And in that sense, GPTs are a great shortcut to make that easier.

but the real value comes in when you start adding actions. And so I think when you're trying to think about how do you build a GPT that is really valuable is a good question as to what are the public services, are the services that can add onto that enhance the value of a GPT per se by something that it cannot inherently provide.

David DeVore (07:30.122)
Yeah, so.

So for the, for those folks who aren't familiar with the, this idea of a GPT action, let's break it down a little bit, right? Like what can you do with an action?

Calvin Hoenes (07:45.885)
Right. Like if you, I mean, this GPT is basically trained on an existing set of knowledge, right? It can, and that's necessarily what it's using to give its answers. So if you want to get any information that is not inherently part of that data set, so let's say information from the web, or it might just be something trivial as what's the current weather in San Francisco.

then you need to get this from another source then from GPT itself. So you need to use an external data source to provide that for you. And this is where actions come into play because you can basically call external services to get that information that is not available to GPT and put it into your GPT as a text to then use it for the next conversation.

David DeVore (08:33.793)
So you can, you can put URLs in as an action websites in as an action. Right. And you can also put like Jason in and YAML and whatnot, like more, more like developer languages as actions as well. Like what's some examples do you think? What would, what would, what would be a good example of where you might, uh, put Jason on in, um, on an action.

Calvin Hoenes (09:00.653)
So I think, so actions are basically built on what is called the open API specification, right? So every service, every major service that has been built, you know, these APIs, the interfaces to communicate it with, let's say you wanna get data from Twitter or you wanna get accurate weather data from weather.com, for example, build through a specification called open API specification. So that's what you can get.

to kind of tell GPT how to communicate with that action with another external service, right? And it's just a standard that has been developed to make it easier. So what you can do, you can go out and you can look at a specification of a certain API that you wanna use out there. And this could be anything that you find on the web.

Right. Like let's take weather.com as an example, you find the weather.com API and you say, how does this API work? You can get an open API specification for that interface and then copy it into the action of GPT to allow GPT to communicate with it. And you can define actions, right? Like an example would be you define an action that is called get the weather and it has certain attributes that you can pass to it, like the location, like let's say San Francisco.

And then what GPT can do, it interprets that specification when you talk to it and says, Hey, look, I want to get the weather in San Francisco and GPT will understand I need to call the get weather action with San Francisco as my attribute. And it will call that and give you the result.

David DeVore (10:34.989)
How about, how about triggering actions? Is it possible to trigger external APIs to do, to do things with a GPT?

Calvin Hoenes (10:49.001)
Yes, absolutely. Right. Just like I'm using the same API specification to get the weather. I could also, well, in the weather case, that doesn't work as well, right? Because I can't tell weather.com to what I want the weather for San Francisco to be.

Kevin Nuest (11:00.963)
Your self-reporting, your hyper local weather, right? On your street for their dataset, yep.

Calvin Hoenes (11:06.687)
Right.

Right. Especially at this time of the year, you might want to have that fog in San Francisco go away, but that might be harder. But you can definitely use it to execute any action. So let's say, I mean, what's a very interesting case is Zapier at this point, because Zapier has webhook possibility. So you.

David DeVore (11:29.895)
and save your actions as well now.

Calvin Hoenes (11:31.729)
Right. So you have Zapier actions now, right? So you can basically tell it to reach out to Zapier on your behalf to do something. So you could do things like, and we've seen some example where they use GPTs as you're now more de facto personal assistant, right? There's been many different approaches for this. But one is you go in and you use your GPT and you say like, you know, do this thing for me, send an email for me. And you could technically do that with your own Zapier integration.

It's different than using it in your public GPT, right? Because you don't necessarily want your access to your Zapier be a public offering as a public GPT. That might not be advisable depending on your Zapier plan. Because basically that would allow other people to use it on your behalf.

David DeVore (12:15.945)
Right. So like, don't, don't hook up your email, uh, to public GPT. Uh, right. Yeah. And so you're in some ways, uh, now you could. I

Calvin Hoenes (12:19.625)
Right. Correct.

David DeVore (12:30.877)
If it goes to Zapier, I mean, the world of opportunities, like world of possibilities, right? I don't know how many integrations Zapier has now, but in the hundreds. And deep hooks in a lot of ways. And as well as we've seen a lot with make and people sort of creating make templates. And then some people have been selling these make templates to sort of tie.

GPTs together with all sorts of sort of triggered actions out on the internet. I'm, I'm thinking, which I'm thinking a little bit about enterprise, right? And I'm thinking about businesses, right? And that's really where, you know, mesh is, is focused. Um, you know, from the perspective of like security implications for an enterprise, there, there is.

sort of enterprise GPT version, in which case all of that data, all the data that you put into it is siloed from everybody else. Right. And I'm not, I'm not sure if there's any type of additional features or functionality around, uh, around actions and integrations, or do you know, is it, is it the same stuff generally?

Calvin Hoenes (13:49.229)
I mean, it's similar. It's funny now, now that they introduced the GPT store, they also introduced this additional team plan. I don't know if you've seen that. And with the additional plan, there's a small note saying that on the team plan, your chat messages and your conversations are not included in the training data by default.

David DeVore (13:56.761)
I did, yeah.

David DeVore (14:08.565)
Mm.

Calvin Hoenes (14:09.737)
So basically, I mean, if they've added, obviously, as they're trying to get closer to enterprise, they've added a couple of more guardrails to, first of all, limit how this data is exposed in the first place. But when it comes to calling actions, it's a bit of a different thing, right?

David DeVore (14:22.67)
I also heard...

Yeah. I also heard that the team plan right now, like GPTs, they're created in the team plan. Can't be published to the store.

You know, and I've actually saw some people complaining because they ran out to get the team. And it's like us, like, it's like, you want to collaborate with the team. You want to like, Hey, Calvin, can you look this thing over? Is it, you know, am I doing anything wrong or like, Hey, could you add this action or, or whatnot and like quickly on a business side, you are, um, you know, you want to collaborate, so it makes sense, the team plan, but at the same time, they've not enabled teams to produce GPTs. You're still on an individual plan to produce GPTs and.

It all, I guess it also opens up the possibility. As soon as you have teams, you also open up the possibility back to scams, right? Now you have teams of scammers who are coming in here and like, and going after this stuff, right? Um, it'll, it'll be interesting to see how they, how they separate the, you know, the, the good from the bad and the, you know, from, from the ugly, right. Um, and how they.

Calvin Hoenes (15:16.929)
Hahaha

David DeVore (15:33.573)
how they actually figure out what is real and what's not along the way.

Kevin Nuest (15:41.77)
Yeah, it sounds like, you know, if you, if you have open AI, the benefit of the doubt, they tilted towards security by not allowing team plans to, uh, publish publicly their GPTs, you could call that security. Like, Hey, teams are just getting started. They're going to create some GPTs. They want to use them internally and it would be an accident if they published it publicly. So let's just not allow them to right now. That's that's given them the benefit of the doubt that was conscious. Uh,

David DeVore (16:09.844)
Mm-hmm.

Kevin Nuest (16:10.198)
The other side of that is maybe it's just a feature request they haven't got around to yet. It's just a day one. They don't have it unlocked yet. Uh, and then I think it's probably closer to the middle where it's what do you do with what you said, Dave, what do you do with controls at the team level for GPT is being created? So let's throw this in a giant enterprise organization that they've got team plans turned on and there's, uh, 10,000 employees at said giant.

Calvin Hoenes (16:14.471)
Also fair.

Kevin Nuest (16:40.11)
corporate company, one rogue employee, one intern, right? One contractor says, I'm gonna spin up some GPTs and use, I'm not gonna disparage any big brand names on this call here, but use big brand name as the publisher to the store as the trust factor. And so then somebody jumps right in and says, oh wow, big brand released a GPT, check this out. Goes gangbusters on,

David DeVore (17:02.137)
Yeah.

Kevin Nuest (17:09.642)
on Twitter and is out there with thousands of people using it before the company can retract it and then going back to those actions, right? As a potential security, uh, loophole, what information could someone collect quickly on a large group of people trading on some trust of a big brand? So that's the, that's going to be, uh, that's the scare. That's, that's probably a big, scary one for, for big brands to think about, uh, the permission levels, right?

David DeVore (17:21.827)
Mm. It's true.

David DeVore (17:39.057)
Yeah. And so we're quickly into like business verification, which is hard. I mean, we've seen that hard, you know, business and brand verification, which is really hard. We saw Twitter try and do it. They're still trying to do it. It's and it's, it's not easy because, you know, maybe you do it off of an email address.

You know, but how do you know that the person with the email address that the organization has permission to represent the brand, right? So, um, so it's like, you know, you're quickly at a place where you are, you know, doing, doing verification and deals with like the executive office or, or not at all, right. And verifying, you know, all sorts of like KYC, you know, information around who the organization is. So it, which.

So it's going to be interesting. It's going to be super interesting as this stuff rolls out. I mean, the other thing too, is that I think it's going to be interesting is like, especially in organizations. And we know like these large organizations want a lot of control. They want to be able to control not only with what people do, what people see, but they also want to be able to control, um, how people act. Right.

And it's hard to do with a prompt. Right. So, I mean, we, I know we have another episode coming where we start talking about, you know, we, we start talking about what the AI UX looks like in the future. Right. But it probably does not look like prompts for inside of a brand. It probably does not look like wide open prompts, right? It looks like a lot more.

It looks like a lot more use case specific inside of our organization for what a specific role can potentially do. You know, is it like big yes, no buttons, you know, as opposed to being able to, as opposed to being able to, you know, just prompt anything into the, that they want, right. Um, so it'll be, it'll be.

Calvin Hoenes (19:39.451)
Mm-hmm.

Calvin Hoenes (19:49.685)
Natural language is also such a freaking vast space. We've seen that story before when the Chevy example of the chatbot on the website, it's incredibly hard to limit it to a brand's needs.

I mean, OpenAI does an incredible job at sort of limiting it from the ethical standpoint, right, all the moral issues. I think a lot of these things have gotten a ton better, but limiting natural language space is not a trivial thing to do. And so, like you said, right, like exposing that through a chat interface also opens up the amount of interactions you can do with it.

David DeVore (20:30.973)
Yeah, it'll be interesting to see how many brands start to adopt their own siloed LLMs.

Right. And main for that specific reasons, just so that way they can really tightly control what it is, what it can do, what it's trained on. I think open AI has some of those capabilities, but at the same time, I don't know what the enterprise license is, uh, on open AI, um, at the same time there's so many good.

open source LLMs that are popping up. And, you know, there's going to be a lot of competition for smaller language models that do very specific things that are a lot cheaper. Uh, as opposed to just one big large behemoth, right?

Calvin Hoenes (21:27.465)
Yeah, you get the benefit of cheaper training too. There's a lot of it in terms of advancements.

Kevin Nuest (21:27.667)
and

Kevin Nuest (21:33.054)
Yeah. And what if those enterprises want to air gap their, you know, certain LLMs from the internet where it's, it just doesn't have access to the internet. So Mistral, for example, uh, someone wrapped it in a mobile app. I downloaded it on my phone. I have the, I have the Mistral model. L LLM running on my iPhone 13 pro, right? I can, I can go, uh, to big Ben state park where my cell phone reception doesn't work.

And I have effectively the knowledge of the world in my pocket, right? 98% of the questions I would want to ask, I'd have the answer in an offline mode. So what does that mean for enterprises when they start stitching these together, both LLMs and smaller models, and they say, hey, these just can't interact with the internet because they don't need to, right? And then there's a one controlled gate that has human check and balance of what goes in and out potentially.

Uh, to, to really make sure that those controls are tightened versus trying to control, you know, tighten down the controls on the, what will become hundreds of models running in their organization. Right. We're talking about chat GPT right now, as if it's like the only thing anybody is ever going to use and there's not going to be hundreds of models chained together, uh, doing their work and coordinating.

David DeVore (22:55.561)
Yeah, it's true.

Calvin Hoenes (22:55.829)
Right. And then only reaching out to in the cases where you do need external information, right? Or where you need like up-to-date and recent information and then let the reasoning actually be air-gapped. And then going back to like what we said with actions, right? You can get the recent data from outside. You can do that with basically any other model and then have the reasoning be completely air-gapped.

David DeVore (23:17.821)
Yeah, so here's a here's an interesting question. Can you tie together multiple GPTs with actions?

Calvin Hoenes (23:26.301)
you cannot, well, maybe the definition of a GPT, right? So the GPT as it sits right now in the store where you can use it is separate from what OpenAI calls assistance. Assistance are essentially GPTs, but they are not accessible through the GPT store and not through JetGPTs interface. So what you create, what is now in the GPT store and the GPTs are through the GPT builder and can only be built and created and interfaced with

through the chat GPT interface. And then on the other hand, there are assistants which have a similar capability but are accessible through the OpenAI API. And so what you would need currently to interface from one GPT to another GPT is basically an API call. And since the GPTs are in public, you can't do that yet. What you could do, however, you could build your own assistance. And now we're going into deeper dev terrain, right? But you could basically build your own assistance

build an API server for it so they can talk to the assistants and then build one GPT that can talk to these assistants. But it's not really like you have all those GPTs together right now talking to each other. Like you would need to build your own system. And I mean, there's a lot of the systems that we've also been building where you have multiple agents working together, multiple assistants working together, but it's not possible through the GPT interface as of today.

David DeVore (24:35.87)
Right. Yeah.

David DeVore (24:45.646)
Mm-hmm.

David DeVore (24:53.525)
So like crystal ball, do we think that GPT is going to make it possible for people to tie GPT's together in inside the interface? So like obviously in, um, we can do it, uh, we put our nerd hats on and we'll tie some stuff together. Um, do you think it's on their roadmap to allow people to

begin to use multiple GPTs orchestrated together.

Calvin Hoenes (25:24.749)
I mean, it is looking into the crystal ball, but my personal take is no, to be honest, because I think it's not directly in their line of sight in terms of what they're achieving, right? I think a lot of the benefit of GPTs to OpenAI currently is that it's a little bit like when you're a rider.

And you have that blank paper problem, right? The blank slate where you don't know how to start. And I think what they've seen with what OpenAI has seen a lot with chat GPT is that a lot of people have that initial hurdle. They have this massive wide open infinite language model in front of them, but they don't know how to start. And so GPTs are a very, very helpful place or a very helpful mechanism to ease that adoption for them.

Right. Because now you can say, I have a very specific problem and I need a machine to solve this for me. Like, let's take one of the Python mentor GPTs that we have brought out. Right. So you take that and I say, I want to learn how to use Python. So sure, I can go in, I can use that GPT and I immediately know what I'm solving for, which is a lot easier than having, you know, that blank piece of paper to start with. So I think for the foreseeable future that is in their line of sight, what they're trying to solve. And they'll probably have enough problems to solve with the store.

Again, for the foreseeable future to deal with before they go and think about hooking them together.

Kevin Nuest (26:48.182)
fix some.

Kevin Nuest (26:51.566)
I think something that I haven't seen talked about really much yet is what they really get from the store. They just got tens of thousands of people to help them with their new user acquisition and their retention of their core service, right? We all just collectively gave workflows and use cases and helped remove blank pieces of paper.

from a lot, a lot, a lot of workflows, like any workflow. Like imagine something, you can search the store and you can find a GPT to get started and get past the blank page of the first prompt, right? And that's gonna drive so many new people into GPT and OpenAI where they can actually one, one they'll discover an entry point into it and two, they can actually get value out of it. How many people have you talked to?

David DeVore (27:31.337)
first prompts. Yeah.

Calvin Hoenes (27:32.882)
Right.

Kevin Nuest (27:48.802)
that have said, yeah, I tried GPT, but no, I didn't really getting value out of it, chatting with it for like 10 minutes and thought it was a cool parlor trick and, and moved on not realizing or not knowing how to get started getting the value out of it. I think that, and, uh, no, unless I missed it, right? Like no pricing, no, no rev share, uh, model announcement yet on the horizon of how that works with store creators, right?

Calvin Hoenes (28:15.349)
And still they had 300,000 things, you know, GPTs released. So you have to think about that, right? There are no conditions out there yet, but still people are jumping on it. You know, they've done their train very well.

Kevin Nuest (28:20.769)
Yeah.

David DeVore (28:27.125)
found it fascinating to look at what's trending, um, in the store, right? So, and I'll just read off sort of like the, some of the top ones here. So there's, there's an AI research assistant trained on 200 million academic papers, which I know academia is. Yeah. And you know, um, there's two at the top, which are like.

Calvin Hoenes (28:32.79)
Mm.

Calvin Hoenes (28:47.985)
Is it the consensus? Hmm.

David DeVore (28:56.617)
You know, PDF, uh, assistance, you know, ask your PDF, you know, to like basically summarize it for you. Right. There's there's a, uh, there's a, probably three or four that are like logo creator, image generator, um, you know, video GPT, uh, designer GPT. There's Grimoire.

Calvin Hoenes (29:07.191)
Mm.

David DeVore (29:23.517)
which is a, which is a, which is a coding GPT, which is number four, which is interesting, um, you know, it was web pilot, which is like, you know, you know, browse, write API offerings. It's also really interesting to see, see who the creators are for some of the sort of stuff, right. And it's like, I didn't even know.

Calvin Hoenes (29:39.413)
Mm.

David DeVore (29:44.605)
You know, SEO dot AI and you know, scholar AI.io and you know, there's a lot, there's a lot of, you know, businesses and, you know, hacker and, and like tinkerers as we like to call them, um, you know, popping up and, and really interesting to see which ones are arising in the top. Um, but to your point, like there's, you know, featured as all trails, which all trails is a well known app, right? I mean, um, traditional app, right?

So it's really going I

Calvin Hoenes (30:15.505)
And I think Canva has been up the list pretty high too. Like that was one of the few brand examples. Yeah.

David DeVore (30:18.541)
Canva has been up there. It got kicked off of the top 12. I don't think it's on the top 12 anymore. Um, yeah, Canva is up there, you know, but it's true. I mean, any given category, you know, programming, productivity, research analysis lifestyle. I mean, it is just chock full. Like I it's like my.

I was, I was, I was talking, um, I was talking about like college, uh, like, and having, having a college fund for my kids. And I'm like, I don't even know whether it's necessary. Like, do they just need a GPT account and like, like for 20, $20 a month, if they're self-directed, everything that we possibly need at our fingertips right now to learn anything that you want to learn. Um, and it's

Kevin Nuest (31:01.186)
How many?

Calvin Hoenes (31:05.655)
Right.

Calvin Hoenes (31:12.865)
Yeah.

David DeVore (31:13.641)
It's kind of been that way with the internet for a while with, you know, with, you know, with, with YouTube and, and Google, but like, and now, now it's like, it's, it feels so consolidated and quick, you don't need to go spend all day searching around. You just need to know how to ask the right questions. Um, so it's going to, it's going to get really interesting.

Calvin Hoenes (31:33.333)
Yeah, absolutely. And I think, and I think that's, that's precisely what they're trying to make better, right, in terms of asking the right questions, like to going back to what we said before, I think, I think what, what OpenAI has probably realized, and I've gotten that feedback from a lot of people that I've talked to. They said, yeah, I use JetGPT, but it wasn't really great. And to 90% of those people that you talk to, I was like, did you use GPT for?

Kevin Nuest (31:36.823)
Let open.

Calvin Hoenes (32:01.469)
You know, they all use GPT 3.5. So because you have to jump over that initial hurdle of paying the 20 bucks, right, to get into it. And like now to use the GPT, you have to get into the hurdle to really use it. But most people have only gotten into it and basically used the, you know, the 90, what is it, 86 IQ equivalent versus the 155 IQ equivalent of the solution and just jumped off. Right, exactly.

David DeVore (32:22.89)
Yeah.

Kevin Nuest (32:26.65)
Literally Albert Einstein, Albert Einstein IQ difference, right? Versus like a D in math in high school or Albert Einstein talking to you. Like think about that. That's, that's yeah, of course I wouldn't.

Calvin Hoenes (32:31.158)
Yeah.

David DeVore (32:37.665)
That was that. So I'm like a 3.5.

Calvin Hoenes (32:38.915)
Yeah, it's a massive. And I mean, that's why most of the people just jumped out of it because they're like, okay, they tried it, they tested it and they're like, it's not that great. And so I think now having all those specific pain points that you're really trying to solve is a great way to sell you into it and actually pay for the subscription and then actually get the benefit out of it. So I think with that, we're probably going to see a lot more.

Kevin Nuest (32:42.999)
Hahaha

Calvin Hoenes (33:05.277)
adoption, because people will actually start seeing the real benefits of what a GPT can do versus just a base version that doesn't give you anything really.

Kevin Nuest (33:16.226)
So does, does it repeat this back to you, Calvin does releasing the GPT store and letting people use GPTs allow open AI to delay bumping the free plan up to GPT for them and it's, it's an entry point right now. The GPTs are going to let, are going to let people get a taste of it and then they'll pay, uh, and they still have in the chamber.

The bullet to say, Hey, now everybody gets GPT four for their free account, right? It probably comes closer to when 4.5 or five comes around, man, that in itself is a watershed that is probably also underrated putting the four in everybody's hands as the free trial. And then people, all those people you just talked about flipping and going, Oh, now I get it. Now I understand what this is. And then, and then it'll be walking them into, Oh, you like that.

How about GPT 4.5? You want to talk to two Albert Einstein's at once? We got that for you.

Calvin Hoenes (34:18.766)
Yeah. Yeah, it's a very good point. I mean, imagine just how insane the knowledge gap is now between people who pay the 20 bucks a month versus people who don't. It's like, you know, like what you said before with the Albert Einstein example. It's like, imagine you would basically get access to 10% of the internet versus you get 100% of the internet. What difference that makes and what you can do in your day to day is, you know, hard to fathom.

David DeVore (34:47.945)
Yeah, there's I mean this. Go ahead.

Kevin Nuest (34:48.462)
See you next time.

So Dave, you talked about your kids, you know, college fund and, and what does that look like in a world where they have access to AI, right? I want to pose this question. I felt like this, I heard this and I felt like this when I went to college, it was you go to college to learn how to learn, like that's the best thing that you can come away with, uh, besides some credentialing, but the value is the learn how to learn in addition to having a blast doing it, of course, over four, maybe five years. And.

David DeVore (35:10.765)
It's true.

Kevin Nuest (35:20.35)
So now Calvin, you said it's not, it's actually how to ask the best questions, how to ask great questions, which is a step function up. If you, you know, we know it in our professional careers doing this two decades plus each, the, being able to ask the right questions is the way you get to the best outcomes and answers and not even having the answers, right? Asking the questions to teams, to problems is what unlocks that. Now I'm thinking.

Nope. There are people that are 10 and 20 years in their career before they figure out how to do that. Can your kids do this at like 16, 17, 18, and they're asking questions like seasoned professionals 45, 50 years old, right? And then what does that mean when they understand how to do that and their AI enabled for the next 20 and 30 and 50 and probably live for a hundred years in a working scenario, like, holy crap.

David DeVore (36:14.621)
Yeah, it's going to be interesting. I saw, um, I know. I don't know the reference who came out with it today. And they were saying up to 50% of, um, and entry level jobs, um, or not like, uh, knowledge based jobs are, are going to be replaced over the next five years. Right. Um, which is kind of a terrifying number.

Uh, cause it's, and especially as we, we are starting to see, you know, Jen's, you know, that is, that's not us, right? That's like, you know, my kids for sure. Right. Um, and it's like, man, what are the, what are the options? Like coffee shops? Uh, I, it'll be interesting. You know, the one thing that I think the digital natives have is they have minds having played games.

from very early on, they understand the process of try something, it doesn't work, try something again, it doesn't work. And like, fight, like finding their way to the end point. Like I'm, it'll be interesting to see whether that translates to language. Right. So far, like the, the visual motor and mind skills are developing.

I'm not sure, but that it's, I'm not sure if it's going to translate to language because that is the interaction that's missing inside of a lot of games. You're not having, you're not having conversations. You're not articulating. Uh, you know, and you might not even be, you know, unless you're in school learning how to spell, learning how to write, um, you know,

I don't know that, that you're really going to be prepared to, um, prompt with the Lego with, with sort of the, the level of quality, uh, that'll get you there. So it's interesting because I can see how on one level kids are way more prepared than we are in terms of their brain development and working their way through games on another level on a lot of times. I think they're underdeveloped in terms of their ability to communicate.

David DeVore (38:33.209)
through written and verbal means.

Calvin Hoenes (38:35.861)
Yeah, and not to speak of social skills, right? I think, like, right, what you were saying, like, I mean, okay, does, do we, do we still need a college? And the more and more I think about it, and do we still need schools? I think that the primary reason we probably do need schools in an age of like full-on artificial intelligence is so that we learn how to.

David DeVore (38:39.157)
not to speak of social skills right

Calvin Hoenes (38:56.833)
deal with other humans, you know, it's like training to not hit each other, right? You know, all the basic social skills, like if you think about it, that's what being in school really delivers, right? Because now you can get all the knowledge everywhere. I can sit in front of a chat GPT and get what I want. What I cannot learn is that, you know, there's another person with another need and that might conflict with my needs and that might require compromise. And how do we do that without, you know, hitting them on the head? Basically.

David DeVore (39:00.537)
True.

David DeVore (39:24.981)
Yeah, 100%.

Kevin Nuest (39:26.118)
Kind of though too, right Calvin? I know we've got some different flavors of growing up and school experience living in different geographies. From my perspective, US-based, very small town-based as well, it gave, you know, IRL school gave some forced social skills building, but the vast majority of the school experience

Calvin Hoenes (39:34.105)
Hmm.

Kevin Nuest (39:55.05)
was actually built to separate, to control, to limit interaction between students and building those skills effectively. So only through proximity did you have to, to out of like almost a survival style, right? Versus an opportunity to thrive and build those and strengthen those. I know my partner is a Montessori teacher.

Uh, she's lucky enough to be able to translate her experiences and offer something I think is a lot better at building empathy and those social interactions, uh, something like that. I think is then at that point, I would say, yeah, like that's, it is literally the most valuable thing from my perspective that those children get out of a Montessori style experience. The slim down, uh, how would a public school fix this version? I,

revert back to my dogs and going like, well, they just need to go to the dog park and socialize. So that's the rudimentary way of what's public school version look like where you take your kids to the place where they can socialize together and you remove the education part. The education part's not needed. We got that, AI's got that, internet's got that. We just need to put you all in proximity and be nice and play nice for a while, right? Build those skills.

Calvin Hoenes (41:00.556)
Right.

David DeVore (41:17.917)
Yeah, it's going to be, it's going to be super. I mean, one of the things that we've, you know, we've talked about is like, um, prompting is the new code, right. And, uh, that, you know, the, the engineering of the future is not going to look like engineering of the past. Right. And so, um, it'll be really interesting at, as that practice.

starts to figure itself out. And we're like very, very early in on like, you know, like, like prompting, you know, applications to come into existence. And I'm I think I'm out of anything out of AI having sort of built software for so long and whatnot. That's probably the thing that I'm most excited about is like, you know, being able being able to just go from idea to prototype.

And there's already apps out there where, you know, prompt a website to come in, you know, to, to be written on the fly. Right. Um, and I think it's going to be really interesting, uh, as, as that comes along, but then the other thing that starts to come right behind that is AI. Writing AI, right. AI prompting AI. And we've, we've also been exploring that as well. Um,

And so, you know, so anyways, that that's the stuff that I'm most excited about. I think that I have a I have a friend who also has little kids and he started very, very early on he was sort of on the same tip. He's like, man, college is not the way. And he started very, very early on like teaching his kids some programming and how to find the answers that they need and whatnot.

Calvin Hoenes (42:45.547)
It's very... Okay.

David DeVore (43:11.537)
and really sees the future as like, you know, self-taught and less institutional.

Calvin Hoenes (43:24.393)
It's interesting when you think about what you said with the example of now using natural language to code. I wanna put out an analogy, which I think is interesting to think about. So you are bound to lose a certain level of detail as you go up an abstraction layer, right? We're basically going from code, we're going one extraction layer higher, which is natural language to describe the problem that you're solving, the app that you're building, the website that you're building, whatever it is. So...

you're bound to lose a certain level of detail. And it's a similar example that when you're going from like, when you used to code stuff in assembly, where you basically gave jump, jump left, right instructions in your code, and then you went up to an app and an actual interface. There were similar arguments back in the days where you said like, yeah, but you can't be as precise with it as you could be in assembly.

And the reality was that in 90% of the cases, it didn't matter. Right. The precision that you use for it just didn't matter. And so now we're at a very similar point in time where, you know, a lot of the developers say, yeah, but I can't do this and that I can't do this precision level. You know, I can't do the 98%. But the reality is that you only need to cover 90 to get to 98% of the cases that really matter.

Kevin Nuest (44:24.543)
Doesn't matter.

Calvin Hoenes (44:46.313)
And this is the point where we lose in our level of going from abstraction level to next. Yes, we invariably lose detail, but we accept that because we say it doesn't matter. It makes us so much faster. But it's just very interesting to think about that. And now I think about the implication for code. What you can see is that everybody says, yeah, my website is special. My app is special, but the reality is they are mostly the same with minor changes.

And so as we go up that abstraction tree, it's like, how much level of detail are we willing to lose? And it's an interesting thing to see to which level it will play out with natural language to replace coding in that abstraction layer. Where do we hit boundaries? Which...

Where does it stop? How much detail do we actually want to lose in the application? And my hunch is that we will be very fine losing most of that detail. We will just be saying, yeah, I just need an app that is a table that allows me to store this and that information. Whether this is a very specific form of a table and whether it uses an underlying programming language like Python or JavaScript to do it, do I really care? No. It gives me an extra 10.

Kevin Nuest (45:40.014)
We're gonna lose most of it.

David DeVore (45:57.12)
Yeah.

Calvin Hoenes (45:59.869)
milliseconds of performance upgrade, at that moment I don't care. I might care if I'm scaling to a million people, right, as a different gameplay. So that's why I'm saying it's interesting if you go to big enterprises, will that apply as well? Or not? Will it apply to all these software solutions where scale matters and the 10 milliseconds matter? But like I said, in 98% of the cases, it just won't matter because no one cares about the 10 milliseconds.

David DeVore (46:26.313)
I saw, I saw this, this meme, um, from a developer and he was basically like, yeah, everybody's saying that AI is going to put, you know, developers out of a job. I don't think so. Cause that would require the client to actually describe what they want. Right. And, and so, and so you still, and it's true, like

Calvin Hoenes (46:50.072)
He's not terribly wrong with it.

David DeVore (46:51.477)
He's not terribly wrong because like even in, in it's like, you, you know, we, we see this all the time. Like if you're trying to build something for somebody and you know, they're trying to imagine what they want and like getting people to give instructions for exactly what they want is really hard, you know, and, and it's like you, in order to

actually get there. It's like, we almost have to take them through a design sprint and like go deep on like, you know, very specific questions around, you know, the, you know, this thing or that thing, or where's this going? Or where do you want to be six months from now or a year from now? And so it's, it's interesting because I think part of the role that I think I'm excited about from an engineering perspective is like, it's puts engineers a little bit more in the role of, um,

architects and less code monkeys. Right. And it, which is really where engineers ought to thrive, right. Is, is sort of at the level of, at the level of thinking about like systems thinking and, and being able to put together complex problems into a solution, that's really where the future and business requirements, right.

Calvin Hoenes (48:08.969)
Yeah, and business requirements, right? Like that's where it inches into product and business dev and business development in general, right? Like this is, yeah, absolutely.

David DeVore (48:15.669)
Right.

Kevin Nuest (48:17.13)
I think it'll be a mix of that in the enterprise. It's a twofold increase in proliferation. Hit that in a second. And then the other is I think that the, the generalists that can. Solve business problems and drive towards outcomes are the ones that survive and grow and thrive in organizations. And so it's actually the changing of the roles and responsibilities and the minimum requirements for jobs. The proliferation part. Think about.

I think this analogy holds up a, uh, a Photoshop designer, right? Sometimes you really need a really great designer to get into Photoshop with a lot of layers, a lot of brushes, do a bunch of stuff I can't do and come back with some amazing graphics a lot of time. I just need a pinch and zoom and crop and filter on my phone. Right. And so is that proliferated the amount of content that was created and edited? No.

thousand million folded over the last decade. I think that is a similar, uh, that'll be similar to what will happen in. Uh, software, especially internal software. So fifth today, 50% of software is internal in the company that they use to then deliver the output, the outcome for the business. So now all the little things that they are, uh, they wish were automated in their workflows, that they wish they had a tool to help them with.

they can speak those into existence and they'll be like small and tiny. So the GPT store is like our first look into what that looks like without UX UI is on top of it, which will be required, will be mandatory, but it'll be more about speaking your UX UI is into existence and like, that's good enough. I needed, I needed, as you said, Calvin, what are most apps? It's a table to put some data in and display it back out. Right. And then give it to me when I asked certain questions of it. And that'll be this giant proliferation of those.

And then we'll still need those generalists that are absolutely able to translate business, uh, requirements into those outcomes. So I think it's going to really be this giant, it's going to be the bell bar approach effectively.

David DeVore (50:25.673)
Yeah, we we've talked about too, like we know about enterprise, especially enterprises, like they want to own their tech. And so, um, at, I really believe that as the cost of building software goes down, the less likely people are to renew their, their SAS, their, you know, expensive SAS solutions.

Because they'd rather just build it and own it and control it themselves. Um, and we've seen that with, we've seen that with, with organizations that we've worked with where they're, they, they want to own it, they've got a DevOps team. They've, you know, they're there. If they, if they don't own it, they're going to figure out how to build it. And they're very picky about what they're applying licensing to, because it ends up both being a operational cost, but also.

You know, change management, both getting into it and then getting out of it and so forth. So it's going to be really, you know, it's going to be a break, really, um, massive shift in terms of how software gets built, how it gets used, um, inside of organizations, how, how the economics of software work. I mean, it's going to be, it's going to be fun.

Kevin Nuest (51:43.62)
Can we call it? SAS is dead. 2024 is the beginning of the end for SAS, right?

David DeVore (51:50.349)
I wasn't going to go there. Um, but

Calvin Hoenes (51:52.712)
Yeah, have you looked at the have you looked at the have you seen the ones.com manifesto have you seen that as an example for Jason Freed? Yeah. Now 37 signals. Yeah.

Kevin Nuest (52:00.518)
Yeah, from a co-founder of Basecamp. Yeah.

David DeVore (52:04.989)
Yeah. And they've are, they are actually put, then they put something out. They've actually, I think they put out a product that you just buy once. Pay for it once it's hosted. Man. I hate, I always hate it when I got to say like, yeah, Bitcoin ski was right, but this is what he was talking about a year ago and I are, I argued with him. I'm like, no way. And there I am like eating my words. I'm like, damn it, man. He's right. Early.

Calvin Hoenes (52:14.313)
Yeah, that was the whole idea.

Kevin Nuest (52:22.006)
He was right. He was right.

Calvin Hoenes (52:24.659)
Yeah.

Calvin Hoenes (52:30.625)
Yeah, there's this one sentence where it says you should own that shit by now, perpetually early. Yeah, but it's very true. Also the models with the pay once, pay as you go. I think we're going to see a lot of these reoccurring and come back again in 2024. I would double down on that.

David DeVore (52:34.241)
But right.

Kevin Nuest (52:35.97)
perpetually early.

David DeVore (52:42.933)
Yeah, this has been awesome.

Kevin Nuest (52:56.222)
The mix, the middle mix of the, the ones I heard him, um, you know, do an interview about it when I first heard it, I was like, that's the biggest problem I have with it, not as a, um, the best part is paying once for the software. The biggest problem I have is thinking that enterprise organizations can host their own, let's say Slack instance, right. And, and run the ops for that. We we've seen challenges in organizations. I've been in organizations where that's been a challenge.

David DeVore (52:56.234)
100%.

Kevin Nuest (53:24.546)
But if you assume that that's a skill they have to build and they solve that, then that becomes what tech is at that organization that they have to have ops. The middle ground would be, well, there's a paper usage version that is the hosted version, right? That's probably the middle ground that will pop up and it'll look like every paper message. Like how many messages did your organization send? Awesome, and it's gonna still be a 10th of the cost of the SaaS version, right?

David DeVore (53:38.848)
No.

Calvin Hoenes (53:41.761)
Yep.

Calvin Hoenes (53:55.549)
And I think whoever's going to bring out a very efficient maintainer, GPT, AI, is definitely going to be in for a win there. Because as soon as you have a pay once, and it's going to be in an enterprise hand, and the software will get rusty. Who are you going to call? Unless they develop the internal capability to solve the software that they haven't built. Definitely a problem that needs to be solved.

David DeVore (53:56.057)
percent.

David DeVore (54:13.129)
You get a call.

Kevin Nuest (54:24.558)
Calvin can't help but put his security hat on. Go, yeah, but who's gonna hot fix the security holes? Ha ha ha.

Calvin Hoenes (54:30.971)
You just gotta update the version number please. There should be an AI for that. Okay, sounds like a good weekend project.

David DeVore (54:36.509)
Yeah, AI is gonna be an AI for that.

David DeVore (54:42.609)
Yeah, this has been awesome. We have, we, we've diverted away from GPTs. Um, but you know, I mean, culturally, I mean, open AI has done something. Um, remarkable for forget about the tech. I mean, just the, just from the perspective of, and we've been, you know, sort of in this technology for Kevin, you ran a, you know, machine learning startup years ago, it's, you know, we've. We've.

played around with, you know, a lot of AI technology over the past decade or 15 years of different sizes and shapes. But what I really like about open AI is like they brought it into the general consciousness. And, and that, and, and that's when change starts to happen is when people, when people wake up and discover something, you know, net new.

Calvin Hoenes (55:28.657)
Yeah, they did the limelight sessions for sure.

Kevin Nuest (55:29.087)
Yeah.

David DeVore (55:39.657)
you know, that's when real, that's when things really start to accelerate, um, is when the general consciousness starts to become aware of a new technology that, that is, that is a change agent. So, um, no matter whether we like them, love or hate them, they, uh, they have done that for the history books.

Calvin Hoenes (55:52.073)
Yeah, absolutely fertile ground.

Kevin Nuest (55:57.262)
for sure. Can I, can I throw out one more crazy idea, smashing our web three world together with what open AI has now done with the GPT store. So what if they wait a little bit to announce their monetization, uh, you know, payment plan style for the store. And when everybody gets restless about it, they say, they look at what GPT has been used based off of usage. And they.

David DeVore (56:08.431)
Hmm.

Kevin Nuest (56:27.57)
airdrop options in open AI and as it is payment. So if your GPTs got a lot of usage here in this time where they're trying to figure out how to monetize it and pay it out, great news. You get a little bit of KYC to do, and you can literally buy like strike price options in open AI, here's your payment. I think that would be a cool, mash the worlds together, because we've seen that instead of align a community real fast in web three. That would be amazing.

to then own a piece of this revolution here that's happening. And then instead of, you go from being upset about the uncertainty of the payment plan of the GPT store to a ride or die creating GPTs and touting it on every single channel, right? So when is OpenAI bold enough to airdrop options in their company? Be the first.

David DeVore (57:18.358)
Yeah.

David DeVore (57:24.281)
Well, for sure. It's, it's interesting that so many people have, I mean, it's interesting the net collective, you know, build around GPTs and they haven't even yet defined how people get paid from this stuff. Like, I don't, I have not heard one line item like, well, here's how you get compensated. I mean, you know, I think that, I think, um,

Calvin Hoenes (57:26.003)
Mm-hmm.

Calvin Hoenes (57:37.319)
Right.

David DeVore (57:46.001)
you know, in the in the near term, I think that's sort of what people are sitting on their seat trying to figure out is like, well, I built this thing. How do I get paid? You know, is it? It's a store. Yeah, it's a store. And you know, you guys are getting paid. But how do I get paid? So yeah, it'll be it'll be interesting as we see that rollout. And it's like when's the next GPT? Who's like, who is the next GPT billionaire? You know,

Calvin Hoenes (57:58.178)
Yeah, there will be questions.

David DeVore (58:16.233)
Or, you know, or is there a whole creator economy that starts to pop up around GPT is, is exciting to think about as well. So we shall see. So we're, we're coming up at the top of the hour. So thank you guys. It's been a thrilling conversation as always. And, um, I'm gonna go ahead and sign out and, you know, thank you audience, whoever you are. Thank you for tuning in and we'll, we'll see you guys soon.

Kevin Nuest (58:45.047)
Thank you.

Calvin Hoenes (58:46.285)
Thank you.

David DeVore (58:46.297)
Later.