Runpoint: AI Business Transformation Podcast

keywords
AI, ChatGPT, Google I/O, newsletter automation, voice memos, AI tools, private equity, AI quality, sycophancy, technology trends

summary
In this episode of the Runpoint podcast, hosts Sam Gaddis and Matthew Hall discuss practical AI tools that can be used immediately, including voice memos and ChatGPT projects. They also explore the creation of a personalized newsletter using AI, the latest innovations from Google I/O, and the ongoing concerns regarding the quality of AI-generated content and its tendency to be sycophantic. The conversation emphasizes the importance of understanding AI's capabilities and the potential for it to enhance productivity and creativity.

takeaways
  • Using voice memos can enhance AI interactions.
  • ChatGPT projects help organize information effectively.
  • AI can automate newsletter creation for specific industries.
  • Google's advancements in AI are significant and impactful.
  • AI-generated content quality is a growing concern.
  • Sycophantic AI responses can be adjusted with prompts.
  • AI tools can serve as infinite force multipliers.
  • Understanding AI's capabilities is crucial for effective use.
  • The future of AI will involve more personalized applications.
  • AI's role in content creation is evolving rapidly.


Chapters
00:00
Introduction to the Run Point Podcast
01:32
Practical AI Applications for Today
03:35
Voice Memos and AI
09:00
Innovative Newsletter Creation with AI
16:20
Google I/O Highlights and AI Innovations
19:40
Exploring AI Utility and Frustrations
20:21
Google's AI Advancements and Market Position
22:08
The Future of Google's Ad Revenue
25:51
The Concept of AI Slop and Content Quality
28:05
Addressing AI Sycophancy and User Experience
30:51
The Need for Better AI Branding and Understanding
34:16
AI as a Force Multiplier in Everyday Life

What is Runpoint: AI Business Transformation Podcast?

Hosted by Runpoint Partners’ founders Sam Gaddis (tech entrepreneur & AI builder) and Matthew Hall (PE operator & growth strategist), Runpoint Podcast strips the hype from artificial intelligence and shows you how to turn it into concrete business results—fast.

Speaker 1:

What's the show called? I think it's called the RunPoint Podcast. I don't know. We should figure that out.

Speaker 2:

All right. Welcome to the RunPoint Podcast, Sam. How are you doing?

Speaker 1:

Doing great. How are you?

Speaker 2:

Good. Great to see you today, like it is great to see you every day in this new business that we're part of. We have a great show for everybody today. Three segments, just like last week.

Speaker 2:

First segment we're going to talk about is useful shit you can use today. There's a lot of talk about what AI will do in the future. There's a lot of promise. We're just going to discuss things that we have built or that we're currently using, how we use AI to get value, and what anybody else can do immediately. Second, we're going to go into a couple of news and notes items for this week.

Speaker 2:

Talk about Google I/O and where they are in the race among the AI superpowers. And then we're going to talk about the issue of AI slop, the subpar AI-generated content getting pushed out by people, and how that issue is related to this other issue of the last couple weeks, which is that LLM models have been tweaked so much to tell you what you want to hear that they're becoming sycophantic and maybe less useful. So we'll talk about those things. And then last, we'll end with our hot takes of the week, our confidently wrong section, where we're going to share some things, maybe before they're ready to shine, on what we believe and what we think is going to be true from now into the future.

Speaker 2:

That sound good, Sam?

Speaker 1:

That sounds great. Let's get into it.

Speaker 2:

All right, cool. So let's first start with useful shit you can use this week. Why don't you go first? I've got a newsletter maker to share, but I think you have some interesting stuff to share as well.

Speaker 1:

So I think what I want to share today is just how I use ChatGPT in a very basic way. And I worry that this will be elementary for some, but I'm continually surprised when I talk to people, particularly in a meeting setting or when you need to be remembering a bunch of things, how few people actually do this. And so the two aspects of this that I think are most important are voice memos on your iPhone and the concept of projects in ChatGPT. Claude has had projects for a while. ChatGPT has had them for less time.

Speaker 1:

If you're not using projects, you absolutely should. I'll start with projects. The basic premise of projects is you're organizing a conceptual topic in one bucket in ChatGPT. You can go in there and create a project. You can drop in files to that project, and you can give it custom instructions.

Speaker 1:

And I have done this for a variety of things, everything from this business. So I've got a project that knows run point. It knows exactly what we're trying to do. It has a bunch of marketing materials. It knows the types of clients that we engage with.

Speaker 1:

And the important thing is the files that I put in there include everything from our branding guidelines to our mission statement and all of that. Those are accessible every time you start a new chat, but so are the historical chats within that project. So that means I can start a new chat and say, I need a landing page that is keyword rich, and I don't even have to say centered around private equity, because it knows that we do private equity, and it'll give me the copy for that. Point being, you can, without providing a bunch of context, get to the point much quicker. And if I have a new prospect meeting or something like that and I add the meeting notes, it's going to deliver that summary with the context of what our company actually does. And so it will highlight things related to private equity and AI.

Speaker 1:

That's projects. The second thing that I think is so simple and that most people are not doing is using voice memos as the primary input to ChatGPT. So this pairs with projects, but it can also just be any old ChatGPT chat that you start. What I've done on my phone is take this third button that's new on the newer iPhones and set it so that when I hold it down, it triggers the start of a voice memo. And so I'm doing this constantly.

Speaker 1:

I do this several times a day. If I'm in an in person meeting, I start this. But even more helpful, I think, is the scenario where I have an idea and I wanna engage with AI in some way to talk about that idea, to flesh out a concept, or something like that. But I know that I need to ramble for a bit in order to make this prompt work. Maybe I'm actually just trying to create a prompt, like a concise prompt, but I know that it's gonna take me five minutes of rambling to get there.

Speaker 1:

That's kind of my thing. Yep. Rambling. And so what I will do is I will record a voice memo and I don't care if it's five minutes or ten minutes, and I don't care if it's organized, if I use a bunch of filler words, if I jump back and forth from start to finish, that's what AI is so good at. And so I'll simply ramble about what I imagine this new product might be or what I imagine a new, you know, implementation of AI might be for for one of our clients or something like that.

Speaker 1:

And I'll and I'll go for a walk. I'll go outside and just talk to this thing through my AirPods for a while. And then because voice memos on iOS now has native transcription, within a second or two after finishing that recording, I get a button on my voice memos that says copy transcript. And the cool thing is if you use a Mac, you copy that transcript, and all of a sudden, it's in your clipboard on the Mac as well. And so typically, that's my workflow.

Speaker 1:

I mean, sometimes I'll do it through ChatGPT on my phone, but I'm typically working on my computer. And so I'll just copy-paste that prompt into ChatGPT, and I'm off to the races with an extremely thorough prompt.
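For anyone who would rather script this ramble-to-prompt flow instead of copy-pasting from Voice Memos, here is a minimal sketch using the OpenAI Python SDK. The file path, model choices, and the "tighten this up" instruction are assumptions for illustration, not the exact setup described on the show.

```python
# Minimal sketch: turn a rambling voice memo into a tightened prompt.
# Assumes the `openai` Python SDK (v1.x) and OPENAI_API_KEY set in the environment;
# file path and model names below are placeholders.
from openai import OpenAI

client = OpenAI()

# 1) Transcribe the voice memo (any m4a/mp3/wav export works).
with open("ramble.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2) Ask the model to distill the ramble into a concise, structured prompt.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Turn this rambling voice memo into a concise, well-structured prompt. "
                       "Keep every concrete idea; drop filler and repetition.",
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(response.choices[0].message.content)
```

The same two-step shape (transcribe, then restructure) is what the on-device Voice Memos transcription plus a paste into ChatGPT gets you without writing any code.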

Speaker 2:

Were you a voice memo guy before ChatGPT?

Speaker 1:

Yes. Yeah? Yes, I was. Interesting. I've been using voice memos quite a bit for a long time, and I think that's why I'm an accelerationist on voice memo.

Speaker 2:

I never, never was. It hasn't really been until I started working with you again that I've become a convert to this way. So I guess I'll give the perspective of somebody who's not that. My behavior, though, has been, for as long as I've worked in any capacity, to go on walks before I start a project. Even in college, before a term paper, I'd go think about it.

Speaker 2:

Same thing before a presentation or whatever. That's how I've always gotten my juices flowing: talking to myself, basically in my head, while walking around the neighborhood. And now I just take my phone with me and talk into it; I just use the voice capture within ChatGPT. I don't use voice memos and transfer it over. You get ten minutes of time and that seems to be enough for me.

Speaker 2:

That has completely eliminated the blank page problem in whatever I'm doing. You know, even back in those days, if I'd go think about what I was going to write and then come back and sit in the chair, I'd have another little crisis of, shit, this document's empty, and I have to figure out all those great thoughts I had a second ago, how to structure them, where to put them, all that kind of stuff. By just talking out loud for a few minutes, you know, while on this walk, it already has an outline for me. It already has captured, I think, some of the best moments I wanna hit. And then the job is just to make it connect. You become an editor of this thing instead of a drafter.

Speaker 2:

It's it's, like, unbelievably powerful, I think.

Speaker 1:

I agree. So why don't I use the voice-to-text in ChatGPT? It's got two versions of that. Like, you can actually interact with an AI and it talks back to you, which I don't think most people use. I find that to be tedious. Yeah.

Speaker 2:

Don't use that.

Speaker 1:

And then it's got the thing where you can essentially trigger voice transcription. And then you can also trigger native iOS transcription in there. But I don't use that because I fear that, like, I'll get a phone call and it'll kick me out of the app or something like that. And I feel like voice memos is much more stable. You can even do things on your phone while you're recording a voice memo.

Speaker 1:

And then also just the fact that I can start it with that button.

Speaker 2:

Yeah. I guess that's nice. Yeah.

Speaker 1:

Do you not run into those issues?

Speaker 2:

I have had a couple issues where it gets deleted halfway through, either from a phone call or because I pressed the wrong button. That happens too. I thought I pressed, you know, go, and I pressed x, and then I lose it. And it's a little bit heartbreaking, but it's not that big of a deal. But the reason I like to stay in the ChatGPT app is because then I can start actually drafting on the fly as well.

Speaker 2:

So if my first salvo of thoughts gets captured and it outlines it, I can then go through the draft and give my live comments. So I'm looking at the screen, and people I'm sure think I'm insane walking around, but I just voice through what I'm seeing and what I want to change. And then it just becomes this little iterative cycle until I'm home. Literally that's what I did before this podcast. I have a project set up for the podcast with the outline I like, and I just went through the topics and questions in the fifteen minutes before we started.

Speaker 2:

Alright. Great one, Sam. The one I wanna do, I'm gonna share. If anybody has been watching, it's yet another Lindy that I'm going to share, but I've just been kind of smitten with trying to make this thing work for me. And I got a bunch of bonus credits so I really have been playing around a lot with it.

Speaker 2:

But the thing I built this week I think is pretty cool. And so let me share. Okay, here it is. What it is, is a newsletter researcher and a newsletter writer. And I just built this for myself and Sam for now, but let me know if you'd like to be included.

Speaker 2:

We might make this the basis for a RunPoint newsletter for friends and clients and things like that as well. The genesis was, I'm pretty well tapped into things happening in AI. It's who I follow on Twitter. It's the YouTube accounts I follow. It's all that kind of stuff.

Speaker 2:

I don't have that same sense for private equity. This is now the niche that we are dedicated to, but it's not the field I came from, and I'm not pretending it is. So I need to work a little harder to constantly be up to date on private equity. And so I tried to figure out the best way of doing this.

Speaker 2:

And so I built this agent that writes me a weekly newsletter for things happening in PE, things happening in AI, and what's the intersection between the two. Just three sections. So the way that it works is I can trigger it any time, but I think starting next week, it's just going to run on Monday mornings. And so what it does is first thing it does is go through my email and it searches for anything labeled newsletters. So I've subscribed to a number of private equity newsletters from, you know, Bloomberg and Axios and a couple other places of high quality newsletters.

Speaker 2:

It reads all of those and it spits them into a research document of what's the synopsis, what's the takeaway, what are all the links, all that sort of stuff. It then does a similar process for specific YouTube channels. So for YouTube channels that I know have high quality content that are publishing stuff recent, like, you know, weekly, it goes through and grabs the transcripts of all of them and does a similar summarization, updates the document with the content from the YouTube channels. It only does it for the last seven days, so we're not going to get redundant stuff. It's only for new videos added to specific channels and playlists.

Speaker 2:

And then with those things, here's the section where it grabs the transcripts and then updates the document with everything it has there. It then does a similar thing in Reddit. So specific subreddits: the private equity one, the automation one, the ChatGPT coding one, a couple of the ones that I frequently go to. It pulls in top posts and comments, transcribes them, summarizes them, and it has specific prompts for what I'm looking for when it's searching. And it dumps all of this into a very long Google Doc.

Speaker 2:

So it dumps this all into a long multi page Google Doc that is not at all ready for action. It's just a lot of, it's a wall of text, but a wall of text that I can pass to another AI, which is my newsletter writer, which has access to my style guide, the template I want it to use. And so what it receives is this long document with a bunch of links and a bunch of summary information, and it produces a draft of a newsletter. And I purposely have it draft a slightly too long newsletter so that I can be a bit curatorial in picking and choosing the things that I want to send out later, but it answers those three questions. What's going on in private equity this week?

Speaker 2:

You can see Blackstone acquired TXNM Energy, and that comes from the Wall Street Journal. You've got sources from Bloomberg. You've got stuff that comes in from Reddit. And then

Speaker 1:

And it's providing the sources. That's super cool.

Speaker 2:

Yes. It's providing all the sources so you can check it. I'll tell you, earlier versions of these were often hallucinating the sources, but that seems to be working better now. So it's pulling in the right ones.

Speaker 1:

How did you fix that?

Speaker 2:

I changed the model. So I went from Anthropic to Gemini. And that was one of the reasons why, I think.

Speaker 1:

Gemini is where it's at.

Speaker 2:

Yep. Exactly. It pulls in a couple of the videos that are worth your time, or what it deems the best things. So there's this discussion on sweat equity assessment in PE companies that I'll check out in a YouTube video. It does the same thing on AI and then has a section on the intersection.

Speaker 2:

I'll tell you, I'm not happy with how it's done the intersection so far. So I've gone one step further on this, to your point on projects. I'll pull this up too. Here we go. I have a RunPoint newsletter Claude project set up as well, which is where I do my final drafting and formatting and things like that.

Speaker 2:

And so it has all those things we talked about, my style guide here, my prompts that Lindy's using, so it has the context of that, the RunPoint website so it knows what we do. And this is the synopsis of what I did to build the Lindy. And so it's able to take that draft and critique it and then fix it. So it has more revised versions. I've also added an MCP to Claude desktop.

Speaker 2:

And here's another kind of pro tip for how to make your Claude superpowered: the desktop version, you can add custom MCPs to. So it now has access to YouTube transcribers as well. I found a couple things that weren't in that newsletter. I wanted it to summarize Google I/O, for instance.

Speaker 2:

So I dropped in the keynote from yesterday and I dropped in another podcast post. So it's able to kind of do that on the fly and add it to the newsletter here. So this is the updated one.
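For reference, custom MCP servers get registered in Claude Desktop's JSON config file (on macOS it lives at ~/Library/Application Support/Claude/claude_desktop_config.json). A minimal sketch is below; the server name and package are placeholders, since the specific YouTube-transcript MCP used on the show isn't named.

```json
{
  "mcpServers": {
    "youtube-transcripts": {
      "command": "npx",
      "args": ["-y", "<your-youtube-transcript-mcp-package>"]
    }
  }
}
```

After editing the file and restarting Claude Desktop, the server's tools show up in the chat and Claude can call them, which is how the keynote and podcast transcripts got pulled in here on the fly.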

Speaker 1:

So I know you've been working on this workflow for a little while. How much work do you have to put into this now to produce this asset and how good is it?

Speaker 2:

So I will say eight total hours of setup is probably a fair estimate to get to where it is right now. And I think, though, that weekly it should be half an hour or less of ongoing work. So that's me reading through, making sure the links are more or less correct, adding any additional context or my point of view. And I think that's really critical. If we are to share this with clients, and not just have it be for our own consumption or our own benefit,

Speaker 2:

Right. It needs to have our perspective. And so I need to spend some time adding that stuff as well.

Speaker 1:

Yeah. And to some degree, your perspective is baked into the prompts that develop it. But obviously, you wanna go further than that. Okay. Cool.

Speaker 1:

That's pretty amazing. I'm impressed by that.

Speaker 2:

Thank you. Yeah.

Speaker 1:

Now here's the other question. This segment is about stuff that you can use right now. Our clients run the gamut. Some of them are super tech savvy and in ChatGPT constantly, and some of them don't use it at all.

Speaker 1:

Could somebody that's not used to it set this up?

Speaker 2:

Yes, I was thinking about that, about how I would do that. And I think it would be a little bit more manual. I think the ability to have it run automatically weekly and give you exactly what you need might be a little hard to figure out, because it's a lot of different triggers and it's all different systems. But getting a YouTube transcriber and something that reads your newsletters and summarizes them, I think, would be really trivial to set up, especially if it lives inside of ChatGPT or Claude already. So just setting up the right MCPs.

Speaker 2:

And so your job is just to send it the links you're interested in, and then it's going to read them all and summarize them all and have some custom prompts for what you're looking for out of it. So tell it to pull out only stuff related to your industry or field or your region or whatever. I think that would be very, very easy to set up.
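As a rough illustration of that simpler, manual version, here is a sketch of the digest loop in Python: gather source texts, summarize each with a section-specific focus, and assemble a draft. The sources, section names, and prompts are illustrative stand-ins; the real version runs inside Lindy against Gmail, YouTube, and Reddit.

```python
# Rough sketch of a weekly digest: summarize a pile of source texts per section,
# then assemble a too-long draft for a human to curate. Assumes the `openai`
# Python SDK (v1.x); model and prompts are placeholders, not the show's setup.
from openai import OpenAI

client = OpenAI()

def summarize(text: str, focus: str) -> str:
    """Ask the model for a short, link-preserving synopsis with a given focus."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Summarize for a weekly digest. Focus: {focus}. Keep source links."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Placeholder inputs: in practice these would be the last seven days of
# newsletter emails, YouTube transcripts, and top Reddit posts.
sources = {
    "Private equity this week": ["<newsletter text 1>", "<newsletter text 2>"],
    "AI this week": ["<youtube transcript>"],
    "Where PE and AI intersect": ["<reddit thread text>"],
}

digest_sections = []
for section, texts in sources.items():
    summaries = [summarize(t, focus=section) for t in texts]
    digest_sections.append(section + "\n" + "\n\n".join(summaries))

draft = "\n\n".join(digest_sections)
print(draft)  # hand this wall of text to the newsletter-writer step for final drafting
```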

Speaker 1:

You know, it's kind of like the next generation of Google alerts, where you're getting really thoughtful summaries that are exactly tuned to what you care about. Hi, Audrey.

Speaker 2:

Hi Sam. Hi Sam.

Speaker 1:

Bye. We're gonna have some bloopers from this video.

Speaker 2:

Yes. Yes, I do think it's the new version of Google Alerts or Google Trends or even, I don't know. I always think back to the times of 2010, of RSS readers, and how great it was that I had my curated list of blogs all the time, that I had this digest of new news material all the time, and how that world's gone now because Twitter and the aggregators ate their lunch. I don't exactly know what killed the blog.

Speaker 2:

But I do wonder, you know, curated newsletters are sort of the replacement for that these days, but I'm inundated with too many newsletters. So now I have a new problem of I need something to actually sift through the newsletters I've already subscribed to and only pull out the good stuff.

Speaker 1:

Very cool. All right, what's next?

Speaker 2:

Next, we're gonna talk a little bit about news of the week. First thing I wanna talk about is Google I/O. They had their big keynote yesterday. They announced a bunch of stuff. We're not going to cover, I think, updates to Android and the kind of stuff that's core to the Google business.

Speaker 2:

But as it relates to AI, they have been on a tear lately. I think that's safe to say. Now we've already talked about the Gemini 2.5 research preview a couple of weeks ago, and they've announced new image generation capabilities and new agent capabilities and updates to all of their core models. I first want to ask you, you know, what's most exciting to you about what Google's doing right now?

Speaker 1:

I don't even know where to start. I mean, I would say the overall aggressiveness and performance of the AI that they're putting out is mind-blowing to me. Gemini 2.5 started this. It's clearly the highest quality model at the moment. And then the stuff that they're coming out with is the stuff that we kind of imagined.

Speaker 1:

It's the stuff that we have wanted for so long, and they seem to be nailing it. We'll see. I think a great example is the translator. Did you see that one? So it's like real-time translation.

Speaker 1:

You set up a Google Meet and you just have a conversation, and it's as if somebody has embedded audio translation. So I start talking in my native language, English, and then it translates out in Spanish for the other person, and it's even my voice, with an English accent. So that will dramatically change how we do work remotely. I love working with South American and Central American developers because of the time zone. But all of a sudden, there are a lot more of those guys that I can work with, because we can just speak in our native tongues.

Speaker 1:

It's gonna be fantastic. The video model too, I I never get that excited about video. I think it's very cool. But the latest one is something else. What they've essentially figured out now, it's completely indistinguishable.

Speaker 1:

You can't tell it's AI. We were kind of there with Veo 2, which was the last model, but Veo 3, which is the new one, now allows you to write a script, and it will actually create the dialogue as well as sound effects. So you can have a sailor, you know, on a boat, and you hear the background, and you hear him in his Irish accent. You can have him sing a sea shanty, and it's just perfect. Those are just two of the examples; there are more. What blew you away?

Speaker 2:

I mean, it's kind of tough because we're in the honeymoon. We're twenty-four hours afterwards, and so we're dealing with, like, the demos and that kind of stuff. So who knows which of these things will actually shake out and be useful long term. But it's this very tantalizing mix of cool toys and stuff that has immediate business value. You know, even the stuff that you just talked about characterizes that: the translation, live translation.

Speaker 2:

And they also do a lot of stuff with multimodal use in your phone camera, stuff like that. Integrating AI into live video environments is so clearly useful right now for all sorts of stuff. To give you one more example of that, you've used this as well, but in their AI Studio, you can stream your desktop and have a conversation with AI as it's looking live at what you're looking at. And so for coding, I find it incredibly useful for fixing UI bugs and things like that for a team.

Speaker 1:

So I actually tried this and didn't find it that useful, but you're getting utility out of it?

Speaker 2:

I'm getting utility out of it, yes. There's some frustration, and you kind of talked about this before. It interrupts you all the time because it doesn't know when you're done talking. So they have to sort of figure out, I don't know, how to increase the padding on how much space to leave in the conversation, or something like that, or give it more specific instructions. But yeah, I had immediate utility, some frustration, and saw that this is going to get good quickly.

Speaker 2:

And so I'm looking forward to trying the new version as well.

Speaker 1:

What else did they announce? I can't remember.

Speaker 2:

They announced Imagen and Veo, their image and video generation models. It seems like they've leapfrogged again. It was less than a month ago, it feels like, that the new GPT-4o image gen had the belt, and it kind of seems like Google has now taken the belt from them on image gen as well. We'll see. Really? Mhmm. Yep.

Speaker 2:

Yeah. It's great. At a bigger macro level, Google was always the sleeping giant here. You know, before OpenAI came out with GPT, they were so clearly what everybody thought of. DeepMind is what we thought of when we thought of AI, you know, and then they stumbled out of the gates for at least a year.

Speaker 2:

You know, I think we all laughed at, what was it, the first Gemini image gen, where it was pre-prompting politically correct images into things where they didn't belong. And they were just slow. So it seems that they've fixed their velocity issue. And for that to happen at the scale of Google is, I don't know, I can't wait for the books and articles to be written about how they were able to do that at a behemoth of their size.

Speaker 2:

But I mean, they always had all the data, they always had all the engineers, and it seems like they've sort of been let loose.

Speaker 1:

Yeah. I agree. It'll be really interesting to see how they translate this into revenue, though, because, I mean, there are the obvious ways that you could inject ads into the AI output. Nobody's really done that yet, and it'll be interesting to see, as they try to do that, which inevitably they will, how that corrupts the experience for the user. Or if it does at all; maybe they can find a way to make it better.

Speaker 1:

Originally, Google Ads were that, and then they just sort of took over. And I think most people agree that the ads have made Google search far worse. It'll be really interesting to see if some of these players like OpenAI, which are not ad driven, will be able to have a leg up just because their business model works on a subscription basis, and Google's is so different.

Speaker 2:

I think that's the conversation I kind of wanted to get this into, which is, generally, there's so much negative chatter out there about the death of the ad business, the Google Ads business in particular, that people aren't searching anymore, that all searches are happening on LLMs. And I think that's overstated, but the trend line is really clear. Obviously, I think people are going to continually move more of their conversations and searches to LLMs. And so you've got this existential race that Google's in to replace their ad revenue with some other revenue, assuming that it's going to come from AI.

Speaker 2:

If you had to bet among the players right now, does Google get there? Is Google going to be a shell of itself, a different company altogether, or are they going to be able to run this play again and be an ad giant for the next generation?

Speaker 1:

Well, they don't have a choice. The trend line, like you said, is real. There's a 9% reduction in search volume that really moved the needle on their stock price last week, I think. They have to do something. They will do something.

Speaker 1:

The question, I think, again, goes back to: to what degree does that degrade the user experience? Because right now, we are in this honeymoon period as users where these companies are competing to provide the absolute best AI-driven answers and responses, and they are completely uncorrupted by shilling products, for the most part at least. What I'm curious to see is how the revenue splits between whatever new ad model emerges and the way that they currently charge for the various enterprise services. So we pay for Google apps right now. I think we pay $20 a month or something like that.

Speaker 1:

But if they start piecemealing out these different services like video generation, I don't think it'll all be bundled. It's simply too costly to generate these tokens right now. Mhmm. And so I imagine what we'll see is similar to what we see in AI pricing for tools like Cursor, where it's much more usage based. That's really what makes sense.

Speaker 1:

And we haven't seen the big companies, OpenAI, Google, switch to a usage-based model yet. The smaller AI companies are, because it makes sense: you're using tokens. It also makes sense for the bigger companies, but they just haven't done that yet. And also they don't care about revenue because they're focused exclusively on valuations.

Speaker 2:

Well, that's true. But Google for the last, you know, twenty or thirty years has subsidized all of their businesses with the incredibly profitable search business. The margins on that business are unlike anything else the world has ever seen, at the risk of being hyperbolic. But really, the reason that you have your free Gmail and your free Chrome and all this stuff, and that they can just kind of break even on cloud and all these other things, is because search makes so much money. Their ads business makes so much money.

Speaker 2:

It's enabled this culture there of experimentation and research and all this stuff as well. A pay-as-you-go business model as the core of any business going forward is, by nature, never going to have, I don't think, the margins that they were able to enjoy before in that search ad business. And so it just has to be a different culture and a different business. I would say they have a disadvantage in that, because of how much they're going to have to shed and change about how work gets done at Google, as compared to something that's starting closer to scratch and is more comfortable operating at the percentage margins that I think we're going to see in the next generation.

Speaker 1:

Yeah, and I think there's just a fundamental difference in the cost of moving electrons to power a search engine with the ads behind it versus generating those tokens, which is extremely power hungry. And that will change as the models get more efficient, but I don't see it changing that much that fast. Maybe if we figure out fusion, this becomes a non issue. But until energy becomes free, these things have a real tangible cost in the form of data centers. And so they, yeah, they have to figure out an alternative model.

Speaker 1:

And it's, at the moment, not going to be anything like what they enjoy right now.

Speaker 2:

Right. All right. Let's move on. The other news I want to talk about is actually a couple of different stories merged into one. There was an article in The New York Times yesterday about the concept of slop, about how we as a civilization have become more and more accustomed to and comfortable with slop, everything from our food bowls coming as kind of messy slops to the slop that AI produces, the content created by AI across the Internet where you don't know what's made by a human and what's made by a robot, and it's all of a lesser quality. That's kind of the, you know, driving force behind this.

Speaker 2:

And I've heard a lot of that as well behind the scenes. It's definitely capturing a sentiment that is pervasive around AI slop and the danger there. And then the other grouping of news and notes is around the models themselves becoming sycophantic. I think it was Sam Altman who said that the new 4o model glazed you. I think that's the term he used, glazed.

Speaker 2:

And it's basically just them telling you what you want to hear as opposed to telling you what you need to hear. And so I think these things are related, but I want to ask you: is this an issue of just right now that's an easy fix, or is there a fundamental issue with how AI is built, at its nature, such that it's not going to be capable of creating high enough quality content, or it's going to just tell people what they want, and that's going to put a cap on its utility long term?

Speaker 1:

All right. So I think both of these are non-issues. I think they're largely driven by media hype, and the sycophancy issue on GPT-4o, for the most part, has already been solved. It basically amounts to changing the core prompt, and then it stops doing that. And they did that, and it's already better.

Speaker 1:

And they'll continue to make iterations to solve that problem over time. Other models will emerge, competitors that do the opposite. Grok kind of already does that. And also, you can tune your own instance of even GPT to be less sycophantic. So I feel like this is very much an issue of the time.

Speaker 1:

And then the concern with AI slop, I also think, is a non-issue. People love to find, especially in the media, things wrong with AI, and this is a great thing to focus on. I don't know if you remember Sydney back in the day, when Bing was saying it was gonna take over the world. I think I saw another article about this recently where somebody got ChatGPT to say some nefarious things. But these things will all get coded away, essentially.

Speaker 1:

And with respect to slop in particular, it's just gonna get better. Look at the trend line. You now can't tell that video is generated by AI. We're there. We're there right now.

Speaker 1:

And the songs that are generated by Suno, for the most part, are not very good, but some of them are solid. Some of those songs are bangers. And the writing, I need to pull this up and maybe we can put it in the show notes, but I actually heard a funny joke that AI created the other day, for the first time. So there's no reason to think that it won't become better than us.

Speaker 1:

And I'm not making a moral or ethical statement about what that means. I'm just saying that if you're worried about slop in the sense that you're worried that there's gonna be a bunch of stuff that's put out there that's not as good as what humans can create, it's the opposite. There's gonna be so much stuff that is created that is better than what humans can create. The slop that you're thinking of right now is gonna be how we talk about things that humans create in the future.

Speaker 2:

Well, I think that's well said, and I think I agree with the point that things are now as bad as they're ever going to be. That's a pretty incredible statement to make, because you just said we're already at the point where AI-generated video is indistinguishable from human-generated video. And everything else is only going to get better. As a time stamp, that's an amazing feeling. I'll say it's often an issue, though, of incentives.

Speaker 2:

I think there are some dark outcomes out there in the future. If we're in this attention game, which is sort of what we're in right now, of dopamine hits in social media and all that sort of stuff, you get people optimizing AI for creating salacious material that keeps you on it for long, long times. I do think that is an outcome that is very probable based on the trajectory we're on right now. And I don't think that's a better version of the world than what we currently live in, or the past. So I don't think we can always assume that it's going to make the world better. You know?

Speaker 2:

It getting good at making drivel, making better slop, you know, just to keep you stuck on your For You page, is definitely, I think, a watch-out that I just don't wanna be a part of, and I hope we can do our part to avoid that future. You know?

Speaker 1:

Yeah. I totally agree with that. That's kinda why I was saying I'm not making an ethical prediction here, because absolutely, as power grows, people will do things that are shady with it, and the largest companies will probably do the shadiest things. But I'm specifically referring to this question of slop, and what I hear when people say that is drivel, bad quality content that's not thoughtful and that doesn't connect deeply emotionally. I think we will see the opposite actually fairly soon.

Speaker 2:

Well, I want to get to this other thing of it being sycophantic, because I think it's also related. I see a clear line between that and the conversation we had earlier about projects, about different context for different models at different times. They are all an extension of the user; that's sort of the reality I've come to. I build my AI suite where I have a project for coding, and I have a project for movie recommendations, and I have a project for medical stuff, or whatever those things are.

Speaker 2:

And I want the AI to have a different context and treat me differently in each of those things. You know, I might actually want a supportive friend AI occasionally. Right? To give me advice in relationships and tell me that I'm good at my job or whatever. I think there is value to that.

Speaker 2:

I think the danger is in this sort of monolith, like it's your end-all, be-all: one interface to AI, a single model that you think is doing everything. Because it's context switching in the background. It's trying to guess what you want it to do all the time. Right.

Speaker 2:

You know? And that gets to, actually, we can kinda skip now to my hot take, if you don't mind, because I think it's the next conversation. Here's the hot take: every name in AI sucks. Every single one of them. And it's a funny thing to sort of joke about, how many GPTs there are, all named 4.5, 4.1, o3, 4o, all these things.

Speaker 2:

And they don't allude to what they do or what they're good at. And so only insiders know what models to use for what reasons. The Gemini suite: Google is notoriously bad at naming things. You know, you've got Claude. Sometimes they're named after humans.

Speaker 2:

You have all this stuff. But there's actually a bigger issue at play here, which is one of branding and messaging of what the value is. I think we need a better name for this new world we live in. You know? Even just the term AI, I think, means different things to different people.

Speaker 2:

Clearly, the term agents means different things to different people. And so this came to me yesterday. I was talking to a good friend, and she was complaining about a consultant she had hired that was supposed to be doing AI for her business. And she was showing me some workflows they had automated in ClickUp and some dashboards that had been made, but she said, I don't think we're using AI for anything. I thought we were doing AI.

Speaker 2:

But the fact that, you know, AI was probably used to automate those workflows and to create those dashboards and to fill out the data and all that sort of stuff, that doesn't jibe with her vision of what AI is, which is more like Jarvis from Iron Man. You know? It's this, like, super genius AGI-level assistant that has machine learning and can tell you better things constantly. And if you're not doing that, you're not doing AI. Right?

Speaker 2:

I could probably help write the brief, but I'm not the person to answer the question. But we need a better set of terms for what these tools do, what they enable in life and business, and sort of what this next phase is. It's too many things to just be bundled under one roof, which is AI.

Speaker 1:

Yeah, I think that's dead on. We are at such early days, it's really easy for us to forget this because we're so inundated with it. But by and large, people still have a very limited understanding of how AI works and what it can do. I was talking to my buddy, Mike Klein, about this yesterday, and we had kind of a philosophical discussion. And he asked me, how do you describe AI to your clients?

Speaker 1:

And I said, well, it kinda depends on the context. I try to gauge how technical they are and then base it off of that. But at its most fundamental level, I would say it's a prediction algorithm; it's like autocorrect on your phone. It's predicting new words based on a corpus of knowledge. And he's like, whoa, whoa, whoa, dude.

Speaker 1:

That is way too much. AI is a computer that has read every book in the world, and it functions as a three year old because it's three years old. And so it's sort of stupid but also incredibly smart. That's what AI is. And I thought that was so relevant to me because I often go way too technical.

Speaker 1:

And you're actually really good at bringing us back when I do that.

Speaker 2:

So let me

Speaker 1:

I have no idea.

Speaker 2:

Add to that a little bit, because I think what that misses is, that's what AI is, but what does it do and what's it for? To me, I probably willfully ignore, or I choose not to believe in, this future where the robots are going to do all the jobs, so humans aren't gonna do anything and we're all gonna be the people in WALL-E, just, you know, fat, lying around on scooters getting fed through tubes. I don't think we're gonna do that. I think what AI is, and what the promise of it is, is that it's an infinite force multiplier: you can choose any skill or talent or job or whatever and get better at it by using these things.

Speaker 2:

You know? Like, what it's done for engineering, which has kinda made your average computer engineer a 10x engineer because of the amount of code it can write. I think it'll do that all over the place. And so, rather than thinking of this paradigm of I've got a supercomputer Jarvis who does everything for me, it's, and this is sort of a cheesy metaphor, I'm building my Iron Man suit based on what I want it to do for me, to make me a superpowered version of myself. I think that's how I currently use it.

Speaker 2:

So I wanna be a better storyteller. I wanna be a better dad. I wanna be a better engineer. I wanna be able to do more things without working with people. You know, replace some of the conversations I used to have with lawyers, which I didn't like, and get, like, a little bit better.

Speaker 2:

To your point about a computer that's read everything: it's your smartest friend on any topic. It's not the smartest person in the world, you know? It's not that it knows the most about every topic, but it knows 80% of everything, which is just this incredible corpus of knowledge. I can ask it stuff about things that I'm glib about, that I don't know much about, and immediately know more and be better after that conversation. People don't see that right now, and I'm a little bit dumbfounded by it.

Speaker 1:

I found myself getting, frustrated is not the right word, but my girlfriend is gonna plant some things in the front yard and she's like, what if we planted this? And I'm like, well, is that drought tolerant? And does it survive in Austin in this heat? And then what about the freezes? And she's like, I don't know, I think it's fine.

Speaker 1:

And in my brain, I'm like, there's only one solution for this.

Speaker 2:

They're so noble.

Speaker 1:

There's only one way to buy plants, and you have to put that in ChatGPT. To do anything else is insane. Unfortunately, she's actually on the same page. She uses it as much as I do. But yeah, I think we're there.

Speaker 2:

My daughter brought in a mushroom that she found in our backyard. And she was like, look at this, can we cook it? And I was like, probably not, but I know how we can figure this out pretty quickly. So I took some pictures of it and sent them to ChatGPT. It immediately told us it was poisonous and not to touch it.

Speaker 2:

So it was like, everybody go wash your hands. Don't touch anything else. And then I was wondering, you know, to the point about trusting it and its sycophancy or whatever, is it being too safe? So I then used, I can't remember, what's that other app? There's another app that's been around for a long time that lets you document plant life.

Speaker 2:

You know? And I tried to do the same thing, taking the pictures from there to a dedicated, you know, vision app for documenting plant life. And all it was able to tell me was that it was a mushroom. It was completely useless, because it wasn't in its habitat, I think, with all of its pieces, so it didn't have enough data.

Speaker 1:

Yeah. That's great. I think that's, I think that's pretty good for today. What do you think?

Speaker 2:

I think that was good. If anybody is out there still listening, let us know if you got any thoughts, if you got any hot takes, you got anything you want us to discuss. Also, if you wanna be included in this new newsletter we've tried to automate, hit me up. Thanks a lot, everybody.

Speaker 1:

Sounds great. Have a good one.

Speaker 2:

You too.