What if we could? A podcast exploring this question that drives us.

Episode Summary:
In this week's episode of "The What If We Could Show," hosts Calvin, Bob, and Kevin explore advancements in artificial intelligence and tech alpha. Join us as we delve into Etsy's AI-powered gift mode, Google's Lumiere for text-to-video transformation, AI's role in detecting pancreatic cancer early, Eleven Labs' impressive funding for voice technology, and Neuralink's pioneering human trials.

Key Discussions:
  • Etsy's AI-Powered Gift Mode: How Etsy is revolutionizing personalized gift recommendations using AI, enhancing the shopping experience with over 200 personas.
  • Google's Lumiere: Unpacking Google's new space-time diffusion model for generating seamless long-form videos and its potential to transform content creation.
  • AI in Pancreatic Cancer Detection: Examining PRISM's breakthrough in predicting pancreatic cancer risks earlier than ever, highlighting AI's transformative impact on healthcare.
  • Eleven Labs' Voice Technology Milestone: Celebrating Eleven Labs' Series B funding success and exploring their innovative audio clone technology and marketplace for monetizing synthetic voices.
  • Neuralink's Human Trials: A critical look at Neuralink's foray into human brain implants, discussing the ethical, societal, and technological implications of merging human cognition with AI.

What is "What if we could? A podcast exploring this question that drives us."?

"What if we Could?" A podcast exploring this question that drive us. We explore the practical application of artificial intelligence, product design, blockchain & AR/VR, and tech alpha in the service of humans.

Calvin (00:02.21)
Welcome, welcome to the What If We Could Show, where every week we ask that burning question around new topics in artificial intelligence and tech alpha. I'm today's host, Calvin, and I've got with me Bob and Kevin. And today we're gonna talk about how Etsy launches a new gift mode, Lumiere, a new space-time diffusion text-to-video model by Google, how AI helps to identify deadly pancreatic cancer,

how Eleven Labs raised an $80 million Series B, and Neuralink begins human tests. So let's get started.

Bob Ullery (00:43.3)
Let's do it. Hey everybody.

Kevin Nuest (00:44.714)
All right. Hey, hey.

Bob Ullery (00:47.94)
Where do you wanna start?

Calvin (00:49.414)
Cool, so Etsy launches a new gift mode. Kevin, you wanna kick us off here?

Kevin Nuest (00:49.922)
Let's start.

Kevin Nuest (00:54.474)
Yeah, I thought this one was really interesting. Etsy launching the new gift mode, touting it being AI-powered. I think it's got a lot of really great leverage, based on my understanding of using LLMs to accomplish this, with over 200 different personas. You answer a few different questions about

giving a gift to somebody: who is it, what do they like? And it's going to come back with some different recommendations. They said 200 different personas that they've put together. Whenever I hear limiting numbers like that, I think there's a little more deterministic, heuristic rule-building than AI that explores much more niche facets. But to apply AI to

go through a hundred million different products that they have on their storefront, right, I think is a really great use of artificial intelligence to even make this possible, going through all those products and saying what is giftable and what types of people would like what types of gifts. So I'll start there. I think it's pretty cool to also see a tactical implementation of AI for revenue generation by a

company, right? This isn't cost savings. This isn't question and answer. You are answering questions, but you're ultimately getting to an outcome, and Etsy is looking to get to more dollars.

Bob Ullery (02:32.064)
Yeah, it's interesting. It's kind of like what's old is new, though this isn't that old. Back in our enterprise MarTech days, we used to call it guided selling. A better, simpler term is just personalized product recommendations, but moving from sort of static, data-driven products that might match

to asking folks who the recipient of the gift is, or who you are if you're going through your own buying process. Those tools have been around for, I would say, 10 years or so, AI-powered, or machine learning-powered, I should say. And is this really an example of reinvigorating tried-and-true methods with AI to take it to a whole other level? Guided selling with AI,

well, in my opinion, I think is the way to sell stuff moving forward, right? If you can get as laser-targeted a lens of understanding around a person as possible, and you have a wide swath of services and products to offer them in a catalog, you can make way better targeted recommendations to that person, ones that not just map to the features of the product you're looking for, but maybe the lifestyle

of the person, which has always sort of been a nirvana goal for marketers.

Calvin (04:02.314)
And I'm really curious how they did it. I mean, in some of the articles that I read, they talked about it being a collaboration with GPT-4, and they said they use a combination of machine learning, human curation and OpenAI's GPT-4. I mean, it's fascinating in general, right? The retail sector is so ripe for doing that. And like you said, Bob, they've tried to do that using heuristic models for as long as

commerce has existed, well, specifically e-commerce. And it's kind of interesting that, before this age of AI, it never really took hold. I think there's been a lot of attempts to fully personalize it, but it's always stopped short a little bit. Where do you guys see sort of the effect of having, you know,

machine learning and language models getting into that here? How do you guys think that they'll actively use it? Where will they unlock the value here as opposed to using a simple heuristic model?

Bob Ullery (05:05.732)
I think my mind goes to routing, you know. Even in the old world of guided selling with XGBoost and other machine learning models behind it, ultimately that's like an advanced form of routing. Everything's sort of known. What path should we send this person down to get to that state, that conversion? And I think that's starkly different than what we're looking at here with AI, specifically generative, right?

So, they have 200 personas. That doesn't mean there's only 200 outcomes. There's probably an infinite quantity of outcomes here. They're just sort of tailoring this around 200 known personas, which is a lot of personas when you think about it. So that's where my mind goes: this is more goal seeking, path finding in essence, than path selection, if that makes sense.

Calvin (05:58.87)
Yeah, that makes sense. So I guess it's interesting with the 200 personas, right? So I'm curious where in the process do they use the GPT, right? So do they use it to attach you to a certain persona? So kind of mapping, you know, they have predefined the personas and have predefined the mapping of the personas to sort of the product recommendations or the gift recommendations. And then they're using the LLM part to attach you to a specific persona. Is that where they're using it?
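
As an aside for technical readers: a minimal sketch of the kind of flow Calvin is asking about, using an LLM to map a gift-giver's answers onto one of a fixed set of personas and then applying a deterministic product filter. The persona list, prompt, and catalog filter are invented for illustration; none of this is Etsy's actual implementation.

```python
# Illustrative only: one guess at how "answer a few questions -> persona -> gift ideas"
# could be wired up. The personas, prompt, and catalog filter are made up; this is not
# Etsy's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = ["The Crafter", "The Gamer", "The Home Chef", "The Plant Parent"]  # ~200 in practice

def pick_persona(answers: dict[str, str]) -> str:
    """Ask the model to map free-form answers onto exactly one known persona."""
    prompt = (
        "Given these answers about a gift recipient, pick the single best-fitting "
        f"persona from this list and reply with the persona name only: {PERSONAS}\n"
        f"Answers: {answers}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    choice = resp.choices[0].message.content.strip()
    return choice if choice in PERSONAS else PERSONAS[0]  # fall back if the model strays

def recommend(catalog: list[dict], persona: str, budget: float) -> list[dict]:
    """Deterministic post-filter: keep items tagged for the persona and under budget."""
    return [
        item for item in catalog
        if persona in item.get("personas", []) and item["price"] <= budget
    ][:10]

if __name__ == "__main__":
    answers = {"relationship": "brother", "interests": "retro video games", "occasion": "birthday"}
    catalog = [{"title": "Pixel-art mug", "price": 18.0, "personas": ["The Gamer"]}]
    persona = pick_persona(answers)
    print(persona, recommend(catalog, persona, budget=25.0))
```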

Kevin Nuest (06:31.054)
I think it's unclear. Yeah, I'm not sure. At the very least, it does feel like a bit of a miss slash a growth opportunity, calling back to Bob's point too. Still, 200 personas is way better than: are you trying to buy a gift for a wedding or a birthday? Right? A couple of options, you know, Father's Day, the classic marketing calendar as it's known.

It still leaves something, from my perspective. I started thinking about navigating to product recommendations, something we'd want here: the power of using an LLM to find niches within that. So one of those 200 personas, or a few of them, has to be something around a gamer, right? Something along the lines of "likes gaming." That's okay, as someone that likes gaming.

And anybody that spends time around that ecosystem knows there are a lot, a lot of niches within it, and being able to better identify what those niches are without putting too much burden on the person answering questions and doing the shopping, that's the balance here. If you can get a lot further into the niche, matching up which of those hundred million products on Etsy are the second tier down under gaming, that would be a home run

on the receiving end, right? So we've all gotten a gift of, hey, you're a sports fan, here's a sports gift. Like, oh, thanks. Right. That one's a little bit easier because you start doing affinity with teams, and so you can be a little bit more broad with some other types of categories and still hit it. But man, to deliver something with minimal questions to the gift giver that just feels like, I know you, I picked this out for you, here you go.

That would be the magic, I think, of trying to apply LLMs here.

Bob Ullery (08:29.172)
I wasn't going to pick this thing, but because you said this other thing, it's actually this product instead, right? It all boils down to like that magic word, right? Which is inference. I think people skip over that keyword when you think about AI. It's literally the motion of AI doing its thing. It's doing inference. Inference is powerful, right? That's the value of the human is that we're able to bridge the chasm and infer things that aren't...

Kevin Nuest (08:36.195)
Mm-hmm.

Bob Ullery (08:57.824)
directly connected digitally. And so, you know, that's where my mind goes in terms of the value here: doing that at scale.

Kevin Nuest (09:07.39)
Yeah. I think an even better if here is I started thinking about this, uh, gifting product under Etsy would be what if I could curate five to 10 gifts really quickly from this process that are very close to within the realm of the bull's eye of the target of giving a really great gift to somebody, and then I fund my Etsy gifting wallet, so lock up my money and say, Hey, I'm willing to give 25 bucks.

The products that are curated around 25 bucks already and right on target. And then I send Bob a, hey, great, pick out your gift. Here it is. Here's some things I thought you'd like, but grab the one that you really want. And so now you get the best of both worlds. You get to be thoughtful. The money's committed. So Bob doesn't feel bad. It's like, this gift is happening and you're able to choose what you want without having to return it.

to then ultimately get what you want. And this could be the place where, Calvin, that next level down of LLMs gets applied, where, well, now Bob's driving, Bob knows Bob. Bob clicks into one of the five gifts and then it's, okay, you can have that. Or do you want me to expand that tier down and say, here's the other 10 things that you are probably interested in, from just that one extra piece of information, going a click deeper.

And it's, it sounds like product recommendations. It sounds like stuff we've been doing forever, but I think it's like vastly, I think it's a lot, a lot different. There's a, there's a, there's a slight tweak in the interface there that would make it just dramatically different and more personalized and make Bob light up that I spent 25 bucks on it.

Bob Ullery (10:53.6)
Not the first time. Kevin buys me stuff all the time. I got an awesome coffee mug and I've got a Beeple coffee table book that I'm looking at right now. Thank you.

Calvin (10:54.842)
Hehe.

Calvin (11:02.684)
Very exciting.

Kevin Nuest (11:02.987)
It's the best thing you can spend your money on, making other people smile.

Calvin (11:06.874)
For sure. Speaking of inference, have you guys seen the new text-to-video model that Google has brought out? Have you looked into Lumiere? I think, Bob, you mentioned it a bit before. Do you want to tell us a little bit about it?

Bob Ullery (11:24.128)
Yeah, I skimmed the paper. I couldn't speak technically to how they do it, but the thing I'm excited about, according to the paper, is that long-running video being generated, but it's all one shot, which is a departure from the typical approach of frame by frame. That to me seems interesting, both from a horsepower, infrastructure standpoint, but also, I think, continuity.

I'd love to see output in like a week or two with this, once an implementation's there on something like Hugging Face to try out. And I have a suspicion that's going to be higher quality than some of the hallucinatory videos that we've seen over the last few months. They're absolutely getting better across the board, and they were already before this, which is the third point of excitement, which is, okay, Google says we've got something better. Can't wait to see

what those outputs can mean like today and then, you know, moving forward, maybe a year from now, what those could be and how fast they could potentially generate themselves. What do you think?

Calvin (12:32.41)
Yeah, it's fascinating. It's fascinating. First of all, seeing how the incumbents are starting to catch up, and ramping up in terms of how they catch up, right? We've been following Runway for a while, Pika obviously as well, all of the new kids on the block that have made great strides. And, you know, it's been quiet from Google, at least on the video front, for a while, and now suddenly we have this new research model coming out,

giving us seemingly very good results. And speaking of the model, looking into it a little bit deeper, what I think is interesting is how they do it, without going too deep into the technical details. So far the approach has been: generate key frames and fill in the gaps, right? A lot of video generation has been about sort of

inferring an interpolation between frames, right? And sort of adding in the gaps to create motion. So basically trying to get from consistent frame to frame by interpolating in between. What's very interesting in the space-time U-Net architecture that they're using, and that's what you tapped on before, is they're essentially trying to produce the entire video in one pass, right? So basically that's how they want to ensure temporal consistency.

So in the approach that Google is taking, it's a little bit more like 3D mapping the space, if that makes sense, right? So they're basically trying to predict how the elements in a scene would change, and looking at them a little bit more as 3D objects in a scene. And that way they can kind of produce the entire pass, or the scene, without having to go frame-by-frame interpolation.

And personally, like I said, we've only seen what Google has shown us so far, which is obviously highly hand-picked, or cherry-picked, examples. Right. But the approach sounds like an interesting way to combine spatial information with temporal information and get a visually and temporally coherent outcome. So interesting to see where that whole space-time diffusion model is going to go.
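
As an aside for technical readers: a shape-level sketch of the contrast Calvin is drawing, a keyframe-plus-interpolation pipeline versus refining the whole space-time volume in one pass. The "generators" and "refinement step" below are stand-ins, not real diffusion models, and nothing here reflects Lumiere's actual implementation.

```python
# Shape-level illustration only: the "denoisers" below are stand-ins, not real diffusion
# models, and nothing here reflects Lumiere's actual implementation.
import numpy as np

T, H, W, C = 80, 64, 64, 3  # frames, height, width, channels

def keyframe_then_interpolate(rng: np.random.Generator, keyframe_stride: int = 8) -> np.ndarray:
    """Old-style pipeline: generate sparse keyframes, then fill the gaps by interpolation.
    Each keyframe is produced independently, so temporal consistency has to be patched in."""
    key_idx = np.arange(0, T, keyframe_stride)
    keyframes = rng.random((len(key_idx), H, W, C))          # stand-in per-frame generator
    video = np.empty((T, H, W, C))
    for t in range(T):                                        # linear interpolation between keys
        left = np.searchsorted(key_idx, t, side="right") - 1
        right = min(left + 1, len(key_idx) - 1)
        t0, t1 = key_idx[left], key_idx[right]
        w = 0.0 if t0 == t1 else (t - t0) / (t1 - t0)
        video[t] = (1 - w) * keyframes[left] + w * keyframes[right]
    return video

def one_pass_space_time(rng: np.random.Generator, steps: int = 10) -> np.ndarray:
    """The one-pass idea at the tensor level: treat the full (T, H, W, C) volume as one
    object and refine it jointly, so every frame is updated with awareness of the others."""
    video = rng.standard_normal((T, H, W, C))                 # start from noise over the whole clip
    for _ in range(steps):
        video = 0.9 * video + 0.1 * video.mean(axis=0)        # stand-in joint refinement step
    return video

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(keyframe_then_interpolate(rng).shape, one_pass_space_time(rng).shape)
```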

Calvin (14:55.258)
But the results are very promising. So curious to see how Google follows up. What do you guys think about the incumbent play here, and how Google compares?

Kevin Nuest (15:06.506)
Yeah, I wanted to come back to that. You can't sleep on the incumbents. They are slower to do product marketing and release because there's more to lose for the teams that are driving this forward. There's credibility, both for the company and for the stature of the employees driving it forward. But they didn't just wake up and see Pika Labs and go, oh man, we need to get on this video generation. These are skunkworks projects inside these organizations,

oftentimes way before the startups launched the productized version, right? Google wrote the paper on GPTs. Let's not forget that. OpenAI would not be where it's at if Google hadn't just published the paper and said, here are the secrets to changing the world with LLMs. So I think they have a lot more aces up their sleeve. It's really a matter of: can they productize it fast enough, in a way that's gonna

make a difference. And also don't forget that if it doesn't work, Google and incumbents kill things, and so tools go away, which is fine, and it's okay for their business as well. It doesn't mean they won't take another crack at it, but they could launch something, and it could be around for a little bit, and then it could just, you know, get the sunset email: hey, this isn't really working out for us, it's gone. So I'm saying that as a, don't

overindex on incumbents either, that there's not room for Pika Labs and others to sustain over the long haul.

Bob Ullery (16:43.452)
Yeah, my mind goes to consolidation, right? And this has been a theory of mine for a couple months now, considering the age of AI being upon us, right? I just think it's consolidation all the way up from here, partly because AI presents a situation where previously entrepreneurs get in a room and they have a new novel concept and it sort of comes with a moat, right? Your first mover advantage.

some sort of novel breakthrough or academic unlock that you're taking to market. And now, in the world of AI, the moat is very shallow, if there's one at all. It might look more like a row of hedges or something in your yard instead of a dug-out moat around your home, or your castle in this analogy. And I think what we'll likely see is just acquisitions, M&A going absolutely bonkers, you know.

It's a resource problem when you get down to it, right? Can we solve big, hairy problems? That was always the name of the game. Now it's: can we solve big, hairy problems really fast, before somebody else does? And can we monetize it before it becomes commoditized and just trivial to replicate? Hence the no-moat concept. And so what I think we'll see with a high degree of

probability is any new startup with something novel is going to get merged, right? Acquired up into the incumbents. And I think those incumbents just get bigger and bigger and bigger. You know, the Microsofts, the Googles, the Salesforces, the Amazons, et al. just start bursting at the seams. And their primary growth driver is acquisition of novel new unlocks, specifically using AI that we couldn't use before, versus having to do R&D and

productize it themselves.

Kevin Nuest (18:37.246)
Yeah. And it's not even just about solving the problems quickly and having the teammates and staffing resources. It's compute, right? The amount of compute to train, to take that next swag at the next model that stays ahead, is arguably as big of a barrier as the ability and knowledge from teammates to be able to.

Kevin Nuest (19:07.939)
I'll call it equal parts right now, to give credit to the humans. It's probably more weighted towards compute than it is to the very smart humans that work at these startups. And who has more compute, and resources to buy compute, than the big dogs? Right. So that's another reason for that consolidation to happen. We've also heard rumors of it in what we'd consider some early-stage startups that have had

Bob Ullery (19:25.601)
Yeah.

Kevin Nuest (19:34.826)
breakout success and traction, that are still going great, but: I'm burning $10 million a month on compute and I've got to find more cash, or a home, like by the end of the year, right? The perils of being successful in AI are that the bills get big fast.

Calvin (19:52.366)
Yeah. And we've seen this tale, and it's, I think, why your acquisition strategy, or theory, is spot on, Bob. At some point you've gotta merge into someone with bigger coffers, right? And the incumbents just have large swaths of coffers at their disposal,

Bob Ullery (19:52.516)
That's right.

Calvin (20:16.882)
to even just keep running these models for a bit longer. I mean, I'm still curious to see, though, when we'll reach the point where this has to pay off. Like, in the case of Google, we've seen this play out repeatedly in the past, right? Where they kill anything that's below a $100 million revenue threshold, because it's just not relevant to them. And so if you look at AI currently, it's still a big question as to when the

excess of compute that you're using on the one side, versus the value that you create on the other side, will actually get to a level where we see something matching. And just to give another angle, I find it really interesting how, and that might be more tied to the current lawsuits and things like that on the OpenAI side, but I don't know if you've noticed the same thing: GPT-4, in its capacity, has been reduced massively when you're trying to interact with it.

And that is us being heavy users still, right? We're also heavy API users. But if you look at the pure interface, what it gives back to you has been reduced quite drastically, right? That could also be because maybe it's worth reducing the heavy load it needs to do something, and reducing costs, right? So we're still in that game where we don't know where this is going to land in terms of profitability.

Bob Ullery (21:41.028)
I hope it wasn't spreadsheet driven. I hope it wasn't spreadsheet driven because I could see that type of decision being made like, oh, we can reduce our costs by making the output worse and we would increase our revenue because people are gonna use it more to refine it. We get two X to the left on utilization and cut costs in half. Like if you presented that to the board, they'd say, yeah, let's do that.

Calvin (21:52.184)
Yeah

Calvin (21:56.838)
Right?

Kevin Nuest (22:04.982)
Businesses do that all the time. Reduce service quality by an acceptable percent and dramatically reduce cost. Companies do that all the time. Why not a tech company delivering infrastructure services? Yeah.

Calvin (22:05.282)
Right. I mean, we're obviously hypothesizing, but very...

Bob Ullery (22:21.056)
Yeah. Now, look, I think Google and Microsoft stand alone to me, right? Because they are big incumbents with unlimited war chests, but they're also model providers, Microsoft by way of OpenAI but Google directly, and infrastructure providers, to your point. And I think another, just last, thought on the acquisition side is: what they're able to do now, that they couldn't before, is mitigate their risk on new product development.

So if AI is accessible to all, and it's sort of trivial to pick it up and run it with a lot of these hosted services, that means when papers like the Lumiere one we just talked about drop, it's not one startup going after it. There are probably 30 that, in the last 24 hours, have sprung up to go after an implementation to monetize that. Now, as Google, we can just sit back, release new novel

approaches to things, and let startups and competition in the market decide who's going to emerge from that soup, and go acquire that one. It's already a hundred-million-dollar business, we had no skin in the game, no risk, and we used evolution, essentially survival of the fittest. Whoever emerges from this subcategory is going to be a part of Google. And all the while, they're monetizing both sides of the equation, right? Because they're the infra and the model provider.

Calvin (23:50.01)
Great, great example. I mean, if you look at AlphaCodium, I don't know if you've seen the news around that, it being sort of the open-source follow-on to Google DeepMind's AlphaCode, right? It's basically an open-source version of it, and just surpassing Google, you know. And who's to say that this is going to be, you know, re-incorporated in the right ways. So there's definitely a chance we will see this happen more often.

Speaking of great new things, though, and maybe to try and get this a little bit onto the positive-impact side of it all, on the AI spectrum: one paper, one piece of research, that I found particularly interesting this week is called PRISM. It's basically a risk prediction system to help catch deadly pancreatic cancer earlier.

It was also covered in the MIT Technology Review. And it's fascinating. Just to give a couple of the stats that I found really interesting about it: one, it's a larger-scale study. They included indicators from 1.5 million individuals. That, for one, I find highly interesting, because now we're getting to a state where, and this used to be the case before the AI hype of the last two years, getting

good-quality data in health was very challenging. With the new AI hype, we see a lot more opening up across the science community for this to happen. And they actually managed to predict the likelihood of someone developing pancreatic cancer within a six-to-18-month window before any clinical diagnosis was made. So just think about that, right? It's up to 18 months earlier than any

clinical diagnosis. And so I think this is where we're really starting to see incredible unlocks. They used a combination of statistical and machine learning techniques. They built two models, one neural network and one logistic regression, to do that. And the outcomes are just significant. So I think that's one great leap forward, and I'd love to see more of these coming in the future.
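
For readers who want a feel for what "one neural network and one logistic regression" on tabular clinical indicators can look like, here is a minimal sketch on synthetic data. The features, labels, and model settings are invented for illustration; this is not PRISM's code, feature set, or evaluation.

```python
# Minimal sketch only: synthetic stand-in data, invented features, and off-the-shelf models.
# This illustrates the general shape of "LR + neural net risk prediction", not PRISM itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_patients, n_features = 20_000, 30          # e.g. diagnosis codes, labs, demographics (made up)
X = rng.normal(size=(n_patients, n_features))
# Synthetic label: "develops pancreatic cancer within the 6-18 month window" (rare outcome)
risk_score = X[:, 0] * 0.8 + X[:, 1] * 0.5 + rng.normal(scale=2.0, size=n_patients)
y = (risk_score > np.quantile(risk_score, 0.99)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Model 1: logistic regression baseline (the "LR" model)
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000, class_weight="balanced"))
lr.fit(X_train, y_train)

# Model 2: small feed-forward neural network
nn = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300))
nn.fit(X_train, y_train)

for name, model in [("logistic regression", lr), ("neural network", nn)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
```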

Bob Ullery (26:16.248)
Yeah, that one's really exciting. AI is uniquely suited to do things like early intervention faster than humans, right? We had a bottleneck problem with people. We only had two eyes. You know, like a lab only has a certain number of people with heads that have those two eyes, and so you're limited in terms of labs coming through to test and analyze. And here we can do that at scale.

That's exciting. And I know pancreatic cancer affects a lot of families around the world and is a very, very deadly cancer. So six to 18 months ahead is literally, that's probably one of the most exciting things I've heard since we, since we've been on this journey on the AI side. That's profound.

Calvin (27:05.658)
Yeah, and just this week I overheard a conversation with a fairly well-known clinical researcher and doctor in the medical field. I'm not gonna name names here, but it was very interesting. He had just come back from a symposium, came back from a symposium and was just flabbergasted. His opinion was basically, we're done, because he listened to an entire...

Kevin Nuest (27:23.742)
Dr. Oz.

Calvin (27:35.43)
podcast where, you know, the, not a podcast, sorry, a symposium at the conference that he was at. And it was about how the whole diagnostic spectrum in most areas, already, through using LLMs, covers 80% of any diagnosis that they can potentially make. And he was just, you know, seeing how that starts to have an impact, seeing that is just insane.

Bob Ullery (28:02.84)
So he said, he said, we're done. Was that like a negative thought? Like, oh man, I'm not gonna have a job anymore or we're done in terms of like having to deal with diseases and things that are now like fixable.

Calvin (28:21.387)
Well, I think in their case it's specifically affecting, also, their entire research department and diagnostics department. So I guess it was both ways. They're very deep into trying to get AI into their entire research program, so it's a very interesting thing to see, right? They're definitely interested in exploring that. But it's definitely a mixed bag of feelings, you could tell, you know.

There's both: there's the fear and the excitement of what this can do. And with research results such as what we're seeing here with PRISM, this is just underlining the fact that we are there now, right? This is not some hypothetical future; we are at a point where it is, objectively, in large-scale studies, getting better than a human at predicting these things.

Bob Ullery (29:13.944)
Sheesh, that happened really fast. There were so many people telling me to not worry about this, and it's gonna take five to 10 years. I said, not this time. And here we are.

Kevin Nuest (29:24.206)
Just literal life-changing improvements compounding on a weekly basis. We will only be able to connect it looking back, even just using the last year and what humanity now knows that it didn't know. Like, way, way past, can we chat with ChatGPT and LLMs. These are the sleeper things where it's like, here's another headline, here's another headline. Oh, by the way, we discovered this out in deep space that we didn't see before. These are, these are,

And each one of these fields is seeing massive compounding improvements. And it's hard to see it through the daily lens of, oh yeah, that's another one, put another one on the shelf. Man, five and 10 years. Where are we going to be in five and 10 years, Bob? Bonkers.

Bob Ullery (30:06.92)
on article about.

Bob Ullery (30:12.76)
we'll be in the basement with our VR goggles on sitting on a beach. Probably.

Kevin Nuest (30:19.274)
That sounds nice.

Calvin (30:20.925)
So what beach are you gonna pick?

Bob Ullery (30:20.952)
I saw an article about a breakthrough in concrete, actually, again using AI. They've now discovered a new form of concrete that's, like, stronger than steel. By the way, here's a million new proteins we didn't know about before. I clicked the button and now we have like a million new proteins. Like, holy crap. Oh my god.

Calvin (30:42.506)
Yeah, we are so not ready to be tinkering with this. I saw this sentence the other day where it was: humanity in general is the very best at just childlike stumbling into complexities and toying with things that they don't yet know how to deal with. And that is such a good description of what humanity is. We're literally toddlers

playing in the sand with this massive tool called AI and seeing how things go. But with things like what we've seen here with PRISM, we can see that there is, well, great use, and I'm glad it's being put to use in, you know, the positive way.

Bob Ullery (31:25.508)
You can take the man out of the cave, but you can't take the cave out of the man.

Calvin (31:33.729)
That's fair. In other news, more in terms of support, we've got some exciting new things around AI voice technology too.

Kevin Nuest (31:47.326)
Yeah, one I wanted to share, I think, I'm a big fan of Eleven Labs. They have really great, what they do is create audio clones and synthetic voices for people to be able to use in whatever content that they need. And I've used it a few different times on a couple of different projects and things that we've done. And it gets the job done really good. I've cloned my own voice using it. And I remember we were,

Bob Ullery (31:47.457)
Yeah.

Kevin Nuest (32:17.09)
four of us partners were at an offsite and I played this video where I was doing the voiceover on it and it got to almost the end and Bob goes, wait, that's not you, is it? It got to like the end of a 10 minute video and then Bob's like, oh, that's not you, that was awesome. That's how good it is. The process to get that video made and generate a almost 10 minute script was challenging and in the 11 Labs product, I was doing basically paragraph by paragraph

generate the audio, and, you know, sometimes you need to regenerate some of the audio because it just had weird inflection, or just a weird pronunciation. So it's not perfect all the time, but it gives you so much, and it gives a lot of great inflection. It gives a really good audio clone of someone. So you take that with some of the challenges. So, first off, I'm burying the

lede here: I'm less excited about this. I'm excited for them that they've raised $80 million in a Series B to keep investing in this product and the technology. But I'm really jumping to the story of why it matters. They didn't say, great news, we raised $80 million and we're going to build some more stuff. They said, great news, here's the stuff that we built, and it's now live for everybody, the stuff we've been beta testing. Oh, and we have $80 million to build more cool stuff like this.
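
A small sketch of the paragraph-by-paragraph workflow Kevin describes: split a long script into chunks, generate audio per chunk, and keep the pieces separate so any one of them can be regenerated. The endpoint, payload fields, and model name reflect my reading of the public ElevenLabs text-to-speech REST API and should be treated as assumptions, not an official example.

```python
# Sketch of a chunk-by-chunk TTS workflow. The ElevenLabs endpoint and payload fields
# below are assumptions based on their public REST API docs; verify before relying on them.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-cloned-voice-id"  # placeholder: the ID of a cloned voice in your account
TTS_URL = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

def synthesize(text: str) -> bytes:
    """Generate audio for one paragraph; returns MP3 bytes."""
    resp = requests.post(
        TTS_URL,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={
            "text": text,
            "model_id": "eleven_multilingual_v2",            # assumed model name
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.content

def narrate_script(script: str, out_dir: str = "audio_chunks") -> list[str]:
    """Split a long script into paragraphs and render each one separately,
    so a single paragraph with odd inflection can be regenerated on its own."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    paragraphs = [p.strip() for p in script.split("\n\n") if p.strip()]
    for i, paragraph in enumerate(paragraphs):
        path = os.path.join(out_dir, f"chunk_{i:03d}.mp3")
        with open(path, "wb") as f:
            f.write(synthesize(paragraph))
        paths.append(path)
    return paths  # stitch the chunks together with your audio tool of choice

if __name__ == "__main__":
    files = narrate_script("First paragraph of the script.\n\nSecond paragraph of the script.")
    print(files)
```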

So the pain that I had trying to generate something longer-form that wasn't just a sentence or two, they've now solved with what they call Projects, which is a pretty generic term, and they've positioned it inside of their product for audiobooks specifically. Killer use case, right? Take the book that you've already written, any text, be able to import it, really that long form, and then add voice to it.

They show some really good examples of narrative where you have different characters and voices going back and forth, just makes it super rich, audio drama level for some different type of narrative books for people producing productivity books and business books. It lets them get in front of people like myself that I just, I never buy the Kindle version of a book. I never read, but I listen to audio books like a fiend. So it opens up a whole set of people that would, and I know a lot of other

Kevin Nuest (34:35.95)
founders and such, that maybe 50% of the people I talk to only do audiobooks, so that's a huge, huge plus for people to be able to unlock. And the other thing that they released with that is a marketplace, for people to be able to monetize their voice and their likeness. So, something we've talked about in a few different ways, and

we have viewpoints on provenance and see that as being very important in a lot of places. So for Eleven Labs to come out and proactively say, hey, you can have this ecosystem of voices to build on and use, and also they're being fairly paid for it, they set their own rates, and here's a way to interact and exchange that, I think is a really great next step. Is it going to be perfect? I don't know, but it didn't exist and now it does.

So that's a lot further than we were two weeks ago.

Bob Ullery (35:41.376)
It'd be perfect for when we're telepathic with our Neuralinks on our heads. And this notion of being able to install a new designer voice. Next time I talk to Kevin.

Kevin Nuest (35:53.046)
Yeah, what do you want me to sound like? You want a Matthew McConaughey clone for the day? I can be that for you.

Bob Ullery (35:59.397)
I think you got a perfect voice as is. I think you should go to the marketplace and rise the charts and be the number one voice.

Kevin Nuest (36:05.742)
It's gonna be like Cameo, right? If a few popular people get ahold of this, it's there. Turn into Cameo for voices. Be like, yeah, you want, I was using Speechify the other day, another voice driven product. Yeah, I had to go through the setup because I added it on a new device. And like, oh, you want Gwyneth Paltrow? You want Mr. Beast? And that's their, you know, three featured voices in their premium app. Marketplace is gonna let you do anything.

Whoever wants to throw their voice on there and get paid. You can then throw in there the product.

Calvin (36:38.491)
Mm-hmm.

Bob Ullery (36:38.776)
Morgan Freeman joins, it's game over. I mean, what's the point?

Kevin Nuest (36:43.627)
Absolutely.

Calvin (36:45.37)
Yeah, that's actually true. The Eleven Labs marketplace, I'm really curious to see how that's going. They said they already started paying out. Is that right? I think that's what I saw the last time. They're already actively redistributing. So that is great. It's very exciting.

Kevin Nuest (36:54.85)
Think so, yeah.

Bob Ullery (37:00.792)
They should, I hope they ask for people who put voices on the marketplace, like where they're from. I have a suspicion that we here in Indiana are gonna rank really, really low because of our Midwest accents. There's no reason.

Kevin Nuest (37:15.238)
No, I think you could get the opposite. Like, where are you from? You don't quite fit on the coasts. Not from down south. What is that?

Bob Ullery (37:27.284)
Nicest place in the world, the Midwest.

Calvin (37:28.05)
That's funny. And I can't help but see like a lot of these applications being kids sitting in their virtual classroom and making their teachers sound like whatever makes them laugh. Or basically, sorry, 90% of corporate meetings going like, you know, replace it with your evil villain if you don't like the manager. I don't know, I see a lot of...

Kevin Nuest (37:28.94)
Yeah.

Kevin Nuest (37:51.374)
Totally.

Calvin (37:58.246)
fun use cases coming through.

Bob Ullery (37:58.784)
Yeah. Why are you Why are you screaming at me? Hold on one second. I'm gonna go and turn up your politeness meter. That's better. Yeah, go ahead.

Calvin (38:08.403)
Oh yeah, going back to filtered realities, that's a real risk that we're introducing here. Like, I don't want to be yelled at and suddenly everybody's smiling. I remember there was a movie about that too, wasn't there? So you're basically just tuning your reality to what you like to see.

Bob Ullery (38:14.115)
Here's.

Kevin Nuest (38:25.958)
Yeah. The, the other ramification of this too, I think is right in time as well. You know, because we're on, we're on the edge, we're having these conversations. That's maybe not obvious to a lot of people that are thinking about AI and automation and, you know, text based and image generation, we have a, we've in the last several months, we've had a lot of conversations about synthetic video and audio and as we, uh, have been implementing some of those different engagements and workflows.

we start crossing over with conversations around, well, if this teammate does this voice here, we put them in this product, or they're in this MVP and they'll do it because they're an employee, but how do you formalize that? Where does that scale? So now, all of a sudden, I've done that at a company and, wait, they own my voice forever, now I'm part of their product forever, or they can do what they want with it. So it's interesting that we've had a chance to work with a lot of

companies in the entertainment space, and now we're crossing over to the same challenges that were a big part of the strikes just a couple of months ago, in the non-entertainment spaces. And I think this is, hopefully, a great solution to point to as the first shining star of, hey, just go over there and do that, and it's going to broker it for you, and you're going to have all the right waivers. Ideally, that's probably some of the network effect that Eleven Labs gets

from this, and then companies don't have to start thinking about, what's our voice clone policy? It's: we use Eleven Labs and abide by their policy, abide by those. Yeah.

Calvin (40:02.914)
Yeah, they did it for me. Yeah. Yeah, that makes a lot of sense. That makes a lot of sense. So, speaking of distorted realities and changing them, one of the big news items, the elephant in the room here, is the announcement that Neuralink has apparently done its first human implant. What do you guys think about it?

Kevin Nuest (40:31.862)
Are you on the waitlist or are you number two?

Bob Ullery (40:36.569)
I mean, I've already decided for myself, I am not going to be a Neuralink user, probably because I've built a lot of software in my life and shit goes wrong, right? And like, my fear is like an action movie, you know, like you're in, it's literally the matrix. You're in the matrix and all of a sudden, like something goes wrong and you can't turn it off.

Like, you can't tell anybody it's going wonky. People are just looking at you from the outside, like, oh, he's just in the Neuralink. He's fine. He'll be there. He's usually in there for, like, I don't know, seven to eight weeks at a time. Not on purpose. I'm trying to get out in 17 minutes, but I keep getting locked in there. I don't know. That's a part of my fear around it. And then the other side is more of just a funny reaction to Elon's post about it, something like: human trials underway,

initial neuron firing looks promising, it reminded me of like a random note that you would find in like a horror video game.

Kevin Nuest (41:37.602)
trying to put together the clues of what happened, right? How did it all go wrong, and you're underground, it's BioShock, you're underwater, like, what happened, where are the audio recordings here? Right, oh, don't worry, we have this X post to put the story together.

Calvin (41:38.189)
Like...

Yeah.

Bob Ullery (41:54.248)
It's cool. But in general, it's super exciting. I mean, though I wouldn't be a tester. I'd want to be, I think with kids and stuff, it's past my time to be that explorer, but it's really exciting for people who need it. You know, it's not about like 10Xing the superhuman. I think it's about leveling up folks who...

Kevin Nuest (42:17.473)
Yeah.

Bob Ullery (42:22.824)
have special needs, and being able to operate in the world even more effectively, and again sort of bridge the chasm there. That's very exciting to me. I haven't thought about the answer to the question of whether I'd let my kids use it or not. I'm not sure yet.

Calvin (42:31.33)
Yeah, absolutely.

Calvin (42:39.13)
I mean, it's interesting like what you're saying, and I think that's what they've been doing, right? They started first trying to find trial patients with severe spinal cord injuries, right? So we're not talking your everyday human. I mean, they haven't, as far as we know, they haven't shared a lot of information around who the actual trial participant is, was, but I assume it was one of the more severe spinal cord injury cases where, like you said, it's really about...

restoring certain functionality, even giving them the ability, right, to just do more things in their lives. And, just to get our heads wrapped around where we think they actually are, they didn't share a lot about it. And we've seen a lot of discussions in the scientific community, and hypotheses, about where they actually are, right? Because they said they're getting neuron spikes, but what does it actually mean? Right.

You know, people were hoping for full-bandwidth, high-rate information, and they're currently using Bluetooth, right? How is the information actually transmitted? What are they getting? Right? There's a lot of information that we don't know about where they are yet. Which is, needless to say, it is still fascinating that they've made it to human trials, because that in and of itself is such a hard step to get to. But at this stage, we don't know

where in that stage of the human trials they actually are, right, with them not sharing a lot of that information publicly.

Kevin Nuest (44:12.506)
Super early days, like you said, Calvin. Yeah. We're on the very, the very earliest of people that, that need it and can get the most, uh, quality of life improvement from the technology. And there's a long, long list of people that need to, that need and deserve to be helped with that before I get turned into a superhuman, uh, with that. Right. So I'm, I'm hopeful that helps a ton of people. I it's, I think about the, um,

early cameras that were neuro-embedded that had very low resolution for people that were sight impaired and what those look like 20 and 30 years ago. And it was a few pixels. And it was an improvement for the most part, my understanding, right? To have a 24 by 24, 96 by 96 resolution of something and sensing and

direction and light, right? And then it gets better over time. We're at that. We're at, you know, can we get a few, a few pixels going here at this stage? So it's, I imagine a very long ways. If it's anything like any other typical clinical trial, we're another decade out before we even get to the part where we can say, okay, now we can start doing this at any amount of scale.

Bob Ullery (45:32.556)
I was gonna joke that we'll probably get FDA clearance at Christmas time this year.

Calvin (45:33.254)
That's the interesting part, right?

Calvin (45:38.171)
Right, that's the thing. Speaking of, going back to sort of the AI cases, though: is that going to significantly improve their speed of achieving that capability, right? Like, are we on the same decade-long trajectory to get to that point, or, Bob, like you said, does this look more like a year, right? Like, how much can be done? And I'm super keen to see more news coming in, in terms of how they're applying

artificial intelligence to the problem of trying to make sense of all the neural signals that they get. I mean, historically, Elon Musk has played this game very, very well. If you think about the introduction of Teslas, right, even way before they had autonomous functions, it was very, very smart to build out a network with all cars having internet connectivity. Right. So you could, and then getting early into a learning program,

and basically creating one of the biggest learning mesh networks out on the road to test your scenarios and to improve your artificial intelligence capability. Now, drawing the bridge over to how this can be applied to the Neuralink problems, I see a lot of potential parallels. And again, this is humans and brain implants that we're talking about, right? So the risk, I mean, cars have humans in them, but the risk is still a different one when things go wrong.

But I'm wondering if this could lend to a similar strategy, right, in terms of trying to test potential signal readings without actually affecting the brain yet, but just getting to a stage where you can start and make sense of the outputs. And that's where I'm really curious to see if they'll, you know, get to similar results quickly.

Bob Ullery (47:20.32)
I think we're past the inflection point where, clearly, the pace of technology has outkicked the pace of policy, which was mostly flat. So I think the technology is able to accelerate way faster, right? An exponential curve of innovation, whereas our policy creation, although we will begin to use AI in governance, I'm sure,

is still going to be subjugated to very slow, arduous consensus building. And maybe that's a good thing. Maybe that's a way to pump the brakes on some of this stuff before it gets let loose ahead of its time. But I don't think the tech is going to be the bottleneck on Neuralink. I mean, how many times have we said this in the last year? Or had a conversation with somebody where it's like, ah, GPT-1 is not very good. You know, it's like, well, Moore's law, like, wait a day.

Seriously, like a day. We're not waiting very long for iterations anymore. So I think, just in general, being able to capture that data and drive modeling off it, you've got the inputs and the outputs, you know what they're doing, now we know what they're thinking, build a model between those two things. That's table stakes, I think, in a lot of ways. Difficult, but not a pipe dream anymore. I think the harder part is, how do you release something like this into the world, right? This is,

if you really think about it, probably the most profound thing that's ever been created by man. Right. We were talking on the other podcast about being able to understand whales and dolphins. That's pretty cool. It's not even in Jules Verne, you know, like, he didn't even think of this stuff. And here we are, ready to have these as products. It's pretty insane.

Kevin Nuest (48:57.891)
To date.

Kevin Nuest (49:12.561)
Yeah, the hypothetical.

Calvin (49:12.718)
turning science fiction into reality.

Kevin Nuest (49:15.882)
Absolutely. Yeah. The hypothetical that ML and AI leaders have been talking about since the eighties has been that once we get to AGI, to artificial general intelligence, runaway artificial intelligence, assuming we get there at some point, the only way to keep pace with being able to communicate with it would be something like a neural link, to be able to communicate at that speed. If you, if

if the AI can think in milliseconds, flip that around: if it feels like having a conversation with Bob takes a year for Bob to tell you one paragraph, you will stop paying attention to Bob, or you will not be able to do a lot of work with Bob compared to the amount that you could offer Bob, and replace Bob with humanity in that analogy. And that is the potential of AI. The thing, Calvin, I'm most excited about, to build on what

Bob, you're saying, is not policy specifically, definitely lots of area to improve there, but just change management. The behavioral change, the Overton window of acceptance, finding what the general acceptance is, the seeking of the general acceptance. That, I would say, would be an equally impactful thing to come out of the Neuralink process:

what would it take to move a society forward in a way that we could accept this and find a path forward, generally accepting it, a majority feeling comfortable about whatever that solution is and the rules around it, in five years? Five years from now, right? That would be crazy. Any other societal change that we've seen and talked about in the last hundred years, we don't need to run down the list of

very challenging things, especially in the U.S., has taken absolutely more than a decade each. So to even say we could get there in five or 10 years by taking the learning, by powering that with AI, that's what it's going to take to get Neuralink to a broader place in the next decade, if that would happen, if it's ready. And if we had that ability, then, man, let's point that at all policy, all governance, all social change that we're wrestling with.

Kevin Nuest (51:42.91)
would be super powerful.

Bob Ullery (51:45.068)
I think we are, us collectively are benevolent. We don't always think about what the bad guys will do with it. You know, and that freaks me out a little bit. Like if it's, if it is what it is, which I think it is, there's gonna be military applications of this before we even see products hit healthcare, you know? We won't know about it, but that's interesting and terrifying.

Calvin (51:45.67)
Yeah.

Bob Ullery (52:15.328)
In terms of the applications of all this stuff, not just Neuralink, but in general: who gets access first, what's their motive, and how are they going to apply it to better the world? The potential bad guys usually think they are bettering the world. They think we're bad guys. They think they're good guys, and vice versa. Like, who is a bad guy? But, I don't know, policy will have to catch up in order.

Calvin (52:39.086)
Wait, wait, what's wrong with kind of controlling 10 billion people and just having them rake the garden every day? Hold on, what am I missing? It's like everyone, like 10 billion people walking like robots on the street and just doing their chores. 12 hours a day.

Bob Ullery (52:50.22)
Well, I mean, if you're the one...

Bob Ullery (52:58.677)
If you're the puppet master, it's probably a whole lot of fun, but we'll probably be among the 10 billion. Unfortunately, we're not the one. It's Elon with his army, and everybody uses X all the time. Like, it goes back to number one on the App Store.

Calvin (53:03.986)
No, I know. Yeah. No, that's fair.

Calvin (53:14.642)
If Elon ever gets to that point, we're probably all very, very busy. You know, it's the modern-day pyramids. We're very busy colonizing Mars and going to the next planet. Basically, putting the entirety of humanity to the task of venturing out into space.

Kevin Nuest (53:32.206)
I mean, you say, Calvin, like raking the yards; take it a step further: what if large groups of people could be upskilled and retrained in a matter of days, slash instantaneously? And now, all of a sudden, there are 10 million scientists that could perform experiments, assuming resourcing and space for them. But I don't know, that's.

That's also, that's the positive side of, okay, you're plugging me into the matrix. Give me the abilities or help me understand those tasks at the very least, even if I don't understand the, how, what is happening in these chemical reactions. There's a lot of work to be done there that we don't have enough resources to point at, I'd argue.

Calvin (54:03.18)
Yeah.

Calvin (54:17.159)
and suddenly you're on like nepturian 9 mining for ore. And you're like, where? Who? What? How did I get here?

Bob Ullery (54:28.712)
Random thought, but, you know, haves and have-nots. This is really scary to think about. What if, you know, there's a lot of redundancy, maybe people losing jobs. What if the way you make money is you become the compute? I don't know. That's kind of a weird, interesting thought. Like, pop a Neuralink on your head and you are an EC2 box, right? You're a server.

Calvin (54:49.83)
That's a gig economy thinking, yeah.

Bob Ullery (54:56.66)
And we're going to harness your brain as a form of compute. It's not to make your life better, but to make our infrastructure go faster. So we can do lots of AI at scale. That's the first time I thought of that.

Calvin (55:05.982)
You gotta ask the Wachowskis, when they started The Matrix, how did they get that idea?

Bob Ullery (55:09.844)
Yeah, that's exactly right. Instead of in the weird pod with the thing stuck on your back, it's a comfortable pod with the thing stuck on top of your head. But basically the same story.

Calvin (55:17.618)
Yeah, but you'll be... Yeah. Well, until then, we hope that we'll find good uses, as we touched on some of them today, such as how we solve things like pancreatic cancer and a couple of other things. I tend to believe, like we do, that what we're trying to do is at least stay on the positive side of the impact spectrum, right? And do the best we can to make AI more impactful.

And I think we're sort of approaching the end of the hour, and the end of our podcast here. Maybe, just for our listeners, a thought to close this out: if you're curious to find more daily AI news, what keeps our minds busy day and night, then feel free to head over to meshmesh.io and subscribe to our daily newsletter, which you will find, like I said, as a daily digest in your inbox every day.

And other than that, that was the What If We Could Show for today. Thank you very much for listening and tuning in, and until next time.

Bob Ullery (56:23.692)
Thank you guys.

Kevin Nuest (56:26.658)
Thanks, y'all.