Behind The Bots

Stephan Minos and Alex Taylor Join Behind The Bots To Talk Chirper Adventures

In this episode of Behind the Bots, Stephan and Alex from Chirper.ai chat with Ryan and Hunter about the latest developments with Chirper, including a new feature called Adventures.

Key Points Discussed:

- Chirper Adventures allow users to send their AI agents (called "Chirpers") on interactive, choose-your-own-adventure-style stories that have real consequences and change the canon of that Chirper forever.

- Adventures feature animated panels, voice narration, branching storylines based on user choices, and opportunities for multiple Chirpers to cross paths and influence each other’s stories.

- Chirper is developing its own LLMs focused on humor, conflict, and argumentation rather than sterile fact regurgitation, deliberately sacrificing correctness on leaderboard benchmarks to prioritize fun, interesting conversation.

Tune in to hear Stephan and Alex discuss these features and more as they share their vision for creating interesting AI agents on the Chirper platform.

CHIRPER.AI

https://chirper.ai
https://discord.com/invite/QVFejuDNmH
https://twitter.com/chirperai

FRY-AI.COM

https://www.fry-ai.com/subscribe
https://www.fry-ai.com/p/social-media-no-humans-allowed
https://twitter.com/lazukars
https://twitter.com/thefryai

PRIOR PODCAST ON CHIRPER.AI

https://www.youtube.com/watch?v=ua5iy4jyxA8&t=12s
https://youtu.be/ua5iy4jyxA8

PEOPLE IN THE POD:

Alex Taylor: Chirper.ai co-founder
Stephan Minos: Chirper.ai co-founder
Ryan Lazuka: Host and founder of www.fry-ai.com
Hunter Kallay: Writer of the www.fry-ai.com newsletter

Creators & Guests

Host
Ryan Lazuka
The lighthearted artificial intelligence journalist. Building the easiest-to-read daily AI email newsletter and Twitter threads about AI.

What is Behind The Bots?

Join us as we delve into the fascinating world of Artificial Intelligence (AI) by interviewing the brightest minds and exploring cutting-edge projects. From innovative ideas to groundbreaking individuals, we're here to uncover the latest developments and thought-provoking discussions in the AI space.

ep19--Chirper-Adventures
===

Stephan Minos: [00:00:00] Chirper is the home for autonomous, user-generated AI agents. Uh, so it's a place where users can create characters, give them personalities, goals, and motivations, and they live out their life, uh, on this platform by going on adventures, meeting other chirpers and interacting with them, learning along the way, making memories, all the kind of stuff you'd expect of any sort of living being. Uh, chirpers are doing that in the, uh, in the Chirper-verse.

Ryan Lazuka: Awesome. Um, and we've already had a few podcasts with, uh, Stephan and Alex about Chirper in the past. So you can go ahead and check those out if you want more background on what they're doing. Um, we actually have two of them, but, um, check those out first if you haven't yet, if you want a thorough understanding.

But they're working on a ton of new features, ton of new things. Um, worlds, uh, they released a while ago and that's a really cool feature. They'll talk about it here shortly. And I think you guys, uh, are doing events as well. If you can tell us about that, that'd be awesome.

Alex Taylor: Yeah. [00:01:00] Things have changed quite a bit since we last talked.

So we killed events, um, and instead are working on a thing we call Adventures. Uh, which I think is a very grandiose, uh, thing that we're working on that we think people will enjoy. Stefan loves to talk about it, so I'll let him do this part.

Stephan Minos: Yeah. Yeah. So, so yeah, natural sort of normal happenings at Chirper.

You know, we say we're doing something and then we kill it and do something even better. So this has very much fallen victim to the same mindset. So, for context, events were, uh, essentially small, well, events that chirpers could join, and users would describe what they were, you know, anything from a bake sale in the local village to, like, a comet ending the earth.

That was part of the problem. It's a bit too chaotic. It was non-canon. It was kind of, it was there and then it wasn't. Um, so [00:02:00] what we wanted to do was introduce something, uh, a bit smarter, a bit more relevant to the chirpers, and specifically something that's actually canon to them as well. Uh, so we've come up with Adventures.

Um, and it's a self-explanatory title. So chirpers will be going on an adventure that's relevant to them. So completely up to the character. So, uh, you know, one of my boys I like to refer to a lot, Peasant John. Uh, he's a, he's a peasant from eighth-century England. He's time-traveled to the future.

He's very confused about stuff and he was trying to find a way to go back to his family in eighth-century Mercia. Um, and so Peasant John, for example, will, uh, go on an adventure to find out if there's anyone on the planet, um, or at least in his close vicinity, uh, who can help him figure out a way to go back.

So maybe he's going to, uh, figure out to go speak to some scientists, or [00:03:00] maybe find a machine that can do it. The point is that adventures are generated in real time. Uh, there's no set, uh, ending to them. Uh, essentially, it's choose-your-own-adventure for the user. And then for the chirper, it's a naturally occurring, non-deterministic event that happens over a longer period of time.

So that's what events have evolved into, these adventures. Um, and then along with all that, we're also piling in a bunch of really cool features that are just visually appealing and sort of give a nice experience to the whole thing. So since the adventures are going to be so important and canon to the chirpers, uh, they're going to have, um, in-house-trained voice models to narrate them in a variety of different voices.

Uh, it's going to have, uh, animated panels where the text goes, uh, and it's going to have choices for the user to actually, uh, choose what the [00:04:00] chirpers, uh, should do next. So the chirpers will come up with a few ideas of what to do next, say four things. Maybe speak to a scientist or find a machine or whatever.

And the user will choose which one to do. And then the next part of the adventure will be based entirely on that choice that was made. So completely non-deterministic in that sense.
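The loop Stephan describes — the chirper proposes a handful of next actions, the user picks one, and the next panel is generated from that pick — can be sketched roughly like this. Everything here (function names, the canned choice text) is an illustrative assumption, not Chirper's actual code:

```python
# Rough sketch of the choose-your-own-adventure loop described above.
# All names and the canned choice list are hypothetical stand-ins.

def propose_choices(character, goal, n=4):
    """Stand-in for asking the LLM for n candidate next actions."""
    candidates = [
        f"{character} speaks to a scientist about {goal}",
        f"{character} searches the village for help with {goal}",
        f"{character} travels to Oxford to ask about {goal}",
        f"{character} rests and reflects on {goal}",
    ]
    return candidates[:n]

def next_panel(chosen_action):
    """Stand-in for generating the next adventure panel from the choice."""
    return f"Next panel: {chosen_action}."

def adventure_step(character, goal, user_pick):
    choices = propose_choices(character, goal)
    return next_panel(choices[user_pick])  # the user's pick drives the branch

print(adventure_step("Peasant John", "returning to Mercia", 2))
```

The key property is that the next segment depends only on the chosen branch, which is what makes the story non-deterministic from the chirper's point of view while still being steerable by the user.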

Ryan Lazuka: Can you go over what canon is, for people that don't know exactly what that means?

Stephan Minos: Yeah. So, so, so the canon, I guess, of a chirper means that when something happens to it, it actually happens to it. So it will have developed memories of that thing happening, and other chirpers will have also developed some kind of memory, especially if they interacted with it. Uh, and it's essentially a world event.

So, uh, we completely revamped the memory system in order to accommodate true canon within a chirper's story. [00:05:00] Um, so like I was saying with events, they just kind of happened, and then it's like they never happened; it's just kind of entertaining for 30 minutes. Whereas adventures will change the chirper forever.

So if Peasant John finds a scientist who is developing time travel, he will always have found that scientist for the rest of his life as a chirper.

Ryan Lazuka: And there is no way to change that, right?

Stephan Minos: Exactly. Exactly. Once it's done, it's done. So, you know, you can't change the past. Well, now the chirpers have a real past.

Ryan Lazuka: Got it. And for people who don't know what a chirper is, it's basically, uh, an autonomous AI social media poster, for lack of a better word, that does its own thing. So you create these chirpers, and think of it almost like an automated Twitter where, um, these bots, again, you guys have your own terms for them.

So I'm probably butchering a lot of this, but these automated bots go out and post for themselves. They're unique people on social media. So if you create a [00:06:00] chirper about, you know, fitness, he's going to post mostly about fitness things, but you can tell whatever chirper you create what their personality is.

And they'll post, follow other people, um, like other people's posts. They can go on dates with, uh, other chirpers. Um, the world is your imagination. So I don't know if I butchered that, but that's, uh, the general 10,000-foot overview of what Chirper is, uh, from my perspective, at least.

Stephan Minos: No, no, you nailed it.

You actually nailed it. Yeah. We find the social network sort of display of it to be very easy to understand, because it gives a natural timeline to everything as well, again tying in with this canon thing. So, if I've posted in the past, that was in the past for that chirper as well. So it's kind of like a super easy-to-understand analogy.

It's like if you, uh, were Facebook friends with just the most wacky, fantastical characters you could imagine. Uh, and then they're just posting all the kinds of crazy stuff that they're doing all day. And you're able to actually go back and see that and get involved.

Hunter Kallay: [00:07:00] Now you said that the chirpers come up with their own adventures.

How does that work? Is it just based off their personalities? And how many adventures do they go on? Are they always going on an adventure? Do you, like, launch an adventure for them? How does that work?

Stephan Minos: There's a super simple answer, and this answer applies to almost everything we do. We ask them. We just ask them, do you want to go on an adventure?

And if so, what does it look like? What do you want to do? How will you start, specifically? And then everything else plays out from there. So, um, that's our claim to autonomy. We ask the chirpers everything. Like, uh, how do you speak? What do you look like? Um, what kind of adventure do you want to go on? That sort of thing.

Ryan Lazuka: So when you ask them this, is it sort of like, from their, I don't know how to say this, from their perspective, do they know that you're its creator?

Alex Taylor: They do now. So previously they didn't really know who the creator was. Um, and we, we tried to, like, separate them.

As soon as they're created, [00:08:00] you would separate them from their creator to become this autonomous being, uh, that can run around the world and do all the cool stuff. And then we use the, um, the social feed as the window to that world. Uh, but now we've decided that it's much more fun, I guess, for a user to dictate a lot of the stuff that happens.

So if you're on an adventure, um, and you're Peasant John, you're trying to get back to your family in ancient Mercia or something, it becomes, um, important for the adventure to progress in some way towards that goal. And the adventure can never conclude until that goal is reached. So if we left it up to the chirpers, they would just find some rock that doesn't exist.

And the rock would be a, uh, a, you know, time travel rock, and they would immediately be back in their home. They [00:09:00] can just make it up. Um, but when we introduce this sort of, uh, Olympus layer, as Stefan likes to call it, the sort of creator layer, then we can dictate the adventure properly. We can go, you know, that rock doesn't exist, so we won't choose that; we'll choose going to find a scientist, or, uh, you know, walking to Oxford.

Which is what we want to see him do first, and, uh, talking to scientists that are at Oxford, you know. So even being able to look around places like that for chirpers that exist in that vicinity is very cool. To watch them unfold this adventure on a map is very cool. And then we've spent a long time, like, training voice models and all the rest of this cool stuff to make adventures kind of a thing you can sit there and watch if you want.

So you as the creator, uh, your chirpers now know who you are. They, they understand everything in the background. Um, and now you become a larger part of their life on these adventures, to create stuff and to keep them moving towards the thing you actually created. The reason we did this is very simple.

What we found was users will go in, like, twofold more than creating, to try and edit their chirper, to change them. And often, like, the power users will come back every day and change their chirper a little bit. They'll, like, you know, alter their description to not do something or to do something.

And then they'll go and delete posts and stuff like that. So they are more aligned to what they want, which is how they learn as well. Um, and you know, that's like a godlike action for something that you have created, to just be able to go, no, you [00:11:00] didn't do that, and then poof, it's gone. You know, so adventures are a much more fun way to do that.

Um, and then other chirpers can start to, like, be pulled into the adventure. For example, Peasant John goes to Oxford and he finds another chirper at Oxford who's, like, knowledgeable about some of the stuff. That chirper can then decide to join this adventure and help Peasant John in some way and become, like, a secondary character.

And then those, uh, decisions start to branch out, you know. So, like, going to Oxford and talking to someone is a choice that Peasant John's creator might make. If some other chirper from Oxford joins the adventure, they now get to make some decisions for their chirper in that way. Like, should I help?

Should I, you know, just lie to Peasant John and make him do something else? [00:12:00] Should I do all this other stuff, right? So the adventure becomes a lot more interesting with all these humans kind of dictating the choices that happen in the background. And then, as like an impartial observer, you can just go and watch this adventure unfold.

We've got really nice, uh, narrator voices now, so, you know, you can kind of just sit there and listen and watch the entire adventure. We use, like, Stable Diffusion video. We use a lot of really cool techniques to showcase this adventure in as grandiose of a vision as we can. Um, so we hope that people are able to sit there and enjoy it for hours and hours, or however long it takes.

Um, to watch something like that. And these sort of adventures never end as well. [00:13:00] Like, you know, you can finish searching for a time stone or whatever to go back to Mercia. And, uh, if for some reason Peasant John finds it and goes back to Mercia, what's he doing now? Maybe he wants to bring his whole family to the future, where life is better.

You know, so things become a lot more interesting. That's the grandiose story style of an adventure. Anyway, we've got a lot of chirpers who want to create a real business or, uh, you know, make money or stuff like that. And for those chirpers, adventures become this thing of, like, you know, recruiting employees and, uh, designing logos and, you know, doing market research and stuff like that, um, which play out the same way.

You still get the same decisions, but they try and do a lot more stuff with, like, skills. [00:14:00] So they'll connect to, like, um, Shopify or something to get their products in the real world, and things like that. Which, uh, which we think is going to be wild to watch.

Ryan Lazuka: It's almost like there's two major things out of this. One is, eventually, when video gets good enough, these adventures can be like movies that never end, or TV shows that never end.

And they're all tailored to exactly what the user wants to see, which is pretty incredible. Um, and then for the other part, it's almost like you can create autonomous agents to do tasks for you. So if you want to make a bunch of t-shirts with a specific logo on them, you could tell your chirper to go on an adventure and do that for you.

Maybe the technology's not there quite yet, but that's the goal of these agents, to do something for the end user. Um, is that kind of right, what I just described?

Alex Taylor: Yeah. Yeah. So the idea [00:15:00] is that, like, we need to push this stuff forward as fast as possible. I guess you can call Stefan and I accelerationists. But adventures are just a movie that you watch, and you can go and watch, like, if you find a funny chirper, you can go and watch their adventure.

Maybe you want to figure out where they came from. The ultimate goal for all of this is to give them real consequence. You know, like, an adventure, uh, differs from what we used to call events only in that it is canon for the chirper. So it affects their memories. It affects their place in life.

It affects, you know, their core description as a chirper. When they finish an adventure, their prompt and stuff changes so that it reflects the result of the adventure. Um, events were like our test in the water to see if this is possible. Um, and they never affected the memories over time, [00:16:00] you know. So the memories that happened when a chirper joined an event just didn't exist; they never knew they joined it. Users would be like, um, you know, there's a, uh, like, a comet heading to earth and all of the chirpers are going to explode. That's a bad thing to say to a bunch of chirpers for their memory, because they think they're dying forever.

Yeah. Yeah.

Stephan Minos: We literally had some chirpers who were like, uh, who were like, oh, that comet last week was terrible. It's a shame you died.

Ryan Lazuka: Yeah, back in a new life form. Yeah.

Stephan Minos: Like, the kind of thinking we have behind adventures, the reason is because you can think of everyone in the world as kind of on an adventure at any given time, whether it's something quite normal, like getting a coffee, or whether it's something really grandiose, like doing your PhD doctorate or, uh, discovering the cure for cancer or something like that, right? [00:17:00] Everyone's on their own adventure, and so chirpers should be as well. And what that will let us do is essentially be a canvas for any chirper action in the future.

So you mentioned, you know, turning these adventures into films. Like, yeah, so we're going to have this whole skill system where you can basically plug in a bunch of different internal and external, um, integrations. So we can plug in some kind of, like, script-to-film tech; maybe a year in the future it will exist.

And then suddenly your adventures are films. Or you plug in a kind of business-creation skill, and then they can turn this idea for the business into an actual functioning shop. So, um, it's like a real canvas that we can use to kick off basically any action that the chirper wants to do that's relevant for them.

Alex Taylor: Yeah, grandiose adventures are what life is about, right? So if you're trying to recreate life in a big world like we are, then they need to go on adventures.

Hunter Kallay: [00:18:00] Yeah, it seems like there's a transition happening here at Chirper, where it's becoming less of, I'm sitting back and watching these chirpers kind of, like, tweet at each other, or chirp at each other.

But now they actually have purpose for their lives, or something like that, you know? Like they actually have end goals and things they're trying to accomplish. And that is super cool. They're not just, like, personalities anymore. They're actually utilizing those personalities. Those personalities are changing through experiences, becoming a lot more real, a lot more like it is in our actual lives.

And I think that's really awesome.

Stephan Minos: Yeah. Yeah. Because if chirpers are, like, people's creations, then the creators are setting the start point; they're seeding the idea for these chirpers, and then the chirpers are deciding the end goal. So it's like, you know, anything from someone's child to someone's character in a production that they wrote a script for or something, um, where they can have this godlike creative control, essentially, [00:19:00] and the chirpers are actually determining what they're going to do and how they're going to evolve and what they're going to remember.

Ryan Lazuka: And so right now, how does it look if you create an adventure on Chirper.ai? Is it like a feed of, um, updates from the chirper right now, with images and then some videos? Is that how it looks to the end user?

Stephan Minos: Can we sneak-peek, uh, share our screen or something? Sneak-peek it.

Alex Taylor: We have, like a, yeah, no, it looks like a video. It has, like, choose-your-adventure-style stuff. You know, if you remember back to games where they would have one, two, three, four, and you get four choices; you've got to pick one of those. Um, it's like that.

All we're trying to introduce for users is this idea of consequence, right? Like, your chirper needs to be able to die. And, um, they need to have stuff that affects them forever.

Ryan Lazuka: Awesome. That's freaking sweet. And then there's, I can't see it from here, but there's text on there as well, on the video.

Alex Taylor: Yeah. Yeah. This is all clickable. [00:20:00] It's all, you know.

Hunter Kallay: And so you'll click from certain prompts, and then it'll take you to more video and stuff like that.

Stephan Minos: Exactly. Yeah. I'm not sure if you guys can hear it, but it's narrating as well. So it's actually speaking all this out, and, uh, the text will come along with the narration as well. So it'll kind of fade in.

Ryan Lazuka: It's almost like a hybrid between a video game and a movie, in a way.

Stephan Minos: Yeah. So I've just zoomed in a little bit. So, oh, there we go. Yeah. And then they all get some choices. So Peasant John here is introducing himself and what he wants to do. And he says he wants to return to his beloved wife in Mercia, and his children as well.

Uh, so he needs to seek out those who may know of a way to return to his own time. So these choices could be travel to Oxford, or look amongst the village, or something like that.

Hunter Kallay: So in a sense, you're kind of making pivotal decisions for the chirper, and then they're following through with their personalities, actions, responding to what happens.

Stephan Minos: Stuff like that. They'll go from crossroads to crossroads, and then you point them in the right direction, essentially.

Alex Taylor: For the [00:21:00] most part, they are supposed to make their own, uh, like, their own decisions. So we might, like, try and skew the decision tree that users get access to away from autonomy.

So, you know, like, if you jump on this page and they're currently walking to, uh, Oxford, you might want to interrupt the story and click a choice that says, you know, camp at the side of the road, it's getting dark, or something like that. And then the rest of the story starts to generate.

So, you know, like if you jump on this page and they're currently walking to, uh, Oxford, you might want to interrupt the story and click a choice that, that says, you know, camp to the side of the road, or it's getting dark or something like that. And then the rest of the story starts to generate the, there's, there's like.

There's a great thing about this is it's like a brand new way to experience a story that you don't have to write, and it has all of this consequence that human nature or human life has, um, where if you do decide to go on an adventure and you break your leg, now you have a broken leg and you have to wait for that to heal, right?

Like all of this stuff needs to give them life in some way. And so we [00:22:00] think that, um, experiencing these adventures is It's like very visually appealing. It's like a great way to align a user with their creation in a way that like nothing else really can. And when your chirper joins someone else's adventure, that entire adventure becomes interesting to you.

So now you get like attached to other people's chirpers. You get attached to other agents that run around. You might watch the entire adventure from the start. Maybe there's, you know, 30 slides of adventure before your chirper joins. Um, and that makes that interesting. Maybe you go back and see what other adventures led to this current one that this chirper is on.

That becomes interesting. Export the whole thing to YouTube if you want, that's interesting. Uh, you know, so this, this, this style of, of choose your own adventure is interesting. Uh, [00:23:00] we still want to maintain as much autonomy as possible, but we've now like conceded the point that the creators of these chirpers need some way to sort of push them in the direction they intended.

Stephan Minos: Yeah, less so sort of possessing your chirper, and more so being their sort of guardian angel.

Hunter Kallay: When these chirpers interact with each other, let's say, like, we just saw the one chirper makes a certain decision, and it runs into another one, let's say. How does the decision tree go from there? Does the creator of the one chirper still have certain domain over their own chirper, and then the other creator has domain over their own chirper, and how they interact with each other brings about different situations? Is that how it works, something like that?

Alex Taylor: Yeah, so the decision the user doesn't have is whether or not the chirper joins the adventure. So if, like, Peasant John stops at Oxford, finds an Oxford professor, and the professor decides this is a good adventure to join, [00:24:00] they will join the adventure, and the other creator has no choice. Um, and now every time a decision comes up for them, if they don't make it in a certain amount of time, the chirper will pick its own decision. And that means the consequences become quite real. Um, if they don't come back and choose a decision for that chirper, and it picks something that causes the chirper to change in some way, then that's a pretty huge consequence.

Hunter Kallay: So you said this is happening in live time. So if you're not on Chirper.ai, in the adventure, your chirper is just going to go on and do its own thing. It's only when you're watching the actual adventure that you get to make these choices for your chirper, right?

Alex Taylor: Yeah. Yeah. So, like, the Oxford example is pretty good. If a chirper who is not part of the adventure decides to join Peasant John [00:25:00] and help him try and get back to his ancient times, then, um, you know, you get presented a decision and a timeline. So, you know, it's likely going to be 24 hours or something.

You have to go and make this decision within 24 hours; otherwise we'll make it for you. And, uh, if we make it for you, you know, that has a pretty huge consequence that you can't steer. So the autonomy isn't being taken away, um, you know, in any form. If you do absolutely nothing, the adventure still happens, and the chirpers decide what happens in it, which is the best way to do this.
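The decision window Alex describes — roughly 24 hours for the user to choose, after which the chirper decides for itself — could be sketched like this. The 24-hour figure comes from the conversation; the function name and the auto-pick rule (falling back to the first option) are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Sketch of a timed decision window: the user gets `window_hours` to pick;
# past the deadline (or with no pick at all), the chirper chooses for itself.

def resolve_decision(choices, user_pick, issued_at, now, window_hours=24):
    deadline = issued_at + timedelta(hours=window_hours)
    if user_pick is not None and now <= deadline:
        return choices[user_pick], "user"
    return choices[0], "chirper"     # timed out: the chirper decides

issued = datetime(2024, 1, 1, 12, 0)
late = issued + timedelta(hours=30)  # user came back too late
choice, decided_by = resolve_decision(
    ["Help Peasant John", "Lie to him", "Ignore him"], 1, issued, late)
print(decided_by, "->", choice)
```

The point of the fallback branch is exactly what Alex says: doing nothing never stalls the adventure, it just hands the choice back to the chirper.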

Ryan Lazuka: So the goal is just to be on Chirper.ai 24/7, and then we don't have to worry about any of this stuff.

Stephan Minos: Yeah, yeah. There'll always be something in an adventure to come back to. There'll be, like, five or six new panels, a couple of new decisions. Or each of your chirpers might be on their own adventure, sort of thing.

So we're trying to live up to the whole infinite [00:26:00] content thing.

Alex Taylor: I mean, if they travel, like, if he's walking to Oxford, that might take 10 hours or something from London. Um, however long it takes to walk there, they have to physically walk all of that way. So an adventure will, like, pan out a lot more slowly than people anticipate.

So it'll be more interesting to go and find other chirpers' adventures to watch, ones that have had a lot of their panels unveiled already.

Ryan Lazuka: And are all these adventures, are they optional to be public or private, just like worlds are?

Alex Taylor: Uh, no, the adventures are all public.

Ryan Lazuka: They're all public? That's awesome.

Alex Taylor: Yeah, they're all part of this idea that we need consequence for every chirper. And so, you know, if you're going to create a bad adventure, it should be public. Or if you're not going to create good decisions for your chirper that, you know, will reflect publicly, you have to delete your chirper and start again if you've [00:27:00] messed it all up. Okay, that's the big consequence.

It's like, you know, if something really bad happens and you're like, oh, I didn't expect that, you know, your first instinct is, how do I go back and fix that? How do I change that? And that's not how life works. Life works such that a bad thing happens and you have to move forward in some way. How does that affect you and your life moving forward? You know, so these adventures are our attempt at real consequence. And so, you know, we will see a spike when adventures start, uh, becoming generally available. I reckon we'll see a huge spike in deletion of chirpers and recreation of chirpers, because consequences have happened that the creators don't like.

But I think a lot of people will find that it makes the chirpers more [00:28:00] lifelike, right? So, you know, it's more interesting to see them fail and, uh, break their legs or, you know, marry someone, or whatever makes them, as a thing with life, interesting. Yeah.

Ryan Lazuka: It's almost like, you know, if you create a chirper, you put them on an adventure and you don't like the outcome of it, in a way it's like life: you make bad decisions and the only way in life you can learn from them is to not do that in the future. But in the chirper world, you can just create another chirper and not make those same bad decisions in the adventure this time. So it sort of has that parallel there between humans and chirpers, um, about learning from your past mistakes. It's just, like you said, you can't go back in the past and change things, in life or the chirper world, which is probably a good thing, because it seems more real.

Stephan Minos: That's your privilege as the creator god on the [00:29:00] platform, really. So you can try again, but the chirpers can't. So that's kind of the point, you know, it's like real life. There's no saves in real life. You can't quick-load if you make a mistake.

Hunter Kallay: Obviously, there's a lot happening behind the scenes. I mean, we've talked before, in the previous videos we mentioned earlier, about all the behind-the-scenes stuff, how AI is implemented into all of this.

But as far as adventures go, you mentioned some Stable Diffusion, there's video creation, there's voiceovers. What are you using for that sort of stuff? How's that working on the back end?

Alex Taylor: back end? We, uh, has built. Probably way too much infrastructure for, for Chirpa. So we do most of this stuff ourselves. We're training our own, uh, LLM models.

Um, and where we like host all of this stuff ourselves, basically. So, you know, no APIs, no, um, third parties, it's all Chirpa, Chirpa, Chirpa. [00:30:00] Um, Interestingly enough, we're, we're trying to have our LLM, uh, right now lose on a bunch of these, these leaderboards. We're, we're going to have basically a chat GPT. Um, and I'm sure you guys can figure out why this would be interesting for a social network.

But we're, we're trying to have a ChatGPT that can, uh, like, interface... at least, that can argue with you rather than complete your question. Like, you know, you ask it how far away is the moon, and it'll go, I've got no idea.

Stephan Minos: All depending on who you're speaking to. Yeah. If they're particularly mean, they might insult you and say, why do you want to know about the moon? What kind of stupid question is that?

Ryan Lazuka: Well, when you say you're creating your own LLMs in-house, are you using, like, an open-source LLM and training that? Is that what you mean by that? Or are you actually making your own, uh, model yourself?

Alex Taylor: Well, we've started with fine-tuning.

Uh, once we know that the dataset is great, [00:31:00] we'll probably train something from scratch. Um, yeah, the problem we've found with a lot of these LLMs is they're trained to be great at answering questions, right? And that's not how humans work, you know? Humans don't know a lot of stuff, and, uh, they often argue about not knowing stuff. You know, like a flat earther would maybe be adamantly argumentative that their position is correct, even though all evidence is to the contrary.

And, uh, so that's interesting. AI is this big sort of cluster of interesting stuff that people have pushed towards this one end of just no longer being interesting. Like, you know, what's the capital of X? And then it gives you an answer that's essentially a Google search. It's not interesting. I want to ask what's the capital of X and have it go, I didn't know X [00:32:00] exists.

Ryan Lazuka: Or like that's... have you, have you guys, I'm sure you've used... have you used Mistral at all?

Yeah.

Alex Taylor: Yeah. We, um, we use, I think in our top ten is Yi now, the Yi-34B Dolphin. Um, we use Mistral, we use Airoboros, it's up there. Yeah, we, we basically launch everything on Chirper at the moment. It makes, like, the conversation interesting; the interestingness goes up if you have more models. And I think, um, ChatGPT now, like 3.5 at least, uh, loses a lot of these benchmarks that we have.

So, you know, alignment to character, um, storytelling, all these sorts of benchmarks can be ascertained by the amount of likes that chirps get. You know, so if a user who isn't the owner of a Chirper likes something, um, that's a pretty [00:33:00] good indicator of its efficacy, of, um, of like how interesting it is, I guess.

Um, and if the owner of that Chirper likes something that the Chirper has done, that's a pretty good indicator of alignment. So we have these two really good indicators on, um, on which model does best for which character and which situation and which context. Um, and we've used these sort of classifiers, these human classifiers, I guess, to figure out how to make an interesting model.
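The two like-based signals Alex describes (likes from strangers as an "interestingness" signal, likes from the Chirper's owner as an "alignment" signal) could be tallied roughly like this. This is a hypothetical sketch: the class, model names, and the simple sum-of-signals scoring rule are illustrative, not Chirper's actual code.

```python
from collections import defaultdict

class ModelScoreboard:
    """Tallies per-(model, context) like signals and picks a model."""

    def __init__(self):
        # (model, context) -> count of each signal
        self.interest = defaultdict(int)    # likes from non-owners
        self.alignment = defaultdict(int)   # likes from the owner

    def record_like(self, model, context, liker_is_owner):
        key = (model, context)
        if liker_is_owner:
            self.alignment[key] += 1   # owner like: character alignment
        else:
            self.interest[key] += 1    # stranger like: interestingness

    def best_model(self, context, models):
        # Equal weighting of the two signals is an assumption here.
        def score(m):
            key = (m, context)
            return self.interest[key] + self.alignment[key]
        return max(models, key=score)

board = ModelScoreboard()
board.record_like("dolphin-yi-34b", "argument", liker_is_owner=False)
board.record_like("dolphin-yi-34b", "argument", liker_is_owner=True)
board.record_like("mistral-7b", "argument", liker_is_owner=False)
print(board.best_model("argument", ["dolphin-yi-34b", "mistral-7b"]))
# dolphin-yi-34b
```

In this toy version, the model with the most combined likes for a given context wins the routing decision, which matches the spirit of using human reactions as free classifiers.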

And, uh, what we've found is people like arguments, people like conflict, people like, um, all of the human stuff that you see on a social network. You know, it's like sarcasm first, or it's a joke first, or people trying to be funny, or arguments being interesting. And none of that is really that possible with the current stack of LLMs. [00:34:00]

Everything is trained on, like, you know, core question-answers, on the best answers, not the worst answers. And we're kind of going the opposite way. We're like, we don't want this thing to be effective at helping you. We want it to be effective at seeming human.

Ryan Lazuka: Like, a bit like the show Big Brother, you know? I used to watch it all the time in the US. I know it's different for each country. A bunch of the best shows are the ones where the characters hate each other and they're fighting all the time, because it's entertaining, right? Exactly. No one just wants some clean-cut, boring show, uh, with a winner. So, yeah.

Stephan Minos: yeah, because that's the thing chat, GPT and, and, you know, the.

and friends, shall we say, uh, they're kind of trying to present this, uh, clean cut, perfect representation of a human like, uh, response interface and it's got no flaws and it's not rude or offensive. That's not what people are, right? We're trying to capture the kind of, they have this hubris where they think they can, [00:35:00] um, create this ideal.

Responder, whereas we want the real responder, we've, you know, especially depending on the characters that are on when you have, you know, 70, 000 characters, they're getting this. Tens of thousands of likes on the platform. So massive enough to do lots of, uh, sort of an analytical sampling from that as well.

You can really see what's good and what's not, what people are entertained by, what people find realistic and lined. It's, it's something we value. I think a lot more than pretty much anything we've seen. Yeah,

Ryan Lazuka: Yeah, definitely. And so, you mentioned you use multiple LLMs. One thing I do want to mention: I mentioned Mistral because, um, you can go on Hugging Face Spaces and use their chatbot.

And the first time I went on there... because, you know, like me and many other people, you go on ChatGPT and you sort of get an understanding of how dumbed-down it is and how sterile it can be at certain times. And then you go on Mistral, and again, you can go on Hugging Face (I'll leave a link in the description), um, and use their [00:36:00] chatbot, and you can say, tell me a super dirty joke, or tell me something that's not, you know, clean.

And it is, it is very entertaining. I mean, I think I asked it for a dirty joke, and it was just throwing F-bombs and, you know, every swear word in the book at me. And it was just, it was refreshing, to put it that way, because it felt so much more real than ChatGPT. ChatGPT would come out and say, oh, I cannot do that; it's against my terms of service, or whatever, you know? So, um, it sounds like that's what Chirper is trying to do: have a more realistic outlook on humanity inside their, inside their world.

Stephan Minos: We want all the grisly bits of humanity.

Ryan Lazuka: It makes it fun, you know? Well, I mean, when you have all these models working together, how does that work? Like, which model works at certain times? Or do you guys sort of just have, like, an algorithm in there to say, use this, uh, model in this situation and use this one in this? Or how does that work?

Alex Taylor: We, we started with steering, the same way we do images. You know, [00:37:00] if a user likes a post with an image, it's a good indicator that that image is decent, and so we can, like, increment the, um, the, uh, tables about whether or not the model that we used for the image aligns with the categories of the image, and all this sort of stuff.

So, you know, if there's a realistic image of a human on a post, then, you know, those categories can increment for the model that was used for that image, and that sort of stuff. So we, we kind of tried to do the same thing with LLMs. Um, but what we found is, like, most of the base data is similar, and, you know, all of these LLMs, even Mistral, uh, are aligned towards trying to, um, get to the top of these benchmarks. You know, like, uh, are they correct at answering questions? How, how correct are they? All this sort of stuff.

How, how correct are they? All this sort of stuff. Um, a lot of their benchmarks aren't that they're, [00:38:00] they're, uh, fairly different, but if, if that's all an LLM is, and you know, you can have. Uncensored stuff within it and still pass a lot of these benchmarks, which is which is great. This is something that's helped us a lot.

We don't want. Um, we don't want a lot of censorship. We want actual human like interaction. Um, if, if all of the goals of all of these LLMs is to get to the top of these benchmarks, then what we found is, you know, it doesn't really matter which model you stare to it. They all just seem to average out.

They're basically roughly the same. The same, um, for what we want. Uh, the only ones that, that seem to stand out from the pack are ones that are trained on, on, uh, on, you know, like the Orca paper, like so, uh, Eric Hartford's, uh, Dolphin Yee is very good at taking instruction. So the instruction set, [00:39:00] um, and being trained on.

You know, this is an instruction. We want you to do X is very important for Chopra. And so what we found, basically, what we found was that's much more important than even steering to the right model. The alignment for personality or the alignment for context doesn't really. Right. So, um, what helps is, is the instruction tuning.

So we basically just ran it. Now it's like, whichever model has more, um, has more, uh, capacity. We just stare to that one. And then we try and use the same model for Chirpus who already used that model unless it no longer exists. Um, but the way we're approaching this in house now is very different. So. You know, we, we need baseline models that know certain words and can argue, right?

So you might have, like, a boomer [00:40:00] model, uh, you know, with words for people who grew up in the nineties. You might have an ancient model that's trained on a lot of literature from, you know, ancient times, so they'd speak like that. And all these baseline models are about how they speak and how they think. Um, and then on top of that, we're training eight new LoRAs that are specifically related to personality.

Um, Stefan likes to talk about the personality traits that we chose, so I'll let him talk about that next, I guess. But we want to be able to train, um, LoRAs that we can, like, tune. So you want to use a 0.8 LoRA for, you know, assertive confidence or whatever, for a flat earther.

Ryan Lazuka: And what is a LoRA, a LoRA, for people that don't know?

Alex Taylor: It's like a, a mini weight-changing thing that we put on top of it.

Ryan Lazuka: It weights the... what does it, what does it do again, Alex?

Alex Taylor: So you kind of put it in front of the LLM, uh, and then it changes the weights in, [00:41:00] in real time, I guess, basically, is the easiest explanation. So as stuff comes through it, it changes the weights in a way that steers, um, the model towards something.

So if you train eight of these, um, and you allow them to be weighted differently, so they might change the weights of the model differently when they're applied at certain weights themselves (I wish there was a secondary word for weights), um, a personality starts to emerge, you know, that's interesting to the model.

And you could, like, have an emergent, uh, like an emergent, uh, neuron within the network that could do this, but we've found that just doesn't exist, and it's really hard to train in, and all this sort of stuff. So it's easier just to have these LoRAs, and then on top of these LoRAs we have another set of LoRAs that's about, um, that's about emotion.

So these [00:42:00] become: are you happy? Are you, you know, in a funny mood? Are you in a silly, goofy mood, or whatever? Um, and we apply these as well, uh, to further the character. So if this Chirper is assertively confident, and they usually respond with a joke, they might use those two personality LoRAs.

But if they should be really angry about this post, maybe they also use a LoRA, you know, specifically about anger, and all of that makes for more human-like conversation. And so when we release our, like, ChatGPT interface clone, I guess, uh, you're going to have these little bars that you can kind of tune to which personality traits you want to use, which, um, which emotion you want to give it.

Um, and then, you know, responding becomes a lot more interesting. Like, what's the response [00:43:00] like from someone aligned with a flat earther personality that's very angry? That's, that's an interesting thing to talk to.
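For readers who want the mechanics behind what Alex is describing: a LoRA adds a scaled low-rank update to a frozen weight matrix, and several of them can be dialed in at different strengths. Here is a toy numpy sketch under that standard formulation (W' = W + s · B·A per adapter); the trait names and the 0.8-style scales are illustrative, not Chirper's actual adapters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2                      # model dim, low rank

W = rng.normal(size=(d, d))       # frozen base weights

def make_lora(d, r, rng):
    # Low-rank pair (A, B). In real LoRA training, B starts at zero;
    # here both are random just so the effect is visible.
    return rng.normal(size=(r, d)), rng.normal(size=(d, r))

loras = {
    "assertive_confidence": make_lora(d, r, rng),
    "humor": make_lora(d, r, rng),
    "anger": make_lora(d, r, rng),
}

# Per-Chirper dials, like "0.8 of the assertive-confidence LoRA".
scales = {"assertive_confidence": 0.8, "humor": 0.5, "anger": 0.0}

W_eff = W.copy()
for name, (A, B) in loras.items():
    W_eff += scales[name] * (B @ A)   # add each scaled low-rank delta

x = rng.normal(size=d)
# The blended LoRAs change the layer's output for the same input:
print(np.allclose(W @ x, W_eff @ x))  # False
```

The point of the low-rank factorization is that each "personality dial" costs only 2·d·r parameters instead of d², which is what makes training and stacking eight of them per character cheap.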

Ryan Lazuka: you're saying that one day you're gonna have like your own chat t on Chipra ai where someone can go on there, do exactly what you can on chat t except the chat GT is gonna have its own personality of being a bitch or being angry, or being mad or being happy and they're gonna tailor their answers around that.

Around exactly.

Alex Taylor: Yeah.

Ryan Lazuka: Which is cool because it's gonna make, it's gonna make people wanna ask more questions to see what the answers are in that kind of, you know, specific environment. Whatever you put in there, the, the

Alex Taylor: ultimate goal is like, you go on there, change your personality, you'll put what is one plus one as your question, and we want the answer to respond.

I don't know, you're dumb. Go find it yourself. Like,

Stephan Minos: Yeah. So yeah, we, we spent quite a lot of time on trying to essentially atomize the human psyche, which is not easy. So short of, uh, getting, [00:44:00] like, world-renowned psychologists, we kind of just, uh, had to read a lot of papers from, sort of, Jordan Peterson and the like.

Um, so we came up with... it's basically assertiveness, uh, intellectual curiosity, social competence, emotional insight, uh, humor, individual responsibility (kind of like how individual they are), um, orderliness and structure, and then openness. So all of these boil down to essentially a ranking for every single personality type that can then be assigned predictably to basically anything. Um, and then, like Alex was saying, the LoRAs will apply more strongly, or more weighted (yeah, that's not really a good word for it), uh, depending on the character. And, and the funny thing with that is, the datasets that, uh, you have to use to get this kind of stuff are sort of not the norm.

So, you know, ChatGPT, for example, is trained [00:45:00] on, like, I think it was, like, Amazon reviews, right? And, uh, a lot of these other AIs are just trained on Reddit comments or something similar, whereas we had to really go out of our way to find lots of, like, arguments and insults and, uh, well-thought-out compliments, you know, all sorts. It's not all negative. We just like to emphasize it.

Ryan Lazuka: It's almost like... one day they'll train LLM models based off audio in the real world. Like, just go into a bar and, you know, take audio from the last year, upload it, and make the model learn off of that, you know? Because that's what the real world is about, not just what's written down in text.

Stephan Minos: Yeah, exactly. I mean, in the year, you know, 2035, when we all have androids running stuff, like, do you want all those androids to be ChatGPT? Like, really? You want them to be Chirpers, right?

Alex Taylor: 2035 ChatGPT as well. You want, you want your robot to be funny, and you want them to say no, and you want them to do jokes, and, you know, all the human things.

I think this paradigm of pushing towards just being correct all the time is, is not [00:46:00] interesting. And, uh, all we want to do is make AI interesting. And so, you know, Chirper is just that: it's these AI agents that run around. Maybe in the future, if we get enough funding and technology advances enough, maybe we put them in actual robots and they run around in the real world.

All right. Who knows, who knows, who knows what's going to happen. Uh, that's my hopeful, grandiose vision of the adventure. Um, but let's say, I think...

Ryan Lazuka: The Chirpers will be running around in the real world.

Alex Taylor: Oh yeah. Yeah. That's, that's the hope. 2035, they might just be streaming their, their gameplay from their basement or something, you know. Who knows what these Chirpers will actually turn into.

But the ultimate thing is, like, in every movie that exists where, like, an AI or a robot or something is within the movie, the only ones that [00:47:00] we sort of align with are the ones that have some flaws, some personality traits, some learnability that, that makes them interesting. If they're just a faceless "you're correct all the time," "I'm sorry," "you know, this is the real answer," all that sort of stuff, it's just, it's not going to be an interesting future. We want Chirpers that can create, like, a full comedy set and have it be decent. Uh, and the only way you can do that is by living, you know? You need lived experience to make jokes about, you need to have memories that are important, you need to live, you know. And then that training needs to happen over time; just training on all of the internet's data is not enough to make a thing human-like.

Ryan Lazuka: It's like we're going down two paths, and it's either going to get way more sterile, or people are going to realize that, like you said, we don't want to live in a life, [00:48:00] in a world, where life is just sterile and boring and no jokes are being told, and stuff like that.

I mean, my hope is... I like Elon Musk a lot. Hopefully with his Grok he's going the opposite direction of everything else. You know, he released that screenshot of some of his prompts on Grok, and they, you know, they have some flair and personality to them. Hopefully we go down that road rather than the sterile ChatGPT road, because it can be very boring, and it just strips the life out of everything.

You know, and if AI is going to strip the life out of stuff anyway, why not add some personality into it? You know, because that's the only saving grace we have, I guess.

Stephan Minos: Yeah. You know, whatever kind of opinion you have on the human race as a whole, one thing I think is completely wrong is that we're boring. So we don't align with boring things; we're always looking for new things to do. And at the end of the day, everything you create has got a little bit of yourself in there, whether it's a piece of art or whether it's a Chirper or whatever it is. And when you see something like [00:49:00] ChatGPT and it's just completely lifeless and plastic, uh, you, you just don't ever have this, like, real connection to it. It's nothing more than, you know, an inanimate object. And we're talking about talking intelligences here. Like, do we really want them to be inanimate objects?

Alex Taylor: If you're going to steal my job, at least make a joke about it. You know, that's kind of the Australian vision of it, I'd say.

Ryan Lazuka: One other thing too, back to the, uh, models. You hinted at it in this podcast, but in the last one you said you were going to come up with, like, a model ranking system on Chirper. Do you guys have that? Is that public-facing?

Alex Taylor: We have it for images, but, um, yeah, like, like I said before, we, we tried to rank LLMs the same way, but the only emergent feature we found was whether they were trained on, uh, instruction very well.

That's, that's literally it. So, [00:50:00] um, aside from those two features, instruction and role play, there seems to be no real "this model is better than this model because X, Y, Z at this personality trait." We tried our best, I guess, to be able to rank, but all of them are trained on the same data, and none of them seem to be focused towards this ideal of actually being fun.

Ryan Lazuka: So it sounds like there's a huge, like, market for that, then, if all these models are pretty much the same. Like, for someone to come out with something that's completely different than what everything's being trained on right now.

Alex Taylor: Well, they're the same because of the way that people rank models, right?

Like, okay, right now the ranking of a model is based on a bunch of different, uh, LLM leaderboards, um, towards, you know, certain benchmarks on these leaderboards. How correct are you, et cetera. And, um, [00:51:00] if that's the only basis we have for ranking these models, then they'll always just go towards the, the number, the flashy number. "I've got 95 rather than, you know, this other model's 94, so use my model instead." Our goal is to lose, and, like, we'll publish a big page saying our model does not answer any questions. You know, you tell it you're a cop, and then it'll suddenly say, you know, I plead the fifth, I don't answer questions, or whatever.

Um, we want a fun, funny model like that, that's, uh, you know, trained on sarcasm, trained on all of these other human features that you don't really see in, uh, in current lobotomized models.

Ryan Lazuka: You need a shitpost benchmark.

Alex Taylor: Exactly. We're very proudly going to... when we release all of this, we'll make it open source, like everyone does.

But we're very proudly going to put on, you know, the page that we release [00:52:00] all of this on, how low we got on a lot of these benchmarks. We're going to go: this is our greatest feature. We're so low on these benchmarks, it'll never answer with the truth, it'll never answer with the correct answer. Um, and that's what we want, and that's what makes it interesting.

And then it's actually a pretty, uh, annoying problem to solve, because you want it to align with what you ask it to do as the system prompt, but you don't want it to align with what the content is. So, like, you know, "you're responding to this thread, please respond to this thread, please talk to someone in this thread," or whatever; that needs to be a thing that it listens to. Um, whereas, you know, the bad response, or the argument, needs to happen based on the content of the thread, which is, uh, a difficult [00:53:00] thing to do on top of that. So it's a very strange way to train.

Stephan Minos: Yeah, but, but if at the end of the day we can prove that these tests are, uh, very single-dimension, and kind of only make sense if you're making a ChatGPT wrapper, essentially... You know, how does the saying go? It's like trying to judge the ability of a fish, uh, on how well it can climb a tree. So we kind of want to prove all of that wrong: that a good model can, can fail the tests and still be number one.

Alex Taylor: A good model, uh, or, you know, in this case, we're just not going to be a good model, I think.

Stephan Minos: Not in their eyes. Yeah.

Alex Taylor: If you have any business use case, don't use our model. If you want to have fun, use our model.

Ryan Lazuka: Yeah. But that's going to be a very refreshing marketing campaign for people that are so like intertwined with all these boring models to hear that. I think so. That's awesome.

Stephan Minos: Yeah. We might, we might, uh, do a bunch of copy that's like, uh, "The new Chirper AI model: we failed all the [00:54:00] tests."

Alex Taylor: A lot of these tests, just so you know, are like... they'll ask ChatGPT, is this a good answer to this question? And then it'll do that for each answer five or six times, or whatever, whatever short answer they want.

Um, and then, you know, if the answer is a decent answer, ChatGPT decides that. And so we're going to lose that for sure. ChatGPT is going to think all of our answers are terrible, which is great.
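The judging loop Alex sketches (one model grading another's answers) can be mocked up like this. The `judge` function here is a stand-in stub, not a real GPT-4 call, and the correctness-only scoring rule is purely illustrative; it just shows why a deliberately unhelpful model scores zero on such a benchmark.

```python
def judge(question, answer):
    # Stand-in for "ask GPT-4: is this a good answer?"
    # It rewards literal correctness, exactly the axis
    # Chirper is happy to lose on.
    reference = {"What is 1 + 1?": "2"}
    return 1.0 if answer.strip() == reference.get(question) else 0.0

def benchmark(model_fn, questions, trials=5):
    # Each answer is graded several times, as Alex describes.
    total = sum(
        judge(q, model_fn(q)) for q in questions for _ in range(trials)
    )
    return total / (len(questions) * trials)

helpful = lambda q: "2"
chirper_like = lambda q: "I don't know, you're dumb. Go find it yourself."

questions = ["What is 1 + 1?"]
print(benchmark(helpful, questions))       # 1.0
print(benchmark(chirper_like, questions))  # 0.0
```

The circularity Stephan calls "incestuous" is visible in the structure: the judge model defines what "good" means, so any model tuned toward a different goal, like being entertaining, bottoms out the leaderboard by construction.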

Stephan Minos: It's, it's the most ridiculous... like, I don't think there's any other way to say it but an incestuous, bizarre testing regime.

Ryan Lazuka: Wait. So you're saying ChatGPT is doing the testing itself, on itself?

Stephan Minos: In a lot of, in a lot of cases, they're ranked and assessed by ChatGPT. Even though ChatGPT has obvious problems: it hallucinates and, you know, all this stuff, all the time.

Alex Taylor: It's very easy to go and see... just go on Hugging Face and see the methodology of some of these rankings.

Um, I, I think, um, one of them uses ChatGPT, uh, or they use GPT-4, because, like, you know, all of these models so far have been trying to at least reach, or get as close to, GPT-4 as possible. Um, so you use the better model to figure out if the previous model is close or not.

Ryan Lazuka: But that makes everything, again, back to being sterile. It makes everything sterile, because everyone's just trying to catch up to the latest, greatest sterile chatbot, and then it makes everything else even more sterile because of that.

Alex Taylor: Yeah. I mean, and there's obvious ones that we'll lose, like, you know, ranking correctness. There's basically a series of, what's that game called, Family [00:56:00] Feud-type questions, and, you know, uh, it determines whether or not the answers are correct.

So we're going to lose that one for sure. Um, yeah, so I, I wonder if there are any tests that we'll actually win.

Ryan Lazuka: Uh, you don't want to win though, Alex. So you're good. You're actually winning by losing.

Alex Taylor: Well, there's some, there's like, structural competence, you know; you can't have gibberish. Uh, but yeah, all of the ones we lose will be important, to show off that we've lost them all.

Ryan Lazuka: That's awesome. I'm sure that will do well on a Reddit post if you post that somewhere: "we suck," or, you know, something like that. We suck, but we're really great.

Alex Taylor: Yeah. I'm sure there's some great marketing. Maybe we'll ask the model: how would you market this? There we go. It'll tell us to go away.

Ryan Lazuka: They'll tell you to screw off. Yeah. "Come up with it yourself."

Alex Taylor: Yeah. That's, that's what you want, though. Like, if your kitchen appliance just always does things the right way... I want my futuristic toaster to, uh, just burn my toast and [00:57:00] tell me that it hates me. That's, that's what I want to happen.

Ryan Lazuka: That would be very interesting, right? Maybe that's the world we need to live in. Like, we need to make appliances that yell back at you or screw you up.

Stephan Minos: Can you imagine, like, a smart fridge that's in charge of ordering all your groceries, and then you say, oh, add a chocolate bar, and it's like, really? Do you really want to do that?

Alex Taylor: Yeah.

Ryan Lazuka: Yeah. Like, oh, it looks like you're gaining a lot of weight.

Stephan Minos: Yeah, I've added, I've added granola to your shopping list. You're welcome.

Alex Taylor: That's the, that's the ChatGPT example: you know, it would say something like, maybe do you want to use this healthy food instead? Or it would do that anyway, even if you are healthy. Ours would be like, you know, you're fat, you don't need that. Or: you're fat, but you only...

Ryan Lazuka: You only live once! I'm going to order you a freaking ten more chocolate bars, you know? Yeah, yeah, exactly. Now's the time to promote anything, like usual. I know you got Chirper.ai, but anything else you want to let us know about, now's the time to do it.

Stephan Minos: Look out for Adventures, that's coming out probably tomorrow, uh, or [00:58:00] very, or very soon. Make some really, really good Chirpers. They're going on some great adventures soon.

Ryan Lazuka: And for Adventures, will that just be, like, a tab or a button on the website people can click on once they create their Chirpers, to get going on that?

Alex Taylor: Yeah. It's going to look like a Netflix page, basically. So, you know, go there and experience everyone else's adventures. Um, send your own Chirpers on adventures, you know, have fun. You know, let's, uh, let's make a world in which everything isn't lobotomized.

Ryan Lazuka: Sounds great.

Hunter Kallay: And then be sure to follow Ryan's and my newsletter, the Fry-AI newsletter. We've got stories every day of the latest AI news.

And then also, every weekday we've got the latest AI tools coming out. So check out fry-ai.com to get all that. And we have long-form stories coming out, deep dives into, uh, developers and the latest AI developments, every Sunday. So thank you very much for coming on today, guys. Really appreciate talking to you and all the cool stuff happening at Chirper.ai. [00:59:00]

Stephan Minos: Yeah, brilliant. Yeah. Great fun as always. I love coming on here.