Svelte Radio

In this episode we FINALLY manage to catch the Pngwn 🐧. He works at Hugging Face and created MDsveX. Enjoy!

Show Notes

Vercel is the platform for frontend developers, providing the speed and reliability innovators need to create at the moment of inspiration. Founded by the creators of Next.js, Vercel has zero configuration support for 35+ frontend frameworks, including SvelteKit. We enable the world’s largest brands like Under Armour, eBay, and Nintendo, to iterate faster and create quality software. Try out Vercel today to experience the easiest way to use Svelte.

Intro music by Braden Wiggins a.k.a. Fractal.

Unpopular Opinions
  • Swyx: Svelte will stay on top of React
  • Antony: Mastodon is not the answer to the supposed demise of Twitter 😭
  • Swyx: Every developer needs to be AI literate - Software 3.0
Mastodon Links

Creators & Guests

antony 
Dad / @SvelteJS maintainer / @SvelteSociety co-founder / Svelte Radio host. Born at 341.57 ppm CO2.
DS Eng @Provihq 🧜 😺 👩‍🏫
Kevin A. K.
Co-founder of Svelte Society 🌎 Organizer of Svelte Summit 🏔 Host of Svelte Radio 📻

What is Svelte Radio?

Things about Svelte. Sometimes weekly, sometimes not.

Hi, in this episode, we get to talk to Penguin about Gradio, AI, and what he does at work.

But before that, here's a word from our sponsor.


Vercel is the platform for front end developers, providing the speed and reliability innovators

need to create at the moment of inspiration.

Founded by the creators of Next.js, Vercel has zero configuration support for 35 plus

front end frameworks, including SvelteKit.

We enable the world's largest brands like Under Armour, eBay, and Nintendo to iterate

faster and create quality software.

Try out Vercel today to experience the easiest way to use Svelte.


Welcome back to Svelte Radio.

This time, we're very calm and happy.

No, we're always happy at Svelte Radio.


We're here.

We're back again.

And we have a guest this time.

Say hello.



We're not going to introduce.

Yeah, I might need some names.



Oh, yeah, you're right.

But he's so well known in the Svelte community.

His name is like a penguin emoji.



There's a penguin showing up everywhere.

So yeah.


So welcome to another episode.

That was a very long, weird intro, but we're here.

And we have all of the hosts this time around.

Say hello.





I don't know if we want to do like, you know, this like names and voices because this is

like five people.

But hi, I'm Sean.

Hi, I'm Anthony.

Hi, I'm Brittany.

I'm Kevin.

And then our guest, Mr. Penguin or maybe not Mr. Well, what's the?

What's the title there?



Overlord Penguin.

Sounds reasonable.


We just went for it.

So Penguin, you're the creator of MDsveX, right?

It's this small, little thing that we've all used a bit.

And you've been around in the Svelte community for a good while.

Maybe you can introduce yourself.

Yeah, I am a penguin on the internet.

I've been involved in Svelte for I don't even know how many years, like four years or something.

Five years.

I created MDsveX.

I work at Hugging Face on Gradio.

And I'm here to talk about whatever anybody wants to talk about.

Yeah, so we thought we would talk about Gradio and AI and I guess Hugging Face in general

and why you use Svelte, what Gradio is and all that good fun stuff.

But yeah, all this interesting stuff.

But yeah, so Penguin, you've done a bunch of talks that are pretty worthwhile to watch.

One about building your own REPL, which is mind blowing to me.

And recently at Svelte Summit, you also did a talk on, yeah, what was it about?

It was a talk without slides.

On storytelling, right?


On storytelling, yeah.


I thought it was a nice breath of fresh air from the technical talks, just

to have something that was more about how to give a good talk and a good story.

Yeah, it was a lie.

I think I was just after Sean.

So it's kind of the strange meta talks section of the conference.

But yeah, I felt like doing something a little bit different.

I think it can be difficult to do non-technical talks at a technical conference, depending

like who you are.

And I think that sometimes, I thought it was an important talk, the way we kind of tell

stories and encouraging people to tell their stories.

But it can be, if a stranger to the community gets up and does a non-technical talk, everyone's

like, why is this person not talking about Svelte?

What has this got to do with tech and with Svelte at a Svelte conference?

So it's kind of, there's an element of, I was in a position where I could do a talk

like that.

And I felt it was important.

And I felt I should kind of use that platform, because you say I'm pretty well known in the

Svelte community.

So no one's going to question whether or not I should be on stage speaking.

So it kind of gets that out of the way just because of, not that that's correct, of course,

that's a bit of a, that's a whole discussion in itself.

But the booker's not going to boo you off the stage.


I was the booker apparently.

Did you not feel that you might disappoint people by not following on from your previous

Bristech-type talk that's heavily technical?


So one of the reasons I wanted to do it was because of the expectations that I knew there

would be.

Like I've got a bit of a track record of doing pretty deep technical kind of dives into topics

or kind of live coding things.

And I love doing those things.

But part of it was kind of, it's our first in-person conference.

It would be nice to do, it's the kind of talk that does well when you've got an audience

in front of you.

You've got kind of people that you can kind of talk to.

It kind of felt more intimate because of, you know, it was a nice, nice, nice size kind of

group and stuff.

But yeah, part of it was actually people are going to be expecting this.

So I'd like to kind of mess with those expectations.

I think that was part of the, it helped with the impact to a degree.

But I was definitely nervous about the talk.

It's a risky talk.

I'd kind of joked to Rich.

I met Rich in New York a couple of months ago and I kind of joked to Rich that the most

kind of radical thing you can do in a tech talk is be sincere.

And so I was kind of like, and there was an element of, if I'm going to go on stage and

ask people to kind of to tell their story and to kind of to be honest and to kind of

take risks in that way, then I have to do the same myself.

So it was, there was a kind of an element of kind of, I don't know, kind of following

my own advice on, you know, if there's something you actually want to talk about, something

that you feel is important, then you should kind of talk about it using whatever platforms

you have.

So it was quite, you know, in some ways it's a kind of like, it echoes some of my reasons

for being involved in open source and involved in tech in the first place.

So as someone who does talks and as someone who watches a lot of talks, I just wanted

to compliment you.

Like that was one of the bravest and also it's very raw and like definitely expectation

breaking talks I've ever seen.

So well done on that.

I think everyone was somewhat dubious.

Like everyone's like staring at the black screens, like expecting some slides to show

up at any point.

And you just kept not delivering on that.

I thought at the end it was like, the name of the talk was, I told you my dog wouldn't

walk or something like that.

And then I expected like at the end there to be some like big thing about like the dog

not walking.

No, that was more of a red herring, the title.

Clickbait, just clickbait.

I'm going to do a talk called "How to Fill a Room" and it's all about the titles of Penguin's talks.


I have a sense that you are one of the more, you're the one of the people that think about

code in a more holistic fashion.

Like even though you're very technical, you know, you're on this file core team, plus

you maintain MDsveX and I don't know what else.

You definitely view it as like code plus humans.

I don't know if you have any thoughts on like how a community and code intermix.



You seem to care quite a bit.

Yeah, I do.

And I don't like, I just view technology as a kind of a means to an end really.

Like it's a great enabler.

I'm very conflicted for example, and you know, I often joke that the internet was a mistake,

half joke.

And it's true, you know, it's exacerbated a lot of the existing kind of injustices and inequalities.

It's kind of amplified some of those.

But in other ways, it's kind of access to information.

If you look at things like, I don't know, a very popular example, Khan

Academy, you know, making a high quality education available to a huge number of people worldwide,

it has been kind of really successful in democratizing education in the true sense, not in the

investor sense.

And I think that like, so when I first got involved in Svelte, I was mostly focusing

on community.

That's kind of how I got my start.

There were 30 people in the Discord and people like Rich and Conduitry were really generous

in terms of kind of helping me understand.

This is back in kind of Svelte 2 times.

You know, now we have 45,000 members on Discord and obviously no one person can manage that.

But you know, I found it very rewarding kind of helping people out and stuff.

But I think what's more interesting is when, you know, maybe someone that you've helped

and then someone that that person has helped this kind of chain of helpers goes on to build

something incredible or something impactful.

And I don't think tools can be successful without a strong community.

I don't think without the people to build the things, to write the content, to present

new kinds of, I don't know, whether it's tutorials or documentation, whether it's kind of novel

uses of a technology.

I think those things kind of prove out to technology, but they also kind of, I don't

know, they communicate with the possibilities of a specific tool or a set of tools and so

on and so forth.

And Svelte has very much been successful because of, you know, the community.

The tech honestly hasn't changed that much.

Like we've seen huge growth over the past, probably over the past kind of 12, 18 months

since Svelte 3 or, you know, a little bit after the launch of Svelte 3, and Svelte

hasn't changed.

There's more hype, there's more people building, there's more people doing interesting things.

And it is because of that community.

And one of the things that sets maybe the Svelte community apart is it's, you know,

it's very friendly.

It's very kind of very welcoming to people, but it's also the people are very engaged.

I get this feedback a lot that people like the Svelte community, not because, not just

because it's welcoming, there are other welcoming kind of tech communities out there.

There are other knowledgeable tech communities out there, but because people are still willing

to kind of engage in conversation and try hard to keep it kind of civil and friendly

and welcoming and help like newcomers and experienced people alike.

And that kind of that dynamic is in my experience relatively unique to keep that kind of almost

small community feel as a community grows, you know, exponentially, like it's grown like

enormously in the past couple of years where at first the growth was slow.

But it is this idea of ecosystem, this idea of Svelte as an ecosystem instead of Svelte

as a library, you know, it's something that we're thinking about at work.

How do you go from library and a couple of integrations to ecosystem?

And when you start thinking in terms of ecosystem, you know, that phrase specifically, you know,

when you think of ecology, it's about, you know, our relationship to various things.

And humans are at the center of that.

It's like, what is our relationship?

How do all of these things relate together? And getting the most out of your community,

and, you know, almost empowering your community to do the work, to build the

interesting things, to find those new use cases, to find those new applications,

is, for me, the difference between a success and a failure.

There are many open source libraries that have existed for eons and they've got no usage,

you know.

Playing devil's advocate here.

Do you think that there is some element of the community being very friendly and welcoming

and helpful because nobody's being forced to use Svelte for their job yet?

There's no disgruntled React developer that's been forced to use Svelte that's come turned

up to the Discord with a chip on their shoulder and gone, you know what, I hate this framework,

I hate these people, I don't want to use React, but my job demands it, therefore I'm just

going to abuse you all until I get what I want.

Is there a notion of maybe we're still a bit lucky in that respect?


You know, you have the, when you're not forced to use something for work, when it's all personal

project, you have the, you know, the liberty of choice.

If you're working with a tool, you know, you don't have that choice.

And as soon as you take choice away from someone, you know, they feel trapped.

And I think we'll see that in satisfaction surveys, you know, being pragmatic.

It's like, you know, we celebrate these, these kind of satisfaction surveys and stuff, but

at the end of the day, it's because people are going to be happy with the tool that they've

chosen to use for a pet project that may only have existed for three months.

When people are forced to, you know, pick up a legacy Svelte

code base and figure out like what on earth the previous developers were thinking for

three, four, five years, they're going to be a lot less happy about some of the design

decisions.
And you know, you're likely to see questions around, you know, why is this a thing in Svelte?

Who thought two way bindings were a good idea?

You know, those kinds of questions come when a feature gets abused, the same way that when React

patterns get abused, people question the actual design of the feature itself rather than the way it was used.


And that's totally valid and we'll see that in the next few years if we're successful.

That was Sean's talk from Svelte Summit.

He talked about that, right?

That was a lot of things.

I'm going to insert a call to action here.

It just so happens that the state of JS 2022 survey just started.

So if you want to voice your dissatisfaction with Svelte, go ahead.

Also, mention Svelte Radio as the podcast that you're listening to.

We mentioned Svelte Radio; write it in, because they're not going to let us in until we force

ourselves onto their radar.



That's right.

Yeah, that's right.

But yeah, so I call this a second framework syndrome.

It's a good thing in a way that everyone comes here by choice.

It's one of the things that I shouted out when I wrote about why I enjoy Svelte.

So yeah, I strongly agree.

It's pretty interesting that I think I call back to Sophie Alpert, the former manager

of the React core team.

She was doing a keynote for ReactConf and she talked a little bit about React is the first

framework that a lot of people know, sometimes even before JavaScript.

And that imposes a huge level of responsibility on React to be accessible to beginners that

most libraries do not.

And it's not necessarily a good thing, actually.

It's just different.

It's just qualitatively different.

It means that everyone who chooses Svelte chooses it as a second framework and then

the community that we get is more enjoyable for some people as a result.

I do think we should celebrate it, now that we can; we have something to celebrate.

We are in that position.

So I think we should celebrate it.

I definitely don't think that we should go, "Well, this, that, and the other."

I think there's a lot of new frameworks and Svelte is still doing really well comparatively

to them.

And I appreciate Penguin's perspective and how you connect everything back to the underlying

ideology of how the community makes the framework a little bit.

And I love that.

I often joke that Svelte isn't a technology, it's a philosophy.

And you see this quite a lot in terms of, it's frustrating sometimes when people say,

"Where's this library?

Where's this integration?"

But there's this kind of minimalist approach, this use the bare minimum.

If the library is doing too much, then maybe write something simple for yourself.

And there's an element of that side of things as well.

And I think what's interesting, actually, Sean, talking about the kind of attracting

beginners is we attract a different kind of beginner.

Because one of the ways that Svelte is framed is it's very similar to just HTML.

So people often come in, because it's an easy to use framework, you know, air quotes, we

do attract people with maybe limited experience.

And then when they actually want to start doing more complex stuff, they don't necessarily

have that kind of heavy JavaScript background to kind of apply to their Svelte.

And so we kind of have a different kind of beginner problem, I guess.

But it's because of the nature of the framework.

And I guess this is how, you know, there's a whole thing around the design of your framework,

the philosophy behind your framework will dictate what kinds of users you attract and

what kinds of challenges they have when they want to go from, you know, beginner to intermediate

to advanced.

But that kind of minimal approach maybe doesn't work well for those users.

They're not as comfortable kind of like writing their own kind of simple libraries as, you

know, maybe a more experienced developer.

Maybe not for this podcast, but I would love to get your thoughts on how to do that transition

from beginner to intermediate to advanced and get like a course layout.

I would love to do a course like that.


So I mean, nowadays, Svelte is more popular than React, as we've seen on NPM trends, right?

You just ruined my unpopular opinion.

So sorry.

I was saving that.

Thank you.

Celebrate it.

Celebrate it while she can.

Celebrate it.


So for those of you.




I was just going to say, like, someone somewhere is using some build tool or, I don't

know, some automation that's just downloading Svelte like crazy.

So I asked Laurie Voss about it.

He's the co-founder of npm, and he said that Theo was right in that tweet, that the CI is

like messed up.

So something somewhere is just, like, downloading Svelte.

I think Theo said that it was npm's or somebody's CI.

Oh, he said some Svelte dev's CI.

So like you said, somebody's CI is messed up and downloading it.



I mean, it's also a compiler, right?

It should see fewer downloads, if anything.

Gatsby and Next had similar bumps recently, too.

So they had like a huge spike and then it went back down.

It's weird.

It's a conspiracy.

Maybe it's one of these unpkg-type tools.


Oh, I wanted to mention like we talked a bit about like we had a different kind of beginner

and I just wanted to shout out the kit documentation that has a section on web standards, which

is very nice to see.

It's basically about how to use fetch, FormData, stream APIs and stuff like that, which

I think a lot of people, they don't know how to actually use web standards.

They know like some stuff.

But yeah, so just a shout out.

I think we could actually expand on that even more.

Using the platform.



You know, I think one of the earliest, I think that the first Svelte Summit or maybe Svelte

Society Day, we had people from like a few governments, right?

Like I think Norway and Mexico, where they were putting it in curriculums.

So yeah, I definitely don't want to give the impression that, you know, this is not suitable

for beginners.

It's easier to learn just because, you know, it's closer to HTML.

But yeah, that's beside the point.

You don't have to learn functional programming or some fork of JavaScript to do that.

So anyway, I feel like I've gone on this rant.

Did we want to spend any time talking about MDsveX before we move on to Gradio?

I'm thinking maybe we do another podcast episode on just MDsveX.


Head into Gradio and Hugging Face maybe.


What is Hugging Face?

Sounds all right.


What is Hugging Face?

Hugging Face is a company that is building, basically, libraries, platforms

and services to make AI as accessible as possible.

The context around this is, you know, sometimes when people who aren't familiar

with the AI ecosystem hear this, they're still like, you know, what's the big deal?

But like, it's genuinely the Wild West out there.

You know, there's a lot of code that, I mean, a lot of code isn't released for some

kinds of research.

A lot of code is, you know, very difficult to make use of, to get to work yourself.

So having a kind of consistent set of libraries and a platform where, you know, it's

easy to use state-of-the-art machine learning models is a bit of a game changer.

So the core, I guess, the heart of Hugging Face is the Transformers library.

That's kind of what everything is built on top of.

And, you know, the transformer is an architecture in machine learning, but

Transformers is also a library that you can use.

And we make available lots and lots of state-of-the-art models that you can use relatively

easily.
And we've got a bunch of, you know, a whole, you know, we're building an ecosystem.

So everything kind of works together.

We've got a Git host.

We've got a hub where you can host your models and your data sets and even your little kind

of Python apps.

That's our spaces, which we'll come to shortly.

But also, you know, we've got a series of libraries.

We've even got APIs where, you know, models that are hosted on the Hub

can easily be deployed to an API.

And you can then just like make predictions using an API instead of needing to kind of

write your own code.
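To make that concrete, here's a rough sketch of calling the hosted Inference API from Python with only the standard library. The model name is just an illustrative choice, the response shape is indicative rather than guaranteed, and you'd substitute a real API token:

```python
import json
import urllib.request

# Illustrative model; any model on the Hub served by the Inference API
# is addressed the same way: /models/<model-id>.
API_URL = ("https://api-inference.huggingface.co/models/"
           "distilbert-base-uncased-finetuned-sst-2-english")

def build_request(text: str, token: str) -> urllib.request.Request:
    # Build an authenticated POST request carrying the input text as JSON.
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

def classify(text: str, token: str):
    # One round-trip: send the text, decode the JSON prediction that comes back.
    with urllib.request.urlopen(build_request(text, token)) as resp:
        return json.loads(resp.read())
```

With a valid token, `classify("Svelte is delightful", token="hf_...")` returns the model's predictions as plain JSON, which is the "instead of writing your own code" part: no model download, no GPU, just an HTTP call.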

So it started off as just focused on NLP, but now it's just pretty much anything goes.

NLP being natural language processing?

Yes, not neuro-linguistic programming.

I was going to say that.

I have some friends also into that.



Mostly consists of looking at yourself in the mirror and telling yourself that you have

confidence today.

You are.

It's very sad.


No, actually, like one of my smartest friends does that and like it works.

So I'm actually very, very hesitant to be skeptical about it.

That's kind of what we were saying before we started recording.

We're going to have smiles on our faces and be happy.

We just made ourselves happy.


And I think it worked.

You have to manifest.

One analogy that I've heard, that Hugging Face is the GitHub for machine learning, didn't make

sense to me two years ago; it makes a little bit more sense to me today.

It seems like that is the way that, for example, Stable Diffusion distributes model weights.

And basically, is it just GitHub on steroids?

What does Hugging Face do differently that GitHub doesn't do?

I guess the biggest thing is that Git LFS is free, which it isn't on GitHub, for reference.

So when you're dealing with, like, you know, models are huge, checkpoints are huge, you

know, these things are very large.

So you need some large file storage.

And that could get very expensive on GitHub, whereas that's free on Hugging Face.
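For reference, the Git LFS side of this is visible in almost any model repo on the Hub: large weight files are routed through LFS by `.gitattributes` entries, one pattern per line (the exact patterns vary from repo to repo):

```text
*.bin filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
```

Regular Git then stores only small pointer files, while the multi-gigabyte weights live in LFS storage, which is the part that's free on Hugging Face.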

So it is a Git host.

And in terms of the public-facing kind of part of Hugging Face, from a

product point of view, it looks like it's just a Git host.

And it's definitely very good for hosting your models and your data sets.

But I think one thing that Hugging Face does differently is,

for example, you can buy compute; you know, you can't buy compute from

GitHub, which is just a way to host your code.

I mean, it's very general purpose.

So obviously, it can't be specialized, whereas we're focused on machine learning.

So if you need compute, for example, we've got inference APIs.

These are more production focused: you could upload a model to an inference API and use

one of our APIs to run your predictions.

But we've also got this idea of Spaces, which you could think of like, you know,

GitHub Pages or something like that, except it's, you know, an actual

Python app with a server running behind it.

And if you need a larger GPU, then you can buy a larger GPU.

If you need an A100 for your predictions, then you can buy that directly; you know,

you can just upgrade your Space in the GUI.

And then potentially you could use that as an API if you wanted to.

But you know, we would encourage production use cases to go onto one of our dedicated production

services; it's going to be more performant.

But I think that's the thing: because we're focused on a vertical, machine learning, a pretty broad one, yes, it's good for hosting.

But yes, it's also good for production APIs.

It's also good for prototyping apps with Spaces.

We've also got the libraries that kind of power all that, which you can also use yourself.

So I would say it's more of a kind of rounded solution to a vertical.

It's just a very, very wide one.

Yeah, very cool.

Yeah, I gotta say, that explains it so much better to me.

I was so confused about what it does, but like having the GitHub-like base for the models,

and then you have the Spaces to actually host the thing that you need, right?

Do you have physical servers, or where are those located?

Are they more like a CDN, or are they located in specific areas?

They are in specific areas. I think everything's just AWS.

Are they built on AWS?

Yeah, as far as I know.

So with the Spaces things, like I say, you're more likely to hit the

limits of what those resources give you.

But we've got more production focused products as well.

I wanted to give a little bit of... OK.

Do you need me to stop, or?

Go on, go on.

I wanted to give a little bit of context.

So A100s are, it looks like, the chip of choice for machine learning.

People are measuring their capacity by the number of A100s they're stockpiling.

Like literally there's a chart out there of the number of A100s per company, and the people

that have more win.

And that's it.

It's kind of like a nuclear arms race to me, which is pretty cool.

But one of the most magical experiences for me, like when I started looking at Hugging

Face differently compared to GitHub, was you host Gradio UIs on Hugging Face Spaces

and people can just run the models for free, which is not my normal experience for machine

learning.

Like you normally have to download it somewhere and run it yourself.


Doing this for free must cost them a lot.

Yeah, it's not cheap.

But in terms of making AI as accessible as possible, for example, making papers

reproducible, which at the minute is really, you know, "here's some code."

It's like, trust us.

Yeah, yeah, exactly.

So, you know, encouraging people to build, you know, a Gradio demo for the paper, which

we've had a lot of success with, is, you know, game changing.

So for us, the important thing is to build this ecosystem, to make sure

people are using Spaces.

And you're right, you know, obviously it's a whole machine.

It's a dedicated machine for these kind of Spaces apps.

They don't necessarily have a GPU.

So the GPUs are, you know, upgrades that you can pay for.

We also have community grants that people can apply for.

So if they've got an interesting kind of Space that we think, you know, the world should

be using, they can apply for a GPU grant and we can award those temporarily

as well.

So, you know, it's certainly a cost, but it's a measured one in terms of, you know,

making machine learning as accessible as possible, whether that's making it easier for researchers

and ML engineers, or whether that's kind of opening the door to software engineers, which

is obviously the biggest part of tech: software engineers who, you know, maybe

have some familiarity with machine learning, they know what it is, but maybe they don't know

how they would integrate it into, you know, their workflows and how they would use it.


But these kind of Gradio demos, for example, are a good way to, you know, build a proof

of concept to show to stakeholders, to get some buy-in so that they can maybe

actually invest some serious funds into that.

Is the A100 the Bitcoin miner of AI, or is it the graphics card of AI?

Are people going to be, you know, building all this hardware and then it goes obsolete

and you have to get the latest hardware because you want to do some specific task?

It's probably the reference GPU of choice at the minute.

There are cheaper ones.

There are, I think, more expensive ones.

You know, there are innovations in hardware all the time, especially for machine learning,

and obviously there will be a newer, shinier version in 12 months.

Before we continue the conversation, here's a word from our sponsor again.

Vercel is the platform for front end developers, providing the speed and reliability innovators

need to create at the moment of inspiration.

Developed by the creators of Next.js, Vercel has zero configuration support for 35 plus

front end frameworks, including SvelteKit.

We enable the world's largest brands like Under Armour, eBay, and Nintendo to iterate

faster and create quality software.

Try out Vercel today to experience the easiest way to use Svelte.

So we talked about Hugging Face then and A100s and all that cool stuff.

But where does this all tie into Svelte?

What does this have to do with Svelte?

Nothing at all, in honesty.

So end of conversation.

No, Svelte is used across all of Hugging Face.

So pretty much a lot of what is kind of public facing, almost everything is kind of written

in Svelte.

So the hub is all Svelte.

It's a relatively custom setup.

We've got a couple of SvelteKit apps.

So we've got our inference endpoints, kind of landing page is a SvelteKit app.

We've got a Hugging Face store.

Go buy our merch.

That's a SvelteKit app.

Gradio, which I work on and which is a library I'll explain in more detail

in a moment, also has all of its front end written in Svelte.

So pretty much everything uses Svelte, but obviously, you know, machine learning itself

requires no UI.

There's not necessarily any Svelte involvement.

That does seem to be a trend I've noticed over the past 12,

18 months, that Web3 and AI companies use Svelte.

I don't know why that is.

I don't like, they just like new things.

So they're just using the most shiny thing.

It's an interesting one because when I first started off in Web3, there was no one

using anything.

The only thing that was available was the Ethereum SDK and it was written in this dodgy

JavaScript that was shipped through browserify and it barely worked at all.

It was terrible.

But I started building things called truffle boxes in Svelte.

And so there were, there was a few ways to build like a blockchain app and one of them

was using Svelte and it was one of probably five ways to do it at the time.

And then suddenly when I sort of exited Web3 in 2017 or so, React took over and React

became the de facto way to build, well, you know, anything, but also web three apps.

So it's interesting now that the tides turned again and Svelte has become a good way to

build them.

And I think it's probably the kind of company: Svelte is modern, it's faster in my mind than React.

And I think, you know, a modern company will look at Svelte as a good option

for getting a front end up and running.

I mean, it's easy.

In the AI sector, I know that Cohere are using Svelte. Cohere are another

AI startup, more following the OpenAI model, I think, where

they're kind of closed-source models with APIs to use them. But they're using Svelte,

and they donated like $10,000 to the Open Collective about six months ago.

So thanks.

There was a protocol I was quite involved with back in the day called Melon Protocol.

It was by two people, and one of the reasons it stuck in my mind is because one of

the founders was called Mona El Isa.

And I thought that's the best name I've ever heard in my entire life.

And so but the guy and unfortunately I've forgotten his name, sorry.

But he sort of when I was early days as a maintainer on Svelte, he appeared in the discord

and he was rebuilding the Melon Protocol website in Svelte.

And I was like, well, that's interesting.

That's sort of unexpected.

Also, you know, the guy who designed the protocol was also writing the front end.

But there you go.

Just a tidbit that one.

I want to offer some thoughts on this, like why people are using or just like investing

in UI.

Essentially, you know, both machine learning and Web3, there's a bunch of opaque APIs that

are not super usable to the general population.

Like everyone's interested in mass consumer usage of that.

And for that, you need user interfaces.

I very much think like, you know, there's a movement in machine learning of like, there's

a lot of research being done on foundational models, but it's not super accessible.

Stable diffusion itself is not super accessible.

You kind of have to build the UI around it.

And that's kind of what people are investing in there.

I think that's the opportunity for front end developers to get involved with AI as well

to essentially reinterpret it for people, for regular people to actually use, make use

of this stuff.

And, you know, I'll point out two more things, which is that there are a lot of these Python ecosystem tools.


So Gradio itself was an acquisition.

It was a startup independently and then it was acquired into Hugging Face.

And Streamlit is another one that was acquired by Snowflake for $800 million.

And these are basically Python-to-UI interfaces, where people just write Python and a UI is generated for them.


And that's kind of the context, which is like, I know Python, I have my machine learning

thing or my data thing or whatever my thing is.

And I want to make a user interface without being a UI expert.

And essentially what Gradio is, from my point of view, is it's a bunch of small experts

creating components that are accessible in Python.

Correct me if I'm wrong.


That's pretty much it.


It's Python.

You know, Python people want to write Python.

So you know, the whole JavaScript ecosystem is totally inaccessible to those people.

So Streamlit and Gradio are two different approaches.

There are differences.

There are things they're good at, things that they're not as good at.

Do you have a...

I would love to know what's the similarities and differences.

They're getting more and more similar.

I'll say that much.

I mean, the big difference is that Streamlit has its own custom interpreter, which kind

of allows for this very, very kind of clean line by line kind of, you know, you can build

up a UI kind of line by line.

It's very kind of intuitive.

But the way they do that is with a custom interpreter.

We have a...

And Gradio, we have a goal that we need to run in like Google Colab.

We want to be able to run anywhere, basically being more portable.

So we want to kind of stay as a...

You know, this is very interesting; it's kind of like the old React versus Svelte thing,

where one of the old arguments was that the compiler is this really heavy abstraction.

It's basically, you know, "Svelte isn't a library, it's a language," which is

arguably true.

And it's kind of a similar sort of thing where Streamlit to a degree is its own language.

And it has its own semantics outside of Python, whereas Gradio is just a plain kind of Python

script that you can run in any context.

But obviously, the mechanics of the way they work, this will probably change over time.

Like over the past kind of nine months, Gradio has become more similar to Streamlit.

Gradio used to be just this simple, you know, here your input components that are going

to pass some data to a predict function.

And here are the output components that, you know, the output of that predict function

is going to take those outputs and display them.

Whereas now you can build more complex UIs as we've added more APIs.

So kind of becoming more similar to Streamlit.
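That older input-to-predict-to-output contract can be sketched in plain Python. This is an illustrative sketch only, not actual Gradio code (the real API would be something like `gr.Interface(fn=predict, inputs=..., outputs=...)`), and the function names here are made up:

```python
# Illustrative sketch of the classic Gradio contract, not actual Gradio code:
# input components gather values, a predict function maps them to results,
# and output components display whatever comes back.

def predict(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"generated image for: {prompt}"

def run_interface(input_values: list, fn) -> list:
    # The framework collects each input component's value, calls fn,
    # and routes the results to the output components for display.
    return [fn(value) for value in input_values]

outputs = run_interface(["a penguin on a beach"], predict)
print(outputs)  # ['generated image for: a penguin on a beach']
```

The point is the fixed shape: the user only supplies the middle function; the framework owns both ends.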

But the way in which they run is, at the minute, like, Streamlit kind of runs top to bottom.

So it kind of reruns when you change things pretty much completely.

You can cache some things, and it knows when things haven't changed, so it can kind of skip re-running them.


But fundamentally, that's kind of how it works is kind of runs top to bottom with some kind

of caching, memoization sort of tricks.
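That rerun-everything-with-caching model can be mimicked in plain Python. Again, this is an illustrative sketch with hypothetical names, not real Streamlit code (in actual Streamlit the caching would come from a decorator like `st.cache_data`):

```python
import functools

# Illustrative model of Streamlit-style execution: the whole script re-runs
# on every interaction, and expensive steps are memoized so unchanged
# inputs don't recompute.

calls = {"load": 0}

@functools.lru_cache(maxsize=None)
def load_data(source: str) -> list:
    calls["load"] += 1  # counts how often the expensive step actually runs
    return [1, 2, 3]

def run_script(user_input: str) -> str:
    # Top to bottom: every step executes again on each rerun...
    data = load_data("dataset.csv")  # ...but this call is served from cache
    return f"{user_input}: {sum(data)}"

for interaction in ["first", "second", "third"]:
    run_script(interaction)

print(calls["load"])  # the expensive load ran only once across three reruns
```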

I think that will, you know, they've published a roadmap recently that's really interesting

and that will change.

Gradio is a bit more kind of selective and you can say, I only want these kind of things

to update when this predict function runs, but you've got to kind of manually define

those dependencies.

That's something we are also hoping to improve in a future version as well.
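That manual dependency wiring can also be modeled in plain Python. This is an illustrative sketch with hypothetical names, not real Gradio code (in actual Gradio Blocks you'd write something like `button.click(fn, inputs=[...], outputs=[...])`):

```python
# Illustrative model of manually declared dependencies: each event lists
# exactly which component values it reads and which it updates.

state = {"prompt": "a penguin", "image": None, "counter": 0}

def generate(prompt: str) -> str:
    # Stand-in for the predict function that runs on the server.
    return f"image({prompt})"

# The dependency is declared explicitly: read "prompt", write "image".
click_event = {"fn": generate, "inputs": ["prompt"], "outputs": ["image"]}

def fire(event: dict) -> None:
    args = [state[name] for name in event["inputs"]]
    result = event["fn"](*args)
    for name in event["outputs"]:  # only the declared outputs are updated
        state[name] = result

fire(click_event)
print(state)  # "image" changed; "counter" is untouched
```

Nothing outside the declared outputs moves, which is the selective-update behavior described above; the cost is that you have to name those dependencies yourself.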

So they are kind of getting more and more similar.

But that whole custom interpreter, you know, just pure Python remains true and probably

will remain true for both of those tools.

Interesting that the rendering philosophy matches React versus Svelte, you know, rendering

top to bottom versus partial updates.

So what are some cool, like, I guess, Gradio apps?

What would you call them?

Yeah, apps.

I think the most popular ones, so what is now Craiyon was DALL-E Mini, and that was

probably the most popular, even to this day, even with Stable Diffusion kind of taking

over the world.

DALL-E Mini was on a different level.

We were getting like 50 million generations a day. The mass usage of DALL-E

Mini was absurd, you know, machine learning genuinely at scale,

like in ways that, you know, only certain organizations have had to do before.

So that was definitely huge.

Craiyon is now its own product, and that's no longer using

Gradio and Spaces.

But that was the big one.

That was kind of like was very, very popular and that was everywhere on Twitter.

And that had the unique kind of elements of people who knew nothing about machine learning,

people who knew nothing about tech, were having fun and playing with machine learning kind

of, like, en masse for maybe the first time in a long time.

So that was huge.

I guess we should explain what DALL-E Mini actually is.

It's like an image generation tool, right?


So, they're all text-based.

These have become very, very popular, I think partly because they work very well on Twitter.

They're text to image generation models.

So you give it a description and the AI will generate an image based on that.

It will often generate like a selection of images based on that.

And there's a really great Twitter account, Weird Dall-E Generations, which posts

really strange generations.

And DALL-E Mini lent itself well to this, because it was quite stylized.

It had a very specific look to it.

So it often came out with some very bizarre images.

Stable Diffusion does as well, but Stable Diffusion is incredibly realistic.

So in some ways, it kind of has less of a novelty effect than DALL-E Mini.

You want different art styles.

So I have actually been following Weird Dall-E Generations, the Twitter account, because

it's so funny.

There's often a little bit of a social message involved.

And I just saw that they reached a million followers, which is absurd to me.

So one thing I wanted to be very clear about: was Craiyon, or DALL-E Mini, created by Hugging

Face, or by a third party where you were just the host of it?

Yep, third party.

We collaborated with Boris Dayma and the folks that were building it.

And it was open source.

You know, it feels like sometimes this stuff is

made open source almost as a response to OpenAI's very aggressively closed-source nature,

which has kind of changed recently.

Yeah, yeah.

It's ironic.

But you know, so it was this kind of open source model and we collaborated with the

folks that built that model.

It's not an internal Hugging Face tool, in the same way that Stable Diffusion isn't.

Again, the initial demos and the models were hosted on Hugging Face.

But that's just a collaboration with another organization as well.

And then Boris and the team wanted to, you know, try and build a

product around DALL-E Mini.

So they created Craiyon, which is what DALL-E Mini is now.

And that became self-hosted, and they did their own UI and their own thing.

And for those listening, Craiyon is the regular word crayon with "AI" in it.

So C-R-A-I-Y-O-N.

Yeah, Craiyon.


It's just, 50 million generations.

That's a lot of money.

And I'm just trying to go, like, all right, who paid for that?

Like I know there's a queue.


So I don't know the details on this.

We definitely supported it, though.

I'm just asking this just because, like, you know, that's a really good way to host things.

And I'm like, what's the limit on this?

You know, like.


So DALL-E Mini was a pretty custom setup.

So typically when you create a Gradio app, it spawns a FastAPI app

for you, and it gets kind of built into this Docker container and

so on and so forth.

But you can also if you want to, you can call, you know, external APIs and stuff if you're

hosting elsewhere.

And that was the situation with DALL-E Mini.

It had its own custom backend, and, you know, I don't know if only

we sponsored that or if other people were also sponsoring it

as well.

But the backend was definitely a totally custom backend, and the

Gradio app was actually a static app, essentially hosted on Spaces, that called out to this API

that had its own queue.

And you know, it did like a bunch of batching and stuff on its own.

So we didn't actually enable our queue because then it would have meant that they couldn't

use their queue.

So they had their own queue.

It was its own kind of custom set of infrastructure that we provided.

We definitely provided a lot of support around that.

But in terms of who actually footed the bill, I'm not 100 percent sure.

Yeah, I don't mean to pry.

I'm just like, this might be one of those things; I would love to make something like that.

And I just have no idea how much effort or money it takes to run something like that.

You know, that's pretty crazy.

But it's so fun.

It really is so fun.

So, yeah.

Any other kind of applications that you want to shout out that are fun to play around with

or like, how do you get started with Gradio?


Yeah, I mean, you can go and browse our Spaces.

So you can actually go and browse Spaces.

We have lists of trending Spaces.

So if you wanted to explore the different kinds of things that people are doing, then

you can take a look at that.

I think the most popular one at the minute is obviously the Stable Diffusion web UI.

So this is a pretty complex web UI around Stable Diffusion, which is,

again, another text-to-image generation model.

Very, very kind of like high quality image generations.

And the web UI allows you to kind of tweak the parameters.

It's a pretty complex model.

You can make lots of kind of like tweaks and things.

And that has become very, very kind of popular.

You could, you know, create a Google Colab and kind of fork it and take a look at it.

You can run it locally.

And what's interesting about stable diffusion now is, of course, people have started to

fine-tune, you know, with DreamBooth.

So they've started to kind of make their own kind of, you know, you can you can fine tune

on a relatively small set of data to create like different kind of styles and stuff.

And they've become, I guess they've kind of gone viral in a much smaller

sense.

They're a bit more specialist.

It's a little bit more like technical to do that.

But that's been an interesting kind of offshoot.

And you know, interestingly, you were talking earlier about community and ecosystem;

again, like entire mini communities have built up around these things and setting up their

own kind of sites to share all of the different models that they've generated so people can

play around with them.

You know, everyone's frightened.

Is Disney going to sue me because I've trained my model on a lot of Disney images and stuff?

All those kind of questions start opening up.

But yeah, Stable Diffusion became popular, but what's

been impressive is how people have continued to play with it, continued to

find new and interesting use cases.

And even today, it's still pretty constant.

It's still pretty popular with people using it.

And that's definitely worth checking out if you like image generation models anyway.


So the perspective I would give there is, you said it was less popular, but

I'm not sure about that, because the DreamBooth

fine-tuning is where I saw it cross over on YouTube. And the way

I'll frame this is: people are most interested in themselves.

And so there's a qualitative difference between you being able to type any text and it generates

any image and you being able to type any text and it puts you in that image.

Puts your friend, puts your loved one, puts your pet, whatever it is.


Like people are very interested in themselves.

And I think the video that went viral was from Corridor Digital,

where they told a story with their coworkers and inserted themselves in that story.

And it just became that much more interesting just because that's hey, that's a face I work

with every single day.

And some of the more interesting product businesses would be Avatar AI, which is the same thing.

And so what I would qualify there is, there's these foundational models like Stable Diffusion.

Fine-tuning is kind of that last-mile thing that people add on top of that, which requires

a lot of UI work and, yes, additional machine learning training.

But like that actually makes it more useful to the end user who is ultimately interested

in their use case and their own specific constraints and needs.

So, well, I mean, if we're ending off on the Gradio thing, I wanted to ask:

I think, just in the general sense of UI generators, not many people

work on this. Essentially what you're working on is interpreting some intention

of a developer and then creating a UI in a sort of low-code fashion.


And I think that's a very interesting and rare use case of front end frameworks that

we probably stretch Svelte in some interesting ways.

And I thought that there might be an opportunity to comment on some lesser known APIs that

provide you a lot of value that people may not know about.

Yeah, so at a very high level, you can see something like

Gradio as a framework, fundamentally.

And we often talk about this: when we're looking for

new people, we're more interested in people that have experience building frameworks than

experience in machine learning, because, you know, Gradio is very much positioned as a,

you know, make web UIs for your machine learning kind of models, but you can build anything,

you can build like normal web apps, like with it.

And the big difference is instead of like the DOM being our primitive, like it is in

Svelte or React, like we have these kind of components that are our primitives.

So you can't render, you know, this h1 or that, you know, paragraph tag, you can render

our image component, which itself is a very, very complex thing.

So most of our work, especially in the front end is, you know, we spend a lot of time with

relatively low level APIs, maybe with canvas APIs, with audio APIs, with video APIs, with,

you know, who knows in the future with, you know, WebGL is something that will be utilized

in the future as well.

So a lot of our work is spent just doing kind of web API stuff.

In terms of the Svelte side, it can get a bit tricky sometimes to process all of this.

The way it works in Gradio is we have a configuration that gets generated

by the Python backend and gets passed down to the front end. And, you know, if you want

to see it as almost a virtual DOM representation, or a virtual component representation, then

you can.

It's a tree of components that Svelte will then render.
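To make that concrete, here's an illustrative sketch of what such a backend-generated component tree could look like. The schema and node names here are hypothetical, not Gradio's actual config format:

```python
# Hypothetical component-tree config of the kind a Python backend might
# send down for the front end to render (not Gradio's real schema).
config = {
    "type": "column",
    "children": [
        {"type": "image", "props": {"label": "Input"}},
        {"type": "button", "props": {"label": "Run"}},
    ],
}

def render(node: dict, depth: int = 0) -> list:
    # Walk the tree and "instantiate" each node, the way a front end
    # would pick a dynamic component for each config entry.
    props = node.get("props", {})
    lines = [f'{"  " * depth}<{node["type"]} {props}>']
    for child in node.get("children", []):
        lines.extend(render(child, depth + 1))
    return lines

print("\n".join(render(config)))
```

The front end's job is essentially this walk: map each node's `type` to a real component and hand it the `props`.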

And I mean, you know, dynamic components, svelte:component, is your friend in

this case.

Rendering all these components.

But I think the difficult thing is having an architecture that supports kind of managing

all of that state.

So there's all of this state that you have for your components.

And that might be props that are being set at the very kind of like high level that you

need to pass down.

But then those props can potentially change throughout the lifecycle of the application.

You need to ensure that your kind of config reflects that you need to be able to, you

know, if you were to show and hide something, can you rehydrate with the same state?

But you also start to face challenges because, you know, what would be an event handler

in Svelte

is written in Python, and that involves a server round trip, to a degree.

So we have, for example, events on, we have certain like change events on certain components.

So if the value changes, that event is triggered and we have a handler for that and that's

a function written in Python.

So you can almost see it; it's a little bit like, it doesn't work in the same way,

at least not at the minute, as Elixir's LiveView, but it's that same kind of, you know, model.


We do use, we do use web sockets and stuff for various reasons.

And maybe we'll go more granular with updates in the way that kind of live view does in

the future.

But it's kind of that process where you manage some state up here, make some changes,

and you want to pass that down to a component; and that's, you know, request-response

cycles.

So you do have a problem though, when say you've got some internal state for a component,

if you're going to dismount and remount that component, you're going to lose the internal

representation of state, which you might need. Say, if you're working with canvas, you

might want to send just images up to your API to do something with, to run a prediction


So internally you might have like a bunch of paths, you might have some, you know, mathematical

representations of like vectors that you're drawing onto a canvas and stuff.

And you can potentially lose all of that internal state.

So having a way to serialize both the state that your API cares about and the state that

your front end cares about is, you know, really important.
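One way to picture that dual serialization is the following sketch. The field names are hypothetical, not Gradio's actual wire format; the point is just that both kinds of state survive a round trip:

```python
import json

# Illustrative sketch for a canvas-like component: "value" is what the API
# cares about, while "internal" is what the front end needs to restore the
# canvas (e.g. raw vector paths) after a dismount/remount.

def serialize(component_state: dict) -> str:
    return json.dumps({
        "value": component_state["image"],                 # API-facing state
        "internal": {"paths": component_state["paths"]},   # front-end state
    })

def rehydrate(payload: str) -> dict:
    data = json.loads(payload)
    return {"image": data["value"], "paths": data["internal"]["paths"]}

before = {"image": "<png bytes>", "paths": [[0, 0], [10, 12]]}
after = rehydrate(serialize(before))
assert after == before  # nothing is lost across the dismount/remount
```

A naive scheme that only serializes the API-facing value would pass this round trip for the image but silently drop the paths, which is exactly the kind of lost internal state described above.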

And we've had issues in the past where maybe our state

mechanisms were a little bit too naive, and in some ways they still are, and

they need to be improved as you want to do more with the framework.

And maybe it's performance optimization.

So you're only kind of sending partial updates and stuff.

Maybe it is the idea of mounting and dismounting, but it means you need to think kind of carefully

about how your front end and your backend are talking to one another,

because we have this hard requirement that users write Python.

And you know, there are escape hatches for JavaScript, but that means anything that

you potentially want to do, like hiding and showing a button, you should be able to do in Python.


So that's, you know, it poses interesting challenges when you need to work, kind of

try and work across languages.

And it also introduces UX challenges.

Like if we want people to be able to show and hide a button in Python, and that involves

a server round trip, is that going to deliver the performance that we need?

Are there some new APIs we can have to support this use case?

Some shorthands that can, you know, make the UX faster.

So it definitely has a number of challenges, and a lot of hard

constraints, that you simply don't have to face in, you

know, building a typical product, for example.



That's a great explanation.

Thank you.

All right.

So yeah, and you're still hiring, right?

Yeah, we are hiring.

So we are looking for a front end engineer.

So it's advertised on our Hugging Face website as a front end engineer, fully remote, the

whole company's remote.

And you know, we're looking for someone, you know, a Svelte person would be good, but really

someone who really understands, you know, has some experience with those core web APIs

I was talking about; someone who's comfortable on MDN and implementing things at a relatively

low level is ideal. That's what we're looking for.

Yeah, that makes sense.

All right.

So I think that's it for all of the topics, unless you have something else that you want

to add?

Well, I was going to mention SvelteKit.

You have some SvelteKit apps.

For those listening and who are not on Twitter, there are eight issues left, apparently, on

SvelteKit 1.0's roadmap, which means maybe soon.

We don't know.

But I just wanted to leave some opportunity for people to comment on the maturity of SvelteKit

and how people are excited about it.


I'm sure we'll be getting SvelteKit pretty soon.

I feel like it's going to happen.

It's going to happen.

Feel it.

All right.

We're getting the side eye.

A lot of silence from the two maintainers on this call.

We're definitely in the last stretch, right?

We're feeling kind of positive.

Obviously, there's still a lot to do.

You know, we feel very strongly that the other stories,

the docs and the tutorials and all of that sort of stuff, need to be there when

we release.

It's not 1.0 without a really great adoption story, whether it's how you integrate, how

you deploy, the whole end-to-end.

The tutorials and all of that sort of stuff needs to be there, partly because you're nothing

without your documentation, but also partly because you only get one chance to launch.

If you screw it up, you don't get another chance, especially with the hype that's been

building around SvelteKit for the past...

Oh, I don't even know how long it's been.

So we need to make sure that it's...

Yeah, it's been years, literally been years.

But we need to make sure that everything's in the right place when we launch.

So we're taking care of that.

So there's no timescale, but it's not...

Things are going well.



What a tease.

All right.

I'm seeing one issue being merged right now, so hopefully it'll be seven.

And there's podcasting.



So let's move on to the fun part of the episodes.

No, I'm just kidding.

Unpopular opinions.

Do you guys have any?

Let's see.

Oh, we did tell Penguin to prepare one, but we'll leave him to the end.

But Brittany, you want to go first?

Kevin ruined mine.


Mine was just that Svelte will stay on top of React based on the NPM surveys, or download

numbers that were...

I was just giving people shit.

It's fine.

I think Sean stole mine.

I mean, didn't know it was mine to be fair.

And then he's changed it.

And it's even worse now because I have another one.

No, go for it.

Sean, do you want to go with...

Because you said it first anyway.

Do you want to go with your original one as well?

Well, no.

So I think last week we had this conversation about is Twitter dying or will it last?

And I think my thoughts over the past week have been Twitter as you knew it is already


And primarily, there's just a lot of discussion about how Twitter will run, the uptime and so on.


The community nature of Twitter seems to be permanently gone.

Let's just say the ownership changes again, and it goes back to normal.

I think just the illusion that this is a space that you can invest in permanently is gone


And people are definitely diversifying.

And I think definitely, Mastodon usage is picking up.

I've tried it out recently.

It seems to be more developed than I thought.

And in particular, I think the data science and developer community are converging on

two servers.

I think, Penguin, you're there on Sigmoid.

I just created an account as well.

And then the general tech community, I think it's on Hachyderm.

And I think that's going to be it going forward.

It's just going to be diversification of the tech community from Twitter to Twitter plus Mastodon.

Obviously, with other existing like TikTok and the text-based ones, people are definitely


Even as of last week, I would have agreed with you.

But after seeing the movement, I feel like it's just a small percentage of people.

It seems like my circle is kind of moving towards Mastodon.

I don't know if the bigger, greater Twitter ecosystem of developers is really moving that much.


It's enough that people are clicking the backup button on Twitter.

The trust is gone.

You don't default to believing that everything that you have on there will stay up, will

be accessible at all times.

You don't default to believing that there will be decent moderation on the platform.

And you don't believe that the timeline will not be taken over by some main character energy

that is completely distracting and irrelevant to the things that you actually want to spend

time on.

For that reason, I think there's just a number of people who are just permanently off Twitter.

There are a number of people who are not permanently off Twitter, but at least significantly diversified


So as a content creator, that matters.

I put a lot of my public thinking online for me to search later on.

It's specifically for me.

I have to start thinking about moving somewhere else.

I just cannot risk putting it on there anymore.

That's the conversations that we're having a lot too.

I'm on Mastodon, but companies are starting to think about coming to Mastodon.

I'm like, Mastodon is not a place really for companies right now.

I think they would get kicked off basically.

We should share all of our Mastodons on the show notes too, just in case anybody's interested.

Well, hang on.

So Anthony is against this.

So you go ahead.

Well, no, so I'm confused now because everyone's read out my unpopular opinion and I didn't

even say it.

And that was the one that I took because Sean saw my first one and now Sean's first one

that I had originally is now disappeared entirely.

So I'm so confused, but I will respond.

My only thing is I just don't think Mastodon is usable for anyone outside of tech really.

I think it's just, you know, regular people have struggles

handling a username and password.

And I don't think that Mastodon is really accessible for them.

So it might be where tech goes, but I think that'll be

it, which is probably what you're saying, really.

I just don't think it's got the usability and even the name is not that appealing.

It's, you know, some kind of random name; I don't actually know what Mastodon

relates to, but it's something sci-fi, I assume.

And yeah, no, it's a big elephant.

Is it a big elephant?

It sounds sci-fi to me.

My elephant knowledge is not great.

So yeah, sci-fi elephant.

They're extinct, right?

They're extinct precursors to elephants.

Yeah, yeah.

Out of the five Zords, Mastodon was the Black Ranger's Zord.

Wait, hang on a minute.

This is not sci-fi.

Are we sure this is not sci-fi?

Megazord, are you sure?


I don't know, I think you're all trolling me.

Share your other unpopular opinion because it needs to be said.

It needs to be said.

Yeah, it's kind of Sean's and mine.

It sounds like it's a few people's, and it's not unpopular at all, but MDsveX should be

part of Svelte. And the only reason I say that is because, you know, setting

up MDsveX, you've got to follow some instructions to stick it in there.

And I feel like I would actually want projects to be able to just add a folder, add a file

with, let's say, a more acceptable file name, Penguin, .mds or something.

And then it just worked out of the box.

I think that would be something a language should have, because when everyone

mentions JSX on Twitter, they always mention

MDX in the same sentence, right?

It's almost like they go hand in hand, and we don't have JSX for obvious reasons, but

I think if we had MDsveX, or whatever it's going to be, you know, as part of the core,

I think it would be a huge win.

Documentation is important.

I think that's absolutely true.

And I think that would put us a little bit on par with Astro too, because Astro has that

MDX support out of the box.

And even though you can use Svelte in it, you can kind of use Svelte along with MDX,

but it's, I think it should be integrated at this point.

I don't think it needs to be a core package for that to be the case, you know, to have

a really easy experience with SvelteKit. You know, it could easily be pre-configured,

but I'm probably the barrier to it becoming core.

Like I'm probably the only person who thinks that that shouldn't be the case.

So that's interesting that you think that it shouldn't be the case.

With the svelte-add command, though, it is very easy to set up.

So I guess I will say that it is very easy to add in and you don't necessarily need it

to be, but it would be nice to have just access to it out of the box, I guess.

Megazord commands you.

How about that?

Yeah, I make it required in all my templates.

I think it's just a better way to author most pages by default, unless you really, really

need to write HTML, then go ahead and do that.

Otherwise, most people are better off choosing MDsveX.

I mean, I also configure it with a lot of remark plugins and stuff, so there

will never be zero-config MDsveX.

It's just a question of how much config and how much setup.

I wonder that too, like if we could get like actually a package to render the markdown

built into it.

Also, that would be nice.

Oh, OK.

Like you said, you use remark and stuff, and you have to add more configuration to it.


Yeah, that's true.



I'll go quickly on my one.

So I think every developer needs to be AI literate.

I think this is not a fad.

I think everyone needs to know how to leverage AI as part of their jobs because it will increasingly

be part of knowledge work.

And yeah, that's the long and short of it.

I picked Copilot last time, but this is just more general, like play around with these

things because you will be using more and more of them over the next 10 years.

Absolutely agree.

I agree.


And now I have something else to learn.

There's always something else to learn, Brittany.


So I have a repo going.

If you want to learn along with me, I have a repo for prompt engineering, which

is what people are calling it.

Let's call it, for people who are unaware, the progression of software 1.0, 2.0, 3.0.

So 1.0 is us writing code manually.

Software 2.0 is data defining code from specific machine learning cases.

3.0 is using large foundational models and understanding how to interface with them, but

not training your own models.

So all of those are levels of AI which I think developers need to understand, because

these are new forms of software.

3.0 is cryptography.

Come on, Sean.

I think there's a space for Penguin, if you have an unpopular opinion you want to share.

So leading off, I have two, just riffing off some of the things that have been said.


The first one is that if the destruction of Twitter, whatever happens, does lead to fragmentation

of communities, that's probably, I think, a good thing.

And it's one of the best things that could happen to the Internet is if these communities

started to build up in a space that made sense for them rather than defaulting to some generic

platform that might not actually be the best environment for them to actually communicate.

Maybe artists going back to a more art specific platform, maybe engineers and tech folk going

to a platform that better facilitates communication about tech in whatever way that makes sense.

And all the celebrities can go to TikTok or something.

You know, maybe platforms where rage is not the most valuable currency wouldn't be a

bad thing for discourse online.

But yeah, my second one is basically leading off what you've just said, Sean, is that I

think that AI companies are not doing a very good job of reaching outside of the AI bubble,

not reaching outside of existing ML ecosystems to software engineers and figuring out paths for them.

And I think there's an element of meeting in the middle there, like you say: software

engineers need to do the work to understand what's available, what the ecosystem looks

like, what the you know, what the current state of the art is and you know, what's happening,

it moves incredibly quickly, you know, month by month, you don't really know what's going

to be going on.

But on the same token, all of these AI ML companies building these tools and services

and platforms need to do a better job of reaching out to software engineers, which is a really

good thing.

There is, of course, a colossal market that absolutely dwarfs the ML

ecosystem; there are still more software engineers than there are ML folk.

And I think there needs to be more work to reach out to those people, which is obviously

a conversation that we at Hugging Face have all the time, but I'm sure it's a conversation

other AI companies and startups and stuff are having, but it has to happen.

And whoever does the best job of that is going to win at the end of the day. If they

can bring those people in early, before they've, you know, become super literate in AI,

they will kind of win out in the end.

All right.


On your first point, I just wanted to say the only thing that I worry about with that

diversification is I do think it's good that communities are breaking up.

I just wonder if it's going to make them kind of shallow, stuck in their bubbles, not getting

outside viewpoints. For politics especially, that worries me. I guess it doesn't

spread outside the bubble, but it worries me that people aren't seeing other people's

viewpoints and things.

That's a good point.

All right.

So I don't have one, an unpopular opinion.

Well, I have many, but not today.

I don't have time.

Yeah, picks.

What do you guys have?


Mine is just a TV show.

I've been binging and I hate that it comes out once a week because it's so good.

I want to actually binge it and just watch the whole thing.

It's The Peripheral.

Double agreed.

And I just got the book, so I'm going to download it and put it on my Kindle and spoil the ending

for myself because I need to read the book now.

There's a book?

I need to get that as well.

It's a book series, I think.



I know, right?



It's really nice.

I've been binge watching.

It's a good show.

The one sketch that actually was really properly cracking me up was the last one in that film.

So if you decide it's not for you and skip the rest, you should watch the last sketch

where they're sat at a restaurant because I think that one was great.

If it's on YouTube, you can try and link it up.

Well, mine's very simple.

One of the more interesting companies other than Hugging Face would be Runway ML.

I think they recently raised their Series B, and it's essentially a sort of video AI toolkit.

If you're thinking about After Effects or some other kind of video editing tool,

they're basically applying every single advance in AI: they'll try to reproduce

it and offer it as a product you can use to edit your videos.

And their demos are super cool.

So if you haven't seen this, I linked a video in the show notes.

All right.

Penguin, do you have a pick, sir?

A pick could be a movie, could be a film, could be a...

Something you enjoyed lately.

Something you like.

The thing is we like to spring these on guests without any preparation whatsoever.


I don't know why we like that.

It's quite bad.


We don't really like that.


So there's a...

I'm going to just go with a really local choice, but there's a restaurant here in Amsterdam

called TerraZen and it does this like Japanese kind of Caribbean fusion kind of food and

it's incredible.

So if you're ever in Amsterdam, go to TerraZen and eat at this restaurant.

That's my proper pick.

I swear, you're the second person that I've heard this from in like two weeks, and I've never

been to Amsterdam and this restaurant must be amazing because like I now need to go to

Amsterdam just to go to this restaurant.

It's really good.

It was featured in a YouTube video a couple of weeks ago or something actually.

So I think it's become more popular recently.

We even had to, it's like a tiny, it's down some random side street.

It's a tiny little place and we have to wait at like lunchtime to get a table.

So it's definitely worth a visit.

Is it Japanese Rastafarian?

It's kind of hard to place.

It's very fusion-y.

The menu is like pretty weird, but they do have like...

There are Japanese people who work there, or I don't know if one of the founders is Japanese

and one is Rastafarian or from the Caribbean or something.

But obviously it's a very strange kind of fusion, but the food is very, very good.


So, come to Amsterdam.

It says we serve Caribbean and Japanese soul food according to Rastafarian principles.

What the hell are Rastafarian principles?

I can't speak for that.

I can't speak for that.

It's great.


All right.

I think that's it for this week and thanks for joining us, Penguin.

It was a long time coming.

Thank you for having me.

It's been a pleasure.

We'll see you in a bit when we do an episode on MDsveX maybe.


Should be fun.

And thanks everyone for listening and we'll see you next week.

All right.



See you next week.