The Deep Stack Podcast

In this episode of the Deep Stack Podcast, hosts Raph and Lio dive into the intricacies of the Defang stack. They discuss Lio's past project Noun.ly as a stepping stone, detailing the challenges of URL shortening and spam prevention. The conversation then shifts to Defang, a tool designed to ease cloud deployments, particularly on AWS. Lio explains how Defang automates infrastructure provisioning using technologies like Pulumi, Docker Compose, ECS, and VPCs. They also talk about future enhancements, including using AI for debugging and potential support for other cloud providers.


Technologies:

- AWS
- Pulumi
- Docker Compose
- ECS
- VPC
- Project Honeypot
- gRPC
- CloudFormation
- DynamoDB
- Kaniko


Links:

🛠️ Check out Defang:
https://defang.io/

💻 Check out EC:
https://goec.io

✨ Check out the Chewy Stack:
https://gochewy.io

If you want to talk about your stack on the pod, contact us at hello@deepstackpodcast.com

Creators & Guests

Host
Lionello Lunesu
Wanderer. CEO of Defang.io, building a better Internet.
Host
Raphaël Titsworth-Morin
Builder of the Chewy Stack. Co-founder of Éphémère Creative. Trying to do some good in the world with tech and design.

What is The Deep Stack Podcast?

A podcast where we explore everything in a modern (and not so modern) application stack. Join us as we talk about everything from modern TypeScript to old-school Dlang, Postgres to SQL Server, Heroku to AWS and everything in between, as we break down different apps, their stacks, and architectures.

Raph: Hey, friends, and welcome to the DeepStack podcast.

I am your host, Raph.

Lio: I am your host Lio.

Raph: And this is the DeepStack podcast,
where we explore everything in a modern

or not so modern application stack.

Today, we are going to be talking to Lio about the Defang stack: what powers Defang?

So yeah, I feel like I already have a reasonably good sense of how all of this works, but I'm super curious to really get into the guts of this thing.

Lio: Yeah.

So, in a previous episode, we talked about Noun.ly, which is a project I did more than 10 years ago.

And it's a very simple URL shortener.

So it has a key value store. Actually, the URL shortening itself is trivial, right?

In the Noun.ly case, we pick a noun from a list and we put that in the key value store, and those nouns remain mapped for about 24 hours; then they get released and somebody else can get that noun.

So there's a bunch of logic around which noun to pick. It's deterministic: usually it tries to pick a noun from the URL.

You'll see, if you share like a YouTube view link, you often get "view" as the noun, if that is available.

So there's a little bit
of code around that.

The key value store
integration is pretty trivial.

The key is literally the noun, so it's pretty straightforward. But then 80 percent of the code in the Noun.ly case is about preventing spam.

Of course, anytime you have a page with an edit box or text box or form, you can just count on bots putting crap in it.

And so in the Noun.ly case, that was integrating with Project Honeypot, et cetera.

Anyway, suffice to say, it's a simple project, and I've always wished for this kind of project to be a weekend kind of thing.

One day, a weekend: you spin it up, you put it out there, you buy a domain, call it a day, right?

And that just wasn't the case.

And so Defang is modeled on all those experiences: not just Noun.ly, but also the work that I did at Microsoft.

I've worked at Workday, I've
worked at other startups.

And so each time I think: why is so much time spent on deploying, and why is everybody trying to figure out the same things?

So what can we do to let people focus on their application instead of on how to deploy it?

That's what Defang is based on.

So in essence, if people don't know what it is: Defang is mostly a CLI tool that helps you deploy your application, and the vision is that it helps you deploy your application to any cloud.

In practice, right now, we've started with AWS, so it takes your application and it deploys it to AWS.

That could be your AWS account, or our AWS account, which we call the Playground.

And so in essence, what does the tool do?

Infrastructure as code.

Everybody knows it.

It has been a great improvement
over ClickOps, but you still

need to write all that code.

So all those clicks that you used to do, they still translate almost one to one to the code that you have to write now.

And so in the AWS case, that means if you want to make a VPC, you still have a lot to deal with, unless you go with the defaults, which is almost never a good idea for production.

If you don't want to go with the defaults,
you end up dealing with subnets and

routing tables and internet gateways,
NAT gateways, all of that stuff.

Infrastructure as code made that easier, but it's still hard.

You're still writing code, and now you're writing very specialized code, and it's also very different from platform to platform.

So in essence, Defang
writes that code for you.

Raph: I think that sounds great.

Lio: Yeah.

And I don't mean that in the
large language model sense, right?

So this is just, uh, rule based.

You have a description of your application; in the current version, that is a Docker Compose file.

We use the same file format. In fact, we use the same Go library that they use to parse those files.

Based on what you have in your compose file, we'll write AWS code. We're working on DigitalOcean, and whatever's next will depend on usage, or what people ask for.

You have a CLI tool that looks at your
compose file and creates infrastructure as

code based on the target that you go to.

One level down, we are using Pulumi.

So the specific infrastructure
code that gets written is Pulumi.

And so it'll be fun, maybe, to go chronologically through what the tool does.

Raph: Yeah.

What if we do it this way?

Okay.

So I'm a Defang user.

I download the CLI and I run defang login.
Defang login.

Let's just start with that one.

What happens?

Lio: Yeah.

Raph: When I log in?

Lio: So we do have a login.

Even though our major focus is
bring your own cloud, right?

This is your project on your computer
going to be deployed to your AWS account.

So even though that is our major
focus, we still have our login.

Part of that is because of the
generative AI features that we have.

There's some analytics that we do.

We do have a portal, even though right now that one doesn't work for your bring-your-own-cloud projects, but eventually it will.

So there is a login step.

Our login right now is solely GitHub. So: social login, with a GitHub account.

I love auth.

So anything auth I'm happy to dig in.

So in the defang login case, yeah, you'll get prompted to open the browser, or the CLI can open the browser for you.

That's where, the first time, you'll be asked: do you want this app to log in as you, do you want this app to have permission?

There aren't really any permissions. We only need your GitHub account: the name, and which organizations you're a member of.

We use that for project access. So if you use the Defang CLI in a GitHub Action, that'll decide which projects you can deploy with the Defang CLI in the action.

So your browser opens the page, and after you consent, yes, it redirects to a localhost page, which is the one that the CLI is serving and waiting on.

And then the CLI sends the authorization code to our backend.

And then that's how we know: okay, this person was indeed logged into GitHub as whatever your handle is; in my case, it's my name.

Raph: That backend?

Lio: Our backend.

It's running on our Defang domain.

And that is our API endpoints, right?

So it's implemented with gRPC, so it has a bunch of gRPC methods.

And so the login method is one of them.

There's a tail method to see what your service has been doing.

There's the deploy methods for the
playground case I mentioned earlier.

If you don't want to deal
with your own AWS account, you

can deploy to our playground.

That's actually how we started.

We started with: oh, let's make it super easy so people don't have to deal with AWS. They only deal with the data; they only deal with their project.

There are some good reasons why we are
slowly moving away from that model.

I can name a few.

There were some potential customers
that wanted to do audits, right?

They say: our app is running, that's cool, but we want to pass SOC 2 audits.

So if that's running in our account,
now we have to pass those audits.

Raph: Yeah.

Lio: Because we're using AWS ourselves, it has been pretty trivial for us to flip that: we use exactly the same code, but instead of deploying to our account, you deploy to your own account.

And in that latter case, our
server has not much to do, right?

I call it our backend.

Our backend basically
just checks your login.

Again, that is still relevant for like
the portal and a bunch of other cases.

Eventually, maybe there could
be a flow where our backend is

completely out of the picture.

I can imagine that there's
a scenario for that.

that seems to be a reasonable use case,
but right now it's still in there.

So our backend is, I think, 90 percent deployed with Defang right now, not everything yet. It'd be good to get there.

Yeah, we do try to eat our own dog food.

Our own backend is behind the curve on some of those features that we've been adding.

We need to make some effort to get Defang itself completely, a hundred percent, deployed with Defang.

But that is definitely
a goal that we have.

Yeah.

Raph: Yeah.

So does the backend right now use a key value store or anything like that? Like, how do you... I know as a user that you have to accept the terms of service, but I think that's the only thing that's stored anywhere.

Lio: That's the only state we have.

Raph: Legal things need to be in place.

Lio: So thanks for reminding me.

That is the main reason
we have a defang login.

We will ask you, the first time: hey, do you approve of our terms of service?

Partly that is because we're in beta.

We don't want people to think that their app will be running forever for free, forever with no downtime, et cetera. That is definitely not a promise we can keep right now.

And so to track the acceptance of the terms of service, our backend has this login call.

One of the RPCs that I mentioned is indeed the acceptance of the terms of service, and right now that is using DynamoDB on our side.

So our backend talks to Dynamo and stores it there. Your tenant name is your GitHub account, with the terms-of-service Boolean.

And of course, if you're running
in our account, you also have a

list of services you've deployed.

There's a bunch of other stuff, but if you are bringing your own cloud, you'll just have that Boolean there: whether you have agreed to the terms of service.
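
So the per-tenant record is tiny. A hypothetical DynamoDB item might look like this; the attribute names are illustrative, not the real schema:

```json
{
  "tenant": "my-github-handle",
  "tosAccepted": true
}
```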

Raph: Cool.

Lio: So that is step one, the defang login. Even defang login, by the way, I think most people don't run explicitly, because the CLI is smart enough to prompt you: if you happen to be logged out or your token has expired, it'll always prompt you to log in interactively.

So there are not many reasons why you would explicitly type that command.

And so the next thing would
be defang compose up, right?

So that is if you already have
a compose project, your next

step would be defang compose up.
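
So the whole happy path described so far is just two commands, and the first is usually implicit:

```sh
defang login        # optional: the CLI prompts you if you're logged out
defang compose up   # reads your Compose file and deploys it to the target cloud
```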

And, like I said, we try to
be Docker compatible when

it comes to the file format.

And so we'll look at your Compose file; anything in the Compose file that is specific to Docker, or specific to running on your local machine, that we cannot support or haven't implemented yet, we'll issue warnings for.

But then that Compose file gets
parsed and that goes to the backend.

And so here we have two different paths.

Depending on whether you go to our account
or you go to your own AWS account, right?

In our account, the next step
that happens is done by that

backend that I mentioned earlier.

But apart from that, it's very similar.

In the BYOC case, what is done by our backend is done by the CLI.

So the CLI actually parses the compose file and kicks off what we call the CD function, right?

So we actually have a function. It's a serverless function. It's not a Lambda; we use ECS, an ECS task, in the case of AWS. And this CD function, that's the one that is actually running Pulumi and writing that infrastructure as code for your project,

Raph: Cool.

Lio: But in both of those cases it's very similar. It's exactly the same code that runs, right?

In the Playground case, it's our backend that kicks off the CD function with that compose file, which kicks off Pulumi. In the bring-your-own-cloud case, it actually starts that CD task in your own account.

And so to do that, there's a small CloudFormation stack that we use purely for bootstrapping.

So I, myself, I'm not a
big fan of CloudFormation.

I've always thought it's so
easy to get into trouble.

It doesn't do drift correction: it can detect drift, but it can't fix it, this kind of thing.

Rollbacks are super painful.

Many times you have to delete your stack

Raph: Yeah.

Lio: to fix it.

I found it very hard to work with.

So we try to keep that to a minimum.

So we have this CloudFormation stack
that only has I think seven resources

in it or something, bare minimum.

And that is: an ECS cluster, a task definition, a security group, an ECR repository, the container repository, something like that. The bare minimum to run a task.
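
As a rough illustration, a bootstrap template along those lines might look like the following. This is a guess at the shape, not Defang's actual template; all names and properties are placeholders.

```yaml
Resources:
  CdCluster:
    Type: AWS::ECS::Cluster
  CdRepository:
    Type: AWS::ECR::Repository
  CdTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "1024"
      Memory: "2048"
      ContainerDefinitions:
        - Name: cd
          Image: example/defang-cd:latest # placeholder for the Pulumi CD image
  CdSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Egress-only group for the CD task
```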

Raph: Can we pause for a second there?

I'm curious.

Why does CloudFormation need to, why do
you need to use CloudFormation for that?

Could you use Pulumi?

Lio: Yeah. If you have local Pulumi, if you have Pulumi on your machine,

Raph: That's...

Lio: we'd probably use Pulumi.

Raph: I see.

Yeah, so you can't expect
someone to have local Pulumi,

Lio: Yeah.

Raph: CloudFormation you can use.

Lio: Yeah.

So our CLI invokes the AWS CloudFormation SDK, the endpoints, to create that initial stack.

We call that the CD stack.

You'll see it in your AWS dashboard. It's called Defang CD, and that thing doesn't change. Shouldn't change.

It doesn't depend on which project; you can even have your whole team running the Defang CLI, and they'll all be using the same CloudFormation stack, because there's no state in there.

It's purely used to run your actual CD.

Raph: Okay.

Lio: And then we kick off the CD container, which is a

container with Pulumi in it.

It has Pulumi and it has
our own TypeScript code.

That is the code that basically generates your whole cluster, all the stuff that you need for your project.

Again, identical in both
cases, that is the same image.

Raph: Yeah.

Lio: There are no dependencies on your local machine.

You really need nothing
else but the Defang CLI.

It's the Defang CLI that talks to the AWS SDK (not the CDK, the SDK).

It creates that little CloudFormation stack and runs the container with your project YAML, your compose YAML, which then generates the Pulumi code for your actual project and deploys it to that same account.

And any images that you are building from your compose file: those builds are also all kicked off by that CD task that we started.

Raph: Yeah.

So my project just gets uploaded and
everything gets uploaded and then

all of the build happens in my cloud.

A question: you say it generates the code. Do you actually have templates and spit out code, or is it just Pulumi logic, like you have logic in a Pulumi program?

Lio: So right now it's the latter, right?

So right now, that YAML structure comes in

Raph: Yeah.

Lio: and we iterate through it. So you can imagine there's a loop, something like: for each service in services, which is one of the top level constructs in the Compose YAML.

And then we do new Pulumi stuff, right? We create Pulumi resources, depending on what you're doing.

If you need an image, then we create an image builder resource. We actually use Kaniko, so we kick off a Kaniko build inside this loop while we iterate those services.
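
A heavily simplified sketch of that loop might look like this; the types and resource arguments are illustrative, not Defang's actual CD code.

```typescript
// Illustrative only: iterate the services parsed from the Compose file and
// declare Pulumi resources for each one.
import * as aws from "@pulumi/aws";

interface ComposeService {
  name: string;
  image?: string; // prebuilt image, if any
  build?: string; // build context, if an image must be built
}

// In reality this comes from parsing compose.yaml with the Compose Go library.
const services: ComposeService[] = [{ name: "web", build: "./web" }];

for (const svc of services) {
  if (svc.build) {
    // Defang kicks off a Kaniko build in the target account here and
    // pushes the result to the project's container registry.
  }
  new aws.ecs.Service(svc.name, {
    // cluster, task definition, networking, etc. are all derived from the
    // Compose file; omitted in this sketch.
  });
}
```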

So the code is not generated per se; there are no source files that you can look at.

I think it would be super cool, totally something I'd love to look into, to have that step, for multiple reasons, right?

It's cool for help, for debugging: you can actually look at, hey, this is what happened.

And if there's anything weird, yeah, you can see it in those generated files.

I can imagine it's also attractive for people that ask: how do I know I can use Defang 10 years from now, right? Yeah, you have those files.

Although Defang is open source, so you wouldn't per se need those generated files if you can regenerate them the other way.
But I think it will be cool.

Totally something I'd look into.

Raph: So the build is running.

What's it generating?

What am I, what's actually
being created in my account?

Lio: So right now the translation
from YAML, from Compose YAML to

cloud resources is fairly static.

That will not stay like that for long. By that I mean: a service translates to an ECS service.

Raph: Yeah.

Lio: And that is one-to-one. There are a few exceptions, but right now it's pretty much one-to-one.

So a service in your compose file becomes a service on ECS, the Elastic Container Service.

Eventually it will be decoupled; that is the vision, that we can decouple that.

We already started this: last week we released a new feature where we have managed Redis.

So if you declare a Redis container or Redis service in your compose file, then the naive way, and it works, you can do it, it's just not great, is that it will create a Redis service in your AWS account.

But of course, the problem with that is, if the service gets restarted, there's no persistence: you'll lose your state, right?

Which in the Redis cache case might be fine, but it's probably not what people expect.

And so now we have this decoupling
happening, where, what if I want my

Redis service in my Compose file?

Because, it makes sense, I have
all the dependencies, this Compose

file becomes a logical description
of everything my app needs.

I can test it locally, docker compose
up, locally, everything is connected.

Now I want to deploy to the cloud,
but I don't want to be in the

business of hosting Redis or keeping
it up and running, maintaining

it, updating it, et cetera.

And so we have this custom attribute in the compose file, x-defang-redis, that actually says: hey, this is a Redis service, but once you deploy, I want a managed Redis.
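
In the Compose file that looks roughly like this; treat the exact spelling of the extension key as my best reading of what's described:

```yaml
services:
  cache:
    image: redis:7
    x-defang-redis: true # on deploy, provision a managed Redis instead of
                         # running the redis image as a plain ECS service
```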

And so now our Pulumi code says: hey, wait a minute, this service is running a Redis image.

Instead of actually running Redis as a
service, which would work, but with all

the downsides mentioned earlier, we're
going to deploy a managed Redis for you.

And your app doesn't even know, because all the CNAMEs, everything, gets hooked up. Your app connects to it like it always would; security groups get created.

So your app can connect
to your Redis cluster.

All of that gets created by the
Pulumi code and your app is up and

running, but now it has a managed
Redis service, whereas locally,

Docker Compose up, you would have
your Redis running in a container.

And this is where that decoupling starts: the description, which basically started as a physical description, gets more and more logical, because it's no longer a clear mapping between what's running in the cloud and what was declared.

Raph: Yeah.

Out of curiosity.

Okay.

So to me, one of the things
that I find most frustrating and

confusing about dealing with any
cloud platform is the networking.

I don't like networking.

I get very lost with
networking very easily.

So what do you actually provision to make sure that I can connect my services properly?

Like how does that all work?

What are the different resources?

Lio: Yeah, there are so many that I might not be able to recite them all from the top of my head, but yeah.

A Compose file is a project. You have a top level Compose file; that's your project.

And so for each project, you'll get a VPC.

Raph: Okay.

Lio: So that is pretty standard.

Raph: Yeah.

Lio: And then it depends
on the services, right?

If one or more of your services have declared ingress ports: an ingress port would mean a load balancer in AWS. So now your VPC gets a load balancer, those ports will get target groups, all of those things get created, you get listeners, right?

Because that's how the AWS load balancer works: the different ports that you're listening on become AWS listeners.

We'll provision certificates. In your compose file, you can even say this service has a domain name, right? So it's literally domainname, colon, something.

For each of those domain names,
we'll provision certificates.
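
In Compose terms, that's the domainname key on a service; the values here are placeholders:

```yaml
services:
  web:
    build: .
    domainname: app.example.com # Defang provisions a certificate for this
```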

That gets complicated very quickly, but suffice to say there are two major code paths: is this a domain managed by Route 53 or not?

If it's a Route 53 domain, which we can check during the deployment, right? During the deployment, we can say: hey, is this domain that you are using already in your Route 53? If it is, we'll use Route 53 and ACM to issue an SSL certificate.

Raph: Cool.

Lio: So if it's not, now it's tricky, because now we cannot trivially use ACM to issue a certificate for you. For that, we'll actually provision an ACME workflow.

So those are the two main paths. There are a few more details, but if this domain name that you use in your compose file is not managed by Route 53, then we cannot do the automatic verification, and we'll use ACME, Let's Encrypt, instead.

And the CLI will inform you; the CLI actually has a defang cert generate command to finish up that generation.

It's something you'd invoke after you set up your DNS. So: this is not a Route 53 domain, please create the following CNAME. You go to your domain registrar and you put in the right information.

Then the Defang CLI, defang cert generate, will finish up that ACME workflow and you'll get an ACME certificate instead.

Again, we're still in the load balancer
kind of domain of the deployment.

We also look at the networks of each service. In the compose file, a service can declare which networks it's attached to, and that determines the connectivity of that service.

Eventually we might have
a bit more options there.

Right now it's just public or private.

So now we'll say, okay, this is a
private service, no public IP addresses.

And so we'll need a NAT gateway
to communicate to the internet,

to the container registry.

At the very least, you'll need to be able to talk to your container registry.

Raph: Yeah.

Lio: And so again, NAT gateways
only get created if they are needed.

All of this is to avoid incurring unnecessary costs.

We'll create an ECS cluster.

We'll create a container registry.

We create container registry
pull through caches.

Again, this is also to minimize costs.

And Amazon is notoriously expensive when it comes to bandwidth. So to minimize all these cross-AZ, cross-data-center bandwidth costs, we'll make sure that everything gets cached in your account.

So those images that are being built
are pushed to your container registry.

Any other dependencies will be cached in your container registry.

All of that gets set up automatically.

We'll create security groups, right?

So when you declare your ports in your compose file, that translates to a security group: this service will be listening on these two ports.

And if it's load balanced, the load balancer is able to connect to those two ports, et cetera. All of that is generated.
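
So a ports declaration like the following, with illustrative values, is what drives the security group rules, target groups, and listeners just described:

```yaml
services:
  api:
    build: ./api
    ports:
      - target: 8080 # becomes a security group rule, a target group,
      - target: 9090 # and a listener on the load balancer
```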

Route tables get created
as part of the VPC.

Again, a public route table, a private route table, and the services end up in the respective subnets.

Yeah, there are so many things.

Which actually, when I started this,

Raph: Yeah.

Lio: at some point I had this
realization that, okay, so we made

a CLI that deploys a container.

If you don't know how hard it
is, then it sounds so silly.

But yeah, if you really have a day, and you're curious how hard AWS makes it, try to start a container in AWS.

Raph: It's such a pain.

Lio: such a pain.

Raph: Even things like Elastic Beanstalk, which is such a bad way to deploy a container; they added container support back in, what was it, 2017 or something like that.

Elastic Beanstalk was supposed to be
like the Heroku of AWS, but you've

got to do all kinds of weird things,
set up an S3 bucket to store your

credentials for your private registry.

And you have to configure it in some weird way. It's such a mess. It's so bad.

Lio: You just reminded me: we create S3 buckets as well, for logging.

Yeah, it's true.

We create CloudWatch log groups.

That's where your logs end up.

Also something you have to create.

Yeah, one simple compose file with a single service will result in some 30-odd AWS resources.

It's just crazy.

Raph: Yep.

So I think listening to this should highlight, to anyone listening, why Defang is worth using.
But I think there's something else that
I want to highlight, which is like when

you're using a service like Heroku or
many others that I've tried in the past.

They're often framed per service, but I think one of the things that's really nice about Defang is what you get out of Docker Compose, right?

Out of a Compose file, you
don't just define like a single

service and deploy this for me.

You have like your whole
architecture, like your application

is not just a single service.

And so all of these things are set
up to like communicate together,

like all of that sort of stuff that
you need to set up normally, to make

these things talk to each other,
work together properly, be properly

secured, like it's all just there.

Whereas with a Heroku, or other platforms I can't think of off the top of my head, it's: okay, your application is just a single service. You have to have this mental overhead of, okay, how do I connect them?

I have to go in and set environment
variables that aren't like

automatically pulled together.

Like you have to go in and configure each
thing separately and you can then build

your own layers on top, but it's a pain.

And so having this like nice translation
layer of just here's a compose

file that defines not just a single
service, but my whole architecture

of my application, and then just push that up and you get all of it.
Lio: Yeah, exactly.

And this is what I've seen with big companies and small companies: inevitably, there are a few people in the team who start building this platform that the others use to deploy their applications, just because they want this notion of: hey, this is how we do things.

This is how we combine stuff.

This is what our application looks like.

And every company is coming up with a model of their own. I actually found Compose to be a pretty good model for that.

Right now it's usually something completely separate, some Terraform, some CDK, Pulumi, completely unrelated to what is used locally for testing.

And that is just asking
for trouble, right?

Raph: Yep.

That makes a lot of sense.

I've spoken to one person in the past who told me they were using Docker Compose in production, and I found that a little confusing. But he explained to me they had all kinds of cool steps, with their compose file, or multiple compose files, to manage things like rollovers to new application versions. It was pretty cool what they were doing, but it also seemed like a pain, which Defang takes care of, right?

Like you make sure that if
you're switching application

versions, that you're not getting
a broken version up and running.

You've got health checks and all of that to make sure that it's all good to go.

Lio: So we started as the Defang
opinionated platform where, you

deploy and then we have opinions
about how this should run.

Raph: Yeah.

Lio: So we're becoming less that and more a developer tool, but some of those opinions are still in there.

Like for example you will have
health checks because that's just

how things should deploy, right?

So it's fine that your Docker compose up doesn't care if the service is up and running, but your defang compose up will, because we have rolling updates.

If you do a defang compose up but your new version has a bug in it, the old one will keep running until the new version actually passes health checks.
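
The health check itself is the standard Compose healthcheck key; the command and timings below are just an example:

```yaml
services:
  web:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/"]
      interval: 30s
      timeout: 5s
      retries: 3
```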

These are just opinions that we think should be part of the tool.

Even if you deploy to your own cloud we
won't let you make those silly mistakes.

Raph: Yep.

Lio: So what other dimensions are
there that we haven't talked about yet?

There's config, right?

Which is a little bit like Docker config: we have defang config, which is basically just environment variables that you can manage instead of having them declared in the compose file.

Raph: How do you keep those safe?

Lio: They're all backed by AWS Parameter Store, which itself is encrypted by AWS.

And that is one to one, right?

So defang config, what's special about that is: those are environment variables that you set out of band.

They are set using the CLI, not
during the compose deployment step.

And so when you do defang config set, it's your CLI talking to your AWS account directly and writing the value immediately as a secret value in the parameter store.

It's an out-of-band way of managing environment variables for your services.
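
Usage looks something like this; the variable name is made up, and the exact argument form may differ by CLI version:

```sh
defang config set API_KEY # value is provided out of band, not in the file
```

The service then references the variable by name in its environment section, and the value is resolved from the parameter store at deploy time.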

Right now it's still very simple.

We don't have a concept of environment
or stages or something like that.

We're still in beta.

We're still testing some
of these patterns out.

So far, this pattern has proven good, so we might extend it to include other formats or environment stages.

Raph: Awesome.

Lio: So, all of this assumed you already had a compose file.

Raph: Oh, yeah.

Lio: We have a defang generate, or defang new, command.

Those are actually aliases of one another.

And that's what you'd use to get started.

So with a few questions it'll ask you,
what language are you looking to use?

And you pick a language.

There's a little picker in the CLI.

It'll grab samples from
our samples repository.

You'll browse the samples, or you have this option: generate with AI. In that case you give a little natural-language prompt and we'll use ChatGPT on the backend.

Again, another, one of those
reasons why we have Defang login,

because we want to avoid abuse.

We're going to track how many times you do that, and use ChatGPT on the backend to generate an initial project skeleton based on your prompt.

Or, yeah, you can browse the samples. Or, if you already know the sample, because you browsed the amazing homepage that we have, you can just do defang new with, say, Next.js, and then you'll get dropped into a folder with the Next.js sample ready to go.
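
As commands, that's something like the following; the sample name is illustrative:

```sh
defang generate    # interactive: pick a language, browse samples, or use AI
defang new nextjs  # alias of generate; jumps straight to a known sample
```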

Raph: I'm curious.

I know a little bit about some other AI stuff that Defang is working on, but yeah.

Do you want to talk a little
bit about like that stack and

how all of that is shaping up?

Lio: Yeah, so the current shape
of the CLI, I think has been

released for three months or so now.

And it's cool; we've been to hackathons, we had some interns in the company use it.

Feedback has been very good, but once you get into trouble, people have a hard time debugging. And trouble could be anything.

That could mean something in your compose file. Even though the compose file, the way I like it, is getting more and more logical, like less mapped to the concrete services that are running.

It's still pretty concrete in
that you have to say, Hey my HTTP

server is listening on port 80.

And then in your Dockerfile, you
have to make sure that the HTTP

server is listening on port 80.

Otherwise the number 80 in my compose file doesn't jibe with the 80 in the source code or in the Dockerfile.

Beginners might not realize the link between these two numbers, or they copy-paste one file and not the other. So then you do a compose up, and it builds fine, it comes up fine, but it never passes the health check because the port is not open.
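
A minimal version of that failure mode, with made-up values: the Compose file promises port 80, but the server in the image listens on something else, so the deploy builds and starts fine and then never passes its health check.

```yaml
services:
  web:
    build: . # suppose the Dockerfile's server listens on 3000...
    ports:
      - target: 80 # ...but the Compose file says 80, so health checks fail
```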

And so to help in this scenario, and many like it, right? Even, like, your Dockerfile has a typo in it, or whatever it is. For a lot of these scenarios, to point people in the right direction, we actually use, again, ChatGPT on the backend to help you debug these issues.

So anytime a deployment fails, there's actually an agent that gathers the logs after the failed deployment and has a dialogue on your behalf with ChatGPT to figure out where to pinpoint the problem.

And that has been very promising.

That's looking very good.

And so for example, if you have a typo in your Dockerfile: sure, if you do a defang compose up and you have a typo in your Dockerfile, you'll see it right there during the compose. That is the easy case.

But in many cases, the AI is able to figure it out and it'll tell you: hey, here's the fixed port number. And that has been so powerful.

Raph: I've used Docker Compose for years, and I still run into just those random little issues where, you know, something's just misconfigured, or there's an environment variable or something that's not properly set, which is a thing that can get messy.

So yeah, I feel like having an AI that has the context of that deployment, that can see things as they're going wrong and parse all of that for me... instead, I would probably just copy those logs and paste them into ChatGPT and be like: hey, help me find the thing.

So if it can take care of those
steps for me, that sounds awesome.

Yeah.

Lio: So I think that is the biggest next feature that we're working on.

We have it internally in
beta, not released yet.

And there is some other stuff coming up. Again, everything has to do with this slow decoupling of your compose file, the description of your app, from the actual thing that gets deployed.

So right now, for example, every service ends up being an ECS task or ECS service.

Raph: Yeah.

Lio: That might not be the right compute
resource, depending on the service.

If this is a service that gets a request once a day, because of some cron job or whatever, we want to be able to say: now this is not an ECS service, this is a Lambda or something else.

And so slowly we are decoupling the description of your application, or your project, from the thing that gets deployed.

And there, too, we see a good use of machine learning. Not so much language models, because this has little to do with language. It's more about creating a feedback loop based on actual usage of the service, and then feeding that back into the deployment model.

Raph: Cool.

Thanks for coming on the podcast, Lio.

Not as a host this time, but as a guest.

Lio: It was fun sharing.

I've been working on this for half a year now, so I'm very happy

to share what we've been doing.

Raph: Yeah, it's epic and
you all should go try it out.

Head to defang.io, which Lio is about to tell you about.

But yeah, thanks for listening.

We are building projects that should make it a lot easier for developers to launch projects using tools like Defang.

Lio: Yeah, thanks for having me. Check out defang.io, check out our samples, download the CLI, give me feedback on what you think of it.

Raph: Yeah, feedback would be awesome.

We all want feedback when we're building.

Also, more people to get a sense of what's needed is always useful.

And you can come check us out at goec.io. That is EC, the digital product studio.

We're also building a tool; we're taking a lot longer than Lio is with Defang, but we're building the Chewy Stack, which hopefully will get deployed with Defang.

So that's at gochewy.io: a deep stack framework that helps developers build better products faster.

Yeah, we'll see you in the next one.