Explore how software engineers use Effect to build reliable, production-ready software in TypeScript.
As an engineer, you want to build features, you want to build real, robust systems. Ideally, you want to spend all your time doing that. You don't want to spend your time fighting the language, building something the language was not meant for. To give you an example: concurrency in TypeScript. You want a way to build a concurrent system where you're thinking about what the system is actually doing rather than setting up all the boilerplate needed to have that system in place. So I would say Effect really gives you the right premise.
So now that we've completed this migration, it kind of feels like this is the end game. It feels more stable, and as long as we continue building our code base on TypeScript with Effect, it doesn't feel like we're going to spend many more cycles refactoring the code.
Welcome to Cause & Effect, a podcast
about the TypeScript
library and ecosystem called
Effect, helping engineers to build
production-ready software.
I'm your host, Johannes Schickling, and I've
been building with
Effect for over four years.
With this podcast, I want to help others
understand the powers and
benefits of using Effect.
In this episode, I'm talking to Michael
Fester, co-founder of
Markprompt, an AI-based customer
support platform.
In this conversation, we explore how
Markprompt uses Effect
across its entire architecture,
including API and RPC endpoints,
long-running workflows, and their
customer-facing product.
Let's get into it.
Hey, welcome Michael.
So nice to have you on the
Cause & Effect Podcast.
How are you doing?
I'm good.
Thank you.
I'm really happy to be here.
I guess it's been a while that we've been trying to organize this, but I'm very happy that we're doing it now.
Yeah.
I mean, you're super, super busy.
You're based in San
Francisco building an AI company.
So I'm really, really happy that you took
the time to share
your story about Effect.
Maybe you can share a little bit about yourself, your background, and what you're doing?
Yeah, sure.
So my background: I started in academia. Actually, I was doing research in mathematics. Then I started getting into AI, and I built my first AI company in Europe, specialized in on-device voice recognition. We got acquired by Sonos. Since then I've moved here to San Francisco for my next venture, which is building AI infrastructure for customer support. This is all post ChatGPT, post GPT-4 coming out. And we saw this incredible opportunity to start working in this space.
That is awesome.
And I highly encourage everyone who is interested in AI, and particularly anyone looking for a better customer support solution, to check out Markprompt.
Can you share about a few companies who
are already using Markprompt?
Yeah, sure.
So we've been working very closely with
some companies very
early on in the process.
Actually, that's really interesting. Vercel started using us as one of our first customers, and also ElevenLabs and Replicate.
Things are just moving so fast. So our approach here is not so much that we have a vision of how things will end up. It's very much a learning process, working very, very closely with customers, hand in hand, getting their feedback, understanding together how this thing is evolving, how we can improve the behavior of the LLMs, or how we can extend the use cases. So this has been a very different way, I would say, of building a company compared to the past, because it's so new to everyone in the space. And we're just scratching the surface of the opportunities of building these systems down the line.
That's awesome.
So before we go into the technical parts, maybe you can share, through the lens of one of your typical customers, what Markprompt does and which problems it solves for them.
Sure.
So there are multiple facets to this. Running a support operation is not one-size-fits-all. Every company has its own ways: the nature of customer issues, the way they engage with customers. Is it direct contact? Is it mostly self-serve? Technical challenges are very different from one company to the other. So we really offer an end-to-end solution.
So there are multiple components in our system. It all starts with understanding your system of record, which is things like your underlying customer knowledge, your documentation, your CRM, the APIs that you use for automating parts of your support. And we connect all that together. Then we build applications that are either customer facing or agent facing, so used internally by the company. It can be everything from your contact form powered by AI, to your chatbot, to your Slack and Discord bots. That's customer facing. Email auto-replies: you send an email and the AI agent will start trying to resolve the issue for you. It might be able to resolve it fully, and if not, it's going to ask you all the questions that the team then needs once the ticket hits the inbox internally.
And then there are all the features which are internal facing, for the support team to be able to streamline their work. Because the truth is that customer support before AI involved a lot of essentially robotic work, just answering the same questions over and over again. That's not super engaging or exciting. So if we can start taking that burden off your shoulders and help you work on more high-leverage, strategic questions, then we can really help streamline the support operation.
And what happens when you've put an AI in place, both customer facing and agent facing, is that you can start extracting a lot of intelligence, a lot of knowledge, because you're really at the nerve center of a company. You are the interface between your customers and your company. So you have so much knowledge that often just gets siloed within the support team. It's actually very rare that the support team reports to product, and I think that's really a problem. But if we can start surfacing all this insight and knowledge, we can get a really, really good understanding of product shortcomings, of where the engineering team might put some effort, because this is where your customers have been experiencing a lot of friction. So we can start generating this reporting and all sorts of other insights that help you as a company streamline your operations: things like spotting knowledge gaps, or helping you keep your documentation up to date. These are all things that we can start working on as well.
So this concept of an agent is very versatile. The agents can solve issues, they can answer emails, but they can also start being tasked with gathering insights, producing new knowledge, and keeping your system of record up to date at all times. So there's really a lot that needs to happen. You cannot approach the problem from just one angle; you really want to tie the whole thing together, because this is where you start creating these positive feedback loops, the so-called data flywheels, that enable you to deploy the solution and trust that it's operating really well. So yeah, there's a lot to it.
That sounds fascinating, and also quite complex in terms of the different surface areas. So maybe we can get slightly more technical. Before we go into all of the Effect specifics, can you, from more of an engineering perspective, provide an overview of your system and the different high-level components it's composed of?
Sure.
So apart from the machine learning things, the LLM things, this is a classical setup. There are some servers, both short-lived serverless instances for API endpoints and some long-running servers to do things like pre-processing data, post-processing, interactions. So there's a bunch of servers here; everything is in Node, everything is in TypeScript. The front end is a React application. We have a dashboard, which is a Vite application, and there are some integrations inside of Salesforce, Zendesk, Slack, and so on. So it's a classical full-stack React TypeScript application.
So you've built quite an expansive system here, and you're building all of that in TypeScript. I think TypeScript is sort of notorious for only scratching the surface and only covering the happy path. There's a lot that can go wrong in all of those systems, integrating with different systems. All of those things, I think, have led you to take a more serious look at Effect and ultimately adopt it. So can you share that journey of going from a non-Effect code base to converting, step by step, to Effect?
Sure. So obviously there's the common complexity increase that happens when you're building a product and putting it into production. As you add features and so on, you hit sort of a ceiling, then you start assessing the situation, and then you might start refactoring. In the next phase, you hit some other limitations and reassess the situation again. So that's just the common thing of working on an increasingly growing code base. But then there are LLMs, and these are non-deterministic systems, which poses some completely new kinds of challenges.
How do you monitor the behavior of the system, for instance? How do you deal with readable streams instead of just sending over a payload? This has the potential to make your code base more complex if you need to stream some tokens down to the customer but also need to store those chunks on your own for post-processing. How do you deal with that? TypeScript doesn't give you great answers here.
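One way this can be handled with Effect streams, as a minimal sketch; tokenStream, sendToClient, and storeChunks are placeholders, not Markprompt's actual code:

```ts
import { Effect, Stream } from "effect"

// Placeholders for the provider's token stream and its two consumers
declare const tokenStream: Stream.Stream<string, Error>
declare const sendToClient: (token: string) => Effect.Effect<void>
declare const storeChunks: (chunks: ReadonlyArray<string>) => Effect.Effect<void, Error>

// Stream tokens down to the customer as they arrive, and once the stream
// completes, persist the collected chunks for post-processing.
const relay = Effect.gen(function* () {
  const collected = yield* tokenStream.pipe(
    Stream.tap(sendToClient), // forward each token immediately
    Stream.runCollect         // ...while also accumulating them
  )
  yield* storeChunks(Array.from(collected))
})
```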
How do you deal with reliability? OpenAI and Anthropic and so on offer a service which is in high demand. We are in a period right now with scarce resources, so these APIs go down, and they actually go down more than you would think. So how do you build fallback mechanisms so that, say, if OpenAI is down as your main provider, your customers are not down? If OpenAI is down, that's not their problem, that's our problem. So how do you build a fallback system to Anthropic? And what happens, like a few weeks ago, when OpenAI was down for, I think, five hours, and the entire industry shifted towards Anthropic, which then got a spike they were unable to deal with? So they're experiencing an outage or just very degraded performance. There's the outage, and then there's actual performance, the speed of tokens and so on. For all of that, there's no good answer, no clear best practice in vanilla TypeScript.
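As a rough sketch of the kind of fallback being described here, using core Effect combinators; the provider calls are illustrative placeholders rather than Markprompt's real integrations:

```ts
import { Effect, Schedule } from "effect"

// Illustrative provider calls, not actual integrations
declare const callOpenAI: (prompt: string) => Effect.Effect<string, Error>
declare const callAnthropic: (prompt: string) => Effect.Effect<string, Error>

// Retry the primary provider with exponential backoff, cap how long we wait,
// and fall back to the secondary provider if the primary still fails.
const complete = (prompt: string) =>
  callOpenAI(prompt).pipe(
    Effect.retry({ schedule: Schedule.exponential("200 millis"), times: 3 }),
    Effect.timeout("30 seconds"),
    Effect.orElse(() => callAnthropic(prompt))
  )
```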
And then there's the fact that we are a startup working in a fast-paced environment where things change all the time. We want to be efficient, and we want to be able to not deal with all the complexity and all the problems, the downtimes and reliability issues; we need to be effective in shipping new features to constantly keep up with the demand, the zero to one of building a company here. So there were multiple factors that really made us think hard about what the foundation would be for us to be able to operate really fast, with the small set of resources of a startup, and be a highly reliable solution, because we are catering to larger companies, enterprise companies. So reliability is just really key.
Obviously, I can talk a lot about the whole telemetry part of things, making sure everything is properly monitored and that we can actually spot when things go wrong before our customers even see it, and then take action. So there's a bunch of things here. And Effect just came at a very good moment, where we still had a sufficiently small code base that we could actually start seriously thinking about completely converting it to Effect. So the timing was really good.
So you've mentioned, in regards to adopting Effect for the sake of improving reliability, that one specific use case is that Markprompt is using LLMs such as the products provided by OpenAI.
And if they go down, you need to fall
back to another system.
And in case that system also goes down,
fall back to yet another one.
All of that is helping to improve the
reliability of Markprompt.
Which other challenges did you face
similar to that one in
regards to reliability and
how did you improve
reliability through Effect?
I mean, there's lots of examples where
Effect has helped us
consolidate our system.
One example is one day we woke up and we
realized that we were under
attack by a malicious hacker
who was interacting with a public facing
deployment of Markprompt and
sending, I think, 20 million
requests per hour.
And this was across the globe, so it was
kind of hard to deal with.
So Ethan Niser, he spent the summer with
us and he's been working
on a lot of Effect-related
things and on his first day,
this attack started happening.
And so he built this great rate limiting
system in the middleware
that is capable of looking
at multiple facets of a payload.
And the interesting thing here is that
you cannot just rely
on IP and fingerprinting
and so on.
You actually also want to start looking
at the LLM payload itself.
What kinds of messages are in there? Are there patterns we can spot in the nature of the requests?
And so it's a multi-tiered rate limiting
system that is very easy
to automatically trigger
and also manually trigger.
So this is an example also where building
with composability has just allowed us to
build a fairly sophisticated
system here, which just works.
We are now very
confident that it just works.
And by the way, one of the reasons we're confident that it works is that it's very easy to test, because the whole dependency injection story is just such a natural thing. You just swap out a provider and you can very easily simulate a situation that is hard to reproduce by other means. So dependency injection, I would say, is a huge factor in the reliability, because we can start testing all sorts of different combinations of non-happy paths, and building that by just changing the service that you provide is very, very easy with Effect.
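A minimal, hypothetical sketch of that dependency injection story: the same program runs against a live layer in production and a deliberately failing layer in tests (the service name and shape are made up for illustration):

```ts
import { Context, Effect, Layer } from "effect"

// A service abstracting a completions provider (illustrative only)
class Completions extends Context.Tag("Completions")<
  Completions,
  { readonly complete: (prompt: string) => Effect.Effect<string, Error> }
>() {}

const program = Effect.gen(function* () {
  const completions = yield* Completions
  return yield* completions.complete("How do I rotate my API key?")
})

// Production implementation
const CompletionsLive = Layer.succeed(Completions, {
  complete: (prompt) => Effect.succeed(`answer to: ${prompt}`)
})

// Test implementation that simulates an outage, which is hard to reproduce otherwise
const CompletionsDown = Layer.succeed(Completions, {
  complete: () => Effect.fail(new Error("provider unavailable"))
})

// The non-happy path is exercised just by providing a different layer
const happy = program.pipe(Effect.provide(CompletionsLive))
const failing = program.pipe(Effect.provide(CompletionsDown))
```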
Telemetry is obviously a thing.
So I think that the whole testing story
is not just about one thing.
You want to do unit tests.
You want to do integration tests with
dependency injection.
You also really want to have a full
understanding of what's going
on when you are in production.
And so being able to have good traces,
good logging of every
component of your system
is non-trivial.
And so we rely heavily on telemetry and
Effect allows us to just
add spans everywhere in
a very, very simple way.
And it just naturally fits
into the rest of the system.
And this really has allowed us to have a
very, very good
understanding of failure scenarios
before our customers even see them.
So these are just general patterns that
are very natural to do
in Effect that we don't
even think about because they actually
don't incur any sort of
additional thinking on our
part.
It's just adding a line
and then it just works.
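The "just adding a line" part looks roughly like this; the step, span name, and attributes are illustrative:

```ts
import { Effect } from "effect"

// Illustrative pipeline step, not Markprompt's actual code
const resolveTicket = (ticketId: string) =>
  Effect.gen(function* () {
    yield* Effect.logDebug(`resolving ticket ${ticketId}`)
    // ... classify the ticket, retrieve context, call the LLM ...
    return "resolved"
  }).pipe(
    // The one added line: the whole step now shows up in the traces as a span,
    // with timing, success/failure, and these attributes attached.
    Effect.withSpan("resolveTicket", { attributes: { ticketId } })
  )
```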
Yeah, this makes a lot of sense. I can totally see how reliability is almost an emergent factor, like a second-order effect that comes when you have a composable system, when you have observability, when you can look into what's going on. When you eat your vegetables, you're healthy. Building a reliable system is going the extra mile on all of those things.
This makes a lot of sense.
Given that you've built quite an expansive system, maybe we can try to break it down a little bit more through a technical lens. I'm sure you have a client-side application; you've mentioned that it's a single-page app built with Vite. But more on the backend side: can you describe the different subcomponents of your system?
Yeah, so in very simple terms, the main subcomponents are these. There's the application server, which is essentially an RPC server that the client application is speaking with.
So Markprompt is built as
Stripe for customer support,
in the sense that every part of the
system can be
interacted with via our APIs
so that you can actually build deeper
product experiences
inside of your core product.
So we have a public facing API server.
Then we have the ingestion engine, which takes in all of your systems' data. This can be public facing, like a website that we scrape on a regular basis. It could be a GitHub repository, or historical tickets from Zendesk and Jira and Confluence, and databases from Salesforce and whatnot. So we do a bunch of things here. We import data, we chunk it up, we model the whole structure of documents through the AST, and then we build embeddings at multiple levels. And then we store them in a vector database.
And then we have all the
application related servers,
so things like Discord or
Slack or Zendesk and so on,
which are more like simple servers here.
Got it.
Yeah, that makes a lot of sense to break
it up into those different pieces.
I'd actually love to learn a
little bit more about each,
maybe starting with
the application server.
So you mentioned that you're
using Effect RPC in that regard.
What did you use before
you were using Effect?
And what are the qualities of using Effect for an RPC server here?
Yeah, sure. Before, this was just simple API handlers, so you would have two completely decoupled systems. And now pretty much all the endpoints are full RPC. It starts with a schema, actually: we define the input schema, the output schema, and the error schema for each of these endpoints. And then it trickles down all the way to the client application. Ethan actually wrote this, and I hope he's going to publish it sometime soon: essentially React Query, but wrapped with Effect, so that we have the full schema from the backend all the way to the client-side code, and we have nothing more to deal with. So the whole thing is completely streamlined in terms of the communication between the client and the server.
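A small sketch of what "it starts with a schema" can look like. The shapes are illustrative, not Markprompt's actual endpoint definitions, and in practice they would be wired into an @effect/rpc router and reused on the client (in older Effect versions, Schema lives in @effect/schema rather than effect):

```ts
import { Schema } from "effect"

// A typed error that can travel across the wire like any other payload
class TicketNotFound extends Schema.TaggedError<TicketNotFound>()("TicketNotFound", {
  ticketId: Schema.String
}) {}

// Input and output schemas for a single endpoint
const GetTicketInput = Schema.Struct({
  ticketId: Schema.String
})

const GetTicketOutput = Schema.Struct({
  ticketId: Schema.String,
  subject: Schema.String,
  status: Schema.Literal("open", "resolved")
})
```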
That makes a lot of sense.
And so, before that, you didn't have any schema layer, anything to express the kind of data you're sending across?
No, we didn't actually.
Got it.
Well, I suppose that through the process
of just modeling the schema,
I'm sure that there were a couple of code
paths that you didn't
even account for before.
So the--
Absolutely.
The public API server, where you're exposing an OpenAPI spec, I suppose, how similar or different is that to your application server?
Can you describe the public
API server setup a bit more?
Yeah, sure.
So this is more of a standard HTTP server. This is actually using Effect HTTP. Everything starts with the Effect runtime taking over directly at the request level, so again, we can model everything very nicely via schemas. And one really nice thing about having everything defined by schemas is that we can, for instance, automatically generate the OpenAPI spec. So now every single endpoint is automatically documented just via the schema.
Got it.
How much overlap does your internal application server, that RPC server, have with the public API server? Do you reuse a lot of code internally?
Do you reuse a lot of code internally?
Or are those rather
fulfilling different purposes?
They're fulfilling
completely different purposes, yes.
So yeah, I mean, these things are
completely separate.
One very important differentiator is that on the RPC server, we provide permissions.
So you want to make sure that a given
user can perform an action
that they are allowed to.
And so we have a very nice system that
provides the level of
permissions of a user.
This is completely transparent.
We actually used RLS before, row-level security, to enforce these policies. But there was a real performance impact. It was just slowing things down like crazy, especially when you're dealing with hundreds of thousands of data points. So this had a real impact, and we got rid of it. Everything now is modeled in code, the permissions, which I guess is how things have always been done without RLS. And this is, again, an example where, with Effect at the type level, you can immediately spot whether a user is allowed to perform a given action, because that service requires a certain set of permissions. So this is very, very nice to model as well.
That sounds very interesting.
So you've built yourself an internal DSL
to model those permission rules.
Can you explain a little bit more how that's implemented and how it can be imagined from a type-level perspective or from an API perspective?
Yeah.
I mean, every user comes with a set of permissions, and these permissions can be modeled via types. Services can declare what level of permission they require in order to perform an action. So we immediately know if, within an RPC endpoint, we're using a service but the provided permissions don't match; then we can flag it. And so we can guarantee that there's no code being shipped where there's an inconsistency here.
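Markprompt's actual DSL isn't shown here, but one hedged way to imagine the idea is to model each permission as its own service, so the requirement shows up in an Effect's type and must be provided before the code compiles:

```ts
import { Context, Effect } from "effect"

// A permission modeled as a service tag (illustrative only)
class CanDeleteProject extends Context.Tag("CanDeleteProject")<CanDeleteProject, true>() {}

// The type of this Effect records that it requires CanDeleteProject
const deleteProject = (projectId: string) =>
  Effect.gen(function* () {
    yield* CanDeleteProject
    // ... perform the deletion ...
    yield* Effect.log(`deleted ${projectId}`)
  })

// An endpoint handler can only run it after granting the permission, e.g. after
// checking the authenticated user's role; forgetting to provide it leaves
// CanDeleteProject in the requirements and the code won't compile.
const handler = deleteProject("prj_123").pipe(
  Effect.provideService(CanDeleteProject, true)
)
```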
I love it.
That sounds very elegant.
I would love to see some code for that.
Could you imagine that there could be a
more commonly reusable package for that?
Or is that only something that makes
sense within your code base?
Good question.
Obviously, we have been just crunching
through this code and
getting into production.
So we haven't thought too much about how
this would extend to a
more generic use case.
But I guess this could be
a very interesting topic.
Right.
Well, anyone in the audience is happily
invited to try to
replicate this by themselves.
Maybe that's even something that we
could, as part of the Effect org, also
look into providing.
Hearing you describe this use case and this DSL, it makes a lot of sense for a lot of use cases.
So moving then to the ingestion engine,
can you describe a little bit more of how
that looks like from a
lifecycle perspective?
So you've mentioned the ingestion engine
is mostly meant to bring in data from
various data sources,
whether it's
Salesforce, whether it's Notion.
Walk me through how that works and how
does that work in regards to Effect?
Do you use Effect streams for that?
Yeah.
How does that work?
So it's still pretty bare bones here. This is something that we have been discussing with the Effect team as well, and Effect Cluster is where we will probably be going. For now we have our own system, which is basically just some long-running Node servers that are able to go and fetch the data. That could be making a large request to an SQL database, or starting a scraper that visits a website; when it's done and has gone through all the pages, they are indexed in a temporary cache on our side. Then it goes into the post-processing pipeline, where, again, we obviously parallelize everything, because every page needs some post-processing. So we chunk it up, we try to get a sense of the structure of the document, the AST, then we build these embeddings for each level in the AST, and then we store it all in a database.
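A rough sketch of that kind of pipeline as an Effect stream with bounded concurrency; the step functions are placeholders for the real scraping, chunking, and embedding logic:

```ts
import { Effect, Stream } from "effect"

// Placeholder steps for the real ingestion logic
declare const fetchPages: (source: string) => Stream.Stream<string, Error>
declare const chunkAndEmbed: (page: string) => Effect.Effect<ReadonlyArray<number[]>, Error>
declare const storeEmbeddings: (vectors: ReadonlyArray<number[]>) => Effect.Effect<void, Error>

// Process every page with bounded parallelism; failures surface in the error
// channel instead of getting lost in ad hoc Promise plumbing.
const ingest = (source: string) =>
  fetchPages(source).pipe(
    Stream.mapEffect(chunkAndEmbed, { concurrency: 8 }),
    Stream.mapEffect(storeEmbeddings, { concurrency: 4 }),
    Stream.runDrain
  )
```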
And so the main challenge here is that these are long-running tasks which can time out, they can fail, and whatnot. Right now we rely on external systems to do the whole workflow orchestration, but this is something where we are going to be investing some effort as well, because Effect is the perfect solution here; we have the full set of tools for dealing with these kinds of situations.
Yeah, I'm really looking forward to Effect Cluster and Effect Workflows maturing further.
I think there's more and more use cases,
particularly when it
comes to AI applications
where you want to express
things as long running workflows.
And I think this is where Effect is
really like such a perfect foundation.
So yeah, I think we're going to see some
very interesting updates
throughout the year here.
So stay tuned for that.
Maybe changing gears slightly.
Initially, you built all of the system, I think, mostly by yourself; this is when you started adopting Effect by yourself. And then later you onboarded other engineers who didn't have prior Effect experience.
Can you share a bit more about that
story, how you onboarded
other engineers who didn't
have Effect experience to become
successful working on your code base?
I mean, this is such
an interesting question.
Because you would think that Effect is
mainly for a TypeScript audience.
Web developers who are proficient now in
TypeScript and realize they
need to go to the next level
in terms of building complex systems.
My take here is actually that having a
code base which is already
fully Effect will actually
lower the barrier to entry for people who
have less experience with TypeScript.
We've seen that with Elliot, our founding engineer, who joined a few months ago with very little TypeScript experience and was able to pick it up very quickly and be productive. We've also run some trials with some candidates in the last few weeks who, again, had little experience with TypeScript.
The thing is that we are hiring a lot of machine learning engineers. They typically already have a lot of experience with the machine learning set of tools, like Python, and very little with TypeScript. But what we have seen is that, onboarding them onto our code base, they were actually able to be very productive very early on.
And the great thing is that there's not
much room for bad
practice or the kinds of things
that you need to learn when you get
started with a new language.
So what I think is that Effect will actually allow them to be proficient and build very reliable code within a few weeks. If it had not been Effect but more like vanilla TypeScript, yes, sure, there are fewer things to learn, it's kind of easy and so on. But you would build up this understanding and intuition for the language over the course of years. And then, after years, you'd be able to build very complex and reliable systems because you had become proficient in the language.
So Effect gives you these guardrails. With the whole thing in Effect, sure, you need to understand the whole catalog of things, but you just pick that up as you encounter new use cases. And if you have a really good foundation, like the full RPC server, you basically just start writing schemas. You write a schema, you fit it into the system, and once it's there, it's done. You don't need to change it; you haven't done anything bad or anything that will break.
Right.
So my take on this is that it might actually have the opposite effect of what we would expect, which is to be able to onboard non-TypeScript engineers much, much faster.
Yeah, that definitely sounds
a little bit counterintuitive.
I first heard about this anecdote when you shared it with me privately, and the person you've just mentioned, Elliot, gave a brilliant talk at the last Effect Meetup in San Francisco where he shared his first-hand experience learning Effect.
And yeah, I think that is a
little bit counterintuitive.
But when you think about it more, it makes a lot of sense, because most experienced TypeScript engineers, over the years, first had to learn how to do anything with TypeScript at all, and then later learn to ignore the 80 paths out of 100 of how you could do something in TypeScript that will lead you down a dangerous road. Whereas Effect constrains you a little bit more and puts you on a path to success, in a way.
That has also been my experience. For me, coming more from TypeScript, I had to unlearn a lot of the ways I used to do things in TypeScript before. A good foundation for picking up Effect, for me, was actually dabbling in other languages such as Swift and Rust, where I had already primed my mind on saner engineering practices, learning the right habits and embracing the right abstractions. And so I can totally see how someone who doesn't have prior TypeScript experience, but maybe has experience in other languages with more appropriate primitives, can be up and running with Effect in no time. That is amazing to hear.
So can you walk me through the process of how you tactically went about migrating the code base as it was before Effect, one step at a time, over to Effect? Rome wasn't conquered in a day, and neither was your code base. So how did you go about it?
We had some great help. And this is where I think the Effect community is such a wonderful community of people who are nice and helpful. You kicked it off with us. We had been talking about this for years, and I immediately saw the importance of it, but never found the time to actually learn how to do it in practice. I think this was also maybe related to the lack of documentation and concrete examples back in the day. How do you actually convert an API endpoint to Effect? How do you just do it? So I think this little catalyst was really important, because it allowed us, our team, to get a sense of where we could start fitting it in. It was still very, very new; this was earlier last year. And then we had the pleasure of having Ethan with us over the summer, and he wrote a lot of very important code for the backend side of things, the schemas, the database, the React Query part, which gave us a really good foundation. We had most of it basically sketched out. And then in September we decided to do a big, big push and just get everything migrated. It was a bit slow in the beginning, and then very quickly, within a few weeks, we had the whole thing migrated over.
Good. That makes a lot of sense. And
happy to have acted as
a little catalyst here,
but you've picked up everything so
quickly. So I think this was more about giving you a bit more courage to go this way. And I think
you figured out quickly
how to work with everything,
and you migrated everything in no time. I would say, honestly, the important thing is the catalyst here. Once you have the starting blocks, it's quite easy to generalize. It's more a question of where you start. And the documentation obviously is much better now, so I think that's a great thing. But I think this little initial nudge is what made the difference here.
If you think about your system as sort of a tree of different subsystems, you can go about converting an existing system to Effect in different ways. Either you go for the leaves first and then go up one step at a time until your entire program, your entire system, is refactored with Effect. Or you can isolate specific subnodes: from the outside you call into the Effect code with Effect.runPromise, and inside of it you wrap the things that are not yet Effect, the non-Effect code, with Effect.promise. Did you find a particular adoption topology that worked well for you? Did you go all in for the leaves first? Or did you go wherever you wanted to refactor something anyway? How did you go about deciding when to refactor what with Effect?
Well, yeah, it was the leaves in the beginning. And then it was top down afterwards, because we realized that this was what we needed, and we needed it as quickly as possible. So then there was a combined push from the whole team to just get the migration done as quickly as possible. The RPC was written really top down, in the sense that one route would be migrated to Effect RPC, then another one, and so on. But for sure, a nice thing with Effect is that you can start with the leaves and you're not touching anything else in your code base. You definitely feel the urge to get the whole thing end to end, though. The runtime should be initiated at the very beginning of the whole thing, so that you have the full happy path, because you get to love the happy path here, right?
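For that leaves-first phase, the interop mentioned above looks roughly like this (the names are illustrative): an Effect run at the boundary of not-yet-migrated code, and Promise code wrapped so it can live inside Effect.

```ts
import { Effect } from "effect"

// A leaf rewritten as an Effect
const countTokens = (text: string) =>
  Effect.sync(() => text.split(/\s+/).length)

// An existing async handler that hasn't been migrated yet: it just runs the
// Effect at the boundary.
async function legacyHandler(body: string) {
  const tokens = await Effect.runPromise(countTokens(body))
  return { tokens }
}

// Going the other way, not-yet-migrated Promise code can be wrapped in an Effect
declare const legacyFetchUser: (id: string) => Promise<{ name: string }>

const getUser = (id: string) =>
  Effect.tryPromise({
    try: () => legacyFetchUser(id),
    catch: (cause) => new Error(`fetchUser failed: ${String(cause)}`)
  })
```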
Nice. So if I remember correctly, I think it's now been pretty much a year that you've been using Effect in production. Looking back, what were some of the things that were unexpectedly good with Effect, and also some things that were more challenging than you thought, or unexpected in some way?
I didn't have any expectations of the Schema package; I didn't expect it to be such a core thing. But it's really wonderful how you can control both the decoding part and the encoding part, how you can pipe everything together. It shifts how you think about a problem. Usually now, when I think of a problem, I start thinking about what the schema is, which is very different from thinking about the actual application code. You design your schema, you think about how it should transform the data, and then you work your way up to getting it all the way into your code. So I think Schema was a very, very positive, unexpected thing that came out of this. To give you an example: in the quest for streamlining and being as efficient as possible as a startup, things like documentation matter; I care a lot about documentation, and the more we can put that on autopilot, the better. So the ability to take your public API endpoints and their schemas and generate an OpenAPI spec is exactly the kind of thing you want. This was a very, very nice discovery.
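A small sketch of that schema-first habit: define the shape and its transformations once, derive the type, and decode/encode at the boundaries (the Ticket shape is made up):

```ts
import { Schema } from "effect"

// Start from the data shape, including how it transforms at the boundary
const Ticket = Schema.Struct({
  id: Schema.String,
  createdAt: Schema.Date,            // ISO string on the wire, Date in memory
  tags: Schema.Array(Schema.String)
})

type Ticket = Schema.Schema.Type<typeof Ticket>

// Decoding validates and transforms incoming data; encoding goes the other way
const decodeTicket = Schema.decodeUnknownSync(Ticket)
const encodeTicket = Schema.encodeSync(Ticket)

const ticket = decodeTicket({
  id: "t_1",
  createdAt: "2024-09-01T12:00:00.000Z",
  tags: ["billing"]
})
```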
Right. That makes a lot of sense. And I think that goes hand in hand with what I've found in my own development practice over the years: I've started to think more and more about the types of a system. That's typically what I start out with, I just describe the types. But then I've also found myself thinking: actually, this data of this particular type doesn't just live inside this boundary of the system, it needs to move across network boundaries, across thread boundaries, etc. And now you need to serialize and deserialize it, and this is where schemas come in. So I ended up doing exactly the same as you: I start thinking about what data I'm dealing with, I model it as schemas, and I derive the types from it.
And while we see this as a big revelation here, it turns out that in other programming languages, that's what you always do. In Rust, for example, when you have a struct definition, you can derive schemas from it. But in TypeScript, for most engineers, that's something novel. And I think it's a pattern that should be embraced much more, since the default is that you just ignore the existence of a schema and JSON.stringify everything, and then you're surprised by what happens to your classes or your dates. So that Schema is such an integral part of Effect is, I think, no coincidence, and I'm very happy that it gets so much attention to detail.
Yeah, I would say it's not only the fact that you have decoding and encoding and so on, it's the fact that it's so much in harmony with the rest of the Effect way of doing things. It just feels like an integral part of Effect itself.
Yeah, I fully agree. So in regards to the
more challenging aspects,
what were some of the challenges that you
found working in an Effect code base?
Sure. I would say it can give you maybe a false sense of security, because you think that everything is dealt with at the type level and you get flagged with compile-time errors and so on. And this is partly true, but it's very important to understand that it's not the whole story. It gives you a really good foundation, but you're not exempt from an orDie or a catchAll that was put at the end of your pipes, and all the other errors in between, you just don't see them. We had an example where we had a bug in our code base, and it was quite hard to figure out where it went wrong.
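A hedged illustration of the trap described here: a broad catchAll at the end of a pipe makes the error channel look clean while silently swallowing failures you actually needed to see.

```ts
import { Data, Effect } from "effect"

class ParseError extends Data.TaggedError("ParseError")<{ input: string }> {}
class DbError extends Data.TaggedError("DbError")<{ query: string }> {}

declare const parse: (raw: string) => Effect.Effect<number, ParseError>
declare const persist: (n: number) => Effect.Effect<void, DbError>

// Type-checks fine and the error channel is `never`, but the trailing catchAll
// collapses both failure modes, so a real DbError never surfaces as such.
const tooForgiving = (raw: string) =>
  parse(raw).pipe(
    Effect.flatMap(persist),
    Effect.catchAll(() => Effect.void)
  )

// Handling only the expected case keeps the rest visible in the types
const moreHonest = (raw: string) =>
  parse(raw).pipe(
    Effect.flatMap(persist),
    Effect.catchTag("ParseError", (e) => Effect.logWarning(`could not parse: ${e.input}`))
    // DbError remains in the error channel and has to be dealt with upstream
  )
```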
So Effect is a very, very powerful tool, and it brings out the best in us engineers, I think. But it also needs to be handled knowing that it's not a safety guard: you can mess up, and you can do things that are very hard to investigate afterwards, because you've built up this sense of everything just working smoothly. So that's one thing I think is important to keep in mind when you're building on top of Effect. Another thing that gets a little bit hard to track: dependency injection is just wonderful in Effect, and I would probably mention it as the second most important discovery or delight of working with Effect. But where do you put your dependency? Where is it in your code? How do you chase down the dependency, the service that you provided, that is now affecting a completely different part of your code base? There might be some tooling to build inside of the IDE itself to make it easy to spot the connection. But right now we're still in this very bare-bones situation where it's kind of hard to understand exactly the hierarchy of your dependencies and where they are actually being used in your code. So it can be a little bit hard to navigate sometimes.
Right. Yeah, I've found that myself as well in the project I'm working on. And it is actually something that we're thinking about, something we could possibly enhance through an IDE integration, etc. So stay tuned for that. You said something interesting just before: you said Effect brings out the best in us engineers. Can you elaborate on that? What do you mean by it?
Well, as an engineer, you want to build features, you want to build real, robust systems, and you ideally want to spend all your time doing that. You don't want to spend your time fighting the language, building something the language was not meant for. To give you an example: concurrency in TypeScript. You want a way to build a concurrent system where you're thinking about what the system is actually doing rather than setting up all the boilerplate needed to have that system in place. So I would say Effect really gives you the right premise.
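Concurrency in that spirit, as a minimal sketch: you state the intent and the limit rather than wiring up the machinery yourself (classify is a placeholder):

```ts
import { Effect } from "effect"

// Placeholder for some per-ticket work
declare const classify: (ticket: string) => Effect.Effect<string, Error>

// Run the classifications with at most five in flight, failing fast if any fails;
// interruption and cleanup are handled by the runtime.
const classifyAll = (tickets: ReadonlyArray<string>) =>
  Effect.all(tickets.map(classify), { concurrency: 5 })
```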
One thing I'm thinking about is that, now that we've completed this migration, the full migration, it kind of feels like this is the end game. I know this might sound a bit naive or unrealistic, but it feels more stable. At least if we continue building our code base on TypeScript with Effect, it doesn't feel like we're going to spend many more cycles refactoring the code. It feels like a sort of timeless code base now; it's there. Effect is going to evolve, our code base is going to evolve, but it feels like that's going to be adjustments and not a full rewrite.
And this is what I mean: it feels kind of like building a house with some really good walls and floor and ceiling. It's just there now. And this is what we're all striving for as engineers, I guess: to have the right tools and the right foundations, instead of having to fight all these small things that degrade over time, where you then need to revisit them and realize how silly you were writing that code last year. It feels more harmonious and stable.
Right. I love that. And
I think that is also something that has
really brought me to Effect.
And to flip it on its head,
something that I have experienced so many times, and have observed in so many other teams, as a sort of malfunction of engineering teams and of projects, is that engineers are afraid to make changes in a project. I've seen this time and time again. When you're afraid to make changes to a project, particularly if the code was written by someone else, maybe someone who is no longer there, that's terrible. If you're afraid to change something, you might not change it and you might not improve it; or if you do change it, you don't know the consequences.
And Effect gives you the confidence to change something, and you know what's going to happen. It even makes it fun to do so. And I think that's sort of a twin to the aspect of composability, almost the other side of the same coin: if you can compose things together, you can also decompose them. Those are just aspects of changing something, and changing it not in a big-bang way, but by moving things around; it almost feels like a dance. We're getting a little bit pie in the sky here, but that's really my reality as well, working with Effect, and one of the things that I like so much about it.
And another thing here, maybe looking a little bit more into the future: we're increasingly living in a world where we do a lot of our code development in an AI-assisted way. It's maybe no longer just us as engineers performing changes on our code; it's now also AI systems performing changes on our code. And this is where the same underlying dynamic is even more severe. If there is some actor here changing our code who is maybe even less competent or less knowledgeable about it, then it's maybe even scarier to make changes to the system. And if the material here, the house that we're building that you've mentioned, is one where it's inviting to have changes made, that sets an even better foundation for this AI-assisted future. So that's something I love thinking about.
Yeah, I agree. I think the guardrails here are so important. You want to make sure that whatever the LLM generates has as few unintended side effects as possible, because it could hallucinate some code. So if you can make sure that the blast radius is as small as possible, then I think you have a really good foundation. Obviously, I don't know where this is heading. Maybe we're going to be completely wrong and we're not even going to be thinking about code in the same way in the future. But it feels like a really good way of ensuring that whatever an LLM generates stays within the intended radius.
Exactly. And just to round this part out, I think Effect gives you a middle ground between natural language like English, which might be more intuitive for describing what a system should do but maybe less concise, and a type system or code, which can be more concise but maybe less intuitive. And I feel Effect is the best of both worlds: through the help of a type system, and TypeScript provides a marvelous type system here, you get the correctness benefits that you wouldn't get from English, yet you keep a more intuitive way to express and compose things. I feel like Effect is converging towards something that's the best of both worlds, and that is actually a wonderful substrate for AIs to compose systems out of as well.
I wish the latest versions of LLMs had an even better understanding of the latest developments in Effect. But I think you mentioned that you have a nice hack, which is to clone the Effect repository, open it in Cursor, and then have it look at that while you're coding.
Yeah, so that's actually a shout out to
our friend Tim
Suchanek. He taught me about
this little trick, and I've since applied
it many times also for other projects.
I think we're currently in this uncanny
valley, where LLMs always
lag behind on the latest and
greatest of actual information. The Effect docs in particular have really advanced a lot, gotten a lot better over the last couple of months. And
those LLMs that we're using
on a daily basis have not fully caught up
to that level yet, and
are also sometimes trained
on old code snippets, different syntax,
etc. And so when you ask
them, you might get an outdated
response. So what Tim is doing, and what he has taught me as well, is basically: check out the Effect code base locally on your computer, open Cursor in it, and then
Cursor uses the full Effect repo with
all the examples, tests,
etc. as the context. And then
inside of that, you use the Cursor chat
to ask your question, and
you're going to get a really
useful result. So I've been using this a
bunch myself, but we're
also working on some exciting
improvements to the Effect docs, etc.,
where we're trying to
combine the latest and greatest
documentation with some of our own AI
developments as well. So stay tuned for
that. You won't need
those little tricks for too much longer.
So Michael, thank you so much for taking so much time out of your busy day to share
about your experience and journey with
Effect. Really greatly
appreciate that you share all of that
with the entire audience
here. Before rounding this out,
is there anything else you would like to
share with the folks
listening? I'm sure that you're
very busy building a lot of things, but
you're also interested in
bringing on more engineers.
Anyone in particular who
should reach out to you?
Yeah, so we are constantly hiring. It's obviously a full TypeScript Effect code base, but we are also very much looking for engineers with an ML background, and we would warmly welcome them even if they don't have specific TypeScript experience.
So yeah, if you're interested in working
on such a code base, which I
think is really, really nice
and working on some really interesting
LLM related topics,
then absolutely reach out.
Yeah, I couldn't give a warmer recommendation for working at Markprompt. Michael, Marie, Elliot, all the folks working at Markprompt are A-plus humans. If I weren't doing all of the things I'm doing, I'd strongly consider working with you all as well. So whoever gets the chance to work with you, I think, is making a very smart decision. Now, turning my focus back to you, Michael: thank you so much again for taking the time and sharing all of your experience and all of those anecdotes with us. I greatly appreciate it. Well, thank you, Johannes. Always a pleasure to speak with you. Awesome. Thank you so much. Thank you.