Explore how software engineers use Effect to build reliable, production-ready software in TypeScript.
We've been a small team so far,
but we've been working
with Effect from scratch.
When I decided on
the stack for Spiko,
I chose Effect because it was the natural
successor to fp-ts.
And for hiring people,
most of the
employees didn't know Effect
before they came to Spiko.
I trained them,
and they also learned
from the documentation,
and so far it's been okay.
I think the
documentation has really improved.
There's still a learning curve
that's a bit steeper than with other libraries,
but I think it's okay.
Welcome to "Cause & Effect," a podcast
about the TypeScript library and
ecosystem called Effect,
helping engineers to build
production-ready software. I'm
your host Johannes Schickling,
and I've been building with Effect for
over four years. With this
podcast, I want to help others
understand the powers and benefits of
using Effect. In this episode, I'm
talking to Samuel Briole,
CTO at Spiko, which is a Paris-based
fintech startup trusted
with over 400 million euros
under management. In this conversation,
we dive deep into Spiko's
event-driven architecture,
built entirely with Effect and how they
leverage RabbitMQ as an
event broker to scale their
backend systems. Let's get into it.
Hey, Samuel, it's so great
to have you on the podcast.
How are you doing?
Hi, I'm good.
How are you?
I'm doing fantastic.
I'm quite impressed with
what you all are building
and I want to learn a lot more.
But before getting
into what you're building,
would you mind introducing yourself
and sharing what you do?
Sure.
So I'm Samuel.
I'm CTO at Spiko and we are a French
startup based in Paris.
We are a fintech,
working in the finance sector.
And we are building a new infrastructure
for issuing regulated
financial products
on public blockchains.
And we especially focus
on risk-free products.
We allow people to buy a bond from,
say, the French state
or the US government,
so they can earn a
risk-free rate on their cash.
So basically if you're based in the US,
it's a very common thing to
have like a risk-free rate
yielding on your cash, but in Europe,
it's not very common to have that.
Our mission is to
democratize this kind of product.
We launched Spiko one year ago
and the company was
created one year before that,
so we've been working on it for two years.
And we launched our products one year ago
and today we have
around 400 million euros
under management, which
is quite a nice milestone.
And also last month we raised a Series A.
We raised from Index Ventures,
which is famous for
backing Revolut
and Robinhood in
the same financial sector.
That is awesome.
Well, first of all, congratulations
on those amazing milestones,
like 400 million under management
or whatever is the best way to put it.
That is impressive.
And then also congrats on the series A
with Index Ventures.
I'm curious to learn a little bit more
about like both your personal background,
how you arrived there, but also about
how I should think
about Spiko as a company,
how many engineers are working at Spiko.
So maybe you can give a
little bit more detail here.
Sure.
I did an engineering school
where I learned physics
and I almost started a PhD
in quantum physics.
But at some point I
decided that I didn't want
to do research and I switched to
software engineering.
So during my studies,
I was building like an
app for discovering electronic music.
So I learned to code a bit by myself.
And I was like, okay, it's so cool.
I want to do this as my job.
I started my career
in a small service company.
And quickly I wanted to
build my product from scratch.
So I worked for a company called Inato,
where I was maybe the second engineer.
And then I tried to launch my own startup
and failed twice.
And then I met the founders of Spiko.
And I found the mission really exciting.
And also the technological
challenge at Spiko,
using blockchains
and innovating both
on the tech side and
on the financial side,
was very exciting.
So that's how I arrived at Spiko.
That is awesome.
And there's quite a bit of overlap there
with other people in
the Effect community.
You mentioned the folks at Inato.
I had the pleasure of
meeting some of them
throughout the last
two Effect Conferences.
And it seems like there's
quite the active Effect scene
in Paris, which I'm sure we'll talk
a little bit later about as well.
But yeah, that journey so far sounds
really, really cool.
I see a bunch of
parallels and similarities.
I'm currently still
working on a music app
for me personally
focused on electronic music.
So we certainly
seem to share that passion,
but now you're CTO at Spiko.
So what does that entail
and how many other
engineers are you working with?
The engineering team at Spiko
is currently five people
and there is one joining soon.
And our goal is to reach around, I think
10 people next year.
So we'll split into probably two teams.
We've been a small team so far,
but we've been working
with Effect from scratch.
So everyone in the team is using Effect
and everyone is doing
TypeScript and Effect in the team.
That's awesome.
So you mentioned to me before
that you joined as the first employee
besides the founders.
So you were there from the very beginning
and the other engineers
who joined afterwards,
did they have any prior Effect experience
or any other relevant
experience besides TypeScript?
I'm just curious
how I should think
about the sequencing.
Now, Spiko has been around
for two years.
You've probably written
some of the first lines of code
and then the other
engineers you brought on,
was it a prerequisite for
them to already know Effect?
Did you train them?
No, it's not a prerequisite.
I discovered functional
programming at Inato,
when I was working there,
and we used fp-ts.
And then when I decided to choose
the stack for Spiko,
I chose Effect because it was the natural
successor to fp-ts.
And for hiring people
most of the
employees didn't know Effect
before they came to Spiko.
I think none of them knew
Effect and I trained them
and they also learned
from the documentation,
and so far it's been okay.
I think the
documentation has really improved.
There's still a learning curve
that's a bit steeper than with other libraries,
but I think it's okay.
Also at Spiko, I mean,
we are all full stack
and DevOps, and
we do all kinds of things
and we also develop smart contracts.
And so far we don't have
specialized roles in tech.
So when we hire people,
we try to hire people
who can quickly jump
on new technologies and
learn this technology.
So far, everyone has
been learning Effect.
That is awesome.
Congratulations on that.
I share a similar path with you,
in that I found my way to Effect through
functional programming,
but I do think that this
won't be what brings most people
to Effect these days and
certainly in the future.
So I'm curious, like what is your
teaching methodology
when you bring on
someone new to the team?
Are you specifically
looking for someone
with a functional programming background
or are you rather like
not mentioning FP at all?
Like, how do you
explain Effect to new engineers?
It's a good question.
I don't mention
functional programming.
I just say it's inspired by functional
programming concepts,
but I don't think it's
a good way to put it,
to say it's functional
programming, because it's not.
I prefer
to describe Effect as
a framework
with type-level errors
and built-in dependency management.
So basically, in TypeScript by default,
you have the success type
of a function that is correctly typed,
but here you
also have errors and dependencies.
So I think it's a simple way to put it:
you have the success,
the error, and the dependencies.
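A minimal sketch of what that looks like in code, with a hypothetical service and error purely for illustration:

```typescript
import { Context, Data, Effect } from "effect"

// Hypothetical error and service, just to illustrate the three type parameters.
class UserNotFound extends Data.TaggedError("UserNotFound")<{ id: string }> {}

class Database extends Context.Tag("Database")<
  Database,
  { readonly findUser: (id: string) => Effect.Effect<{ name: string }, UserNotFound> }
>() {}

// Inferred type: Effect.Effect<{ name: string }, UserNotFound, Database>
//                success -------^  error --------^  dependencies --^
const getUser = (id: string) =>
  Effect.flatMap(Database, (db) => db.findUser(id))
```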
So maybe we can switch gears slightly
and talk a little bit
more about the architecture
of the product that you're building.
So maybe we can start by you describing
which sort of like
consumer facing parts there are.
Is there maybe a mobile app?
Is there a web app?
Are there some other primitives?
And then switching gears
and describing more of like
the backend architecture.
So, so far we have three apps,
three front-end apps written in
TypeScript and React.
We are building a platform
that connects financial
product issuers, distributors,
and also we have like
the admin interface.
So we have three types
of users in our system,
investors, distributors, and admins.
So basically we built three apps
and they are all connected to three APIs.
And all these APIs are
connected to backend services.
And so the way we
designed our backend is we built,
I would say not microservices,
but I would say macroservices.
It's not a monolith, but
we have like a few services.
One service to manage, I don't know,
fund operations, one to manage orders,
one to manage the blockchain stuff.
So that's how we designed our backend.
Got it.
And so given those different APIs,
I assume you're using
the various building blocks
that Effect provides for that.
For example, the Effect HTTP
packages from the Platform.
Absolutely.
So at the beginning, when we started,
there was no Platform package in Effect.
I think it was two years ago,
and Platform
almost did not exist yet.
So the first thing I did was wrapping
Express into Effect,
which turned out to be a very bad idea
because it was a mess
to manage dependencies, and
I had like some
non-Effect code in my code base,
which wasn't really nice.
So basically my entry
point wasn't an Effect.
It was an Express server,
and in each endpoint handler,
I had to Effect.runPromise my code.
But later the Effect
Platform package arrived
and we switched to it.
And at the time there
was no HTTP API module.
So we used effect-http
which was a lib
written by Milan Suk.
And we used it for a while actually.
And I even met Milan at
the first Effect Conference
and it was awesome to meet
everyone at this conference.
But yeah, we used it for a while
and then the HTTP API module arrived
and we switched to it.
We migrated everything to it then.
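For a sense of the style they migrated to, here is a minimal sketch of an HttpApi definition from @effect/platform; the order schema and endpoint names are illustrative:

```typescript
import { HttpApi, HttpApiEndpoint, HttpApiGroup } from "@effect/platform"
import { Schema } from "effect"

// Illustrative payload schema.
const Order = Schema.Struct({ id: Schema.String, status: Schema.String })

// Endpoints are declared as data, not as handler registrations.
class Orders extends HttpApiGroup.make("orders").add(
  HttpApiEndpoint.get("listOrders", "/orders").addSuccess(Schema.Array(Order))
) {}

class Api extends HttpApi.make("api").add(Orders) {}
```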
Yeah, I mean, I
remember those transitions well
similar to you where,
like I also used fp-ts in the past
where you got those much more
fine-grained primitives,
but not as high level of a thing
such as like an HTTP module.
And I still remember
getting something really nice,
like getting those
primitives from Effect,
but having to give up on
higher-level primitives.
If you imagine Express or Koa
or all of those different things,
you arguably had better primitives,
like middlewares, et cetera.
You got one thing,
but you had to give up another thing.
And now you're kind of wondering,
what should I do about this?
And you did a similar thing as I did,
which is like trying to rebuild
those higher level primitives.
But it was always
just a matter of time
until the Effect ecosystem
and the Effect core
would give us some of those primitives.
And now finally the time has come over
the last year or so.
If you're going through that transition,
it's just so nice how
much code you can delete
and how everything
just fits like a glove.
So I wanna learn more about the backend,
but maybe briefly to wrap
up on the front end first,
how much or how little are
you using Effect on the front end?
Are you stopping at
basically consuming the HTTP APIs,
or are you using Effect
in a more involved way
on the front end?
No, we use it only for data fetching.
But it's very convenient,
because with the HTTP API module,
you can write APIs in
a declarative way.
So you have auto-generated
clients, which are type safe.
So basically we have a lib
that exports the HTTP API
specification, and
we just import the client from the app.
But maybe in the future we could use it
even in a more involved
way, but it's not the case yet.
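Deriving such a client looks roughly like this, reusing the `Api` definition from the earlier sketch; the base URL is a placeholder:

```typescript
import { FetchHttpClient, HttpApiClient } from "@effect/platform"
import { Effect } from "effect"

const program = Effect.gen(function* () {
  // Build a fully typed client from the shared API specification.
  const client = yield* HttpApiClient.make(Api, {
    baseUrl: "https://api.example.com" // placeholder
  })
  // The response is Schema-decoded to the declared success type.
  const orders = yield* client.orders.listOrders()
  console.log(orders)
}).pipe(Effect.provide(FetchHttpClient.layer))
```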
Got it, that makes sense.
So yeah, let's talk a little
bit more about the backend.
So you mentioned already
those like three pillars
of your application, like
if I remember correctly,
like the investor facing ones,
then the distributor
facing ones, and then the admin,
which I suppose is like your company.
So at the
minimum, for each of those,
there seems to be an HTTP API,
but then you also
need to interface somehow
with like external services.
So is that all done from those HTTP APIs
or are there some
other parts in the system,
maybe like some message bus or yeah,
maybe you can explain a little bit more
of like how the backend
works and how Effect helps there.
So as I told you, we have multiple
backend services now,
and they sometimes need to communicate
with one another.
So the first thing we did
was to build an internal API.
So for each service, we
built an internal HTTP API
because it was the same
technology as external APIs.
So it was easy for one
service to call another service.
We had the same,
auto-generated client,
everything was type safe.
So it felt very nice.
And at some point we had to
run some, I would
say, background jobs.
For instance, an investor
wants to create an order
and we need to send him an email.
But before sending the
email, we need to generate a PDF,
and a lot of stuff has to happen,
and it will take maybe several seconds,
maybe 10 seconds, I don't know.
But we want to do it as a background job
so we can answer with
the HTTP response quickly.
So the first thing we did
was using our internal API
but we used something to
avoid blocking the HTTP response.
So we used Effect.forkDaemon
to fork the
execution of an internal API call
into, I would say, the global scope.
So it would not be blocking for answering
or for responding with the HTTP status
but the server would
still make the request
to another internal service.
And we did that for a while
especially for the emails,
and for every kind of background job.
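A sketch of that pattern: respond quickly, let the slow work run detached from the request's scope. `generatePdf` and `sendEmail` are hypothetical placeholders:

```typescript
import { Effect } from "effect"

// Hypothetical placeholders for the real work.
const generatePdf = (orderId: string): Effect.Effect<Uint8Array> =>
  Effect.succeed(new Uint8Array())
const sendEmail = (_pdf: Uint8Array): Effect.Effect<void> => Effect.void

const handleCreateOrder = (orderId: string) =>
  Effect.gen(function* () {
    // Fork into the global scope: the daemon fiber outlives the
    // request fiber, so the HTTP response isn't blocked.
    yield* Effect.forkDaemon(
      generatePdf(orderId).pipe(Effect.flatMap(sendEmail))
    )
    return { status: "accepted" as const }
  })
```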
And at some point
we had a lot of internal API calls
and we still have some, but at some point
I wanted to bring a new,
better kind of
architecture for these kinds of things.
And what people typically
do is bring in a message broker
that is connected to all the microservices,
so a service can just
drop events or commands
into the message broker,
and services can also listen
to events or commands and handle them.
Got it.
So let me just see
whether I understood correctly.
So you basically have
those HTTP endpoints.
You're running them, are
you running them on something
like AWS Lambda or on stateful servers?
Yes, everything runs in Kubernetes.
Okay, so you're
running that in Kubernetes
so you have the benefit
of long running servers.
So it's one of the
questions I would have had otherwise
is when you say forkDaemon, now you
basically need to think
about like the request lifecycle
but otherwise also the
lifecycle of a server.
So if you have the lifecycle of a server
that makes that certainly easier.
But the more complexity
that you had sort of like
as your app develops and evolves,
typically it gets more
complex rather than simpler over time.
And so you always had that trade off of
like you want to
if the client does something, it should
get a request response
as quickly as possible.
And now you have that
tension where on the one hand
you want to keep the response snappy and
as quick as possible.
And on the other side, you
had more and more complexity,
more and more stuff and
features that should be handled.
So far, you've always kind of let that
run in the background:
the request-response lifecycle
would already create an
Effect fiber through a fork.
The details here don't matter too much.
But basically now the
question is: that little job
now somehow runs through the
JavaScript event loop and maybe
gets out of control.
What if you need
to do error handling there?
How many of those
are running at the same time?
So this is the situation you were facing.
And now you thought about, okay, what is
the architecturally
proper way to deal with that?
And that is where you arrived at.
Let's look into event
buses. Is that correct?
Yeah, absolutely.
So we wanted to have a bit more control
over these kinds of background jobs.
Also for
the structure of the code base,
we wanted to avoid like
dependencies between services,
like a service just
wants to drop an event
and doesn't have to know
about the API of other services.
So it made more sense
to just drop an event
and not call the service directly, which is
also a best practice,
I think, in terms of software design.
When your system gets complex,
you want to have like better architecture
and dependency enforcement.
That makes sense.
And so I just want to get a better
understanding of like the nature
of the things that should
kind of happen in the background.
Are those purely things
that need to be kicked off
but are no longer relevant for the user
experience in the front end?
So like if you click a
button in the front end,
you're like getting the request response
cycle back and a thing happens
and then you don't really care too much
what is happening in the background.
Or is there stuff that
happens in the background
that the front end might then be
polling for in some way?
Most of the time we don't need responses.
It's just background work, like, I don't
know, sending an email.
So we're also connected with blockchains
and blockchains are complex systems.
But basically, the registries of our funds
are written on the blockchains,
and the blockchains evolve.
A lot of events are
happening on the blockchains,
and we basically poll the
blockchain continuously.
We index all these events
and sometimes when a
specific event happens,
we want to trigger some code.
So for these kinds of scenarios, it's
really nice to have an
event-driven architecture.
Got it. Yeah, so you're
mentioning an event bus.
So let's dive in. What did you do there?
Suppose there are many different options.
You're not the first one to embrace an
event-driven architecture.
So how did you go about that?
Did you build something like that before?
Did you have any sort
of relevant experience?
Walk me through
you getting up to speed on that topic
and then evaluating different options, to
what you've ultimately implemented.
So we had a
discussion with the whole team,
and there are many options,
but for us, we preferred to
use an open source project.
So we didn't want to be tied to a
specific cloud provider,
like, I don't know, SQS from
AWS or Google Pub/Sub.
And basically, at the end,
we had three main choices.
So the first was Kafka, because Kafka is
widely used in big enterprises.
And it's probably the most used
in the financial sector.
So it would have been a good fit for us.
The second choice was
RabbitMQ, because it's battle-tested,
it's well-used, plenty of plugins, etc.
And the third was like
the new to the game is NATS.
It's more recent, but it's very
lightweight and
It seemed very nice.
So we decided to eliminate
Kafka because it was
heavy to run locally,
not easy to run on Kubernetes.
I think there are great
options for running it on Kubernetes,
but it was not the lightweight choice.
Overall, we chose RabbitMQ
because it was more battle-tested,
because it had a UI
interface to play with.
So we were more confident with something
that is a bit battle-tested.
And we didn't need to have
something that is super fast
or that can handle
millions of messages per second.
Also, it's not a very critical choice for
us, because
all the data that goes into
RabbitMQ is not persistent.
It's temporary because once the message
is consumed, it disappears.
So at some point, if we want to switch to
another message broker,
we could. It wouldn't be so hard for us to switch.
So we were like, "Okay, RabbitMQ seems
nice. Let's go and let's try it."
But the adventure was a bit
rough after we made that choice
because we discovered a lot of things
that were not so nice
about RabbitMQ and AMQP.
And maybe if I had known them before,
I would not have chosen something like that.
Let's definitely go into that.
For those folks who have never used
Rabbit, you also mentioned AMQP.
Would you mind giving a
quick explainer of what those are
and how they relate to each other?
Sure.
I'm not an expert, but
here is my understanding.
AMQP is a protocol, like HTTP.
It's a protocol designed for
sending and consuming messages.
And there are several brokers that
implement this protocol.
RabbitMQ is one of
them, but
You have other types of message brokers
that implement AMQP.
For instance, I think ActiveMQ.
Everything that has an MQ in the name is
probably an AMQP client and server.
Actually, it's quite nice
because AMQP is widely used.
RabbitMQ is one of the most popular
implementations of an AMQP server.
I think I've used RabbitMQ in the past,
but it's been a little while.
And I'm dealing with other kinds of
event-driven systems these days
where I'm exploring event sourcing more
in a front-end context.
Some similarities, but also very
different when it comes to the use case
that you're describing.
So very curious to
learn more about yours.
You mentioned that you were running into
some issues with RabbitMQ.
So maybe you can describe a little bit
more chronologically
how you went about it,
what you were aiming for, which sort of
assumptions or hopes
that maybe turned out different than you
thought, and what did you arrive at?
It's an interesting question.
So when we started to implement
the connection between RabbitMQ
and our back-end services, what we did
was we basically wrapped
the AMQP library into Effect.
So we used the official
library called amqplib,
the official implementation for Node.js.
Basically, when you connect to RabbitMQ
from Node.js, you create two objects.
One is the connection, so
you initiate the connection,
and then you initiate a channel which
represents a connection channel
between the client and the server.
So you initiate these two objects, and
what we did is we put
them into, I think, a layer,
an Effect layer that was scoped, so when
the dependency's lifetime ended,
we could close the
connection and close the channel.
So that's pretty basic.
Then we could just use our channel to
send messages and subscribe to them.
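A sketch of that scoped-layer approach, assuming amqplib's promise API; the URL is a placeholder and the tag name is made up:

```typescript
import { Context, Effect, Layer } from "effect"
import amqp, { type Channel } from "amqplib"

// Service tag for the channel our code depends on.
class AMQPChannel extends Context.Tag("AMQPChannel")<AMQPChannel, Channel>() {}

const AMQPChannelLive = Layer.scoped(
  AMQPChannel,
  Effect.gen(function* () {
    // Acquire the connection; close it when the layer's scope ends.
    const connection = yield* Effect.acquireRelease(
      Effect.promise(() => amqp.connect("amqp://localhost")),
      (conn) => Effect.promise(() => conn.close())
    )
    // Acquire a channel on top of it; also closed when the scope ends.
    return yield* Effect.acquireRelease(
      Effect.promise(() => connection.createChannel()),
      (ch) => Effect.promise(() => ch.close())
    )
  })
)
```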
What was a bit challenging is that at some
point, we discovered that our services
were not consuming messages anymore
or they were not sending
messages anymore to the server.
Why?
Because the connection and the channel
between client and server
were closed at some point,
for whatever reason.
Maybe the connection between the service
and the server had a problem.
So we are running
RabbitMQ in the Kubernetes cluster.
It's quite safe because we run
RabbitMQ as a cluster in
our Kubernetes cluster,
so we always have persistence of
messages and replication of events.
But sometimes one node of
the RabbitMQ cluster goes down,
and then the connection between the
service and the RabbitMQ node goes down.
The problem with amqplib is that
when the connection object fails,
there is no automatic reconnection.
So basically, the connection just drops.
It was a challenge for us to implement an
automatic reconnection of the service.
So we iterated a bit on this problem.
The first thing we did is that,
in our Kubernetes
health check endpoint,
we would add a check to see if the
connection was alive.
If the connection was not alive,
Kubernetes could just kill the pod
and we would start a new one
with a fresh connection.
But still, it was not a good solution,
because we were losing some events.
We had a lot of issues with that.
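A minimal sketch of that interim liveness check; `connectionIsAlive` stands in for a real probe against the wrapped connection:

```typescript
import { Effect } from "effect"

// Hypothetical probe; a real one would ping the AMQP connection.
const connectionIsAlive: Effect.Effect<boolean> = Effect.succeed(true)

// Fail the liveness endpoint when the connection is dead,
// so Kubernetes restarts the pod with a fresh connection.
const liveness = Effect.map(connectionIsAlive, (alive) =>
  alive ? { status: 200 as const } : { status: 503 as const }
)
```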
What we did is we tried to implement a
way to reconnect automatically.
That's how I ended up writing a library
called effect-messaging
to wrap the AMQP
connection and the AMQP channel
and add automatic reconnection to them.
In Effect, when you create a connection
object and you provide it in a layer,
then you can't mutate the object
that is in the layer.
But in amqplib
there is no function,
no method on the
connection, to refresh it.
So you have to replace
the object completely.
This was an interesting problem.
When I started implementing
the effect-messaging library,
I dug a little bit into Effect
and into what data structure I could
choose to represent this.
I ended up using something
I think I would never use
in a standard application.
I ended up using the Ref module to
represent a mutable state.
So what I did is, instead of
providing a connection object
from amqplib,
I provided a Ref of the connection.
Then I could listen to
close events from the connection,
and then refresh the
connection and mutate the Ref.
People using the layer
would get the new connection.
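A rough sketch of that Ref idea, assuming amqplib's promise API; the real effect-messaging library handles much more (retries, backoff, channel recovery), so this only illustrates the data-structure choice:

```typescript
import { Effect, Ref } from "effect"
import amqp, { type Connection } from "amqplib"

const makeReconnectingConnection = Effect.gen(function* () {
  const connect = Effect.promise(() => amqp.connect("amqp://localhost"))
  const initial: Connection = yield* connect
  // The Ref is the mutable cell consumers read the current connection from.
  const ref = yield* Ref.make(initial)

  const reconnect: Effect.Effect<void> = Effect.gen(function* () {
    const fresh = yield* connect
    yield* Ref.set(ref, fresh)
    // Re-arm the handler so the next drop also triggers a reconnect.
    fresh.on("close", () => Effect.runFork(reconnect))
  })

  initial.on("close", () => Effect.runFork(reconnect))
  return ref
})
```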
I love that you walked us
through from the high level
to the lowest level,
showing how you needed to understand
and leverage some lesser-known pieces of
Effect to implement this.
Even though it's very specific,
what I think is universal about that
experience that you went through
is that Effect allows
you to model the realities
of your application at this
point of your infrastructure.
This is what, for me, most software
development is about.
You're trying to
figure out what is the goal,
what are the building
blocks, what are the materials,
what are the pieces that we can use here.
Then you need to develop what is
sometimes referred to
as mechanical empathy for the building
blocks that you have.
And here mechanical empathy is understanding,
"Okay, there's a connection.
In an ideal world, this
connection would suffice,
would always be super stable,
and there's just one
computer that never goes down.
Super duper easy."
But the tricky part here is we're dealing
with distributed systems
and with distributed systems,
a ton of problems arise,
and now you need to
model for those cases.
The specifics are one thing here,
but I think what is
applicable to almost any situation
is you need to model your reality of your
application, of your system.
This is, I think, now
that you've figured out
what are the primitives of
Effect that you need to use,
you could use them and
model your domain perfectly.
In a way where it's
kind of a one-to-one fit
to reality and implementation.
Maybe the first time you
look at this you might think,
"Why are all of those things needed?"
But now that you understand it,
you understand that
each piece here has a reason.
This is what I also
appreciate about so many pieces
in the Effect
platform packages, et cetera,
where maybe you look at it
the first time you think,
"Why are those three things needed?"
Isn't one sufficient?
But then you realize,
"Oh, actually, those are here
for those various scenarios."
And they're split up into three in a way
that is not a leaky abstraction.
So many things in the
JavaScript ecosystem
are designed in a leaky, abstracted way.
And this is what I
appreciate about Effect.
It kind of absorbs the reality of the
materials you're working with.
And I think this is
exactly what you've done
in the effect-messaging library.
And also just on a personal note,
this definitely
resonates in regards to the work
I've been doing over
the last couple of years
where I'm diving very
deep into distributed systems
where reconnection, et
cetera, is the bread and butter.
So kudos to you for, next to
building a FinTech product,
also dealing with distributed systems
and building a library in that regard.
So congrats, that is awesome.
Actually, it was super interesting.
And I think it's also part of our job to
have a resilient system.
And it's very important for our product.
But it was a really fun experience
to also do this kind of lower-level,
I mean, lower-level stuff that we
rarely do in a FinTech startup,
because we often use existing libraries.
It's really fun, actually.
And now for someone else, there is an
existing library for that
where someone can look at this
and already absorb your
mental model of the world.
You had to do the modeling
carefully yourself before
where you needed to figure out,
"Is this kind of
abstraction modeling correct?
Should there be more pieces?
Could they be condensed more?"
And now someone can look and say,
"Okay, event-driven sounds great,
RabbitMQ sounds great. Let me try that."
And if you've designed it well, they are
falling in a pit of success.
So that is amazing.
And now that you've created this
environment for your fellow engineers,
what is it like now to
work with that system?
You mean the developer
experience with that library?
Exactly. You've created the
world that you wanted to see.
You wanted to have
RabbitMQ as a message broker.
You wanted from your API servers,
wanted to publish events
onto the message broker.
So how is that working?
And then we should also
talk about the consumer side
of how those events are being processed
and any sort of interesting
things emerging from that.
Sure.
I think the effect-messaging library,
at the moment,
mainly solves the
problem of reconnection
and having a strong connection layer.
And I also created
higher-level interfaces,
like a publisher interface
and a subscriber interface,
that allow people to
subscribe to events and publish them.
So at the moment, I kept
this interface
very generic.
So we could add more
message brokers to the library
in the future if we wanted to.
But at Spiko, we use RabbitMQ,
so we use the AMQP
publisher and AMQP subscribers.
I mean, we don't have any
more problems with connections
and failing connections.
But we have abstractions on top of this
publisher and subscriber
where we have strong
typing of events and payloads,
like event types and payloads.
This is not in the
effect-messaging library.
It's something we built
on top in our codebase.
Basically,
when we want to create a new event,
we declare a new interface
with an event type and payload.
And that's it.
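A hypothetical sketch of what such a typed event declaration could look like using Effect Schema; this is not effect-messaging's API, and the event name and fields are made up:

```typescript
import { Schema } from "effect"

// An event pairs a type tag with a payload schema.
const OrderCreated = Schema.TaggedStruct("OrderCreated", {
  orderId: Schema.String,
  investorId: Schema.String,
  createdAt: Schema.DateFromString
})

// The derived TypeScript type, usable on both publisher and subscriber sides.
type OrderCreated = Schema.Schema.Type<typeof OrderCreated>
```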
But maybe in the future,
if we find a good abstraction for it,
maybe we could add these kinds of features
to effect-messaging.
The vision I
have for this library
would be to do something that
is similar to the HTTP API module,
but to describe events.
So I think it's a bit of an ideal world,
and it's not done yet,
but we could imagine
something where you declare
your messaging API
with events and payloads
in a declarative way,
and it would create both a subscriber
and a publisher for these events.
Also, another part
that is very interesting
is that with the HTTP API module,
you can generate OpenAPI specs.
And I've done a bit of research,
and there is an equivalent
in the async messaging world.
There is an equivalent of
OpenAPI called AsyncAPI.
I don't think it's very popular yet,
but it's a way to
expose async messaging APIs.
So it could be interesting in the future.
Maybe we want to drop events from a
Node.js application,
and there is a new team
using another language
that wants to codegen
a subscriber for these events.
It could use a generic interface.
It could use a
specification to generate a client
for these events.
Maybe in the future we'll have that.
No, this makes perfect sense,
and I do hope that Effect Platform
and more primitives inside
of the core Effect ecosystem
are moving in that direction,
similar to what we've seen already
with the Effect Platform package,
with the HTTP package, etc.
So I think there are many scenarios
where events are a super
valuable primitive
here in combination with RabbitMQ,
but I think they're
more universal than that.
And I think there's
many, many related domains
and problems that
should be explored as well.
I'm not sure how relevant schema
evolution, for example,
would be here.
There are a lot of
problems and a lot of challenges
that are explored at scale
in the Kafka ecosystem, etc.,
and bringing more of that into
core Effect by default
would be super valuable, I think.
So kudos to you for
already pioneering in that space.
I'm actually curious for the
events that you can publish
and consume through effect-messaging.
Are you already embracing
Effect Schema for that?
I think the way we
describe events is with schema,
because we always use
schema for decoding everything
that comes from the outside world,
because we can't trust the outside world,
so we always decode everything,
everything that comes from the database.
Our persistence is based on SQL, so we
also use Effect SQL,
and we decode with schemas,
and for messaging, we describe all the
events with schemas,
and make sure that we encode and
decode them with schemas.
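A sketch of that "decode everything at the boundary" habit, reusing the hypothetical `OrderCreated` event from the earlier sketch:

```typescript
import { Effect, Schema } from "effect"

// A message off the broker is `unknown` until a schema proves otherwise.
const decodeOrderCreated = Schema.decodeUnknown(OrderCreated)

const handleMessage = (raw: unknown) =>
  Effect.gen(function* () {
    // Fails with a ParseError if the payload doesn't match the schema.
    const event = yield* decodeOrderCreated(raw)
    yield* Effect.log(`order ${event.orderId} created`)
  })
```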
Got it.
Is the problem of schema evolution
already becoming an issue?
So you mentioned that you're not really
persisting the events
over a longer period of time,
but they're rather
like going onto the broker
until they're taken off, and that's it.
So the only schema evolution scenario
that I could think of
is that you have version
one of the service rolled out,
and then you're rolling out version two.
Somehow there's a period
of time where you still have
a backlog of many messages, which are
still of version one,
and now you've rolled
out version two of the app,
and now that expects
an event of version two,
but gets one of version one.
Have you faced that problem already,
and have you modeled around it?
Yeah, we already faced that problem,
and we're still a young company,
and we don't have millions of users,
so sometimes we take the risk
of doing a breaking change,
because
we don't have a big queue of messages that
are being consumed,
and if we deploy a breaking change,
we have a very low
probability that we get an error
by breaking the schema.
If we really want to be careful,
what we do is we try to
avoid breaking changes,
and then we have, I think, two options.
Either we modify the existing
schema in a non-breaking way,
like for instance we
add a new optional field,
or we could just create a V2 or another
version of the event.
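A sketch of both options with illustrative event names and fields:

```typescript
import { Schema } from "effect"

const OrderCreatedV1 = Schema.TaggedStruct("OrderCreated", {
  orderId: Schema.String
})

// Option 1: evolve in place with an optional field -
// payloads produced against the old schema still validate.
const OrderCreatedEvolved = Schema.TaggedStruct("OrderCreated", {
  orderId: Schema.String,
  channel: Schema.optional(Schema.String)
})

// Option 2: a distinct V2 tag; consumers accept the union during rollout.
const OrderCreatedV2 = Schema.TaggedStruct("OrderCreatedV2", {
  orderId: Schema.String,
  channel: Schema.String
})
const OrderCreatedEvent = Schema.Union(OrderCreatedV1, OrderCreatedV2)
```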
That makes perfect sense.
I'm just personally very interested in
that problem domain,
and so for example to say we just don't
do breaking changes,
I think that's a super valid way to go,
but I would like to get more
safety rails and guard rails
to make sure that that is actually done.
Maybe you're deep into some refactoring,
and you just have a type error over here,
and you jump to the schema
definition, you add something,
and you don't even think about it,
and TypeScript is
happy, the tests are passing,
and you do that
deployment on a Friday afternoon,
and poof, the weekend is ruined.
How do we get a little
bit more systematized help
that we say, "Hey, this is a schema,
and the schema doesn't
just exist in a vacuum,
but it exists in a chronological context,
and needs to be protected
with some special rules."
This is something that
I'm very curious about,
and I think it's
generally applicable to many,
many parts where you need a schema.
Obviously, to databases as well,
databases have their own
way of dealing with that,
but let's say you will have a little bit
of client-side storage
that you have a schema for
and you rarely
test your app
going from version 1 to version 2.
You test it from scratch in version 1
and from scratch in version 2,
but you rarely test those transitions.
This is just something
that I hope we get to explore
with effect schema as well.
That sounds super cool.
How ready is effect-messaging right now
if someone outside of Spiko says,
"Heck, I want to use that."
How ready for production is it?
Can people already use it?
I just want to rebound
on what you were saying,
because I think it's
also an interesting problem,
and then I'll answer your question about
the readiness of effect-messaging.
But avoiding breaking changes
is something really important.
We discovered something called Optic
that can compare two
versions of an OpenAPI file
and that can prevent
breaking changes on OpenAPI specs.
I think I was really
happy to discover this
because in Effect you can
generate OpenAPI specs.
It's built into the library.
So we can already profit from
the whole OpenAPI ecosystem,
including this tool.
It's a way to leverage the
existing OpenAPI tools
to avoid breaking changes.
I hope we can do this for
messaging in the future as well.
To answer your questions
about effect-messaging,
I think we are probably the only company
using effect-messaging in production,
at least I don't know
any other companies who do.
But it's working fine for us.
It's production ready in the sense that
we are using it in production.
But of course, I think if
more people want to use it,
maybe we'll discover some
issues. And it's open source,
so anyone can post
issues and contribute.
That is awesome.
It seems like you have a public roadmap
there laid out as well,
which I assume are ideas you
had along the way, thoughts
that it would be cool to have at some point,
probably not promises that you're going
to ship those in the near future.
But that sounds very enticing.
It seems like you even considered
supporting other message brokers,
the one that we talked
about initially, Kafka,
NATS, Google PubSub, et cetera.
That is super interesting.
I think the vision is interesting.
Maybe I won't have time to implement it.
Actually, I don't think I
will have time to implement this.
But it's open for everyone.
And if someone is interested
in writing a Pub/Sub subscriber or publisher,
I think I laid out a
very generic interface,
so it might be a good place to start.
But I see you already have one
contribution pending
from trusty GitHub Copilot.
I launched the Copilot.
So I'm experimenting a
bit this week with agents.
Yeah, so maybe let's use that as a segue
to talk about that topic specifically.
I mean, AI agents, coding agents, et
cetera, all the rage these days.
I'm spending a lot of time myself
exploring them, seeing
where they're good at,
where they're not yet good at, how can I
use them for my own benefits,
like for productivity reasons or just
quality of life improvements.
So I'm curious where you
found them most helpful,
and particularly also
in the context of Effect,
maybe areas where you found them to be
not great with Effect yet.
Good question.
I can speak a bit about my
time with the agents.
So it's quite recent.
I was not a fan of Cursor and not a fan of
AI until a few weeks ago,
a few months ago, I think,
because I didn't find it very productive.
Everything that was written
by the AI, I had to change it.
It was more a loss of time, I think, than
a productivity benefit.
But after investing a bit of
time into these agents,
and, I think, good instructions,
I've tested a few agents, a few things
like VS Code with Copilot, Cursor,
and also CLIs now; there was a
really interesting
project called opencode.
I don't know if you've heard of it, but I
find it very interesting.
Actually, I'm starting to see
some benefits using agents.
I mean, I don't delegate
everything to an agent,
but now it really helps
me to write code faster.
I read everything and I make sure that
it's written as I want it to be.
Some things come out
way better than before,
because now, with agents,
before doing an edit to a file,
it really does a lot of research:
it goes on the internet,
it fetches documentation.
So I set up the
Effect MCP for the Effect docs.
I'm not sure it's very
often used by my agent,
but if the agent wants
to know more about Effect,
it has access to the
documentation, theoretically.
And now the fact that the agent is able
to grab a lot of context
from the code base and from the internet
makes it really,
really much better than before.
It's quite impressive now, I think.
Yeah, and I mean, it's worth keeping in
mind that this is
developing super quickly
and we're now recording
this at late August 2025.
If someone is watching this
like a couple of months later,
God knows what sort of AI
models are available by then.
And I'm pretty certain this is the worst
it's ever been as it is right now.
So things are developing and
improving very, very quickly.
And this is a very important topic to me
because I think it fundamentally changes
what programming means.
I hope that it will leverage us and
elevate us as system designers
to spend more time trying to
understand the reality
and model the reality,
like expressing which parts of
reality we need to model
and which ones we can leave out.
Is it important to model that the
server that you're running this on
is, like, whatever, in a silver
casing or in a black casing?
No, that part of the
domain is not important.
But does it sit in that
one cluster or another?
You get the point.
And I think this is
more like modeling things,
expressing intent, that is
where I see things going.
And the act of implementation is probably
being taken away from us over time.
That's at least my
understanding of the future right now.
But I'm seeing it as glass half full
because what I see staying the same as today
is that we're still looking for better
materials to build applications with.
And I'm convinced that AI is going to
have a very good time
building with Effect in the future
just because it is the best material to
build applications with.
But I think we still have a long way to
go to really help AIs
to be very good at working with Effect.
So you mentioned the Effect
MCP, that already helps a lot
and it will get a lot better
because we've recently
switched over the Effect docs search
to use a service and a
tool called Mixedbread,
which does smart AI
indexing and retrieval.
So if you now look for interruption,
you actually now should
get a much more useful result
and we're going to wire up the same thing
for the Effect MCP as well.
So that will mean that if an AI needs to
look up interruption,
it gets better context. So
that's going to help a lot.
But the other thing
that AI really relies on for
delivering good results is that it's
trained on various little snippets
and examples, similar to us humans.
Some of us are going to go to the source
and read the source code
and apply a lot of mental horsepower to
figuring out how
something should be used.
But I think the majority much rather
prefers to look for an example,
like how can I use Effect
Schema? How can I use Effect HTTP?
Or how can I use effect-messaging instead
of reading the source,
reading an example,
trusting that it is correct,
and then kind of transfer-applying
that to your own domain.
And this is where we still have a long
way to go to like cover all the API
use cases of Effect and
helping the coding agent,
giving it more training
material. And once we have that,
I expect a huge uptick in
how good the results are
that we're getting from
coding agents for Effect.
So that's something I'm
keeping a close eye on
and expect to see a lot of improvements
there in the future.
So as a last topic,
we can look back on many
years of Effect getting better,
but it's barely getting started. What are
you looking forward most
to in the future of Effect?
Okay. It's a good question. So first, I'm
excited to know that the
team is working on the V4.
I'm not sure what impact it will have
for us, but I'm excited.
I understand that it will have like
performance benefits.
I'm also excited to
try out the Effect AI.
So we were talking about AI for
productivity with agents.
But I think we could also use AI for
application stuff like for automating
stuff in our business.
For instance, I'll give you a very simple
example: when an investor wants to
withdraw
money from the platform,
he fills in his bank account number and
he has to send us a PDF or something,
a document from the bank, to
justify that it's the correct number.
And at the moment, we have to
verify each document manually.
And this is something we
could automate with AI.
So in the future, we could, for instance,
have a service that calls an LLM, sending
the bank account number to check against the PDF.
And we could ask for a structured
response, like yes or
no, how confident are you?
And based on this response, we could make
a decision and programmatically validate
all of the bank accounts.
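A rough sketch of that idea, assuming @effect/ai's structured-output API looks roughly like this; provider wiring (OpenAI, Anthropic, and so on) is omitted and all names are illustrative:

```typescript
import { AiLanguageModel } from "@effect/ai"
import { Effect, Schema } from "effect"

// The structured shape we ask the model to return.
const Verification = Schema.Struct({
  matches: Schema.Boolean,
  confidence: Schema.Number // e.g. 0 to 1
})

// Hypothetical check: the prompt wording and parameters are assumptions.
const verifyBankDocument = (iban: string, documentText: string) =>
  Effect.gen(function* () {
    const response = yield* AiLanguageModel.generateObject({
      prompt: `Does this bank document confirm the account ${iban}?\n\n${documentText}`,
      schema: Verification
    })
    // A typed value we can branch on programmatically, not free-form text.
    return response.value
  })
```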
I mean, I'd highly
recommend you giving it a try.
I've used the Effect AI package for very
similar things myself.
I've built a little invoicing system for
myself that I use when I work with other
people or send out invoices myself.
I've used the Effect AI package,
particularly for that use case.
And it just worked first try.
So it is incredible
what is possible with that.
It's nowhere comparable to the amount of
effort it takes to build
effect-messaging from scratch.
But this won't be like that.
You'll be done after 15 minutes and it's
going to significantly improve the
capabilities of your products.
So don't sleep on that.
Yeah, definitely.
And actually, I think it's
a big shift of mindset also.
And I think I didn't realize it until a few
weeks ago, actually.
The way I was interacting with
LLMs so far was just sending text,
receiving text, but
not structured text.
Not text I could decide to put in my
code and do an if-else on the results.
But here it's completely different.
It's a real shift of mindset.
And I think it's a
revolution for programming.
Most certainly.
Well, I'm looking forward to the next
time we have a chat to hear more about
how you're leveraging the AI packages of
Effect to ship new features.
But Samuel, I want to thank you for
taking the time and
sharing your entire journey here.
What you're building at
Spiko sounds super, super cool.
So it seems like you all are hiring.
So if you're based in Paris or want to
move to Paris, I think Spiko is probably
an interesting company
to consider applying at.
And yeah, also huge congrats on all the
milestones you've reached recently on
shipping effect-messaging.
I hope that a lot more folks are using
messaging patterns like that.
I think it's a wonderful fit with Effect.
And thank you for coming on the show.
Thank you so much as well.
It's a real pleasure for me to
participate in the podcast and also to be
part of the community.
So I want to thank you as well.
Perfect. Salut.
Salut.
Thank you for listening to the
Cause & Effect Podcast.
If you've enjoyed this episode, please
subscribe, leave a review
and share it with your friends.
If you haven't done so already, you can
join our Discord community.
And if you have any questions, feedback
or suggestions about this episode or
about Effect in general,
don't hesitate to get in touch.
See you in the next episode.