Postgres FM

Nikolay and Michael discuss index maintenance — how do we know if or when we need it, and what we need to do.

Show Notes

Important correction from this episode: amcheck promises no false positives, not no false negatives, sorry!

Here are links to a few things we mentioned: 


What did you like or not like? What should we discuss next time? Let us know by tweeting us on @samokhvalov and @michristofides or by commenting on our topic ideas Google doc.

If you would like to share this episode, here's a good link (and thank you!)

Postgres FM is brought to you by:

With special thanks to:

Creators & Guests

Michael Christofides
Founder of pgMustard
Nikolay Samokhvalov
Founder of Postgres AI

What is Postgres FM?

A weekly podcast about all things PostgreSQL

Michael: Hello, and welcome to Postgres FM, a
weekly show about all things PostgreSQL.

I am Michael, founder of pgMustard, and this
is my co-host, Nikolay, founder of Postgres AI.

Hey, Nick.


What are we gonna be talking about today?

Nikolay: Yeah.

Hello, Michael.

As we decided, let's talk about index maintenance:
first of all, bloat removal, but maybe not only that, right?

Michael: Yeah, we've alluded to this in a previous episode
around vacuum, I believe, or about bloat specifically.

So yeah, excited to dive into this with you.

So should we start with how this occurs, perhaps,
or a quick recap on whether this is always a problem?

If I have a Postgres database, is it very
likely I'm suffering from this at the moment?

Or is there a chance that it's fine?

Nikolay: Yeah, by the way you are right.

Our episode was called vacuum, not bloat, but they're so close to each other.


Because usually we talk about bloat and lack of vacuuming, or some
inefficient vacuuming. So this is a great question you asked. We were

just asked: if we tuned our autovacuum to be quite aggressive...

Everything looks fine.

The question is still: should we have index maintenance from time to time?

Is it inevitable?

And in my opinion, from my practice, the answer is
yes, due to many reasons. Autovacuum won't solve

everything, and some bloat will still be accumulated,
even if you have very aggressive autovacuum.

And observing other database systems,
for example Microsoft SQL Server:

they also have index maintenance as a routine task for DBAs.

In my opinion, in heavily loaded, growing, large
systems, you should still rebuild indexes from time to time. Aggressive

autovacuuming will only reduce the frequency of this need; it will
come less frequently, but you still need to recreate them.

Michael: I think that's a really good point on heavily loaded systems.

I think that probably the only caveat I would put is:
if you've got a relatively light load on your Postgres

database, this might be something you don't come across.

Even if you haven't tuned autovacuum, it
will be tidying things up as it goes along, freeing up

index pages, especially on later versions of Postgres.

There's some additional logic to make indexes
even less likely to bloat. But I think it's

even worse for indexes than for tables, right?

Like, vacuum is able to free up space in tables much...

Well, refer to a previous episode for more details, but I think
in tables you can free up space and it's much more easily reused.

Whereas in a B-tree index, if you get page splits, vacuum can
free up that space in those pages again, but it can't undo the splits.

Nikolay: It doesn't rebalance, right?

It doesn't rebuild the tree.

I agree with you.

Some systems might not need automatic index recreation, but I'm sure
everyone needs monitoring and analysis of bloat on a regular basis.

So this is a must for everyone, in my opinion, and the question
is how to analyze bloat, because it's not a trivial task.

All the scripts we have for fast, lightweight
bloat analysis are, in a sense, all wrong.

I mean, they can have some errors.

They are not precise.

For example, create a table with three columns: smallint,
timestamptz (don't use timestamp without time zone),

and smallint again. Fill it with a few million rows.

Create an index, and use your script to estimate bloat in the table and
in the index (actually it's two different scripts, right?).

But still, at least for the table, I'm sure you will see
terrible bloat. Yet we know there is no bloat there: with

just a series of inserts, no deletes or updates, there is no bloat.

You'll see something like 30%.
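The experiment Nikolay describes could be sketched roughly like this (the table and column names are made up for illustration; the ~30% figure is what a typical estimation query might report, not a guaranteed number):

```sql
-- Hypothetical reconstruction of the alignment-padding experiment.
-- A smallint (2 bytes) followed by a timestamptz (8 bytes, 8-byte
-- aligned) forces ~6 bytes of padding per row; most bloat-estimation
-- queries assume tightly packed columns and report that padding
-- as "bloat" even though nothing was ever updated or deleted.
CREATE TABLE padding_demo (
    c1 smallint,
    c2 timestamptz,   -- timestamp WITH time zone, as recommended
    c3 smallint
);

INSERT INTO padding_demo
SELECT 1, now(), 2
FROM generate_series(1, 3000000);

CREATE INDEX ON padding_demo (c2);

-- Now run your favorite bloat-estimation query against padding_demo:
-- it will likely report something like 30% "bloat".
```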


So we need to keep in mind that our scripts have errors,
sometimes quite significant ones, because they

don't take into account alignment padding.

The experiment I just described, on purpose, has gaps
between column values inside each page due to that padding.

I'm not sure, by the way, about the index.

It should also show some estimated bloat.

Maybe not; maybe I'm wrong here.

Maybe it's only about the heap.

Michael: Maybe multi-column indexes, but I haven't checked.

Nikolay: right?

Anyway, estimation scripts are great because they are light.

But I always correct everyone saying "bloat is this"; I
say "estimated bloat is this", because it's not fully exact.

The real number can be obtained by using pgstattuple,
the contrib extension. By the way, I had no luck using it;

I had problems with it in the past too.

So I don't use it myself.

In my approach, since we work a lot with clones of production environments,
I always say: let's just run vacuum full on the clone, because why not?

And compare numbers before and after.

And this is a reliable number for bloat.

This is a real, exact number,
because vacuum full shows it to us.

So the clones are cool here as well.

But you need to wait a little bit, of course.
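On a disposable clone, the before/after comparison could look like this sketch (the table and index names are examples; VACUUM FULL takes an exclusive lock, so this is strictly for clones, never production):

```sql
-- Measure, rebuild, measure again: the difference is the exact bloat.
SELECT pg_table_size('my_table')        AS table_before,
       pg_relation_size('my_table_idx') AS index_before;

VACUUM FULL my_table;  -- rewrites the table and rebuilds all its indexes

SELECT pg_table_size('my_table')        AS table_after,
       pg_relation_size('my_table_idx') AS index_after;

-- bloat fraction = (before - after)::numeric / before
-- No estimation error: this is the real number, obtained by brute force.
```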

Michael: Yeah, I guess this leads us to quite an interesting
part of the topic, which is when should you worry about this?

And I, I might even argue that 30% bloat is probably not that bad, you know?

Nikolay: no, not that bad.

But if it shows 60%, and 30 of those 60 points are estimation error,
you might decide to reindex when you don't need to.

The fact that it's an estimate affects our decisions anyway.

Michael: For sure. But just to give people peace of mind:
when we are talking about badly bloated indexes,

they could easily be triple the size of a reindexed one.

Nikolay: Oh, this is great.

By the way, this is an exercise I do usually: people see 90%
bloat and ask, is it bad? I say: 90% bloat means that your index
is 10 times bigger than it would be without bloat.

99% means a hundred times bigger.

It's already quite noticeable.

And by the way, I wanted to mention: those, let's say,
lightweight estimation scripts are sometimes not light at all.

We have many cases when they fail to finish within the statement
timeout, like 15 or 30 seconds, because there are too many indexes.

And analysis takes time as well.

So this is not something you should put to monitoring to run each minute.

Probably you don't need it every minute.

You need it once per day, maybe because it doesn't change very fast.

But back to the question: we have 99%
bloat, meaning our index is a hundred times bigger.

The question is: is it bad?

Michael: Or why?


Nikolay: we have this space, for example.

Yeah, oh yes.

It's bad, but why it's it's bad?

How, how is it bad?

Michael: Yeah, this is a fun, kind of very specific
thing that we came across working with query plans.

And it's funny, because this feels like what we discussed last week:
a macro analysis problem, you know, system-level

what's going on. But you can spot it sometimes from a single query plan.

So if you notice maybe your queries are slower,
or degrading over time: the same query is maybe

doing an index scan, but that scan is getting slower over time.

And if you look at buffers (again, a previous episode), you can sometimes see
that those buffer numbers are way higher than they need to be for

the amount of data involved, or gradually increasing each time you run it.

So it's not guaranteed that that's a sign of
bloat, but there's a really good chance that it is.
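The buffers signal Michael describes might look something like this (the table, plan shape, and numbers are purely illustrative):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM tasks WHERE queue_id = 42 ORDER BY id LIMIT 100;

-- Against a badly bloated index you might see something like:
--   Index Scan using tasks_queue_id_id_idx on tasks ...
--     Buffers: shared hit=9500 read=1200
-- i.e. thousands of buffers for a query that logically needs only a
-- few hundred rows; after a REINDEX the same plan typically reports
-- a small fraction of that.
```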

Nikolay: This is great. By the way, you apply this
classification of macro and micro, and I

didn't even think about it, but it exactly
aligns with my thoughts.

So we have macro effects and micro effects. Starting from micro effects:

sometimes particular queries with particular parameters
might behave much worse in the case of a bloated index.

Because, for example, instead of dealing with a few
buffers, we need to deal with index entries which are sparsely

stored, and we need to involve many more buffers.

So we can see degradation, sometimes several
orders of magnitude in extreme cases.

But it's tricky to find. For example, you
checked your query for a few parameter sets.

You see it's not bad, comparing bloated versus un-bloated, right?

But you don't look at other parameters,
and for other parameters it may be worse.

So it's tricky to automatically check how
bad it is, because B-tree height actually grows quite slowly.

The branching factor is very high, so the tree grows very slowly.
And if we go from, I don't know, a thousand buffers to a hundred
thousand buffers for overall index size, or a million buffers already,
we don't see a huge increase in lookup time because of the height:
we just need a couple more hops to reach the leaf page. Who cares?

So a couple more IOs don't matter.

So B-tree is excellent here.

It grows very slowly.

So searching: let's find one row among a million
rows, or let's find one row among a billion rows.

Well, the difference is not huge, right?

It won't be noticeable. It won't be a thousand-times difference.

It's a small difference, a few more IO hops. But if you need to deal
with many, many index entries, and bloat means they are stored

sparsely in a bloated index, then of course the difference will be dramatic.

Michael: I guess that's covering micro a little
bit, but on the macro side, we've got things like

Nikolay: Macro, in my opinion, is much more interesting.

I feel it. Like, if we have 99% bloat, it means we have
so many more pages to store the same data in our index.

And it means that not only is disk space occupied. I worry
about that less; disk space is interesting, and I will

explain my thoughts in a second. But the most noticeable
negative performance effect from high index bloat, in my opinion, is
that we need to keep more pages in cache, both in the buffer pool and in the OS page cache.

So it means that our cache effectiveness goes down.
I have cases sometimes where a database, or a few databases, grew so quickly,
and the company using these databases may already be a multibillion-dollar
company, a unicorn, but nobody was ever fighting bloat.

So, for example, up to half of the database is bloat, both tables
and indexes, and it means that shared buffers work much worse.
Michael: This is a great point. Again, a caveat: this
applies, of course, when your database,

including indexes, including bloated indexes, exceeds the size of
shared buffers; before that it's probably not going to cause you any issues.

But most of us are probably running databases where we exceed that.

Nikolay: Or if it exceeds the cache size, the buffer pool size.

But if you eliminate bloat, it fits again.


Michael: Yeah, and I would imagine the difference
you would see then would be stark.

Nikolay: Yeah.

This macro effect is quite noticeable.

And also we can talk budgets here, like spending on hardware,
or, in the case of Aurora, where they charge for IO:

if we need to do much more IO, we pay more. Remove the bloat and we
save here.


Also this macro effect is very interesting.

I think it's maybe the most important one, in my opinion,
in terms of performance. But the third one, the

disk space occupied, is usually the first thing that comes to mind.

When we think about bloat:
bloat means that we occupy much more space.

We pay for this. But not only that: if we also recall
our previous episodes, we need to write more to WAL, right?

Full-page writes, for example. Index writes also go to
WAL, so more WAL is generated, and the data files are also bigger.

WAL is bigger.

It gives more work to the backup system.

Backups take longer. But replication too, physical at least,
is also slower: more bytes need to be transferred

to the standby nodes. Negative effects everywhere.


Michael: Yeah.

That's a really good point.


Nikolay: The checkpointer, actually, the checkpointer
also needs to take care of more dirty pages.

Michael: Yeah.

So the ideal is not to grow your indexes 10x,
and then re-index them to shrink them back down.

In an ideal world, we'll stay on top of it,

so it stays in a much more manageable range.

Firstly, I guess, through autovacuum. But also, as we've
discussed, autovacuum won't shrink an index back down once it's started to bloat.

So we do need to do these occasional reindexes, ideally REINDEX
CONCURRENTLY, I guess, or, as you were going to say, pg_repack.

Nikolay: Right.

So we have a tool called postgres-checkup, which explains a lot
of details about bloat and provides some recommendations.

I mean, we honestly say: yeah, we have this tool,
and it tries to explain what to do. But in general, the plan we recommend,
especially for cases like this example (the company was
super successful, the database has grown, but there are no proper processes in place), is this.

Of course, revisit the autovacuum settings, a hundred percent.
But that won't always help: as we just discussed, index health may degrade
anyway, also if you have long transactions, also if you have large tables
(let's touch on that once again in a minute). Autovacuum won't help you a lot here,
but it's still needed, tuned to be more aggressive, to eliminate dead tuples faster.

Then run index maintenance once, and then prepare to run it again, maybe in
fully automated fashion, during weekends, because index maintenance
means index recreation, and it's definitely stress for the disks.

And for WAL as well.

And for replicas, it's also definitely some stress.

So prepare to run it automatically,
for example every weekend. In GitLab, we did this

(disclaimer: GitLab is our client), and they have
a lot of interesting information, automation, and articles about how

they automated it; they run fully automated index recreation.


Michael: I should read those; I haven't.

Nikolay: Right.

The question is how to recreate indexes.

It depends on your Postgres version: from Postgres
12, it's possible to run REINDEX CONCURRENTLY.

On earlier versions, the idea is: okay, you can create a new index and drop the old one.

But if this index is participating in some constraint,
like a primary key, it'll be quite a task. But there is

also pg_repack, which can repack indexes, basically recreate them,

without touching the tables, because removing bloat from a table
is a bigger task than just recreating indexes.

So pg_repack can work with any supported Postgres major version.

I mean, not the really old ones, but, for example, 9.6 is fine.
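The options just mentioned could look like this in practice (the index, table, and column names are placeholders):

```sql
-- Postgres 12+: one command, no long exclusive lock.
REINDEX INDEX CONCURRENTLY my_bloated_idx;

-- Older versions: recreate manually. This only works for plain
-- indexes, not ones backing a constraint such as a primary key.
CREATE INDEX CONCURRENTLY my_bloated_idx_new ON my_table (col);
DROP INDEX CONCURRENTLY my_bloated_idx;
ALTER INDEX my_bloated_idx_new RENAME TO my_bloated_idx;

-- pg_repack alternative (run from the shell): rebuild only the
-- indexes of one table, leaving the heap untouched.
--   pg_repack --dbname=mydb --table=my_table --only-indexes
```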

Michael: Yeah.

Nikolay: It has some interesting caveats though.

For example, if you have deferred constraints (do I pronounce it right?
Deferred constraints, right), you might have issues running pg_repack.

But that's for tables, actually, not for indexes; for indexes it's fine.

For tables it can have issues.

Miro had problems with that, and they wrote an excellent article.

Well, we can provide a link. If you need to fight bloat
in a table and you have deferred constraints, it's a

very good read. But for indexes, there won't be any problem.

But the modern approach is just REINDEX CONCURRENTLY.

Unfortunately, this feature has had many bugs fixed.

All of them are fixed.

But history shows that many people, including
me, think there might be more to be found in this area.

So I would recommend: if you run REINDEX CONCURRENTLY, it's
worth also having a process of index verification, for example

using amcheck; you can check for corruption periodically.

Weekly, for example. Also, after recreation, maybe it's a good idea to
double-check whether there is corruption or not, because of this recent bug,

which was discovered in May in Postgres 14; we briefly discussed it.
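A periodic amcheck verification could be sketched like this (the index name is a placeholder):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Basic B-tree structure check; takes only an AccessShareLock,
-- so it can run alongside normal reads and writes.
SELECT bt_index_check('my_bloated_idx');

-- Stricter check of parent/child relationships; takes a ShareLock
-- that blocks writes, so better suited to clones or maintenance windows.
SELECT bt_index_parent_check('my_bloated_idx');
```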

So the interesting thing is that if you have huge tables,
like terabyte-size or multi-terabyte-size, and they are

not partitioned, creating or recreating an index can take hours.

And during all this time, autovacuum cannot delete dead tuples in any table,
any index; it's a database-wide problem. In Postgres 14 there was an attempt
to improve this, so that the xmin horizon was not held during index creation
or recreation. But unfortunately a bug was discovered in May, and in June,
Postgres 14.4 had this optimization reverted.

So the rule of thumb: don't allow your tables
to grow to more than a hundred gigabytes,
otherwise index maintenance will require much more effort.

Michael: Yeah.

Actually, just while we're on that topic: I think if you're on a minor
version of 14 lower than 14.4 and you use REINDEX CONCURRENTLY,

upgrade, with the exception, I think, of RDS and Aurora, who back-patched the fix.

Nikolay: Maybe; I didn't know about that.

And not only REINDEX CONCURRENTLY: CREATE INDEX CONCURRENTLY
was also affected, and everyone uses it, because how else

can you create an index on a running system? Only CREATE INDEX CONCURRENTLY.

So if you are running 14.0 through 14.3, you have an urgent task.

I think everyone knows it, but it's worth repeating anyway.

Michael: I met somebody the other day who didn't, unfortunately.

So I think it's worth repeating. But awesome.

You mentioned a couple of things that we probably should touch on.

So index corruption is another kind of
maintenance that you might need to deal with.

I know of one thing that's really famous for causing
corrupt indexes, which is operating system upgrades.

Nikolay: Any operating system upgrade is dangerous.
If the glibc version changes, it may cause index corruption silently,
and that's a problem.

So it's a big problem, unfortunately, and there's no easy solution,
and it's quite easy to get into trouble if you don't think about it.



Michael: Well, I just wanted to mention it
in case anybody else wasn't aware of that one.

I certainly wasn't a year or two ago.

Nikolay: Yeah, you need to recreate indexes, and
you need to do it inside a maintenance window.

Michael: Yeah.

Nikolay: Also, unfortunately, GIN indexes can be bloated as well, and
there is no good way to estimate it; and they can be corrupted as well.

Also, there is no good official way to check them: there are patches for
amcheck that are still not applied, but they are already quite advanced.

And I used those patches to check GIN indexes in a couple of places, and it worked.

I mean, it didn't find anything.

So there's also always a question: we had false positives there at first.

That was fixed. But yeah.

Michael: I think their promise is no false negatives, right?

amcheck.

Nikolay: right.


So, talking about the bigger picture: imagine you have this
case, a successful company, a very big database, but a lot of bloat.


And in indexes. Removing table bloat is also interesting, but for indexes
we discussed various problems, including the micro level, when

particular queries are slow. About removing that bloat, by the way: if
you have a queue-like pattern of working with a table (you insert a row,

probably update it a few times, then delete it because the task was processed),

this is a good way to get high bloat.

And I saw many times that high bloat is fine up to some point, but at
some point queries degrade very fast because of this micro-level problem.

I mean, we have memory.

We have big shared buffers.

The table is quite small, maybe a few gigabytes only, but we see some
queries degrade because of these micro-level issues we discussed.

So the question is where to start.

If you have many hundreds or thousands of
indexes, where would you start for this initial run?

Would you start from small indexes, but those which exceed some
threshold, like 90%, take care of those first and then go

down? Or would you start with the indexes which have the most
estimated bloat bytes first?

Where would you start?

Michael: That's a good question.

I saw your tweet as well, asking people
about unused and redundant indexes.

And I know that maybe I'm cheating by having seen that,
but that felt like a really nice angle, especially redundant ones.

I'm not so sure about unused ones, because I wonder whether they even get

bloated; it depends on statistics, I guess,
a little bit. But redundant: the easiest example of a

redundant index is that you've got the exact same index twice.

So maybe let's take a simple case of a
single-column B-tree index, but we've got two of them.

Nikolay: Well, redundant can be, for example: you have a single-column
index and a two-column index. Of course the single-column one is

redundant to the two-column one, if the two-column index has the same
column in the first position, not the second.

This is


Michael: The column order matters.

Nikolay: That's the classic redundant case. But the problem would be:
what if we try to eliminate the first index because it's redundant, but

the second index is also unused, and according to a different
report we also decide to drop it?

You drop both indexes, and that's not a good idea.

Michael: Or it's so bloated that Postgres actually avoids using it and goes to...
Nikolay: This is an interesting question.

So, we discussed the problems, micro and macro; depending on which you
consider the biggest problem for your case, I see two options.

If you think about particular queries that have badly degraded
performance because of bloat, you probably should say: let's

reindex the indexes with a bloat level of more than 90% first.

Even if they are smaller ones and don't contribute much
to this macro-level problem, like spamming our buffer pool.

But if you think the macro-level problem is bigger, you probably should start with
the biggest ones, the indexes which have the most estimated bloat bytes:

order by that and go from top to bottom, even though some of them may be unused.

Michael: Can I make a potentially wrong
argument for always starting with the smaller ones?

I'll be interested in your thoughts.

If you've got a macro-level problem, if your database
is on fire, and you're trying to reindex the smaller

indexes: whilst they might be being used, just because
they're smaller doesn't mean they aren't being used more.

So I wonder if you could also look at access stats.

If you started with the smaller ones, yeah, the other angle would be:

if I re-index a smaller index, it finishes faster,
and my system reduces its load slightly sooner than if

I re-index a large one that takes hours; in that case I've got
hours more at the same level of high disaster, I guess.

But if I start re-indexing smaller ones and they finish
faster, maybe I can reduce the load a little bit quicker.

Nikolay: Well, in my opinion, if the database is on fire
as a whole, I would start from the top and fight the biggest ones first.

I tried to think about whether we should look at usage stats for indexes.

I didn't see any big reason for that.

Anyway, we didn't discuss the threshold, but usually
the practical threshold is 30 to 40 percent,

somewhere there, if we have bigger bloat than this.

Michael: Yeah.

Nikolay: That's for bloat.

If bloat is below 30%, 20%, for example, we usually don't care.

Indexes, by the way, are bloated 10% by default,
on purpose, because they have fillfactor 90.

Michael: Most bloat estimation queries factor that in.

Nikolay: Yes, they take it into account. But anyway,
I mean, a 90% fillfactor means we bloat on

purpose, because we want some room for updates inside pages.

If it's 90... if it's actually at 80,

it's not that bad; I mean, 20% bloat. But if bloat has already eaten like
half of the index, it's maybe already time to take care of it.
And so I would go from top to bottom if we are taking care of the whole database.

But in some cases the database is fine overall, yet some particular queries,
for example with the queue-like pattern we use, dealing with

that table, have very bad performance.

In this case, I would start from the most
bloated indexes regardless of their size.

Maybe I would still go from top to bottom,

but I would skip indexes which have a bloat estimate of 60 or 70 percent
on the first run, to help those queries we know about as soon as possible.

Maybe I would take particular tables, actually, in this case.

Michael: I think I understand now why that
strategy actually makes sense, because bloat is not independent of usage.

Chances are, if it's heavily bloated, it's heavily used.
Nikolay: good, good.

But imagine some indexes.

Oh, by the way, if indexes unused, they, they,
we should apply extreme solution for blo.

We just drop index.


Once we know it's unused everywhere, on the standbys, everywhere, and we
have observed it for quite a long time. I usually recommend the statistics age to be

more than one month, because we had cases where an index was unused for

several weeks, we dropped it, and then at the beginning of the next month
analysts were waiting for some report and didn't see it,

because the index had been dropped.
So some indexes are used only once per month and they are very important.

So usage numbers, if they are not zero, I don't know how to use them.

Maybe I'm wrong.
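A starting point for the unused-index analysis might be something like this sketch (remember Nikolay's caveats: check every standby, and check how long the statistics have been accumulating before trusting the zeros):

```sql
-- Candidate unused indexes: zero scans since the last stats reset.
SELECT s.schemaname, s.relname, s.indexrelname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique   -- unique indexes enforce constraints; keep them
ORDER BY pg_relation_size(s.indexrelid) DESC;

-- How old are these statistics? (Nikolay suggests at least a month.)
SELECT stats_reset FROM pg_stat_database
WHERE datname = current_database();
```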

I tried to think about how to join bloat and usage: more writes
bloat an index more. But imagine this case: you have an unused index.

You don't use it for index scans, but the question is:

does it contribute to this macro problem,

spamming, spamming our caches?

Michael: Probably not.

It's probably been long since evicted.

Oh, you think it does?

Nikolay: Yes.

Any update, unless it's a HOT update (heap-only tuples),
will need to update this index.

And to update the index, we need to load its pages into memory.

Michael: And so there's write overhead as well, but...

Nikolay: We don't use the index, but it still occupies some space in our caches.


Michael: interesting.

Nikolay: Maybe it also has some effects at the micro level.

I don't know.

Maybe not.

So there are many, many interesting things here.

But I hope this discussion will help someone to understand it better.

Anyway, just fight bloat in indexes, and prepare not only to do it once
manually, but to automate it, using pg_repack or REINDEX CONCURRENTLY.

Carefully, though, because REINDEX CONCURRENTLY might lead to issues.

Well, right now a lot of bugs are fixed, and people use REINDEX CONCURRENTLY.

Many people use it, many projects, large projects.

So it works.

So I don't want to be blamed, like, "Nikolay told us
not to use REINDEX CONCURRENTLY": use it, but just keep in mind

that many bugs were fixed, and maybe there are some more bugs in the future.

I would just automate it, but also automate the analysis of corruption
using amcheck, at least weekly.


Michael: Is amcheck available on managed services, or...?

Nikolay: Yeah, it's a standard contrib module;
it's available everywhere.

Michael: All the contrib modules are available everywhere.

So I'm glad I asked. But yes.


So you mentioned GIN indexes briefly, and I think... I don't
think I know enough about maintenance of them,

but I think I read a really good blog post by...

Nikolay: GIN indexes.

There are B-trees inside: smaller ones, for posting lists and for keys,
as I understand it; like two types, maybe I'm wrong.

It's already been a few years
since I touched GIN internals.

So maybe I'm wrong here, but definitely there are trees inside.

I know from the developers that at first some of those B-trees didn't exist,
and performance was not good at larger scale,

but there are B-trees inside, and that's why GIN indexes can also be corrupted:

when you switch to a new glibc version, the rules of character ordering,
the collation, also change.

So of course GIN can also be corrupted, because there are trees inside as well.

Michael: Yeah.

Nikolay: But amcheck support for GIN doesn't exist.

And how to estimate bloat there? We also don't know.

My rule: use clones, use vacuum full from time to time, and see the actual...

Michael: actual,

Nikolay: ...reliable number. Like brute force.

Michael: Yeah.

I like it.

It's the first good use case for vacuum full on an active system I've heard of.

Nikolay: Of course, it'll take many hours if you have
many terabytes, but maybe you should have partitioning.

That's a reminder, right?

Don't allow your tables to grow over a hundred gigs.

And then, if you have partitioning, you
can run checks in parallel, by the way.

It's not a trivial task.

We have some automation scripts, developed for GitLab, I guess.

We can run amcheck in parallel.

We can run vacuum full in parallel, on a temporary

clone, which has a lot of power, and this automation is good to have.

Michael: Nice.

Is that in recent versions, or has that been around for a while?

Vacuum full in parallel.

Nikolay: No, no, no, no.

Vacuum full in parallel doesn't exist, and amcheck in parallel doesn't exist.

You need to script it.
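Scripting the parallel run might look roughly like this sketch (assuming psql access, GNU xargs, and a partitioned schema where each index is small; real automation needs more care with connection settings and error handling):

```shell
# List all B-tree indexes, then fan amcheck out across 8 workers.
psql -XAt -c "SELECT quote_literal(indexrelid::regclass)
              FROM pg_index i
              JOIN pg_class c ON c.oid = i.indexrelid
              JOIN pg_am a ON a.oid = c.relam
              WHERE a.amname = 'btree'" \
  | xargs -P 8 -I{} psql -Xc "SELECT bt_index_check({})"
```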

Michael: Oh, okay.

I understand now: because of the partitions.

Nikolay: By the way, it's interesting:
people run amcheck in a single thread.

And I also did, until someone from
GitLab (thanks for this question)

asked why we ran it that way.

Like, we have so many cores; why?

It was an excellent question.

Of course it should be run in parallel,
because it'll produce results much faster.

And if you are in the cloud and you use a temporary
clone, of course you want to make the job faster,

even if it loads all CPUs at a hundred percent, because
you pay for minutes or seconds of usage.

You don't pay for a hundred percent versus 50% utilization; that doesn't matter.
You pay for time, and you want to make your job faster, so you run at full speed.

Michael: yeah.

Same in maintenance windows, I guess.

But you are against that...

Nikolay: In maintenance windows, yes.

But not the same for regular index maintenance on
production, because there you probably want a single

thread, or a few threads, of REINDEX CONCURRENTLY, maybe just one.

Because it's still some stress, and you
don't want to run it at full speed. By the way, again,

I encourage our listeners to read the articles on the GitLab blog.

They have good materials.

I recommend them.

I also remember that Peter Geoghegan came and learned something interesting from
their experience while working on B-tree deduplication in Postgres 13 and 14.

So it's recommended material.

Michael: Awesome.

I suspect that's all we've got time for today.

Thank you everybody.


And thank you to everyone who keeps giving us feedback, keeps
sending us suggestions, and shares it online.

We really appreciate it.

We look forward to seeing you next week.

Cheers, Nick.

Nikolay: Yeah.

Thank you, Michael.

See you next week.


Michael: take care.