1
00:00:00,099 --> 00:00:05,199
Michael: Hello, and welcome to Postgres
FM, a weekly show about all things PostgreSQL.

2
00:00:05,379 --> 00:00:07,169
I am Michael, founder of pgMustard.

3
00:00:07,449 --> 00:00:10,089
This is my co-host Nikolay, founder of Postgres AI.

4
00:00:10,509 --> 00:00:10,969
Hey Nikolay.

5
00:00:10,989 --> 00:00:11,679
How are you doing today?

6
00:00:12,309 --> 00:00:13,149
Nikolay: Hello, doing great.

7
00:00:13,209 --> 00:00:13,629
How are you?

8
00:00:14,189 --> 00:00:15,209
Michael: I am doing well.

9
00:00:15,209 --> 00:00:16,002
Thank you very much.

10
00:00:16,336 --> 00:00:18,856
Today, we are gonna talk about buffers.

11
00:00:18,968 --> 00:00:24,157
And I know this is a topic you care a lot about and
I've enjoyed reading your opinions on them in the past.

12
00:00:24,630 --> 00:00:28,343
So, really excited to talk about why they're important.

13
00:00:28,343 --> 00:00:30,148
Maybe we can start with what they are.

14
00:00:30,318 --> 00:00:32,378
Nikolay: You know what, let's start slightly off topic.

15
00:00:32,437 --> 00:00:38,887
SQL can be written in upper case or lower case, or
maybe there are many more options.

16
00:00:39,337 --> 00:00:47,277
So I prefer lower case, because I write a lot of SQL code,
so it's like, I, I write SQL code much more than other code,

17
00:00:47,277 --> 00:00:51,537
like any other language I use over the last many years.

18
00:00:51,567 --> 00:01:00,297
And so I don't like to scream and use all caps at all,
but when I type BUFFERS, I enjoy typing it in uppercase.

19
00:01:01,492 --> 00:01:02,002
Because

20
00:01:02,002 --> 00:01:06,442
I, yeah, because it's so important to use them.

21
00:01:06,741 --> 00:01:14,781
Michael: So just to check, do you write explain, open brackets,
analyze, comma, and then caps lock on BUFFERS,

22
00:01:14,871 --> 00:01:15,231
and then you
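
The style Michael is describing, lowercase SQL with only BUFFERS capitalized, would look something like this (the table and column here are made up for illustration):

```sql
-- lowercase sql keywords, with BUFFERS in caps for emphasis;
-- "orders" is a hypothetical table
explain (analyze, BUFFERS) select * from orders where id = 42;
```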

23
00:01:15,366 --> 00:01:20,826
Nikolay: I mean, I still use
upper case when I explain things to people.

24
00:01:20,826 --> 00:01:25,166
And when you embed small parts of SQL in.

25
00:01:25,791 --> 00:01:26,781
Regular text.

26
00:01:27,051 --> 00:01:29,841
It makes sense to still use upper case.

27
00:01:30,261 --> 00:01:37,761
That's probably why I remember. But like
BUFFERS, a couple of times per week I explain

28
00:01:37,821 --> 00:01:41,121
how important it is to use buffers to other people.

29
00:01:41,661 --> 00:01:45,861
So, and this is the one I enjoy typing in uppercase.

30
00:01:47,211 --> 00:01:48,981
Michael: That's so funny.

31
00:01:49,041 --> 00:01:54,851
I think I mix my use of lowercase and uppercase all the time in blog posts.

32
00:01:54,856 --> 00:01:55,991
It's really difficult to know.

33
00:01:55,991 --> 00:02:02,351
Sometimes it's not clear that you're talking about a keyword
or code, and sometimes the code formatting is not great.

34
00:02:02,351 --> 00:02:10,326
So yeah, especially when it's inline, I do sometimes
use capitals, just to show that I'm talking about the keyword,

35
00:02:10,326 --> 00:02:14,291
not just a normal word in the sentence, but I'm very inconsistent.

36
00:02:14,321 --> 00:02:17,261
While we are talking about consistency of how we write things.

37
00:02:17,741 --> 00:02:20,621
Do you always write Postgres, or sometimes PostgreSQL?

38
00:02:20,651 --> 00:02:22,241
Do you have like a rule on which one you

39
00:02:22,576 --> 00:02:27,666
Nikolay: 90% Postgres, just for 8 letters instead of

40
00:02:29,876 --> 00:02:31,046
Michael: just for the shortness.

41
00:02:31,646 --> 00:02:34,606
I find myself doing the same, probably similar 90%.

42
00:02:34,936 --> 00:02:39,106
I tend to use PostgreSQL if like it's a super formal use.

43
00:02:39,106 --> 00:02:45,656
So maybe if I'm talking about a version number in a formal
setting, I might say PostgreSQL, but I don't have any better

44
00:02:45,896 --> 00:02:55,721
Nikolay: Yeah, I will be trying to pull us into off topics, you know. Like,
it's so bad that Postgres is eight letters, not seven, because in California

45
00:02:56,171 --> 00:03:00,941
you can have a custom driver's license plate, and it's limited to seven letters.

46
00:03:01,511 --> 00:03:06,594
So, imagine being in the car with a license plate, Postgre, without the s.

47
00:03:08,574 --> 00:03:09,564
Michael: Well, so that would

48
00:03:09,564 --> 00:03:10,614
be pretty funny.

49
00:03:11,274 --> 00:03:15,504
Yeah, there are people out there that would see that as a hate crime, I think.

50
00:03:16,169 --> 00:03:16,589
Nikolay: Mm-hmm

51
00:03:17,004 --> 00:03:17,274
Michael: anyway.

52
00:03:17,274 --> 00:03:17,424
Yeah.

53
00:03:17,424 --> 00:03:20,034
So, back to a shorter word.

54
00:03:20,803 --> 00:03:21,324
Buffers.

55
00:03:22,284 --> 00:03:25,484
So, in case anybody's not sure what we're talking about.

56
00:03:25,514 --> 00:03:33,674
This is a measure of, well, I guess it's not quite
strictly this, but it's a rough measure of IO in the query.

57
00:03:33,674 --> 00:03:42,464
So in terms of the number of blocks being read or written by various
parts of the query, and it shows up in multiple places.

58
00:03:43,126 --> 00:03:45,304
I'm aware of it being in explain.

59
00:03:46,314 --> 00:03:56,764
So explain analyze mostly, but also now plain explain as of recent
versions, and of course as columns in pg_stat_statements,

60
00:03:56,764 --> 00:04:03,094
in terms of telling us how much IO different queries are
doing. Are there other places that it's showing
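
The pg_stat_statements columns being referred to can be inspected with something like the sketch below, assuming a recent Postgres version; the block counters are in 8 KiB blocks:

```sql
-- top queries by buffer traffic; shared_blks_hit/read count 8 KiB blocks
select queryid,
       calls,
       shared_blks_hit,
       shared_blks_read,
       temp_blks_read,
       temp_blks_written
from pg_stat_statements
order by shared_blks_hit + shared_blks_read desc
limit 10;
```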

61
00:04:03,227 --> 00:04:09,874
Nikolay: Yeah, well, let's explain explain a little
bit, very briefly, because it's sometimes confusing for

62
00:04:09,874 --> 00:04:15,241
new people. Explain is just to check what the planner thinks about

63
00:04:15,293 --> 00:04:18,593
future query execution, but it does not execute the query.

64
00:04:18,923 --> 00:04:26,439
It only shows what the planner thinks right now,
for the given data, statistics, and Postgres parameters.

65
00:04:26,503 --> 00:04:28,213
Explain analyze is for execution.

66
00:04:28,213 --> 00:04:32,473
Buffers makes sense when we analyze execution.

67
00:04:32,533 --> 00:04:43,336
So explain analyze, or, in both, I think, you know,
since Postgres 13 it also makes sense for the planning stage as well.

68
00:04:43,366 --> 00:04:43,726
Right?

69
00:04:43,756 --> 00:04:48,346
Because the planner can use some buffers to do its work.

70
00:04:48,346 --> 00:04:48,586
Right.
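
As a sketch of what is being described here: since Postgres 13, buffers can be requested without analyze, and the plan output gains a planning section with its own buffer counts (the table name is hypothetical):

```sql
-- plans only, does not execute the query; on Postgres 13+ the output
-- can include a "Planning:" section with "Buffers: shared hit=N read=M"
explain (buffers) select * from orders where id = 42;
```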

71
00:04:49,145 --> 00:04:49,775
Tricky question.

72
00:04:49,775 --> 00:04:51,065
I don't remember a hundred percent

73
00:04:52,012 --> 00:04:56,646
Michael: Well, I guess it's always been
possible for the planning stage to read data.

74
00:04:57,353 --> 00:05:01,326
But I guess we've not had the ability to ask it how much it's doing before.

75
00:05:01,376 --> 00:05:03,415
Nikolay: Since Postgres 13 it's possible, I guess.

76
00:05:03,415 --> 00:05:03,625
Right.

77
00:05:03,625 --> 00:05:06,445
So, for the planner stage,

78
00:05:06,445 --> 00:05:13,165
it also shows how many buffers were hit, read, and so on.

79
00:05:14,845 --> 00:05:23,259
Right, but what I also wanted to say: there are many
confusing places in the database field in general, and in Postgres

80
00:05:23,259 --> 00:05:31,929
particularly. For example, there is also the analyze keyword, which
is absolutely another thing; it's a command to update statistics.

81
00:05:32,079 --> 00:05:42,342
Well, it's not a hundred percent far from getting the plan, because if
you run analyze on a table, you can fix the plan, for example, because

82
00:05:42,342 --> 00:05:51,162
you will have fresh statistics. But it can be confusing, because
analyze after explain means a very different thing than analyzing a table.

83
00:05:51,167 --> 00:05:51,402
Right?

84
00:05:52,032 --> 00:05:55,522
So it's, like, basics for people who start with Postgres.

85
00:05:55,802 --> 00:05:56,642
Michael: Yeah, absolutely.

86
00:05:56,642 --> 00:06:03,992
So probably today we're only gonna be talking
about analyze in the context of the explain parameter.

87
00:06:04,452 --> 00:06:04,782
Yeah.

88
00:06:05,047 --> 00:06:05,347
Nikolay: All right.

89
00:06:05,432 --> 00:06:05,882
Michael: Cool.

90
00:06:05,912 --> 00:06:11,091
So, what, is there anything in particular you wanted to
make sure we, like, where did you wanna start with this?

91
00:06:11,254 --> 00:06:17,967
Nikolay: Maybe we should discuss what 1000 buffers hit or read means, right?

92
00:06:18,167 --> 00:06:20,957
Like, it's something I found,

93
00:06:21,542 --> 00:06:25,172
Over many years working with various engineers.

94
00:06:26,271 --> 00:06:35,076
And the most interesting in this case are backend
engineers, who are the authors, directly or indirectly, of SQL.

95
00:06:36,366 --> 00:06:42,486
They either write SQL directly or they
use some ORM or something that generates SQL.

96
00:06:42,696 --> 00:06:46,836
But they have the biggest influence on the result.

97
00:06:47,346 --> 00:06:57,696
And I noticed that most of them don't understand, or they
may understand, but they don't feel, a thousand buffer hits.

98
00:06:57,696 --> 00:06:58,656
What, what is it?

99
00:06:58,967 --> 00:06:59,327
Michael: Yeah.

100
00:06:59,597 --> 00:07:00,047
Awesome.

101
00:07:00,107 --> 00:07:06,617
So when we are talking about this, we're saying,
let's say I've done explain, analyze on a query.

102
00:07:06,617 --> 00:07:08,807
That's a bit slower than I'm expecting it to be.

103
00:07:09,287 --> 00:07:16,018
And because I've been told I should always use buffers, by some
helpful people down the years, maybe they listened to a podcast.

104
00:07:16,573 --> 00:07:21,403
And they're now using buffers, but they see under, let's say an index scan.

105
00:07:21,544 --> 00:07:27,034
They see shared hit equals 500, read equals 500.

106
00:07:27,124 --> 00:07:33,146
So in total we've got 500 blocks that are
shared hits and 500 blocks that are shared reads.

107
00:07:34,016 --> 00:07:37,422
And this in total is a thousand blocks.

108
00:07:38,477 --> 00:07:41,237
Each one of these is an eight kilobyte read.

109
00:07:42,480 --> 00:07:52,637
The hits being from the Postgres buffer cache, and the reads being from,
well, maybe from disk, but maybe from the operating system cache.

110
00:07:52,637 --> 00:07:54,692
We don't, unfortunately we don't know which one.

111
00:07:54,939 --> 00:08:01,479
Nikolay: By the way, it would be so great to see. Like, for
macro analysis, for pg_stat_statements, we have

112
00:08:01,479 --> 00:08:06,369
an additional extension, pg_stat_kcache, which can show you real disk IO.

113
00:08:06,789 --> 00:08:08,649
But for explain, we don't have anything.

114
00:08:08,649 --> 00:08:11,139
It would be so good to somehow hack it in.
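
For reference, a macro-level query against pg_stat_kcache might look like the sketch below; exact function and column names vary across extension versions, so treat the details as assumptions:

```sql
-- per-query OS-level counters from pg_stat_kcache, joined to
-- pg_stat_statements by queryid (names vary by extension version)
select s.query,
       k.reads,        -- bytes actually read from disk
       k.writes,       -- bytes actually written
       k.user_time,    -- CPU seconds in user space
       k.system_time   -- CPU seconds in kernel space
from pg_stat_kcache() k
join pg_stat_statements s on s.queryid = k.queryid
order by k.reads desc
limit 10;
```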

115
00:08:11,594 --> 00:08:14,269
Michael: We have, I've forgotten the actual wording for it.

116
00:08:14,269 --> 00:08:15,266
Is it IO timing?

117
00:08:15,476 --> 00:08:17,816
So we can, we can do show IO timing

118
00:08:18,236 --> 00:08:18,806
Nikolay: Yeah.

119
00:08:19,016 --> 00:08:19,526
Michael: key word.

120
00:08:19,966 --> 00:08:26,210
Nikolay: The track_io_timing parameter in Postgres. But it won't
show you the number of buffers, which is the amount of work.
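
The track_io_timing setting mentioned here can be turned on per session, after which explain reports I/O timing alongside buffers (table name hypothetical):

```sql
-- time spent actually reading/writing blocks will now be measured
set track_io_timing = on;

-- with buffers, plan nodes can now show lines like
-- "I/O Timings: read=... write=..." (milliseconds)
explain (analyze, buffers) select * from orders where id = 42;
```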

121
00:08:26,240 --> 00:08:33,806
Somebody mentioned this phrase on Twitter; we had yet another
discussion about buffers in explain on Twitter.

122
00:08:33,806 --> 00:08:36,476
And somebody mentioned the amount of work.

123
00:08:36,536 --> 00:08:38,949
This is exactly, like, a great description

124
00:08:39,314 --> 00:08:43,564
of this information. And timing is not the amount of work.

125
00:08:43,894 --> 00:08:45,295
It's a duration of work.

126
00:08:45,481 --> 00:08:48,781
Why we are interested in the amount of work, we'll discuss later.

127
00:08:48,781 --> 00:08:49,021
Right.

128
00:08:49,681 --> 00:08:57,716
While it's maybe more interesting than timing. But first of
all, I double-checked explain buffers without analyze.

129
00:08:57,716 --> 00:08:58,886
It makes sense.

130
00:08:59,096 --> 00:09:10,466
I have Postgres 14, but I believe it's since Postgres 13, when explain
got this planning stage, and I see buffer hits and reads there.

131
00:09:10,946 --> 00:09:17,546
So, is 1000 buffers big or not that big? How to feel it?

132
00:09:18,196 --> 00:09:22,096
Because developers, in their minds, they may understand.

133
00:09:22,126 --> 00:09:22,486
Okay.

134
00:09:22,486 --> 00:09:24,736
One buffer is eight kibibytes.

135
00:09:24,736 --> 00:09:27,136
By the way, your article is old school.

136
00:09:27,256 --> 00:09:30,496
It says kilobytes, the old school way, not kibibytes.

137
00:09:30,496 --> 00:09:35,656
Because it's not 1000, it's 1024. But that's another off topic.

138
00:09:36,101 --> 00:09:39,851
So we have a block size of eight kibibytes.

139
00:09:40,571 --> 00:09:45,997
Is it big to have 500 hits and 500 reads for buffers,

140
00:09:46,141 --> 00:09:46,461
total.

141
00:09:46,776 --> 00:09:49,776
Michael: My arithmetic is awful at this kind of thing.

142
00:09:50,496 --> 00:09:57,966
That's one of the reasons why, in the tool we make, we
display that for people, to try and make that easier.

143
00:09:58,296 --> 00:10:03,546
Nikolay: So we take the number of blocks, multiply by eight, divide by a thousand.

144
00:10:05,106 --> 00:10:08,868
Of course, 1000 blocks is eight mebibytes.

145
00:10:09,793 --> 00:10:13,693
It's not that big, it's quite a small number, but it depends.
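
The arithmetic here can be checked in SQL itself; 1000 buffers of the default 8 KiB block size come out at roughly 8 MB:

```sql
-- 1000 blocks * block_size (normally 8192 bytes), pretty-printed
select pg_size_pretty(1000 * current_setting('block_size')::bigint);
```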

146
00:10:13,693 --> 00:10:24,453
Of course, if you just need to read a very, very small row consisting of a couple
of numbers, probably it's too much reading for a tiny row with two columns.

147
00:10:25,053 --> 00:10:29,223
So, it's not that big, but what I'm trying
to say is that you're absolutely right.

148
00:10:29,223 --> 00:10:31,263
Like converting to bytes.

149
00:10:31,443 --> 00:10:38,528
It encourages engineers to think about, like, to
imagine how big this data volume is.

150
00:10:38,948 --> 00:10:44,018
So if they hear that to read this couple of rows, we needed to deal with,

151
00:10:45,308 --> 00:10:53,773
even hitting, not reading, hitting a gigabyte, it makes
them think: oh, something not optimal is happening here.

152
00:10:53,778 --> 00:11:01,823
I should find a better way to improve this query, for example,
to have a better index option or something like that.

153
00:11:02,423 --> 00:11:08,123
But there is a trick here when we talk about reads. If we, for example, okay.

154
00:11:08,183 --> 00:11:09,163
1000 reads

155
00:11:09,938 --> 00:11:22,257
of buffers can be converted to eight mebibytes, but 1000 hits can be
tricky, because some buffers can be hit multiple times in the buffer pool.

156
00:11:22,286 --> 00:11:25,136
Michael: Well, yeah, so I think this is contentious.

157
00:11:25,181 --> 00:11:34,373
I've chosen to mostly ignore this, and if we get some double
counting, then actually, in some ways, Postgres is doing duplicate

158
00:11:34,563 --> 00:11:34,913
Nikolay: Right.

159
00:11:34,943 --> 00:11:38,873
Michael: there is some duplicate work going on, and
if we are using it as a measure of work done,

160
00:11:39,783 --> 00:11:40,233
Nikolay: Yeah.

161
00:11:40,443 --> 00:11:41,533
Michael: with the double counting.

162
00:11:42,033 --> 00:11:42,393
Nikolay: it's.

163
00:11:42,393 --> 00:11:43,473
I also think it's okay.

164
00:11:43,473 --> 00:11:51,636
So we can have much less data stored in memory, but if
we need to hit one and the same buffer multiple times, we still

165
00:11:51,636 --> 00:11:55,133
count it the same way as if they were separate buffers.

166
00:11:55,138 --> 00:11:58,193
And we just need to understand how much work we need to do.

167
00:11:58,743 --> 00:12:07,368
We can imagine cases when the buffer hits, converted
to bytes, show an amount of work so big that the

168
00:12:07,373 --> 00:12:11,478
buffers hit exceed the buffer pool size, maybe.

169
00:12:11,478 --> 00:12:11,718
Right.

170
00:12:12,368 --> 00:12:15,715
Michael: Well, it could even exceed the
amount of data you have in the database.

171
00:12:15,715 --> 00:12:17,035
Like it's totally possible.

172
00:12:17,155 --> 00:12:17,845
Nikolay: Theoretically.

173
00:12:17,899 --> 00:12:28,039
Michael: Well, I saw an example, I think it was from test data.
Did you see the blog post by Ryan Lambert on H3 indexes?

174
00:12:28,173 --> 00:12:29,973
It was a few weeks back.

175
00:12:30,140 --> 00:12:31,993
And it was really interesting to me.

176
00:12:31,993 --> 00:12:39,371
They're a type of geospatial index, and in one of the example query plans he
was looking at, he was doing an aggregation on a lot of the data, I believe.

177
00:12:39,791 --> 00:12:43,091
And it was doing something like 39 gigabytes of buffers total.

178
00:12:43,991 --> 00:12:45,221
And that's, that's a lot.

179
00:12:45,221 --> 00:12:45,641
Right.

180
00:12:45,641 --> 00:12:49,871
But it really shocked him, because he knew his
data set was smaller than that.

181
00:12:49,991 --> 00:12:54,611
Nikolay: 35 gigabytes of buffers, or buffer hits?

182
00:12:54,685 --> 00:12:55,635
Michael: Buffers total.

183
00:12:55,650 --> 00:13:05,410
Nikolay: So buffers, work total. Because if we say some number
of bytes of data, it feels like storage, not an amount of work to be done.

184
00:13:05,616 --> 00:13:05,946
Michael: Yeah.

185
00:13:05,946 --> 00:13:06,576
Good point.

186
00:13:07,161 --> 00:13:11,561
Nikolay: So, like, I mean, it can lead to
confusion much more easily compared to the case

187
00:13:11,561 --> 00:13:14,950
when we mention hits and reads all the time: buffer hits, buffer reads.

188
00:13:15,700 --> 00:13:17,080
So I think we shouldn't omit it.

189
00:13:17,410 --> 00:13:21,280
If we convert to bytes, we shouldn't omit these, like, action words.

190
00:13:21,515 --> 00:13:22,530
Michael: That's a good, interesting point.

191
00:13:22,690 --> 00:13:26,750
Do you mean, like, hits and reads, or do you mean
make sure we still mention that they're buffers?

192
00:13:27,685 --> 00:13:27,835
Nikolay: Yeah.

193
00:13:27,865 --> 00:13:38,815
I mean, if we say a number of bytes of buffers, we can provoke the
confusion of thinking about it as a number of bytes stored in memory.

194
00:13:38,815 --> 00:13:45,895
But if we keep mentioning heats and res we avoid
this confusion, like maybe it's just some opinion.

195
00:13:46,725 --> 00:13:47,015
Michael: Yeah.

196
00:13:47,020 --> 00:13:47,215
Yeah.

197
00:13:48,290 --> 00:13:57,385
Nikolay: Your post also mentions other types of buffers, not only
shared buffers, but also local and temp and additional confusion

198
00:13:57,385 --> 00:14:06,423
that can arise there, because local buffers are used for temporary
tables, and temp buffers are used for some other operations.

199
00:14:06,423 --> 00:14:13,143
And, like, it's interesting that it can lead to
confusion, but I found that most of the time we just work with

200
00:14:13,143 --> 00:14:21,333
shared buffers when we optimize the query and let's discuss why
it's more interesting to focus on buffers than just on timing.

201
00:14:21,475 --> 00:14:22,128
Michael: Yes.

202
00:14:22,174 --> 00:14:23,944
I did do a blog post on this recently.

203
00:14:23,944 --> 00:14:25,924
I'll link it up in the, show notes.

204
00:14:26,074 --> 00:14:32,944
This is something I think I learned mostly from listening
to you speak in the past, but the people, people that are

205
00:14:32,944 --> 00:14:39,937
super experienced in Postgres performance work do often
tell me that they focus a lot on buffers at the start.

206
00:14:39,942 --> 00:14:42,697
And it took me a while to really work out why that was.

207
00:14:42,817 --> 00:14:47,039
But the super important parts are that timing's alone.

208
00:14:47,099 --> 00:14:51,669
So if we just get, explain, analyze, and
don't ask for buffers, there are a few.

209
00:14:52,529 --> 00:15:00,479
Slight issues with that one is we can ask for the same query
plan a hundred times and get a hundred different durations.

210
00:15:00,699 --> 00:15:06,657
You might get slightly slower ones, some slightly fast ones than
they mostly around the same time, but it's different each time.

211
00:15:06,835 --> 00:15:08,361
That's one flaw

212
00:15:08,731 --> 00:15:09,401
Nikolay: Especially.

213
00:15:09,751 --> 00:15:11,178
If not alone on this server,

214
00:15:11,228 --> 00:15:11,528
Michael: yeah.

215
00:15:11,628 --> 00:15:14,517
Nikolay: And we almost always are not alone.

216
00:15:14,690 --> 00:15:15,710
Michael: Yeah, really good point.

217
00:15:15,735 --> 00:15:17,980
So conversely, why is it different for buffers?

218
00:15:18,640 --> 00:15:26,590
The number of shared hits might change and the number of shared
reads might change, but in combination, unless you change something

219
00:15:26,595 --> 00:15:34,541
else, chances are, if you run the same query a hundred times, those
two numbers summed together will sum to the same number each time.

220
00:15:35,201 --> 00:15:41,226
So, that is a more consistent number than timings,
even if the individual numbers there change.

221
00:15:41,436 --> 00:15:49,745
So that leads on to issue number two, which is if you're looking at
timings, the first time you run a query that data might not be cached.

222
00:15:49,925 --> 00:15:52,427
And as you run it a few, yeah, exactly.

223
00:15:52,533 --> 00:15:56,741
It might, or it might not, but you don't
necessarily know without buffers information.

224
00:15:57,051 --> 00:15:59,896
So timings can fluctuate quite a lot based on

225
00:16:00,631 --> 00:16:08,426
cache state. Again, with buffers, whilst the number of hits and reads might change,
the sum of those two won't change depending on the state of the cache.

226
00:16:08,518 --> 00:16:15,246
And then the third one I pointed out in this blog post doesn't
come up as much, but I think it's quite important that the Postgres

227
00:16:15,276 --> 00:16:20,166
query planner is not trying to minimize the number of buffers.

228
00:16:20,166 --> 00:16:23,046
What it's trying to do is minimize the amount of time.

229
00:16:23,406 --> 00:16:30,986
And sometimes it will pick a plan that is inefficient
in terms of buffers, if it could make it faster.

230
00:16:30,991 --> 00:16:37,076
So the most obvious example of this, I think, maybe
the only one, I'm not sure, is through parallelism.

231
00:16:37,316 --> 00:16:44,848
So if it can spin up multiple workers to do the work
quicker and sequentially scan through the entire table.

232
00:16:45,028 --> 00:16:53,898
Maybe it'll choose to do that even though on a pure efficiency
play, you might have been able to do less work on a single worker.

233
00:16:54,408 --> 00:17:00,931
So, yeah, I'm not sure I see many examples of that, but
it does feel like a flaw of looking at timings alone.

234
00:17:01,053 --> 00:17:01,713
Nikolay: Yeah, exactly.

235
00:17:01,893 --> 00:17:03,153
I agree with all points.

236
00:17:03,543 --> 00:17:15,135
I also, like, if you think about time, of course you want to minimize it, that
is your final goal. But indeed, if you check the query on a clone, for

237
00:17:15,135 --> 00:17:20,955
example, which has different hardware, maybe even a different file system and so on.

238
00:17:21,037 --> 00:17:23,250
And it makes you think about timing.

239
00:17:23,250 --> 00:17:30,865
Like, you deal with time, and it doesn't match production, and you think,
oh, it's not possible, we need the same level of machine and so on.

240
00:17:30,870 --> 00:17:33,425
But then the process becomes very expensive.

241
00:17:33,713 --> 00:17:36,443
But it's still possible to keep the process cheap.

242
00:17:36,543 --> 00:17:43,143
You just need to focus on buffers, forget about
timing for a bit, and optimize based on the amount of work.

243
00:17:43,143 --> 00:17:49,886
And if we focus on buffer numbers, of course we
also focus on row numbers, but it's, like, more logical.

244
00:17:50,216 --> 00:17:56,666
You have rows, but you don't understand how many row versions
were checked and how many dead tuples were removed.

245
00:17:57,561 --> 00:18:02,631
Explain doesn't show it, but buffers can
help you understand the amount of work to be done.

246
00:18:02,716 --> 00:18:10,390
And this is exactly what optimization should be
about because any index is the way to reduce IO.

247
00:18:10,667 --> 00:18:10,907
Right?

248
00:18:11,117 --> 00:18:19,697
To just reduce the amount of work. Instead of a sequential scan, for
example, on a large table, where we need to read a lot of pages,

249
00:18:20,117 --> 00:18:24,197
an index helps us to read a few pages and reach the target quicker.

250
00:18:24,587 --> 00:18:28,187
So the index is the way to reduce the amount of work.

251
00:18:28,187 --> 00:18:30,407
And that's why timing is also reduced.

252
00:18:30,467 --> 00:18:31,337
It's a consequence.

253
00:18:31,577 --> 00:18:34,817
So when you optimize something, analyze a query, optimize it.

254
00:18:37,162 --> 00:18:37,452
Deal.

255
00:18:37,502 --> 00:18:43,322
Why deal with consequences instead of, like, the core
of optimization, the amount of work, or buffers?

256
00:18:43,610 --> 00:18:47,890
Michael: I think I completely agree with
you, but I do have a couple of questions.

257
00:18:48,020 --> 00:18:57,710
I think it really clicks for people when they see that they didn't have an
index before: a sequential scan read maybe 500 megabytes of data.

258
00:18:58,490 --> 00:19:05,750
And then when they add an index, it's able to look up the
exact same row in 24 kilobytes or something, you know, of

259
00:19:05,760 --> 00:19:06,180
Nikolay: Right.

260
00:19:06,210 --> 00:19:06,510
Right.

261
00:19:06,720 --> 00:19:10,304
And instead of seeing how timing was reduced and thinking, oh, good,

262
00:19:10,304 --> 00:19:14,576
we see how buffers are reduced and understand why timing was also reduced.

263
00:19:14,576 --> 00:19:17,022
Like, we see the reason for this reduction in timing.

264
00:19:17,087 --> 00:19:17,893
Michael: Exactly.

265
00:19:17,893 --> 00:19:21,553
I think there's a risk that when people see an index scan,

266
00:19:21,553 --> 00:19:23,233
they think, oh, an index is magic.

267
00:19:23,238 --> 00:19:24,193
That's why it's fast.

268
00:19:24,283 --> 00:19:25,603
It's like, oh no, it's not magic.

269
00:19:25,603 --> 00:19:29,023
It just lets you look it up much more efficiently and therefore faster.

270
00:19:29,323 --> 00:19:31,183
So I'm completely with you on that.

271
00:19:31,648 --> 00:19:38,731
But where I lose you a little bit is that there
are expensive operations that don't report buffers.

272
00:19:38,731 --> 00:19:48,855
So for example, a sort in memory or some aggregations,
maybe these would count as CPU intensive rather

273
00:19:48,855 --> 00:19:51,905
than IO, and maybe that's far less often the bottleneck.

274
00:19:52,218 --> 00:19:56,093
but we don't get any buffers reported for them if they're done in memory.

275
00:19:56,663 --> 00:20:00,818
I like getting both timing and buffers and using them in combination

276
00:20:01,024 --> 00:20:06,184
Nikolay: Yeah, of course we still have other
information in the plan so we can understand.

277
00:20:06,334 --> 00:20:06,694
Okay.

278
00:20:06,994 --> 00:20:11,254
IO was quite low: buffers, four buffer hits, and that's it.

279
00:20:11,254 --> 00:20:14,284
But we have a hundred milliseconds. What's happening here, right?

280
00:20:14,313 --> 00:20:17,473
Like, of course it happens sometimes, but quite rarely.

281
00:20:18,463 --> 00:20:25,213
You agree, like, most often the reason for a slow
query is a lot of IO happening under the hood.

282
00:20:25,753 --> 00:20:26,083
Right.

283
00:20:26,219 --> 00:20:27,959
Michael: Well, even with the sort case, right?

284
00:20:27,959 --> 00:20:30,149
Like, why is the sort taking so long?

285
00:20:30,209 --> 00:20:39,539
It's because you are sorting a million rows, and if you could instead
sort 10, the first 10 that you need, maybe you're paginating or

286
00:20:39,539 --> 00:20:47,275
something, you can get those ordered from an index; you're gonna massively
reduce the IO and therefore not need to sort as many rows in the first place.

287
00:20:47,275 --> 00:20:54,865
So even when it's not the bottleneck, I think it's often the
solution. Even if you speed up that sort of a million rows, it's still

288
00:20:54,865 --> 00:20:59,325
gonna be a lot, lot slower than only fetching and sorting ten.
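
The pagination example being made here could be sketched like this; table and column names are made up for illustration:

```sql
-- without a matching index, Postgres sorts all rows and then keeps 10;
-- with this index, the top 10 come straight off the index, in order
create index on events (created_at desc);

explain (analyze, buffers)
select *
from events
order by created_at desc
limit 10;
```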

289
00:20:59,555 --> 00:21:05,255
Nikolay: Yeah, we may also think about,
like, our efficiency in the following way.

290
00:21:05,255 --> 00:21:05,885
Like we, okay.

291
00:21:05,885 --> 00:21:08,375
We need to return 25 rows or 10 rows.

292
00:21:09,095 --> 00:21:11,915
How many buffers were involved in the whole query?

293
00:21:12,245 --> 00:21:14,105
And the buffer numbers are important.

294
00:21:14,105 --> 00:21:15,755
They accumulate, they're cumulative.

295
00:21:15,755 --> 00:21:17,285
So you can look at the top

296
00:21:17,701 --> 00:21:23,145
of the query plan and see the total number for everything included underneath.

297
00:21:23,145 --> 00:21:26,745
So the question will be how many buffers?

298
00:21:26,745 --> 00:21:27,915
So, how many buffers were

299
00:21:28,770 --> 00:21:30,840
involved to return our 10 rows?

300
00:21:31,260 --> 00:21:32,940
If it's 10 buffers, it's quite good.

301
00:21:33,210 --> 00:21:34,680
If it's one, it's excellent.

302
00:21:34,680 --> 00:21:40,540
It means we had a scan of just one buffer, one page,
and all rows happened to be present in this page.

303
00:21:40,773 --> 00:21:43,233
So a few buffers is good to return 10 rows.

304
00:21:43,683 --> 00:21:44,613
A thousand, already

305
00:21:44,613 --> 00:21:45,033
not so good.

306
00:21:45,573 --> 00:21:45,933
Right.

307
00:21:45,993 --> 00:21:50,043
We discussed that it's just eight mebibytes, but to return 10 rows?

308
00:21:50,043 --> 00:21:50,403
Probably.

309
00:21:50,403 --> 00:21:51,573
It's not that efficient.

310
00:21:52,113 --> 00:21:56,356
But also, two slightly deeper comments related to explain.

311
00:21:56,976 --> 00:21:57,576
It's interesting.

312
00:21:57,576 --> 00:22:01,776
Like, as I mentioned, for pg_stat_statements we have the pg_stat_kcache extension.

313
00:22:02,466 --> 00:22:12,636
Unfortunately it's not available on almost all managed Postgres services
like RDS, but it's available for all people who manage Postgres themselves.

314
00:22:13,116 --> 00:22:23,226
So this excellent extension, it adds information
about CPU and real disk I/O, and for CPU it can even distinguish

315
00:22:23,226 --> 00:22:26,856
user and system CPU time, and also context switches.

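As a sketch of what pg_stat_kcache exposes: the column names below follow pg_stat_kcache 2.2-style naming and may differ in older releases, so treat this query as an assumption to verify against your installed version:

```sql
-- Requires pg_stat_statements plus the pg_stat_kcache extension.
-- Column names follow pg_stat_kcache 2.x; older versions use
-- reads/writes/user_time/system_time instead.
SELECT s.query,
       k.exec_reads,        -- bytes physically read from disk
       k.exec_user_time,    -- user CPU seconds
       k.exec_system_time   -- system CPU seconds
FROM pg_stat_kcache() AS k
JOIN pg_stat_statements AS s
  USING (userid, dbid, queryid)
ORDER BY k.exec_reads DESC
LIMIT 10;
```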
316
00:22:27,681 --> 00:22:32,271
Excellent. But for explain, we don't have it. And a simple idea:

317
00:22:32,271 --> 00:22:34,801
Like we could still get this information.

318
00:22:34,806 --> 00:22:39,151
we could, if we have access to /proc on the host, knowing the process ID.

319
00:22:39,151 --> 00:22:48,661
Even if we have parallel workers, we can extract their process IDs, and
we could get very interesting information about the real disk I/O that happened.

320
00:22:48,712 --> 00:22:52,222
And also CPU. You mentioned CPU-intensive work.

321
00:22:52,492 --> 00:22:54,172
It could be present in explain.

322
00:22:54,457 --> 00:23:01,777
Somehow, like an additional extension or something, or
maybe like some hacked Postgres for non-production environments?

323
00:23:01,777 --> 00:23:03,487
I think it's quite interesting.

324
00:23:04,027 --> 00:23:08,551
An area to explore, to improve the observability of single-query analysis.

325
00:23:08,864 --> 00:23:09,164
right.

326
00:23:09,164 --> 00:23:14,084
And it can be helpful to see that it's very CPU-intensive work.

327
00:23:14,324 --> 00:23:15,254
It was slow.

328
00:23:15,524 --> 00:23:16,904
That's why the query was slow.

329
00:23:17,504 --> 00:23:20,684
You just see how much CPU was spent, or something like this.

330
00:23:20,834 --> 00:23:22,274
And of course, real disk I/O.

331
00:23:22,279 --> 00:23:23,594
It's also interesting to see.

332
00:23:23,646 --> 00:23:35,521
And another thing I lack: the ability to understand the second-best
and third-best plans in explain. You see, because the planner makes

333
00:23:35,521 --> 00:23:40,467
the decision based on a virtual cost, something abstract, right?

334
00:23:41,067 --> 00:23:46,857
Which of course can be tuned with
parameters like random_page_cost and seq_page_cost.

335
00:23:46,857 --> 00:23:47,137
And so on.

336
00:23:48,477 --> 00:23:56,478
But you can tune costs, and the planner
never thinks about what CPU is used.

337
00:23:56,658 --> 00:23:58,098
It doesn't think about it.

338
00:23:58,608 --> 00:24:01,548
And how many gigabytes of RAM we have; it doesn't think about that either.

339
00:24:01,731 --> 00:24:04,532
Michael: Well, it has, so it does factor those into the costs, right?

340
00:24:04,532 --> 00:24:07,562
Like it does have cpu_tuple_cost and things like that.

341
00:24:07,562 --> 00:24:08,612
But I think I know what you mean.

342
00:24:08,612 --> 00:24:11,592
It doesn't factor in the server parameters.

343
00:24:11,757 --> 00:24:14,277
Nikolay: The planner doesn't know what hardware we have

344
00:24:14,572 --> 00:24:14,862
Michael: Yeah.

345
00:24:14,867 --> 00:24:15,362
Yeah, sure.

346
00:24:15,387 --> 00:24:21,897
Nikolay: And we can even fool the planner; we do it
for query optimization in non-production environments.

347
00:24:21,897 --> 00:24:28,167
So when, for example, on production we
have almost a terabyte of RAM, on non-production

348
00:24:28,297 --> 00:24:30,087
we don't want to pay for it.

349
00:24:30,087 --> 00:24:30,717
We have, for,

350
00:24:31,677 --> 00:24:34,292
I dunno, 32 gigabytes of RAM.

351
00:24:34,292 --> 00:24:37,492
And the buffer pool is much smaller than on production.

352
00:24:37,492 --> 00:24:38,522
It's not a problem.

353
00:24:38,882 --> 00:24:42,112
The planner doesn't even look at the shared_buffers

354
00:24:42,607 --> 00:24:44,477
setting value at all.

355
00:24:44,717 --> 00:24:47,897
It only looks at effective_cache_size.

356
00:24:47,897 --> 00:24:54,647
So you can say we have a terabyte of memory; we
set it to three-fourths of that, the usual approach.

357
00:24:54,977 --> 00:25:00,167
So you trick the planner and it behaves
exactly like on production, chooses the same plans.

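A minimal sketch of this planner-fooling trick on a small clone; the sizes and the query are hypothetical:

```sql
-- Production has ~1 TB of RAM; the clone has only 32 GB.
-- shared_buffers is not consulted by the planner, so only
-- effective_cache_size needs to lie about the hardware.
SET effective_cache_size = '768GB';  -- three-fourths of 1 TB, the usual rule

-- The planner now costs index scans as if the production cache existed,
-- so plans should match production (the query itself is made up):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```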
358
00:25:00,527 --> 00:25:08,717
But what I'm saying is, like, sometimes we see, okay, the planner thinks
this is the best option to execute the query based on cost.

359
00:25:09,407 --> 00:25:12,382
Which depends on statistics and our settings.

360
00:25:12,432 --> 00:25:14,646
But we see a lot of I/O happening.

361
00:25:14,706 --> 00:25:16,366
The buffers option shows it.

362
00:25:17,056 --> 00:25:23,536
Why? What if we had, like, what else did
the planner have on the plate?

363
00:25:23,536 --> 00:25:24,106
We don't see it.

364
00:25:24,106 --> 00:25:31,366
Unfortunately. I've heard Mongo has this capability,
to explain and provide the second option as well.

365
00:25:32,476 --> 00:25:34,196
So what do we usually do?

366
00:25:34,256 --> 00:25:35,226
We apply a trick.

367
00:25:35,231 --> 00:25:45,492
We say, okay, we had like a bitmap scan here; set
enable_bitmapscan to off and try to check what the other option was.

368
00:25:45,582 --> 00:25:50,432
So we put a penalty on bitmap scans, and so we see the second possible option.

369
00:25:50,437 --> 00:25:51,122
Probably second.

370
00:25:51,122 --> 00:25:52,990
We are not sure, but this is a trick.

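The trick can look like this in practice, session-local and with a hypothetical query:

```sql
BEGIN;
-- enable_* settings add a large cost penalty rather than a hard ban,
-- so the planner falls back to its next-cheapest idea.
SET LOCAL enable_bitmapscan = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE status = 'pending';  -- hypothetical query

ROLLBACK;  -- SET LOCAL reverts with the transaction
```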
371
00:25:53,324 --> 00:25:55,322
Michael: Well, that's what I wanted to ask.

372
00:25:55,322 --> 00:25:58,502
Like, how? I think it's a really difficult problem.

373
00:25:58,502 --> 00:26:02,252
I've not looked into it myself, but what do we mean by second best?

374
00:26:02,252 --> 00:26:05,562
Do we mean the second-best plan that's sufficiently different?

375
00:26:05,662 --> 00:26:06,712
What if it did a bitmap scan with a

376
00:26:06,754 --> 00:26:07,574
Nikolay: slightly worse cost.

377
00:26:07,972 --> 00:26:08,182
Michael: Yeah.

378
00:26:08,182 --> 00:26:12,862
So I understand what you mean, but I think we
might end up with not quite what we wanted.

379
00:26:12,922 --> 00:26:22,512
So if we actually want to see what this would do with an index
scan of the same table, maybe disabling bitmap scan is the perfect way to go.

380
00:26:22,722 --> 00:26:23,362
But what if.

381
00:26:23,862 --> 00:26:26,202
The second-best plan Postgres could have chosen

382
00:26:26,202 --> 00:26:28,662
would've been a bitmap scan of a different index.

383
00:26:28,834 --> 00:26:30,094
Would we want to see that?

384
00:26:30,487 --> 00:26:30,837
Nikolay: Right.

385
00:26:31,012 --> 00:26:31,618
Good point.

386
00:26:31,722 --> 00:26:39,042
Michael: like, what if, if it just changed the join order a little
bit, or the index scan direction, or like there there's so many minor

387
00:26:39,267 --> 00:26:39,777
Nikolay: Yes.

388
00:26:40,405 --> 00:26:40,795
I agree.

389
00:26:40,855 --> 00:26:41,395
I agree.

390
00:26:41,695 --> 00:26:45,297
But my intent is to understand what the other options were.

391
00:26:45,327 --> 00:26:50,827
Several of them, maybe, to understand their
costs and their buffers, their I/O as well.

392
00:26:50,827 --> 00:26:51,397
In comparison.

393
00:26:51,397 --> 00:26:58,267
Sometimes the cost can be only slightly different while,
in some edge case, buffers are drastically fewer.

394
00:26:58,270 --> 00:27:03,100
So we start thinking maybe we need to adjust our settings for the planner.

395
00:27:03,460 --> 00:27:11,636
For example, random_page_cost, default four, should go
down towards seq_page_cost, which is one, and

396
00:27:11,636 --> 00:27:15,676
this is exactly about understanding the second option.

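For instance, a common (but workload-dependent) adjustment for SSD storage; this is a sketch to verify on your own system, not a universal recommendation:

```sql
-- Default random_page_cost = 4 assumes spinning disks, where random
-- reads are far more expensive than sequential ones. On SSDs they are
-- nearly equal, so values close to seq_page_cost (1.0) are common.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();  -- apply without a restart

SHOW random_page_cost;    -- confirm the new value
```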
397
00:27:15,776 --> 00:27:16,046
Okay.

398
00:27:16,046 --> 00:27:17,036
Maybe you're right.

399
00:27:17,036 --> 00:27:18,806
Maybe there are many options in between.

400
00:27:18,806 --> 00:27:19,766
So this "second", maybe, is

401
00:27:20,441 --> 00:27:21,071
the tenth already.

402
00:27:21,101 --> 00:27:30,029
I don't know. But this is what I lack in explain: two
things. Real physical operations, like CPU and I/O,

403
00:27:30,059 --> 00:27:34,139
disk, real disk, and also the second, third, other options.

404
00:27:35,069 --> 00:27:36,959
What were their costs?

405
00:27:36,996 --> 00:27:37,353
Right.

406
00:27:37,391 --> 00:27:37,643
Michael: Yeah.

407
00:27:37,698 --> 00:27:45,843
Nikolay: So it would be good to... like, you mentioned somewhere
that it's already too complex, too complicated, to read explain.

408
00:27:46,083 --> 00:27:50,943
It requires a lot of experience, but it
still lacks many interesting points, in my opinion.

409
00:27:51,568 --> 00:27:53,964
Michael: I think this is such an interesting trade off though, right?

410
00:27:53,994 --> 00:27:57,439
And this takes us onto the last topic I did wanna make sure we discussed.

411
00:27:58,013 --> 00:28:06,103
I think there's a trade off between being useful for people that are
new to Postgres versus being useful for super experienced people.

412
00:28:06,553 --> 00:28:14,203
And I'm not sure exactly where we should be drawing that line
or where the people in charge should be drawing that line.

413
00:28:14,643 --> 00:28:17,413
And we've talked for quite a while.

414
00:28:18,823 --> 00:28:21,223
About defaults and what should be on by default.

415
00:28:21,313 --> 00:28:31,737
So explain itself is fairly simple, but with explain analyze,
once we have timings, there's the extra, let's say, penalty of also

416
00:28:31,737 --> 00:28:38,247
asking for buffers, maybe even verbose and other parameters,
and definitely buffers, based on our whole conversation today.

417
00:28:38,667 --> 00:28:40,107
Should that be on by default?

418
00:28:40,107 --> 00:28:47,787
So when anybody asks for explain analyze, they also get those buffer
statistics, even if they don't know about them and don't ask for them.

419
00:28:48,497 --> 00:28:49,367
You can turn them off.

420
00:28:49,367 --> 00:28:57,834
Maybe if you're an advanced user, you know you don't want them for some
reason. But you have the ability to shape what beginners ask for.

421
00:28:57,834 --> 00:29:03,806
So if they're reading some guide from three years ago that says
use explain analyze, then they'll get buffers on by default.

422
00:29:04,224 --> 00:29:05,034
Nikolay: It is very important.

423
00:29:05,124 --> 00:29:05,424
Yeah.

424
00:29:05,604 --> 00:29:08,514
Do you have some stats about your users?

425
00:29:08,514 --> 00:29:11,484
How many of them have buffers included?

426
00:29:12,012 --> 00:29:12,414
Michael: Yeah.

427
00:29:12,414 --> 00:29:17,864
Last time I checked, it was 95% that do include buffers. 95%!

428
00:29:17,999 --> 00:29:18,569
Nikolay: that's

429
00:29:19,419 --> 00:29:24,849
Michael: Well, 95% also include verbose;
not the exact same 95%, but almost the same.

430
00:29:25,024 --> 00:29:28,624
Nikolay: Because, I guess, your documentation suggests it, right?

431
00:29:28,779 --> 00:29:35,169
Michael: Not just that we suggest it; our tool does not
support the text format of explain, so automatically,

432
00:29:35,174 --> 00:29:42,369
if somebody tries to get explain analyze and pastes it into
pgMustard, it will tell them we need, at minimum, format JSON.

433
00:29:43,029 --> 00:29:47,319
But by that point, we are also saying: please
ask for explain analyze, buffers, verbose.

434
00:29:47,576 --> 00:29:47,966
Nikolay: Right.

435
00:29:48,026 --> 00:29:49,854
So you propose to use it.

436
00:29:49,884 --> 00:29:50,934
That's why they use it.

437
00:29:51,494 --> 00:29:58,969
If you check the publicly available plans on explain.depesz.com,
for example, or dalibo.com, I'm sure more than 50% will be

438
00:29:58,969 --> 00:30:02,155
without buffers, unfortunately, because this is the default behavior.

439
00:30:02,155 --> 00:30:10,489
And interestingly enough, there is a consensus: based
on what I saw in the hackers mailing list, I didn't see big objections.

440
00:30:11,099 --> 00:30:17,639
Looks like people think that it should be on by default,
but still, somehow, the patch needs review.

441
00:30:17,639 --> 00:30:20,805
Actually right now, there are several iterations already.

442
00:30:20,805 --> 00:30:22,635
And let's include the link.

443
00:30:22,640 --> 00:30:31,645
Also, if someone can do a review, it would be a great help for
the community, because I think we should have buffers enabled by default.

444
00:30:31,650 --> 00:30:34,196
I hope we convinced people, right?

445
00:30:34,276 --> 00:30:35,836
That buffers should be used.

446
00:30:35,836 --> 00:30:40,552
We said that it's important sometimes to convert the numbers to bytes.

447
00:30:40,987 --> 00:30:43,967
To have a feeling of how big that is.

448
00:30:44,470 --> 00:30:52,290
We discussed some lacking features of explain that are probably
tricky to develop, but would still be good to have.

449
00:30:52,860 --> 00:31:01,280
And also we discussed that it's possible to run explain analyze
with buffers, of course, on a different environment than production.

450
00:31:01,618 --> 00:31:11,842
And in this case, I also would like to mention our tool, the Database
Lab Engine, and an additional chatbot, which can

451
00:31:11,842 --> 00:31:18,468
be run in Slack. It's called the Joe bot, and it also converts to bytes.

452
00:31:18,468 --> 00:31:24,858
And it allows you to have a very good workflow for
SQL optimization where you don't touch production.

453
00:31:25,171 --> 00:31:25,651
Michael: Really cool.

454
00:31:25,651 --> 00:31:35,031
If, let's say, the timing is in milliseconds, it even
estimates how much faster that would be on production too.

455
00:31:35,031 --> 00:31:35,251
Right.

456
00:31:35,329 --> 00:31:35,569
Nikolay: Yeah.

457
00:31:35,629 --> 00:31:37,249
Well, this is tricky.

458
00:31:37,339 --> 00:31:40,009
This option is experimental.

459
00:31:40,009 --> 00:31:41,449
It's very tricky to develop.

460
00:31:41,449 --> 00:31:44,759
We still don't consider it the final version.

461
00:31:45,299 --> 00:31:52,846
But it's not very needed; people are fine with just seeing buffers.
Timing is different because of different file systems, different state of

462
00:31:52,846 --> 00:32:03,274
caches and so on. But buffers, if we have this shift in mind to focus on
buffers when performing optimization, this is a perfect place to play with

463
00:32:03,274 --> 00:32:09,721
queries. And Database Lab Engine also provides the ability to create an index

464
00:32:10,131 --> 00:32:12,531
without disturbing production or your colleagues.

465
00:32:12,561 --> 00:32:13,581
This is very important.

466
00:32:13,851 --> 00:32:24,491
And to see if it is helpful in reducing the amount of work, so buffer
numbers, and therefore in reducing timing at the end of the day.

467
00:32:25,061 --> 00:32:32,665
So I recommend checking this out on postgres.ai,
and of course pgMustard for understanding plans.

468
00:32:32,804 --> 00:32:33,554
Maybe that's it.

469
00:32:33,554 --> 00:32:33,794
Right.

470
00:32:33,794 --> 00:32:37,544
So we discussed everything we wanted, right?

471
00:32:37,644 --> 00:32:38,034
Michael: Yeah.

472
00:32:38,094 --> 00:32:47,602
So the final thing is: if you, or anybody you know, any of your friends,
are able to review Postgres patches, please, please, please do check

473
00:32:47,626 --> 00:32:51,041
out this one. At the moment, the way Postgres development works,

474
00:32:51,101 --> 00:32:54,435
there's a new version of Postgres due out at the back end of this year.

475
00:32:55,051 --> 00:32:59,071
Postgres 15, that's already frozen.

476
00:32:59,341 --> 00:32:59,551
Yeah.

477
00:32:59,551 --> 00:33:01,831
So that's already past feature freeze.

478
00:33:01,831 --> 00:33:10,279
So even if we do manage to get this committed soon, it
still, at best, will come out in just over a year's time.

479
00:33:10,339 --> 00:33:14,482
Even if it makes it into Postgres 16, these things can take years.

480
00:33:14,592 --> 00:33:18,776
So don't expect fast results, but if you can, that'll be wonderful.

481
00:33:18,871 --> 00:33:19,342
Thank you.

482
00:33:19,480 --> 00:33:24,172
Nikolay: The current commitfest closes on July 31st.

483
00:33:24,172 --> 00:33:26,092
So, like, in five days.

484
00:33:27,412 --> 00:33:28,222
So, so

485
00:33:28,288 --> 00:33:29,308
Michael: So get your skates on,

486
00:33:29,752 --> 00:33:30,052
Nikolay: right.

487
00:33:30,057 --> 00:33:32,092
But there will be one more commitfest.

488
00:33:32,272 --> 00:33:32,842
Definitely.

489
00:33:32,842 --> 00:33:35,176
So a few, actually, for Postgres.

490
00:33:35,248 --> 00:33:36,748
A few of course.

491
00:33:37,118 --> 00:33:38,588
Okay, good.

492
00:33:38,758 --> 00:33:39,388
It was interesting.

493
00:33:39,388 --> 00:33:39,478
I

494
00:33:39,963 --> 00:33:40,503
Michael: I hope so.

495
00:33:40,868 --> 00:33:43,931
Nikolay: I hope everyone likes our podcast.

496
00:33:44,141 --> 00:33:45,855
We need your help please.

497
00:33:45,968 --> 00:33:48,008
Like subscribe and please, please.

498
00:33:48,998 --> 00:33:54,458
the links in your social networks and groups where
you discuss Postgres, database engineering, and so on.

499
00:33:54,682 --> 00:33:56,632
Thank you everyone for listening.