1
0:0:0,14 --> 0:0:2,42
Nikolay: Hello, hello, this is
Postgres FM.

2
0:0:2,54 --> 0:0:6,02
My name is Nik, Postgres.AI, and
as usual, my co-host is Michael,

3
0:0:6,02 --> 0:0:6,52
pgMustard.

4
0:0:6,72 --> 0:0:7,54
Hi, Michael.

5
0:0:8,24 --> 0:0:9,22
Michael: Hello, Nik.

6
0:0:9,5199995 --> 0:0:10,42
How's it going?

7
0:0:11,04 --> 0:0:11,82
Nikolay: Good, good.

8
0:0:12,0199995 --> 0:0:17,02
So, you proposed... no, I proposed
this topic, honestly, right?

9
0:0:18,22 --> 0:0:22,4
I saw you worked on updating the
EXPLAIN glossary on the pgMustard

10
0:0:23,68 --> 0:0:27,939999
website, and I thought maybe
it's a good

11
0:0:27,939999 --> 0:0:32,62
idea because I have some feeling
I'm missing something in understanding

12
0:0:32,8 --> 0:0:34,58
what's new in Postgres 18.

13
0:0:35,14 --> 0:0:38,18
So I would like to learn about
the details, and if you have a fresh

14
0:0:38,46 --> 0:0:40,34
look at this, why not, right?

15
0:0:40,44 --> 0:0:41,6
Let's talk about that.

16
0:0:41,98 --> 0:0:44,2
Michael: Yeah, it's a good idea.

17
0:0:44,76 --> 0:0:47,12
It is what I've been working on
the last couple of weeks which

18
0:0:47,12 --> 0:0:50,4
is updating our EXPLAIN glossary.

19
0:0:51,2 --> 0:0:55,4
So the documentation on EXPLAIN
is good in the Postgres docs

20
0:0:55,4 --> 0:0:59,28
but it's not very extensive and
when I was first learning about

21
0:0:59,28 --> 0:1:4,44
all this, you know, 7 or 8 years ago,
it was really hard to learn what

22
0:1:4,44 --> 0:1:8,4
various of the phrases, like various
node types exactly were,

23
0:1:8,4 --> 0:1:12,86
what some of the fields were, and
we basically started looking

24
0:1:12,88 --> 0:1:16,2
it up for each one and then started
making a note of it and that

25
0:1:16,2 --> 0:1:20,9
started as a blog post and ended
up as about 160 pages of this

26
0:1:20,9 --> 0:1:21,4
glossary.

27
0:1:22,16 --> 0:1:22,98
Nikolay: How many?

28
0:1:23,04 --> 0:1:23,4
I think

29
0:1:23,4 --> 0:1:25,68
Michael: It might be a bit more
now, actually; it's definitely

30
0:1:25,68 --> 0:1:26,5
over 150.

31
0:1:28,14 --> 0:1:28,5
Nikolay: Okay.

32
0:1:28,5 --> 0:1:32,8
Michael: So yeah, it's quite extensive,
and as a result I actually

33
0:1:32,8 --> 0:1:35,54
now have to update—well, I don't
have to update each year, but I

34
0:1:35,54 --> 0:1:38,68
try to update each year when the
major versions come out so that

35
0:1:38,68 --> 0:1:39,78
it's up to date.

36
0:1:40,12 --> 0:1:43,5
Nikolay: That's great. Yeah, so
it also highlights how complex

37
0:1:43,7 --> 0:1:45,64
EXPLAIN plans are, right?

38
0:1:46,520004 --> 0:1:50,28
Michael: Yeah, yeah, exactly. So
yeah, we've got a page for each

39
0:1:50,28 --> 0:1:54,02
parameter, a page for each operation
type and a page for each

40
0:1:54,02 --> 0:1:54,9
field type.

41
0:1:55,06 --> 0:1:59,18
So it is, they're not all operation
types thankfully, but yeah

42
0:1:59,18 --> 0:2:0,68
it is a little bit complex.

43
0:2:0,78 --> 0:2:1,76
Nikolay: I see, yeah.

44
0:2:1,76 --> 0:2:3,54
So what's new in 18?

45
0:2:4,54 --> 0:2:9,12
Michael: Right, yeah. Well, the first
one I put on the list is one we've

46
0:2:9,12 --> 0:2:13,36
spoken about many times which is
BUFFERS being on by default

47
0:2:13,38 --> 0:2:14,12
with ANALYZE.

48
0:2:15,04 --> 0:2:18,48
Long-term listeners will be very
familiar with this topic but

49
0:2:18,48 --> 0:2:23,54
until this version we had to specify
EXPLAIN (ANALYZE, BUFFERS)

50
0:2:23,54 --> 0:2:27,88
if we wanted to get information
about the data read and written, and whether

51
0:2:27,88 --> 0:2:32,54
it came from the shared buffer cache or not. So yeah, that was the

52
0:2:32,54 --> 0:2:34,28
change I was most happy about making.

53
0:2:34,28 --> 0:2:36,06
I think I posted about that.

54
0:2:36,18 --> 0:2:39,8
The only thought that I might not have mentioned before is that

55
0:2:39,8 --> 0:2:41,92
this isn't true for auto_explain.

56
0:2:42,34 --> 0:2:46,28
So in auto_explain, if you want buffers—and I don't mind it,

57
0:2:46,28 --> 0:2:48,12
I think it's not as bad—

58
0:2:48,12 --> 0:2:51,36
in auto_explain, if you want buffers, you still have to specify.
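A rough sketch of the change being discussed (the query is illustrative, and the auto_explain settings assume the module is already loaded, e.g. via session_preload_libraries):

```sql
-- Postgres 18: BUFFERS is included automatically when ANALYZE is on
EXPLAIN (ANALYZE) SELECT count(*) FROM pg_class;

-- On 17 and older (or to be explicit anywhere), request it yourself:
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;

-- auto_explain is the exception: buffers still must be enabled
-- explicitly, and log_analyze must be on for it to take effect:
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_analyze = on;
SET auto_explain.log_buffers = on;
```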

59
0:2:51,82 --> 0:2:57,42
Nikolay: Yeah, so for regular EXPLAIN ANALYZE, it means that

60
0:2:57,52 --> 0:3:2,14
we can expect—since BUFFERS is on by default—we can expect to

61
0:3:2,14 --> 0:3:6,32
rely more on those numbers being present in analyses.

62
0:3:6,42 --> 0:3:10,2
So it should shift methodologies a little bit towards the direction

63
0:3:10,2 --> 0:3:12,9
we've discussed for more than 2 years, right?

64
0:3:13,62 --> 0:3:16,38
Michael: I think so I think hopefully people will be curious

65
0:3:16,38 --> 0:3:18,96
why these numbers are showing up if they're used to just running

66
0:3:18,96 --> 0:3:21,82
EXPLAIN ANALYZE or if they're reading a guide or a blog post

67
0:3:21,82 --> 0:3:25,12
that only suggests running EXPLAIN ANALYZE if they're on version

68
0:3:25,12 --> 0:3:28,58
18 or above which is not many people at the moment but obviously

69
0:3:28,58 --> 0:3:31,92
will grow over time then hopefully people will be curious what

70
0:3:31,92 --> 0:3:36,06
those numbers are and what they mean and not only will it help

71
0:3:36,06 --> 0:3:39,48
them but if they if they can't solve their problem and they have

72
0:3:39,48 --> 0:3:43,78
to ask for help on the mailing list or to a consultancy or to

73
0:3:43,78 --> 0:3:48,62
a tool or, yeah, wherever—the people that are more familiar with

74
0:3:48,62 --> 0:3:51,88
those numbers or the tools that use those numbers to give tips

75
0:3:52,28 --> 0:3:53,82
can get them automatically.

76
0:3:54,12 --> 0:3:57,1
So get them on the first try without having to ask the person

77
0:3:57,1 --> 0:3:59,9
to run it again and maybe having to guess at what the problem

78
0:3:59,9 --> 0:4:0,36
is.

79
0:4:0,36 --> 0:4:4,46
So yeah, I think it's a big win for people who are trying to

80
0:4:4,46 --> 0:4:7,08
solve their own problems, but also people that are asking others

81
0:4:7,08 --> 0:4:7,74
for help.

82
0:4:8,26 --> 0:4:8,76
Nikolay: Right.

83
0:4:8,8 --> 0:4:9,58
That's great.

84
0:4:9,96 --> 0:4:10,46
Yeah.

85
0:4:10,46 --> 0:4:13,94
Those who listen to our podcast for quite some time know how

86
0:4:13,94 --> 0:4:15,2
we appreciate buffers.

87
0:4:16,5 --> 0:4:18,78
So it's such a big relief it's now on by default.

88
0:4:19,9 --> 0:4:21,98
We had several episodes about that.

89
0:4:23,42 --> 0:4:28,06
Michael: Yeah, and only 4 or 5 years until we can just stop telling

90
0:4:28,06 --> 0:4:30,04
people to run it with buffers at all.

91
0:4:30,9 --> 0:4:31,74
Nikolay: Yeah, yeah, yeah.

92
0:4:34,36 --> 0:4:38,6
So the TL;DR version is: just use BUFFERS, because in most

93
0:4:38,6 --> 0:4:43,34
cases a query is slow because of I/O, and BUFFERS brings you I/O-focused

94
0:4:43,36 --> 0:4:43,86
analysis.

95
0:4:44,76 --> 0:4:45,12
Yeah.

96
0:4:45,12 --> 0:4:48,66
Without it, it's like you don't see the important piece.

97
0:4:49,4 --> 0:4:50,14
Yeah, great.

98
0:4:50,64667 --> 0:4:54,56
And I think you—do you plan any... like, no, I think—

99
0:4:54,82 --> 0:4:56,26
I have a question for you.

100
0:4:56,28 --> 0:5:2,06
Do you plan to focus on buffers even more in pgMustard to do something,

101
0:5:2,24 --> 0:5:4,9
some changes with Postgres 18 release or not?

102
0:5:5,8 --> 0:5:8,74
Michael: Not with buffers, but with all the others, yes.

103
0:5:9,72 --> 0:5:14,92
Buffers—because we have a string in pgMustard that you can copy

104
0:5:14,92 --> 0:5:19,96
from, we automatically suggest people gather buffers, and most

105
0:5:19,96 --> 0:5:20,46
do.

106
0:5:20,68 --> 0:5:24,06
Maybe like, it's not quite 95% but it's close.

107
0:5:24,52 --> 0:5:31,3
So we already focused quite a lot on buffers and I don't think

108
0:5:31,3 --> 0:5:33,42
this is gonna massively change that.

109
0:5:33,42 --> 0:5:35,86
The only change I'm looking forward to making is dropping it

110
0:5:35,86 --> 0:5:36,74
from that string.

111
0:5:36,74 --> 0:5:42,38
In about 4 or 5 years' time, once Postgres 17 is out of support,

112
0:5:42,54 --> 0:5:45,76
I think we'll drop buffers from the string because we'll be fairly

113
0:5:45,76 --> 0:5:48,48
confident that if they've got, well, if they've got ANALYZE

114
0:5:48,48 --> 0:5:49,4
they'll get buffers.

115
0:5:49,48 --> 0:5:52,32
So that's the 1 change I'm looking forward to making.

116
0:5:53,08 --> 0:5:55,46
Nikolay: Yeah, great, great, great, yeah, yeah.

117
0:5:56,0 --> 0:5:59,92
Okay, that's, yeah, pretty straightforward, let's move on, what

118
0:5:59,92 --> 0:6:0,42
else?

119
0:6:0,56 --> 0:6:1,06
Yeah.

120
0:6:1,74 --> 0:6:3,28
This I knew very well.

121
0:6:4,02 --> 0:6:4,56
Michael: Yeah, yeah.

122
0:6:4,56 --> 0:6:6,52
Nikolay: Like, you know I knew, yeah.

123
0:6:7,12 --> 0:6:9,24
Michael: There's 2 that I'm not sure which is the most important

124
0:6:9,24 --> 0:6:10,12
or most exciting.

125
0:6:10,64 --> 0:6:12,24
Let's go most important first.

126
0:6:12,24 --> 0:6:18,3
I think actual rows now being rounded to—or being reported,

127
0:6:18,38 --> 0:6:23,8
sorry, to—2 decimal places is really, really big. So until this

128
0:6:23,8 --> 0:6:30,04
version—so in particular 17—the number of actual rows is

129
0:6:30,04 --> 0:6:35,72
an average number per loop, and that's mostly fine except when

130
0:6:35,72 --> 0:6:40,46
you have lots of loops and specifically like many many many loops

131
0:6:40,64 --> 0:6:46,44
so for example a nested loop that has 10,000 loops because if

132
0:6:46,44 --> 0:6:50,32
it's being rounded, especially at low numbers, that number can

133
0:6:50,32 --> 0:6:54,6
be quite far off, especially between 0 and 1. So if that 10,000

134
0:6:54,64 --> 0:7:0,58
loops is returning fewer than 5,000 rows in total, then it would

135
0:7:0,58 --> 0:7:6,34
be reported as 0 actual rows, and that's a big difference from

136
0:7:6,34 --> 0:7:6,84
5,000.
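The rounding arithmetic behind that example can be sketched in plain SQL (the 4,999 rows over 10,000 loops figures are just the illustrative numbers from the conversation):

```sql
-- 4,999 total rows spread over 10,000 loops:
-- Postgres 17 reports the per-loop average rounded to an integer,
SELECT round(4999.0 / 10000)    AS pg17_style;  -- 0
-- while Postgres 18 reports it to 2 decimal places:
SELECT round(4999.0 / 10000, 2) AS pg18_style;  -- 0.50
```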

137
0:7:7,3 --> 0:7:12,28
So yeah, that for me is a huge change in terms of seeing the

138
0:7:12,28 --> 0:7:16,16
details in some quite common performance issues.

139
0:7:16,16 --> 0:7:19,0
So it's quite common that a performance issue would show up where

140
0:7:19,0 --> 0:7:21,42
you're getting lots of loops either because looping was a bad

141
0:7:21,42 --> 0:7:24,68
idea in the first place or because there isn't like the best

142
0:7:24,68 --> 0:7:30,8
index available. So that one, I think, will really help in terms of

143
0:7:30,8 --> 0:7:34,64
giving a more sensible number once you multiply actual rows by

144
0:7:34,64 --> 0:7:35,72
the number of loops.

145
0:7:35,8 --> 0:7:38,54
Nikolay: Is it only in nested loops or somewhere else?

146
0:7:38,8 --> 0:7:40,36
It's only in nested loops, right?

147
0:7:40,68 --> 0:7:43,26
Michael: Definitely in parallel plans as well.

148
0:7:43,26 --> 0:7:49,78
So if you've got, like, 2 workers, then the number of actual rows—

149
0:7:49,78 --> 0:7:53,22
you need to multiply it by 3 to get, you know, the leader plus the

150
0:7:53,22 --> 0:7:58,06
2 workers, because it gets averaged per worker process as well. So

151
0:7:58,34 --> 0:8:2,14
I think there are other cases as well. Maybe not.
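The same back-of-envelope arithmetic applies to parallel plans (2 workers plus the leader is just the assumption from the example above):

```sql
-- If a parallel node reports "actual rows=1000" with 2 workers,
-- the per-process average is multiplied by leader + workers:
SELECT 1000 * (1 + 2) AS approx_total_rows;  -- 3000
```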

152
0:8:2,52 --> 0:8:6,48
Nikolay: So this—basically, the bottom line is this helps

153
0:8:6,48 --> 0:8:9,34
us reduce the error, right?

154
0:8:9,72 --> 0:8:10,12
Yeah.

155
0:8:10,12 --> 0:8:11,4
Yeah, that's it, right?

156
0:8:11,68 --> 0:8:14,94
Because it can be big if a lot of loops are happening.

157
0:8:15,66 --> 0:8:16,86
Michael: Exactly, exactly.

158
0:8:18,42 --> 0:8:27,2
More accurate in a lot of cases and a huge improvement for specific

159
0:8:27,6 --> 0:8:28,54
performance issues.

160
0:8:28,66 --> 0:8:32,08
Nikolay: Does it help with some kind of analysis when, for example

161
0:8:32,08 --> 0:8:35,58
pgMustard recommends to users what to do, no?

162
0:8:36,04 --> 0:8:37,2
Michael: a little bit like

163
0:8:37,2 --> 0:8:38,18
Nikolay: numbers right

164
0:8:38,44 --> 0:8:42,4
Michael: I mean, if—for example—so it helps in a few

165
0:8:42,4 --> 0:8:45,86
ways, right? You can get a slightly more accurate difference between the

166
0:8:45,86 --> 0:8:51,24
estimate and the actual rows, but estimated rows are

167
0:8:51,24 --> 0:8:56,98
always an integer, so that's not a huge deal. The times

168
0:8:56,98 --> 0:9:1,46
where it's most useful is, like, the tips making sense. Like, if

169
0:9:1,46 --> 0:9:5,52
you're trying to tell people this many rows were filtered and

170
0:9:5,54 --> 0:9:7,94
if you're telling them all of the rows were filtered that doesn't

171
0:9:7,94 --> 0:9:10,4
quite make sense if they're getting some rows returned by the

172
0:9:10,4 --> 0:9:15,1
query so it helps a little bit but you're still going to get

173
0:9:15,1 --> 0:9:17,88
the tip regardless of this. It's just more a case of, like,

174
0:9:17,88 --> 0:9:18,78
Nikolay: making sense.

175
0:9:19,76 --> 0:9:23,62
Maybe it should be helpful at different level when you have a

176
0:9:23,62 --> 0:9:28,82
bunch of queries, like say hundreds of queries, and you need

177
0:9:28,82 --> 0:9:37,74
to prioritize and choose the top 5 which you need to focus

178
0:9:37,74 --> 0:9:39,06
on first.

179
0:9:39,06 --> 0:9:40,28
Michael: Yeah that's a good point.

180
0:9:40,52 --> 0:9:45,98
Yeah so let's take our case where we had 10,000 loops.

181
0:9:46,58 --> 0:9:50,74
The difference between returning 5,000 rows and returning 5

182
0:9:50,74 --> 0:9:51,88
rows is huge.

183
0:9:52,02 --> 0:9:52,72
Like that.

184
0:9:53,42 --> 0:9:56,68
Depending on the size of the table, it might be that a totally

185
0:9:56,68 --> 0:9:59,86
different index scan is, like—for one, a bitmap heap scan might

186
0:9:59,86 --> 0:10:3,36
be really efficient, and for the other, you might actually want

187
0:10:4,02 --> 0:10:5,06
an index scan.

188
0:10:5,16 --> 0:10:9,16
So it might actually matter for how much optimization potential

189
0:10:9,16 --> 0:10:11,84
there is as well. If you're returning 5,000 rows, there's only

190
0:10:11,84 --> 0:10:15,54
so fast you can make that; if you're returning 5, there's

191
0:10:15,54 --> 0:10:18,96
maybe an order of magnitude more optimization potential.

192
0:10:18,96 --> 0:10:20,26
So yeah, that's a good point.

193
0:10:20,58 --> 0:10:24,9
Nikolay: Yeah, so more precise numbers can help you

194
0:10:24,92 --> 0:10:32,3
to have more precise prioritization when you're dealing with

195
0:10:32,3 --> 0:10:33,36
a lot of plans.

196
0:10:34,28 --> 0:10:38,1
You know, from my point of view, more and more we should think about

197
0:10:38,1 --> 0:10:42,38
a more automated way of processing, because we have a lot of

198
0:10:42,38 --> 0:10:43,94
instances and so on.

199
0:10:45,02 --> 0:10:50,66
And plan flip analysis—we sometimes use pgMustard for

200
0:10:50,66 --> 0:10:50,94
that.

201
0:10:50,94 --> 0:10:55,12
So it's great if you have more
precise numbers.

202
0:10:55,84 --> 0:11:0,08
And yeah, it can give you precision
and prioritization as well.

203
0:11:0,08 --> 0:11:1,68
Like, what matters more?

204
0:11:1,68 --> 0:11:2,78
What matters less?

205
0:11:3,58 --> 0:11:4,62
Good, good, good.

206
0:11:4,62 --> 0:11:5,46
Okay, what?

207
0:11:5,46 --> 0:11:7,7
That's it on this topic, or something
else?

208
0:11:7,94 --> 0:11:8,8
Michael: I think so

209
0:11:8,8 --> 0:11:10,12
Nikolay: Let's move on then, right?

210
0:11:10,38 --> 0:11:13,38
Michael: Index Searches I've got
next. Do you remember the conversation

211
0:11:13,38 --> 0:11:16,88
we had with Peter Geoghegan about skip
scan? Of course.

212
0:11:17,52 --> 0:11:21,14
Nikolay: Yeah, 2 meanings of it,
also.

213
0:11:21,98 --> 0:11:25,6
Michael: right yeah yeah well yeah,
you mean with loose index

214
0:11:25,6 --> 0:11:26,1
scan?

215
0:11:26,64 --> 0:11:28,58
Nikolay: This is not, this is the
other meaning.

216
0:11:29,1 --> 0:11:30,22
Michael: This is skip scan.

217
0:11:30,42 --> 0:11:33,34
Nikolay: Basically, let me—like,
what I remember: in the topic

218
0:11:33,34 --> 0:11:36,96
of redundant indexes, before
we always said, if you have

219
0:11:36,96 --> 0:11:42,08
index on column A and another index
on column A and B, this is

220
0:11:42,08 --> 0:11:45,3
definitely a redundant case; you
can safely drop the index

221
0:11:45,3 --> 0:11:46,3
on column A.

222
0:11:46,92 --> 0:11:51,82
But if it's column B and you have
AB, you cannot drop an index

223
0:11:51,82 --> 0:11:56,1
on column B because it's in
second place in that second index.

224
0:11:56,52 --> 0:11:57,62
It won't be helpful.

225
0:11:58,14 --> 0:12:1,14
Here in some limited number of
cases, it can be helpful.

226
0:12:1,56 --> 0:12:4,9
And it can skip the first column,
basically, right?

227
0:12:6,04 --> 0:12:10,64
And use not an index on column A but
the 2-column index on columns A

228
0:12:10,64 --> 0:12:15,06
and B to serve queries
which only have filters on B,

229
0:12:15,06 --> 0:12:16,72
not on A at all.

230
0:12:16,72 --> 0:12:18,28
Michael: Yeah, exactly.

231
0:12:18,34 --> 0:12:22,66
If there's relatively few values
in A or if it knows the set

232
0:12:22,66 --> 0:12:28,78
of values of A, like via a different
condition, an index on A,

233
0:12:28,78 --> 0:12:34,84
B can relatively efficiently be
used for a query on B without

234
0:12:35,08 --> 0:12:38,46
any information about A or with
a set of values for A.

235
0:12:38,48 --> 0:12:41,18
And there's a couple of optimizations
in the last couple of major

236
0:12:41,18 --> 0:12:42,24
versions of Postgres.

237
0:12:42,74 --> 0:12:47,8
So this Index Searches field tells you
how many unique descents of

238
0:12:47,8 --> 0:12:49,34
the index there were.

239
0:12:49,34 --> 0:12:54,78
So how many times did it loop through
the index to check each

240
0:12:54,78 --> 0:12:56,1
value of A.

241
0:12:58,66 --> 0:13:3,34
So 1 example could be like, you
know a phone book where you have

242
0:13:3,34 --> 0:13:6,58
it listed by last name and then
first name.

243
0:13:7,06 --> 0:13:10,58
If you live in a country with not
that many last names, you can

244
0:13:10,58 --> 0:13:13,88
imagine saying like, how many people
are there called Nikolay

245
0:13:14,18 --> 0:13:17,58
in, I don't know, San Diego.

246
0:13:18,18 --> 0:13:22,12
And you could look at all of the, you could look per last name

247
0:13:22,12 --> 0:13:25,76
and just jump straight to N each
time, straight to N-I-K and

248
0:13:25,76 --> 0:13:29,24
see how many... and then you don't
keep searching through all of

249
0:13:29,24 --> 0:13:33,78
the Smiths before you move on to
the, I don't know.

250
0:13:34,02 --> 0:13:37,62
Nikolay: I like how you avoided
pronouncing my last name.

251
0:13:37,66 --> 0:13:39,06
It's a good idea, okay.

252
0:13:39,92 --> 0:13:41,78
Michael: Yeah, yeah, to get to the Samokhvalovs.

253
0:13:42,72 --> 0:13:47,34
Nikolay: Good, yeah. So, but I don't understand what's

254
0:13:47,34 --> 0:13:49,9
happening in EXPLAIN plans in this area.

255
0:13:50,5 --> 0:13:53,7
Michael: So we now see how many descents happened.

256
0:13:54,02 --> 0:13:59,48
So in 17—in 17 we had some of these optimizations, but we couldn't

257
0:13:59,48 --> 0:14:1,4
see where Postgres was using them.

258
0:14:1,4 --> 0:14:5,52
I mean we got clues like the execution time dropped and the buffers

259
0:14:5,52 --> 0:14:11,68
were fewer, but only indirectly. And now we can see, like,

260
0:14:11,68 --> 0:14:15,8175
if it is using this, then Index Searches will be greater than

261
0:14:15,8175 --> 0:14:15,83
1.

262
0:14:15,83 --> 0:14:17,92
If it's not using the optimization, Index Searches will equal

263
0:14:17,92 --> 0:14:18,42
1.
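A minimal sketch of what this can look like (the table is hypothetical, and the exact plan shape and search count will vary by data and version):

```sql
CREATE TABLE t (a int, b int);
CREATE INDEX t_a_b_idx ON t (a, b);
INSERT INTO t SELECT i % 10, i FROM generate_series(1, 100000) AS i;
VACUUM ANALYZE t;

-- Filtering only on b, the second column of the index.
-- On Postgres 18, the node may report something like:
--   Index Searches: 10   (one descent per distinct value of a)
-- rather than Index Searches: 1 for a plain index scan.
EXPLAIN (ANALYZE) SELECT * FROM t WHERE b = 12345;
```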

264
0:14:18,94 --> 0:14:19,6
Nikolay: Good, good.

265
0:14:19,6 --> 0:14:22,4
And how does it help for like analysis?

266
0:14:22,94 --> 0:14:23,66
Michael: Good question.

267
0:14:23,74 --> 0:14:28,62
So at the moment, most of the time—by the nature of this

268
0:14:28,62 --> 0:14:32,6
work, these are optimizations put in, right?—so most of the time,

269
0:14:32,6 --> 0:14:36,0
if you're getting these, your query is already faster than it would

270
0:14:36,0 --> 0:14:40,72
have been in previous versions of Postgres. So it tends to come

271
0:14:40,72 --> 0:14:44,6
up when the query is relatively fast, generally. Now, that's not

272
0:14:44,6 --> 0:14:47,56
always going to be true, but I haven't seen that many

273
0:14:47,56 --> 0:14:50,08
cases of this where this was the problem yet.

274
0:14:50,08 --> 0:14:54,44
And I expect to eventually come across quite a few.

275
0:14:54,44 --> 0:14:55,94
But for now, I haven't.

276
0:14:56,46 --> 0:15:0,62
Nikolay: You mean when you see this mechanism being involved,

277
0:15:0,62 --> 0:15:4,48
and you think, OK, this may be a sign that we aren't squeezing the

278
0:15:4,48 --> 0:15:7,3
best performance possible, because we could have a different index,

279
0:15:7,3 --> 0:15:13,98
which doesn't use this feature, but uses regular index scan mechanics.

280
0:15:14,24 --> 0:15:16,5
Michael: So yeah, but you're right.

281
0:15:16,72 --> 0:15:21,68
If you see this in place—I believe if Index Searches is above

282
0:15:21,68 --> 0:15:25,58
1, you're almost guaranteed—because this only works for B-tree indexes

283
0:15:25,58 --> 0:15:28,46
at the moment, I think—you're almost guaranteed that there is

284
0:15:28,46 --> 0:15:30,78
a better index definition for that query.

285
0:15:30,88 --> 0:15:31,56
You will be

286
0:15:31,56 --> 0:15:32,7
Nikolay: able to have more.

287
0:15:33,6 --> 0:15:38,56
Michael: Yes, exactly. So I think it is a way of saying that there

288
0:15:38,56 --> 0:15:44,2
is potentially more to squeeze out of this, but it's

289
0:15:44,2 --> 0:15:49,6
efficient enough in a lot of cases—and the idea of the feature

290
0:15:49,6 --> 0:15:53,94
is only to be used when it's a better option—that I haven't

291
0:15:53,94 --> 0:15:54,96
yet seen it in query plans.

292
0:15:54,96 --> 0:15:57,74
Imagine a query plan that's actually quite complex, and imagine

293
0:15:57,74 --> 0:16:0,72
this is going on in this part of your query, but there might be

294
0:16:0,72 --> 0:16:2,8
a different part that's actually the bottleneck.

295
0:16:3,08 --> 0:16:8,2
So I haven't seen this, like, where this is the problem be the

296
0:16:8,2 --> 0:16:9,3
bottleneck yet.

297
0:16:9,52 --> 0:16:12,54
But it will be, and for very, very simple queries, it's very

298
0:16:12,54 --> 0:16:13,26
easy to show it.

299
0:16:13,26 --> 0:16:17,36
In fact, I blogged about it just last week, showing how you can

300
0:16:17,36 --> 0:16:18,62
then optimize it further.

301
0:16:18,68 --> 0:16:21,22
Nikolay: And skip scan is also introduced in Postgres 18.

302
0:16:21,22 --> 0:16:23,86
So the feature comes with an observability bit.

303
0:16:24,18 --> 0:16:27,62
This is a good demonstration of how

304
0:16:27,62 --> 0:16:28,12
development.

305
0:16:28,18 --> 0:16:31,76
They should think about like not only features, but also how

306
0:16:31,76 --> 0:16:32,82
they will be observed.

307
0:16:33,5 --> 0:16:37,96
I mean, in a regular DevOps approach, you think about how you will

308
0:16:37,96 --> 0:16:39,06
monitor your feature.

309
0:16:39,16 --> 0:16:43,22
In this case, it's not monitoring, it's micro-level optimization,

310
0:16:43,98 --> 0:16:47,62
micro-optimization, but still it's great for transparency.

311
0:16:48,7 --> 0:16:52,54
I guess it helps all, including to develop this feature, you

312
0:16:52,54 --> 0:16:53,6
need to see it, right?

313
0:16:54,72 --> 0:16:57,48
So, in tests maybe as well, yeah?

314
0:16:59,14 --> 0:17:2,64
In regression tests, Postgres has, maybe also this is used, I

315
0:17:2,64 --> 0:17:3,14
guess.

316
0:17:3,92 --> 0:17:5,4
Yeah, good, cool.

317
0:17:7,4 --> 0:17:10,5
Michael: By the way, on the monitoring front, you can

318
0:17:10,5 --> 0:17:11,7
actually see these...

319
0:17:12,34 --> 0:17:16,98
Nikolay: Already? Yeah, yeah, I missed that.

320
0:17:16,98 --> 0:17:20,74
Michael: Not added—yeah, not added in 18; this was always

321
0:17:20,74 --> 0:17:22,84
available previously.

322
0:17:23,86 --> 0:17:25,36
Nikolay: This was available previously?

323
0:17:25,84 --> 0:17:26,78
What's the column?

324
0:17:27,8 --> 0:17:28,94
Michael: Let me find it.

325
0:17:29,28 --> 0:17:31,88
Oh, actually—no, maybe not pg_stat_statements, sorry.

326
0:17:31,88 --> 0:17:36,64
pg_stat_user_indexes and similar, you know, those views.

327
0:17:37,1 --> 0:17:38,445
Yeah, you can see.

328
0:17:38,445 --> 0:17:41,08
Nikolay: pg_stat_all_indexes also—it's just all indexes.

329
0:17:41,26 --> 0:17:41,86
Michael: Yeah, exactly.

330
0:17:42,44 --> 0:17:45,74
And there's an idx_scan column.

331
0:17:46,02 --> 0:17:47,34
Nikolay: But this is an old one.

332
0:17:48,06 --> 0:17:51,1
Michael: Yeah, but it counts these individual descents.
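So a quick look at those cumulative counters might be something like this (standard stats views; the LIMIT is arbitrary):

```sql
-- idx_scan counts individual index descents, so a single
-- skip-scan execution can increment it by more than 1:
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC
LIMIT 10;
```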

333
0:17:51,78 --> 0:17:58,22
Nikolay: So if for one call we have multiple index scans... yeah, that's

334
0:17:58,22 --> 0:18:3,54
interesting. That's super interesting, because, you know, we've developed

335
0:18:3,9 --> 0:18:8,32
quite advanced monitoring lately, and we touched this piece very

336
0:18:8,32 --> 0:18:8,82
recently.

337
0:18:10,46 --> 0:18:15,4
And I now think, what do you think about how this could be used?

338
0:18:16,22 --> 0:18:20,72
Usually we just display 2 things, in our approach at least.

339
0:18:20,98 --> 0:18:26,66
Top Ns, so top-N indexes by some metric—by index scans, for

340
0:18:26,66 --> 0:18:27,16
example.

341
0:18:27,98 --> 0:18:30,36
These are top 10 indexes by index scan.

342
0:18:30,36 --> 0:18:32,72
And also details for an individual index.

343
0:18:32,72 --> 0:18:36,42
Okay, we have this index and we have a lot of graphs including

344
0:18:36,54 --> 0:18:38,9
index scans and all other stuff.

345
0:18:40,24 --> 0:18:45,16
So I don't understand how we could use what you're saying.

346
0:18:45,16 --> 0:18:50,24
We need basically to understand how many queries involve this

347
0:18:50,24 --> 0:18:54,64
index, and somehow highlight that it's not 1 to 1, it's 1 to

348
0:18:54,64 --> 0:18:55,42
n, right?

349
0:18:56,58 --> 0:18:57,24
But how?

350
0:18:57,24 --> 0:18:59,7
Michael: Yeah, but it depends what the problem is you're trying

351
0:18:59,7 --> 0:19:0,86
to solve, doesn't it?

352
0:19:2,42 --> 0:19:4,18
Nikolay: At the macro level I don't
see how.

353
0:19:4,18 --> 0:19:10,08
I guess it's possible, but we need
to somehow understand all

354
0:19:10,08 --> 0:19:15,04
the query IDs and their calls in
pg_stat_statements where this index

355
0:19:15,04 --> 0:19:16,22
is involved.

356
0:19:16,56 --> 0:19:19,38
How to do that, it's an interesting
question.

357
0:19:21,5 --> 0:19:25,22
Because pg_stat_statements basically,
like it forgets about plans.

358
0:19:26,32 --> 0:19:31,5
Maybe it's something Lukas Fittl
and pganalyze are working on lately.

359
0:19:32,44 --> 0:19:37,32
I know they just recently presented
yet another attempt to have

360
0:19:37,34 --> 0:19:39,34
plans properly tracked, persistent
plans.

361
0:19:39,34 --> 0:19:44,08
Maybe there, if we look at this
and here, this is interesting

362
0:19:44,16 --> 0:19:44,66
direction.

363
0:19:45,14 --> 0:19:46,7
Quite advanced, I would say.

364
0:19:47,38 --> 0:19:50,58
But at the micro level it's pretty
obvious—like, we have one call.

365
0:19:51,34 --> 0:19:52,92
EXPLAIN is always one call.

366
0:19:52,92 --> 0:19:55,26
So if we see multiple index scans.

367
0:19:55,88 --> 0:20:1,06
Okay, thank you for some advanced
stuff here, it's food for additional

368
0:20:1,06 --> 0:20:1,56
thoughts.

369
0:20:2,52 --> 0:20:5,32
Michael: Yeah while we're thinking
about the macro level, 1 idea

370
0:20:5,32 --> 0:20:10,68
I had was, if you have, let's say,
like a really write-heavy

371
0:20:10,68 --> 0:20:14,7
workload, and you really want
to minimize the number of indexes

372
0:20:14,7 --> 0:20:19,78
you have, you might want to expand
your search of these overlapping

373
0:20:19,82 --> 0:20:20,08
indexes.

374
0:20:20,08 --> 0:20:24,02
So you mentioned for example what
you define as an overlapping

375
0:20:24,12 --> 0:20:25,36
or redundant index.

376
0:20:26,04 --> 0:20:31,36
You might want to expand that search
to consider indexes that

377
0:20:31,36 --> 0:20:33,84
have the same columns but in a
different order.

378
0:20:33,84 --> 0:20:38,64
So if you've got an index on ABC
and on BAC to serve different

379
0:20:38,9 --> 0:20:43,44
reads, especially if any of those
columns like A or B are low

380
0:20:43,44 --> 0:20:48,62
cardinality, so like don't contain
that many unique values—there's

381
0:20:48,62 --> 0:20:51,6
a chance that you won't have to
pay too high a cost on your reads,

382
0:20:51,6 --> 0:20:54,02
and you might be able to reduce
the number of indexes you have,

383
0:20:54,02 --> 0:20:55,12
which I thought was interesting.

384
0:20:55,52 --> 0:20:59,72
Nikolay: So here's what I think:
if the idx_scan column in pg_stat_all_indexes,

385
0:20:59,86 --> 0:21:4,94
pg_stat_user_indexes,
is registering multiple—

386
0:21:5,38 --> 0:21:9,88
like it increments not by 1 but
by some number when we have a

387
0:21:9,88 --> 0:21:11,44
call which uses skip scans.

388
0:21:11,44 --> 0:21:15,36
In this case it tells us that probably
there should be another

389
0:21:15,36 --> 0:21:24,28
column which would show how many distinct
queries there were.

390
0:21:24,28 --> 0:21:26,3
So in this case, we would have
2 counters.

391
0:21:26,38 --> 0:21:31,96
One would be just how many queries,
how many statements use this

392
0:21:31,96 --> 0:21:35,42
index, and another is how many
index scans were there.

393
0:21:35,74 --> 0:21:40,16
If these numbers are different—the
ratio is

394
0:21:40,16 --> 0:21:44,74
more than 1—that means there is
room for, like, something.

395
0:21:45,94 --> 0:21:51,04
Interesting that our direction
of thought is to avoid skip scans,

396
0:21:51,04 --> 0:21:53,16
so avoid using this feature somehow.

397
0:21:54,34 --> 0:21:57,1
Michael: Well, I was just proposing
a time when you might want

398
0:21:57,1 --> 0:22:0,8
to go the other direction, actually
embrace skip scans, and you

399
0:22:0,8 --> 0:22:3,48
would lose some performance on
some reads, but you'd gain some

400
0:22:3,48 --> 0:22:4,78
performance on some writes.

401
0:22:4,92 --> 0:22:9,08
So I think there are some folks
who would happily pay that trade

402
0:22:9,08 --> 0:22:13,2
off in certain workloads, like,
you know, Internet of Things,

403
0:22:13,2 --> 0:22:17,06
ingesting like loads of data, just
wanting to have minimal indexes

404
0:22:17,36 --> 0:22:18,36
on a table.

405
0:22:18,58 --> 0:22:21,38
You probably already got minimal,
or you've got very few, but

406
0:22:21,38 --> 0:22:24,36
you might be able to reduce it
by 1 or 2 more with this.

407
0:22:26,4 --> 0:22:31,02
But yeah, with most workloads being
read-heavy, I think it makes

408
0:22:31,02 --> 0:22:31,36
sense.

409
0:22:31,36 --> 0:22:35,1
Normally we can afford 1 extra
index if it makes a really important

410
0:22:35,58 --> 0:22:38,04
query or really important endpoint
much faster.

411
0:22:40,4 --> 0:22:45,48
Nikolay: Yeah definitely there's
like some feeling that things

412
0:22:45,48 --> 0:22:47,62
like observability can be improved
here.

413
0:22:49,32 --> 0:22:49,98
We are far from

414
0:22:49,98 --> 0:22:50,02
Michael: it.

415
0:22:50,02 --> 0:22:53,4
Currently, if it helps, I've thought
about it a little from the

416
0:22:53,4 --> 0:22:53,94
micro level.

417
0:22:53,94 --> 0:22:55,38
Like you're thinking from the macro
level.

418
0:22:55,38 --> 0:22:59,14
From the micro level, I'm currently
thinking there are wins here,

419
0:22:59,14 --> 0:23:1,6
but they're unlikely to be like
super common.

420
0:23:1,6 --> 0:23:3,96
They're unlikely to be that huge.

421
0:23:4,04 --> 0:23:7,24
And I'm still seeing loads of queries
out there that can't use

422
0:23:7,24 --> 0:23:11,16
any indexes or you know have really
abysmal performance so like

423
0:23:11,16 --> 0:23:16,88
this yeah so it depends like it
depends how often this is coming

424
0:23:16,88 --> 0:23:21,1
up and how much optimization potential
there is when it does.

425
0:23:21,28 --> 0:23:25,92
Nikolay: Yeah, well, depending
on the project stage, if the stage

426
0:23:25,92 --> 0:23:29,94
is early, usually people focus
on adding indexes.

427
0:23:30,18 --> 0:23:34,9
We deal more with slightly grown
startups, and they come to us

428
0:23:34,9 --> 0:23:37,68
with problems, and we observe a
lot of unused and redundant indexes,

429
0:23:37,68 --> 0:23:39,14
index bloat, and so on.

430
0:23:39,14 --> 0:23:41,54
That's why we focus right now in
the opposite direction.

431
0:23:42,38 --> 0:23:45,28
Michael: Yeah, but redundant, like
I've seen, you probably see

432
0:23:45,28 --> 0:23:49,08
this all the time, but the exact
same index, like 3 times.

433
0:23:49,08 --> 0:23:52,72
So there's like, there's often
even lower hanging fruit.

434
0:23:52,74 --> 0:23:57,14
Nikolay: But you see it at micro
level and explain how to identify

435
0:23:57,28 --> 0:24:1,82
this problem in aggregated state,
like at macro level, yeah, this

436
0:24:1,82 --> 0:24:3,2
I don't understand yeah

437
0:24:3,72 --> 0:24:7,88
Michael: probably probably it's
going to show up in number of

438
0:24:7,88 --> 0:24:8,38
buffers.

439
0:24:8,74 --> 0:24:14,38
Nikolay: Which, sorry, what I do
understand, idx_scan is now

440
0:24:14,38 --> 0:24:17,44
what, like, meaning of it shifted
for me.

441
0:24:17,44 --> 0:24:20,34
So it's not how many queries, like...

442
0:24:20,86 --> 0:24:21,66
Michael: Well, it kind of

443
0:24:21,66 --> 0:24:22,08
Nikolay: used to

444
0:24:22,08 --> 0:24:22,66
Michael: be, right?

445
0:24:22,66 --> 0:24:26,54
It used to, because index searches,
I'm not sure of the exact

446
0:24:26,64 --> 0:24:27,14
semantics.

447
0:24:27,26 --> 0:24:29,88
I think there have been some optimizations
in the past that might

448
0:24:29,88 --> 0:24:33,36
have used this, but in the past
it was pretty much one-to-one.

449
0:24:34,02 --> 0:24:34,52
Well,

450
0:24:34,82 --> 0:24:38,6
Nikolay: it might happen multiple
times even, it might, it could

451
0:24:38,6 --> 0:24:41,96
be, it was possible even before
if the same index is used in

452
0:24:41,96 --> 0:24:45,08
different parts of queries independently,
multiple index scans.

453
0:24:45,66 --> 0:24:49,74
So it can be, for example, CTE,
right?

454
0:24:49,74 --> 0:24:53,3
So different stages can use the
same index multiple times.

455
0:24:53,3 --> 0:24:53,66
Union.

456
0:24:53,66 --> 0:24:58,12
So it's definitely not one-to-one, but
now it's even more not one-to-one.

457
0:25:0,4 --> 0:25:1,88
So it's interesting.

458
0:25:2,22 --> 0:25:4,84
Anyway, I will be thinking about
this, thank you.

459
0:25:4,84 --> 0:25:8,64
As I said, this is the food for
additional thoughts.

460
0:25:9,28 --> 0:25:10,02
Yeah, yeah.

461
0:25:10,64 --> 0:25:12,66
There should be some process here,
yeah.

462
0:25:13,1 --> 0:25:14,98
Michael: Shall we go through a
few more?

463
0:25:15,06 --> 0:25:15,56
Yeah.

464
0:25:15,72 --> 0:25:16,52
Window functions.

465
0:25:16,92 --> 0:25:23,56
So in the past, if you specified
verbose, you get output at each

466
0:25:23,56 --> 0:25:25,08
stage of a query plan.

467
0:25:26,54 --> 0:25:30,14
And window functions would just
say like, over question mark.

468
0:25:30,66 --> 0:25:34,54
So you, if you've got multiple
of them in your query it could

469
0:25:34,54 --> 0:25:37,32
start to get a little bit tricky
to see which 1 was which.

470
0:25:37,54 --> 0:25:41,5
Obviously that's quite a niche
case but if you're sharing it

471
0:25:41,5 --> 0:25:44,74
with somebody else they might want
to see what the...

472
0:25:45,06 --> 0:25:47,68
I think verbose is especially helpful
when you're sharing plans

473
0:25:47,68 --> 0:25:49,58
with other people or tools.

474
0:25:51,76 --> 0:25:53,56
Now that has been improved.

475
0:25:53,56 --> 0:25:58,78
So we still get, we get over like
W1 now and then there's a separate

476
0:25:58,78 --> 0:26:2,54
field that reports what W1 is,
which window function that is.
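As an illustrative sketch of the new output (hypothetical table and columns; exact formatting in 18 may differ slightly):

```sql
EXPLAIN (VERBOSE)
SELECT sum(amount) OVER w1
FROM payments  -- hypothetical table
WINDOW w1 AS (PARTITION BY account_id);
-- WindowAgg
--   Output: sum(amount) OVER w1
--   Window: w1 AS (PARTITION BY account_id)
```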

477
0:26:3,52 --> 0:26:8,22
Nikolay: So if in the past, I remember
we said,

478
0:26:8,22 --> 0:26:8,72
EXPLAIN (ANALYZE, BUFFERS).

479
0:26:9,22 --> 0:26:13,54
But on your website, I saw actually,
EXPLAIN (ANALYZE, BUFFERS)

480
0:26:13,58 --> 0:26:16,92
were both settings on, right?

481
0:26:17,22 --> 0:26:18,54
Settings or just settings?

482
0:26:19,02 --> 0:26:19,66
You can skip that.

483
0:26:19,66 --> 0:26:20,44
Michael: Just settings, yeah.

484
0:26:20,44 --> 0:26:21,06
Both work.

485
0:26:21,06 --> 0:26:22,76
Nikolay: And what else?

486
0:26:23,56 --> 0:26:24,06
WAL?

487
0:26:26,14 --> 0:26:28,86
Michael: Yeah, WAL. WAL's
good, but look, I don't find it

488
0:26:28,86 --> 0:26:30,82
useful that often, like, for most
queries.

489
0:26:30,92 --> 0:26:31,74
Nikolay: It's useful.

490
0:26:32,32 --> 0:26:33,18
It's useful.

491
0:26:33,42 --> 0:26:35,44
Michael: Serialize, I think, is...

492
0:26:35,44 --> 0:26:38,1
Nikolay: Ah, this is Postgres 17
feature, right?

493
0:26:38,1 --> 0:26:38,82
Or 16?

494
0:26:38,86 --> 0:26:39,36
17.

495
0:26:39,44 --> 0:26:40,46
Michael: Yeah, 17.

496
0:26:41,08 --> 0:26:47,24
Nikolay: So now the mantra is
EXPLAIN ANALYZE

497
0:26:47,52 --> 0:26:49,5
Buffers not needed, buffers not
needed.

498
0:26:49,5 --> 0:26:50,78
Settings, verbose.

499
0:26:52,26 --> 0:26:52,76
Serialize?

500
0:26:53,74 --> 0:26:54,24
WAL?

501
0:26:54,7 --> 0:26:55,54
Michael: Depends, right?

502
0:26:55,54 --> 0:26:58,26
Like, I think if you're typing
them out by hand and you're in

503
0:26:58,26 --> 0:27:0,66
a rush and there's a production
incident, sure, just go ahead

504
0:27:0,66 --> 0:27:2,64
and grab whichever is quickest
to type.

505
0:27:2,64 --> 0:27:5,8
But if you're not, then why not
just paste them all?

506
0:27:5,8 --> 0:27:6,3
Like...

507
0:27:7,12 --> 0:27:7,66
Nikolay: Which all?

508
0:27:7,66 --> 0:27:8,68
Like, there's...

509
0:27:9,52 --> 0:27:10,18
Like everything.

510
0:27:11,4 --> 0:27:13,34
There should be, like, an EXPLAIN everything.

511
0:27:13,88 --> 0:27:17,68
We discussed that and I think you
proposed something in hackers.

512
0:27:18,3 --> 0:27:21,84
Michael: Yeah, I ended up settling
on proposing that

513
0:27:21,96 --> 0:27:24,88
EXPLAIN (ANALYZE, VERBOSE) should
give you everything.

514
0:27:25,52 --> 0:27:27,86
Like everything should be on by
default with verbose because

515
0:27:27,86 --> 0:27:31,46
it kind of means like I want like
verbose output.

516
0:27:32,46 --> 0:27:33,68
It didn't go very far.

517
0:27:33,68 --> 0:27:35,46
But yeah.

518
0:27:36,22 --> 0:27:39,28
Nikolay: So what, like, there should
be some 1 recommendation

519
0:27:39,52 --> 0:27:42,94
you usually tell people when they
come with problems.

520
0:27:46,24 --> 0:27:49,28
Michael: Yeah, the 1 that you copy
and paste from our app is

521
0:27:49,28 --> 0:27:52,24
EXPLAIN (ANALYZE, BUFFERS, VERBOSE,
SETTINGS).
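Spelled out, the recommended invocation looks like this (the query itself is a placeholder):

```sql
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, SETTINGS)
SELECT ...;  -- your query here
```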

522
0:27:52,36 --> 0:27:55,12
I think we haven't included the
17 ones, because not enough people

523
0:27:55,12 --> 0:27:56,42
are on 17 yet.

524
0:27:56,5 --> 0:27:57,42
Nikolay: In format JSON.

525
0:27:57,74 --> 0:27:59,08
Michael: I think we dropped that.

526
0:27:59,16 --> 0:28:2,06
Nikolay: That's great, Because
I hate dealing with JSON.

527
0:28:2,22 --> 0:28:5,5
People send us JSON and okay, I
can go to some tools, but I cannot

528
0:28:5,5 --> 0:28:6,34
read it myself.

529
0:28:6,6 --> 0:28:8,04
We have actually a converter.

530
0:28:8,1 --> 0:28:11,74
We have a converter from JSON to
text, which is good because

531
0:28:11,84 --> 0:28:14,98
I, myself as a human, consume text,
not JSON.

532
0:28:15,72 --> 0:28:17,94
But for tools, I understand JSON
is better.

533
0:28:18,76 --> 0:28:19,72
OK, good.

534
0:28:19,72 --> 0:28:23,34
So now it's EXPLAIN ANALYZE BUFFERS
VERBOSE SETTINGS.

535
0:28:24,92 --> 0:28:25,42
OK?

536
0:28:25,9 --> 0:28:27,48
4 words only, right?

537
0:28:28,04 --> 0:28:29,1
Michael: Yeah, not too bad.

538
0:28:29,1 --> 0:28:32,32
And you might want to use the others,
like WAL, if you're dealing

539
0:28:32,32 --> 0:28:36,58
with data modification queries
or you suspect it's an issue like

540
0:28:36,58 --> 0:28:39,22
if you're running out of ideas
why not try them all.

541
0:28:40,08 --> 0:28:44,06
Nikolay: And verbose also brings
compute query ID thing, query

542
0:28:44,06 --> 0:28:44,54
ID.

543
0:28:44,54 --> 0:28:45,04
Yeah, query ID.

544
0:28:45,04 --> 0:28:48,22
Michael: And schema qualifications.

545
0:28:49,74 --> 0:28:51,72
Nikolay: You don't recommend serialize
yet?

546
0:28:51,96 --> 0:28:54,36
Michael: Only because not many
people are on 17 yet.

547
0:28:54,52 --> 0:28:58,76
Nikolay: Okay, so you think you
will switch some future.

548
0:28:59,18 --> 0:29:0,76
Michael: Yeah, in a few years time.

549
0:29:2,02 --> 0:29:3,98
Nikolay: I would definitely include
WAL.

550
0:29:5,02 --> 0:29:6,24
There's also memory.

551
0:29:6,82 --> 0:29:9,72
Michael: Again, not all supported versions have WAL.

552
0:29:10,68 --> 0:29:11,4
Nikolay: I see, yes.

553
0:29:11,4 --> 0:29:11,86
Michael: I think.

554
0:29:11,86 --> 0:29:12,96
Or maybe they do now.

555
0:29:12,96 --> 0:29:15,7
Maybe it's maybe it's that I can't remember which version that

556
0:29:15,7 --> 0:29:16,24
was added in.

557
0:29:16,24 --> 0:29:21,5
Nikolay: Well last week we just said goodbye to version 13, right?

558
0:29:21,5 --> 0:29:27,04
So now it's version 14, which is the oldest supported, and WAL is included

559
0:29:27,04 --> 0:29:27,54
there.

560
0:29:28,18 --> 0:29:29,12
Michael: Nice, okay.

561
0:29:30,06 --> 0:29:32,02
Nikolay: I think WAL is worth having.

562
0:29:32,22 --> 0:29:33,84
Michael: Great, and it's nice and short.

563
0:29:34,34 --> 0:29:39,56
Nikolay: Good, okay, so back to the thing you told us about VERBOSE.

564
0:29:39,86 --> 0:29:43,76
While we were chatting about all these options, I already forgot.

565
0:29:44,14 --> 0:29:46,1
So VERBOSE brings what exactly?

566
0:29:46,38 --> 0:29:49,2
Michael: The actual 18 changes don't require VERBOSE which is

567
0:29:49,2 --> 0:29:54,56
a little bit confusing because it adds another field for window

568
0:29:54,58 --> 0:29:58,76
definitions or the yeah the function the over

569
0:30:0,52 --> 0:30:1,02
Nikolay: okay

570
0:30:1,78 --> 0:30:5,6
Michael: oh sorry what's it called yeah okay I have to go back

571
0:30:5,6 --> 0:30:6,36
to my glossary

572
0:30:7,12 --> 0:30:10,12
Nikolay: there's memory which is new but I think it's from 17

573
0:30:10,12 --> 0:30:10,62
maybe

574
0:30:10,68 --> 0:30:15,94
Michael: Yeah, that's 17. Oh, but this is related. We do now

575
0:30:15,94 --> 0:30:17,86
get memory information on like a...

576
0:30:17,86 --> 0:30:18,54
So that was...

577
0:30:18,54 --> 0:30:19,7
So memory in...

578
0:30:19,7 --> 0:30:22,12
As an EXPLAIN parameter gives you information about the memory

579
0:30:22,12 --> 0:30:23,94
used during planning time specifically.

580
0:30:24,34 --> 0:30:24,84
Nikolay: Mm-hmm.

581
0:30:25,68 --> 0:30:26,88
And it's from 17.

582
0:30:26,98 --> 0:30:27,56
Memory is from 17.

583
0:30:27,56 --> 0:30:27,88
And that's

584
0:30:27,88 --> 0:30:28,64
Michael: from 17.

585
0:30:29,76 --> 0:30:30,92
But we do now get...

586
0:30:30,92 --> 0:30:36,64
There was another piece of work that adds memory or disk usage

587
0:30:36,94 --> 0:30:39,44
details to a bunch more operation types.

588
0:30:39,44 --> 0:30:41,58
So we already got them, for example, you might be used to seeing

589
0:30:41,58 --> 0:30:43,76
them in sorts and a few other operations.

590
0:30:44,44 --> 0:30:47,64
But now you get them in Materialize nodes, in window aggregates

591
0:30:47,64 --> 0:30:52,28
as well, so for window functions, and CTEs, maybe others as well.

592
0:30:52,28 --> 0:30:54,26
Nikolay: So memory was extended as well?

593
0:30:55,12 --> 0:30:58,74
Michael: Not the parameter memory, but memory information in

594
0:30:58,74 --> 0:30:59,94
EXPLAIN, yes.

595
0:31:0,12 --> 0:31:3,78
Nikolay: Well, I mean, the memory in 17 brings you less info than

596
0:31:3,78 --> 0:31:7,56
in 18, right? This is the right thing to say, right?

597
0:31:8,88 --> 0:31:10,74
Michael: they're unrelated yeah

598
0:31:10,94 --> 0:31:13,44
Nikolay: Unrelated, so it's only about verbose.

599
0:31:15,06 --> 0:31:18,18
Michael: This actually I think is on with ANALYZE, not even...

600
0:31:18,4 --> 0:31:20,04
Nikolay: Why is it so difficult?

601
0:31:20,66 --> 0:31:23,6
Michael: It is difficult, this is why I have to, like, I don't,

602
0:31:23,6 --> 0:31:26,04
I struggle to remember all of this, so this is why I've written,

603
0:31:26,04 --> 0:31:27,48
like, why I write it down.

604
0:31:27,92 --> 0:31:30,04
Nikolay: There is UX issue here, definitely.

605
0:31:30,76 --> 0:31:33,78
Michael: Yeah, the field is called window.

606
0:31:34,54 --> 0:31:40,02
So you'll get a WindowAgg operation with a field called window

607
0:31:40,16 --> 0:31:42,68
with the definitions of each of your windows.

608
0:31:43,18 --> 0:31:45,36
So anything you've got in the over clause.

609
0:31:45,36 --> 0:31:46,62
Nikolay: Why is it called window?

610
0:31:47,06 --> 0:31:49,62
Michael: So window functions are over something.

611
0:31:49,66 --> 0:31:52,94
Nikolay: Ah, so you need to see details about window, ok.

612
0:31:53,18 --> 0:31:53,68
Michael: Exactly.

613
0:31:55,32 --> 0:31:58,82
Nikolay: And this is now, if you use verbose you see it, without

614
0:31:58,82 --> 0:32:0,26
verbose you don't see it.

615
0:32:1,24 --> 0:32:4,78
Michael: So, with verbose you see the output.

616
0:32:6,28 --> 0:32:6,78
Nikolay: Ok.

617
0:32:8,04 --> 0:32:11,38
Michael: I thought you would only see window with verbose as

618
0:32:11,38 --> 0:32:11,88
well.

619
0:32:11,98 --> 0:32:14,42
So you see the output and the window.

620
0:32:15,3 --> 0:32:19,0
But in testing I found that you could see the window without

621
0:32:19,0 --> 0:32:21,3
the output by just running EXPLAIN ANALYZE

622
0:32:22,44 --> 0:32:22,76
Nikolay: Okay,

623
0:32:22,76 --> 0:32:23,64
so it's unclear.

624
0:32:23,72 --> 0:32:28,2
Michael: Yeah. In fact, I've written it, perhaps confusingly: the window

625
0:32:28,2 --> 0:32:30,66
field is always shown, but output is only shown when the verbose

626
0:32:30,66 --> 0:32:31,74
parameter is used.

627
0:32:31,8 --> 0:32:34,58
Nikolay: I don't understand why we don't have a "give me everything".

628
0:32:35,58 --> 0:32:37,26
Michael: Yeah, it's becoming clear.

629
0:32:37,64 --> 0:32:39,18
Nikolay: There should be some give me everything.

630
0:32:39,18 --> 0:32:44,56
I don't want to, I want short, EXPLAIN ANALYZE everything.

631
0:32:46,24 --> 0:32:47,3
It should be done.

632
0:32:49,46 --> 0:32:53,1
Because if it's development server, I mean, not production server,

633
0:32:53,1 --> 0:32:54,66
I would like to see everything.

634
0:32:57,62 --> 0:33:0,56
And in my optimization process to compare all the details.

635
0:33:1,22 --> 0:33:4,74
Even those which I don't understand yet, because maybe if you

636
0:33:4,74 --> 0:33:8,4
ask LLM or some tooling, it will understand it, right?

637
0:33:8,4 --> 0:33:8,9
Michael: Yeah.

638
0:33:9,66 --> 0:33:10,62
Or pgMustard.

639
0:33:10,92 --> 0:33:12,08
Nikolay: pgMustard, exactly.

640
0:33:12,54 --> 0:33:16,86
So why not bring everything if it's possible, if it's not production.

641
0:33:17,04 --> 0:33:19,96
I understand that production, like maybe observer effect might

642
0:33:19,96 --> 0:33:23,74
be an issue and you probably don't want everything sometimes.

643
0:33:23,96 --> 0:33:27,6
Michael: Well yeah, I don't know, like yeah, I can see an argument

644
0:33:27,6 --> 0:33:28,38
either way.

645
0:33:28,68 --> 0:33:29,18
Nikolay: Okay.

646
0:33:29,72 --> 0:33:30,76
Should we crack on?

647
0:33:30,9 --> 0:33:32,54
Yeah, it was hard.

648
0:33:33,74 --> 0:33:38,76
Michael: Do you remember we discussed that the enable like enable

649
0:33:38,76 --> 0:33:41,5
seqscan and enable index scan like these...

650
0:33:41,98 --> 0:33:43,3
Nikolay: There are changes there.

651
0:33:43,66 --> 0:33:44,06
Instead of

652
0:33:44,06 --> 0:33:44,38
Michael: a big

653
0:33:44,38 --> 0:33:49,22
Nikolay: penalty now it's actual
disabling right?

654
0:33:49,46 --> 0:33:51,06
Michael: Yes well...

655
0:33:51,06 --> 0:33:53,16
Nikolay: And now you see it on
plans if it's disabled.

656
0:33:53,16 --> 0:33:53,5
Michael: Yes.

657
0:33:53,5 --> 0:33:54,72
That's, yeah.

658
0:33:54,72 --> 0:33:55,9
So super quick 1.

659
0:33:56,12 --> 0:33:58,94
But you'll only see in text format,
you'll only see disabled

660
0:33:59,06 --> 0:34:4,54
true for nodes that have been explicitly,
I say disabled, but

661
0:34:4,54 --> 0:34:8,04
let's say enable seqscan equals
off, you'll see on any sequential

662
0:34:8,04 --> 0:34:10,94
scans in that plan, you'll see
disabled true.
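A quick sketch of what this looks like in 18 (hypothetical table t; illustrative plan text):

```sql
SET enable_seqscan = off;
EXPLAIN SELECT * FROM t;  -- t is a hypothetical table
--  Seq Scan on t  (cost=... rows=... width=...)
--    Disabled: true
```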

663
0:34:12,04 --> 0:34:12,54
Okay.

664
0:34:12,88 --> 0:34:13,88
So simple 1.

665
0:34:14,18 --> 0:34:17,92
Nikolay: Yeah, but I remember some
thought I have, I had when

666
0:34:17,92 --> 0:34:19,54
I was reading about this.

667
0:34:20,14 --> 0:34:22,58
Some concern compared to old behavior.

668
0:34:23,56 --> 0:34:25,62
I don't remember which thought
it was.

669
0:34:26,0 --> 0:34:28,74
You don't see any downsides of
this change?

670
0:34:28,94 --> 0:34:30,06
Michael: I really like this.

671
0:34:30,06 --> 0:34:33,84
I think this mitigates the downsides
of the previous functionality.

672
0:34:34,2 --> 0:34:37,62
Nikolay: Especially if you use
settings, so you see settings,

673
0:34:38,42 --> 0:34:42,44
and you understand the reason why
it's disabled, it's great.

674
0:34:42,8 --> 0:34:43,52
Michael: Yes, exactly.

675
0:34:43,52 --> 0:34:46,78
So if anybody doesn't know, settings
parameter adds to the bottom

676
0:34:46,78 --> 0:34:51,72
of your query plan any non-default
planner related settings and

677
0:34:51,72 --> 0:34:52,98
what they're set to.
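For example, continuing the enable_seqscan case, the settings parameter would surface the non-default value at the bottom of the plan (illustrative output):

```sql
SET enable_seqscan = off;
EXPLAIN (SETTINGS) SELECT count(*) FROM t;  -- t is hypothetical
-- ...plan...
-- Settings: enable_seqscan = 'off'
```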

678
0:34:53,76 --> 0:34:57,72
Nikolay: Yeah and if you want to
for example if you really somehow

679
0:34:57,72 --> 0:35:3,74
like the old behavior you can just
play with costs right maybe

680
0:35:4,14 --> 0:35:4,9
maybe no.

681
0:35:6,82 --> 0:35:9,68
There is a problem there is different
problem, like we don't

682
0:35:9,68 --> 0:35:10,88
see the second plan.

683
0:35:11,46 --> 0:35:15,6
What the planner had while the
planner was choosing, right?

684
0:35:15,6 --> 0:35:17,22
So we only see the winner.

685
0:35:18,48 --> 0:35:25,36
And I think old behavior of enable
seqscan set to off, getting,

686
0:35:25,52 --> 0:35:31,1
like, putting huge penalty to seq
scans, it helped sometimes

687
0:35:31,24 --> 0:35:35,46
understand that, like, planner behavior,
like, why it switches

688
0:35:35,46 --> 0:35:38,52
from this to that.

689
0:35:38,52 --> 0:35:41,02
Michael: Still, yeah, you can still
use it for that.

690
0:35:42,16 --> 0:35:47,56
Nikolay: Okay, maybe. Yeah, and I
want something different: I want just

691
0:35:47,56 --> 0:35:50,92
to see the second and third plan, maybe,
to understand, like, the difference

692
0:35:50,92 --> 0:35:57,88
in costs between the winner and
the losers. Yeah, but I think

693
0:35:57,88 --> 0:36:1,8
you like UX and like it's not easy
to

694
0:36:2,72 --> 0:36:4,2
Michael: Wait, this makes it easier.

695
0:36:4,2 --> 0:36:6,76
This makes it easier to see the
difference in the costs.

696
0:36:7,44 --> 0:36:7,94
Nikolay: Okay.

697
0:36:8,94 --> 0:36:9,24
Because you

698
0:36:9,24 --> 0:36:10,56
Michael: don't have those huge
numbers.

699
0:36:10,56 --> 0:36:11,98
Nikolay: But it's not what I want.

700
0:36:11,98 --> 0:36:15,48
I want to maybe sometimes always
see the second and third plans.

701
0:36:16,3 --> 0:36:17,08
Michael: Fine, fine, fine.

702
0:36:17,08 --> 0:36:22,54
Nikolay: If it's possible, like
somehow, like add some 0.1 more

703
0:36:22,54 --> 0:36:29,56
flag in EXPLAIN ANALYZE, show me
not 1 plan, but multiple plans.

704
0:36:30,18 --> 0:36:31,42
So, yeah.

705
0:36:31,58 --> 0:36:33,36
Okay, too much, maybe.

706
0:36:34,62 --> 0:36:36,56
Michael: Let's go through the other
couple of changes that we

707
0:36:36,56 --> 0:36:37,62
have got already.

708
0:36:37,86 --> 0:36:45,18
Yeah 1 I think is a kind of a bug
fix which is for parallel bitmap

709
0:36:45,18 --> 0:36:51,12
heap scans. You would get exact
heap blocks and lossy heap blocks

710
0:36:51,12 --> 0:36:55,88
to tell you whether the work_mem
was big enough for that bitmap

711
0:36:55,92 --> 0:36:56,42
basically.

712
0:36:57,78 --> 0:37:4,74
And it turns out they were only
reporting the leader process,

713
0:37:5,38 --> 0:37:6,7
not the worker processes.

714
0:37:7,44 --> 0:37:13,82
So now that's been changed so that
you now see per worker details

715
0:37:13,98 --> 0:37:17,18
by default, with just ANALYZE,
not just with VERBOSE.

716
0:37:17,98 --> 0:37:20,5
So Parallel Query workers are normally
only shown, like details for

717
0:37:20,5 --> 0:37:23,3
them is normally only shown with
verbose this is an exception

718
0:37:23,8 --> 0:37:29,44
And, from memory, the leader-reported
numbers don't include

719
0:37:29,44 --> 0:37:33,98
the worker numbers still, so that's
a difference. So that's

720
0:37:34,34 --> 0:37:37,56
kind of maintaining behavior from
previous versions but it's

721
0:37:37,56 --> 0:37:41,18
different to how most operations
report their details.

722
0:37:41,48 --> 0:37:45,52
So most of the time operations
include the details, include the

723
0:37:45,52 --> 0:37:48,92
information of their children or
from their work processes.

724
0:37:49,6 --> 0:37:53,6
So, yeah, this is a slightly confusing
1, but a big improvement

725
0:37:53,6 --> 0:37:55,46
for people that know what they're
looking for.

726
0:37:55,8 --> 0:38:0,78
Nikolay: TLDR is parallel awareness
of EXPLAIN plans improved.

727
0:38:1,74 --> 0:38:2,24
Right?

728
0:38:2,76 --> 0:38:3,64
Michael: Oh, and accuracy.

729
0:38:4,12 --> 0:38:6,68
Nikolay: Yeah, it sounds like a
bug to me a little bit.

730
0:38:6,68 --> 0:38:8,26
Michael: Yeah it is, it is.

731
0:38:8,26 --> 0:38:11,26
But it's good that we've got that
and you might notice a change.

732
0:38:11,4 --> 0:38:15,78
And then probably the least
likely to come up, but for completeness:

733
0:38:16,26 --> 0:38:20,46
did you see we've got a new WAL
buffers full field as well?

734
0:38:20,86 --> 0:38:25,68
So if the WAL buffers happen to
have become full during the

735
0:38:25,68 --> 0:38:29,44
query execution, or the number
of times they become full now

736
0:38:29,54 --> 0:38:30,36
gets reported.

737
0:38:30,42 --> 0:38:32,86
Nikolay: Ah, I remember reading
about this, Yeah, that's great.

738
0:38:32,86 --> 0:38:33,36
Nice.

739
0:38:33,58 --> 0:38:34,84
Michael: So yeah, simple 1.

740
0:38:34,92 --> 0:38:37,96
Nikolay: Yeah, simple, but interesting
how we are going to use

741
0:38:37,96 --> 0:38:42,7
it because usually this kind of
analysis is really like we do

742
0:38:42,7 --> 0:38:47,12
it at macro level and see usually
dealing with wait event analysis

743
0:38:47,22 --> 0:38:54,94
and see, are your WAL buffers
and lightweight lock related to

744
0:38:54,94 --> 0:38:57,72
WAL buffers, WAL write, right?

745
0:38:57,72 --> 0:38:59,02
WAL write and so on.

746
0:38:59,02 --> 0:39:1,56
This is how we indirectly understand
that probably there's a

747
0:39:1,56 --> 0:39:3,84
problem here and contention and
so on.

748
0:39:3,94 --> 0:39:9,62
And then we'll usually go and
tune commit_siblings and commit_delay, like a group

749
0:39:9,62 --> 0:39:10,46
commit basically.

750
0:39:12,54 --> 0:39:18,02
So let it commit, let it wait a
little bit, slightly, and do

751
0:39:18,06 --> 0:39:23,08
fsync or analog for multiple transactions
for multiple commits

752
0:39:23,3 --> 0:39:28,04
in right heavy, right intensive
workloads it's a very good approach.

753
0:39:28,82 --> 0:39:34,2
But when I think about micro level
like here, EXPLAIN, well, it's

754
0:39:34,2 --> 0:39:38,82
hard because, like, usually with
EXPLAIN ANALYZE I do it independently,

755
0:39:39,4 --> 0:39:43,76
maybe some multiple times, but
I don't think about macro state

756
0:39:43,78 --> 0:39:45,74
of our server.

757
0:39:46,5 --> 0:39:49,34
Michael: Yeah, I actually I don't
imagine this is going to come

758
0:39:49,34 --> 0:39:51,36
up much when you're running it
manually.

759
0:39:51,54 --> 0:39:54,96
You'd have to be unlucky, I
think. Or imagine

760
0:39:54,96 --> 0:39:58,1
in auto_explain you've got a query
that's intermittently very

761
0:39:58,1 --> 0:39:58,36
slow.

762
0:39:58,36 --> 0:40:0,54
So normally it's relatively okay.

763
0:40:0,62 --> 0:40:2,64
And then sometimes it's extremely
slow.

764
0:40:2,86 --> 0:40:7,76
This would help you find out, on
these really slow runs, oh, it might

765
0:40:7,76 --> 0:40:9,06
be related to this issue

766
0:40:9,24 --> 0:40:12,54
Nikolay: This is great, this is
great. But it immediately

767
0:40:13,08 --> 0:40:17,98
provokes a couple of thoughts. 1
is p99 for everything in

768
0:40:17,98 --> 0:40:18,82
auto_explain.

769
0:40:18,82 --> 0:40:21,1
Michael: Yeah, we talked
about it, right?

770
0:40:21,14 --> 0:40:26,5
Nikolay: yeah and another thing
is what about checkpoints and

771
0:40:26,5 --> 0:40:32,94
buffers written, seen sometimes
by backends, like initiated

772
0:40:32,94 --> 0:40:38,42
by backends, because the guys
called bgwriter

773
0:40:38,42 --> 0:40:41,88
and checkpointer, they don't do
their job properly.

774
0:40:42,88 --> 0:40:46,1
So our precious backends need to
do it sometimes.

775
0:40:46,28 --> 0:40:50,64
So I'm thinking maybe explain should
start showing that info as

776
0:40:50,64 --> 0:40:53,82
well, and also in the context of
auto_explain maybe more.

777
0:40:53,86 --> 0:40:59,46
Yeah, that's, again, like I feel
some seeds for future plans

778
0:40:59,46 --> 0:41:2,6
to be grown, you know, in observability.

779
0:41:3,06 --> 0:41:3,84
It's interesting.

780
0:41:4,08 --> 0:41:9,2
And yeah, thank you so much for
sharing all the details here.

781
0:41:9,34 --> 0:41:10,24
Michael: Oh, you're welcome.

782
0:41:10,24 --> 0:41:11,46
Michael: I'm glad it was helpful.

783
0:41:11,6 --> 0:41:12,88
Nikolay: For me, it's very helpful.

784
0:41:12,88 --> 0:41:15,4
We are very deep in observability
recently.

785
0:41:15,46 --> 0:41:16,56
So that's great.

786
0:41:16,56 --> 0:41:17,38
That's great.

787
0:41:17,52 --> 0:41:18,34
And I agree with you.

788
0:41:18,34 --> 0:41:21,84
It's more, maybe it's more for
auto_explain, not for occasional

789
0:41:22,58 --> 0:41:24,92
EXPLAIN plans, but good.

790
0:41:24,92 --> 0:41:25,62
Yeah, maybe.

791
0:41:25,64 --> 0:41:30,14
Anyway, my recommendation is still
like, let's keep WAL included

792
0:41:30,14 --> 0:41:33,98
because we deal with a
lot of startups which

793
0:41:34,38 --> 0:41:36,86
suffer from writing too much WAL.

794
0:41:36,96 --> 0:41:40,22
And again, this brings us to the
topic of let's clean up indexes

795
0:41:40,24 --> 0:41:42,66
because indexes write more WAL.

796
0:41:42,7 --> 0:41:46,58
If you have unused, redundant,
and so on, and then Bloat as well,

797
0:41:46,58 --> 0:41:48,64
you are writing more WAL than
you could.

798
0:41:49,02 --> 0:41:51,66
Eventually, I think it would be
so great to say like you know

799
0:41:51,66 --> 0:41:54,96
like let's clean up this and that
and we predict how much WAL

800
0:41:54,96 --> 0:42:0,6
writes will be reduced but at micro
level having WAL info is

801
0:42:0,6 --> 0:42:1,86
very helpful as well.

802
0:42:2,5 --> 0:42:6,22
Although we always should remember
that it depends on the current

803
0:42:6,22 --> 0:42:9,24
situation, how far from checkpoint
we are.

804
0:42:9,58 --> 0:42:14,68
And also, it depends on parameters,
because you take different

805
0:42:14,68 --> 0:42:18,7
parameters, you will do very different
thing, including WAL

806
0:42:18,7 --> 0:42:20,58
writes will be very different,
so maybe.

807
0:42:21,26 --> 0:42:24,24
Michael: Well, I think it's difficult
for micro level because you

808
0:42:24,24 --> 0:42:28,68
need a concurrent workload as well
for it to be super inter-

809
0:42:28,86 --> 0:42:32,7
Nikolay: Not concurrent, but the
state needs to be fixed.

810
0:42:32,7 --> 0:42:33,56
Michael: Same each time.

811
0:42:33,56 --> 0:42:36,1
Nikolay: Same, yeah. But this is
solved by Database Lab.

812
0:42:36,22 --> 0:42:39,96
We can reset and be in the same
state, right?

813
0:42:39,96 --> 0:42:40,68
So that's great.

814
0:42:40,68 --> 0:42:41,94
So Database branching helps.

815
0:42:41,94 --> 0:42:47,9
So, you have various
ideas or iterations, and

816
0:42:47,9 --> 0:42:53,5
you always start from the very same
state, including everything, including

817
0:42:53,8 --> 0:42:57,38
tuples, including the distance
from checkpoint, everything.

818
0:42:57,84 --> 0:42:59,48
So this is not a problem.

819
0:43:0,06 --> 0:43:3,06
The problem I see, for example, is
if you forget that parameters

820
0:43:3,08 --> 0:43:3,98
matter a lot.

821
0:43:4,2 --> 0:43:7,72
In one case, you can get a full-fledged
update.

822
0:43:7,72 --> 0:43:10,22
In another case, you can have a
hot update.

823
0:43:11,66 --> 0:43:14,24
And you will see very different
WAL numbers there, right?

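To make the HOT vs. non-HOT difference concrete, here is a sketch using EXPLAIN's WAL option (the table t, its columns, and the numbers shown are hypothetical and illustrative):

```sql
EXPLAIN (ANALYZE, WAL)
UPDATE t SET val = val + 1 WHERE id = 1;
-- The plan footer shows something like:  WAL: records=2 fpi=0 bytes=130
-- A HOT update touches only the heap; a regular update must also
-- insert entries into every index, producing noticeably more WAL.

-- Check whether recent updates were HOT:
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 't';
```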
824
0:43:14,96 --> 0:43:15,78
Michael: Yeah, true.

825
0:43:16,1 --> 0:43:20,16
Nikolay: But WAL buffers full,
it's interesting.

826
0:43:20,74 --> 0:43:24,86
Like, why not report the number of
dirty buffers in each table

827
0:43:24,86 --> 0:43:26,14
involved in this case?

828
0:43:26,82 --> 0:43:29,94
Like, you know, like having a pg_buffercache here.

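Joking or not, the pg_buffercache extension can report exactly that; a sketch of the idea (requires CREATE EXTENSION pg_buffercache, and takes a point-in-time snapshot of the buffer pool):

```sql
-- Buffers and dirty buffers per relation in the current database.
SELECT c.relname,
       count(*) AS buffers,
       count(*) FILTER (WHERE b.isdirty) AS dirty_buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY dirty_buffers DESC
LIMIT 10;
```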
829
0:43:31,84 --> 0:43:33,12
Well, I'm joking.

830
0:43:33,12 --> 0:43:34,9
Michael: I think you do get dirty
buffers.

831
0:43:34,9 --> 0:43:37,44
If you include buffers, you get
dirty pages.

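The "dirtied" counter Michael refers to appears with the BUFFERS option; a sketch against a hypothetical table t, with illustrative numbers:

```sql
EXPLAIN (ANALYZE, BUFFERS)
UPDATE t SET val = val + 1 WHERE id < 100;
-- Buffers: shared hit=120 read=3 dirtied=14 written=0
-- "dirtied" counts pages this statement made dirty; it says nothing
-- about pages already dirtied by earlier activity.
```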
832
0:43:37,58 --> 0:43:40,24
Nikolay: What concerns me in this,
what you said, it's not only

833
0:43:40,24 --> 0:43:44,02
what we did, but also what others
did before us recently, right?

834
0:43:44,2 --> 0:43:47,64
In this case, let's report dirty
buffers for each page, each

835
0:43:47,64 --> 0:43:48,84
table fully?

836
0:43:49,86 --> 0:43:54,52
How many, like, buffers loaded
to the buffer pool, how many of

837
0:43:54,52 --> 0:43:57,54
them dirty for this table, for
this index, for each relation?

838
0:43:58,36 --> 0:44:0,88
Oh, this complicates things, like, it's
too much.

839
0:44:0,88 --> 0:44:4,86
But anyway, like, I'm curious what
was the reason to introduce

840
0:44:4,86 --> 0:44:5,14
this?

841
0:44:5,14 --> 0:44:7,24
I think there was a specific reason,
right?

842
0:44:8,16 --> 0:44:9,06
Michael: I guess so.

843
0:44:9,12 --> 0:44:11,34
Sometimes these things get added
for completeness.

844
0:44:11,5 --> 0:44:14,76
So like it was getting added to
some system view and it's like,

845
0:44:14,76 --> 0:44:17,2
oh, why don't we also report this
in EXPLAIN? So I'm not sure

846
0:44:17,2 --> 0:44:18,62
which way around this one was.

847
0:44:18,9 --> 0:44:21,68
But yeah, normally these things
get added because they're needed

848
0:44:21,68 --> 0:44:24,02
for some production incident and
they weren't available.

849
0:44:24,02 --> 0:44:28,52
Nikolay: Well, in defense of this,
there are things that also depend

850
0:44:28,52 --> 0:44:33,2
heavily on previous actions, for
example the status of caches,

851
0:44:33,2 --> 0:44:33,7
right.

852
0:44:33,88 --> 0:44:37,26
If you use buffer numbers, you see
reads, but next time you see

853
0:44:37,26 --> 0:44:37,76
hits.

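This read-then-hit effect is easy to see by running the same query twice (hypothetical table t, illustrative counts):

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t;
-- First run:   Buffers: shared read=1000
-- Second run:  Buffers: shared hit=1000
-- The pages pulled into shared_buffers by the first run are found
-- in cache on the second, so reads turn into hits.
```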
854
0:44:38,3 --> 0:44:39,12
Yeah, true.

855
0:44:39,12 --> 0:44:43,6
This is maybe along the same
line of thinking.

856
0:44:44,16 --> 0:44:45,08
Yeah, OK.

857
0:44:45,66 --> 0:44:46,52
Anyway, great.

858
0:44:46,56 --> 0:44:47,98
I think it was very advanced.

859
0:44:48,82 --> 0:44:53,2
I doubt everyone should know everything,
remember everything,

860
0:44:53,2 --> 0:44:58,48
but in general I feel like the
depth of everything is increasing.

861
0:45:0,06 --> 0:45:2,8
There are more and more nuances.

862
0:45:3,78 --> 0:45:4,66
This is great.

863
0:45:5,14 --> 0:45:5,69
Appreciate it.

864
0:45:5,69 --> 0:45:6,24
Michael: Yeah, absolutely.

865
0:45:6,26 --> 0:45:7,82
Nikolay: Yeah, appreciate all this.

866
0:45:7,82 --> 0:45:8,6
Thank you so much.

867
0:45:8,6 --> 0:45:10,58
Michael: Good to speak with you
and catch you soon.