1
00:00:00,020 --> 00:00:04,369
Michael: Hello, and welcome to Postgres FM,
a weekly show about all things Postgres.

2
00:00:04,550 --> 00:00:06,439
I am Michael, founder of pgMustard.

3
00:00:06,529 --> 00:00:08,629
This is Nikolay founder of Postgres AI.

4
00:00:08,750 --> 00:00:10,430
Hey Nikolay, what are we talking about today?

5
00:00:10,760 --> 00:00:11,479
Nikolay: Hello, hello.

6
00:00:11,479 --> 00:00:13,340
Let's talk about query optimization.

7
00:00:13,383 --> 00:00:19,836
I think this is maybe the most interesting topic in
the area of Postgres in general, but I found not everyone

8
00:00:19,836 --> 00:00:22,126
is interested in it, but let's talk about it anyway.

9
00:00:22,438 --> 00:00:25,935
Michael: Yeah, it's also, I guess, a topic quite close to both of our hearts.

10
00:00:25,995 --> 00:00:29,156
We've spent many years looking at this.

11
00:00:29,246 --> 00:00:31,916
So hopefully we have some interesting things to add.

12
00:00:32,366 --> 00:00:34,556
Nikolay: But let's set some boundaries.

13
00:00:34,586 --> 00:00:42,136
Let's distinguish analysis of the workload as a
whole, and attempts to find the worst...

14
00:00:42,956 --> 00:00:47,901
best candidates for optimization, versus single query optimization.

15
00:00:47,901 --> 00:00:51,331
Let's talk about the second subtopic.

16
00:00:52,236 --> 00:00:57,696
Michael: I've heard you  differentiate between macro
performance analysis and micro performance analysis in the past.

17
00:00:57,696 --> 00:01:04,446
So macro being system level. I guess we're not
talking about that today, and we're gonna look more at micro:

18
00:01:04,451 --> 00:01:07,522
once you've worked out there is a problematic query.

19
00:01:07,522 --> 00:01:11,586
You know which one it is; how do you go from that to:

20
00:01:11,677 --> 00:01:12,697
What can I do about it?

21
00:01:12,931 --> 00:01:13,231
Nikolay: Right.

22
00:01:13,261 --> 00:01:19,721
How to understand whether it's good or bad in terms
of execution, and how to read the query plan?

23
00:01:20,001 --> 00:01:23,244
The command, EXPLAIN, which is the main tool here, right?

24
00:01:24,174 --> 00:01:24,784
Let's talk about this.

25
00:01:25,317 --> 00:01:30,839
Michael: Is there any other, is there anything else
before we dive into explain, are there any other

26
00:01:30,839 --> 00:01:33,635
parts of it that we, we might need to cover as well?

27
00:01:34,200 --> 00:01:34,500
Nikolay: Well,

28
00:01:34,540 --> 00:01:38,996
the things that EXPLAIN doesn't cover; for example, what it won't show.

29
00:01:38,996 --> 00:01:43,916
It won't tell you, for example, CPU utilization: user CPU, system CPU.

30
00:01:44,516 --> 00:01:46,196
It won't tell you physical disk I/O:

31
00:01:47,231 --> 00:01:55,162
how many operations hit the disk, because Postgres doesn't see
them directly; Postgres works only with the file system cache.

32
00:01:55,522 --> 00:02:03,892
So the read operations Postgres tells
you about are not necessarily from disk.

33
00:02:03,892 --> 00:02:05,212
They may be from the page cache.

34
00:02:05,212 --> 00:02:09,250
So we could get additional insights, and it would be good to have

35
00:02:10,015 --> 00:02:14,185
these things inside EXPLAIN somehow, like pg_stat_kcache extends pg_stat_statements.

36
00:02:14,185 --> 00:02:23,199
So it would be good to have something that would extend EXPLAIN,
but I'm not aware of such a thing existing. Yeah.
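
For listeners following along, here is a rough sketch of what is described: pg_stat_kcache extends pg_stat_statements with kernel-level metrics (physical reads, user and system CPU). Exact column names vary by extension version, so treat this as illustrative only:

```sql
-- Illustrative sketch: requires the pg_stat_statements and pg_stat_kcache
-- extensions; column names (exec_reads, exec_user_time, ...) vary by version.
SELECT s.queryid,
       left(s.query, 60)  AS query_start,
       k.exec_reads       AS bytes_read_from_disk,  -- physical reads, not cache hits
       k.exec_user_time   AS user_cpu_seconds,
       k.exec_system_time AS system_cpu_seconds
FROM pg_stat_statements s
JOIN pg_stat_kcache k USING (userid, dbid, queryid)
ORDER BY k.exec_reads DESC
LIMIT 10;
```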

37
00:02:23,364 --> 00:02:23,844
Michael: Nor me.

38
00:02:23,844 --> 00:02:30,704
I'm not aware either, but I think we get clues about them in EXPLAIN,
don't we? We see some timing that can't be explained otherwise.

39
00:02:30,729 --> 00:02:32,513
Nikolay: You mean timing, or...

40
00:02:32,543 --> 00:02:32,813
Michael: Sorry.

41
00:02:32,813 --> 00:02:36,685
No, I, I mean, let's say you mentioned CPU performance.

42
00:02:37,015 --> 00:02:42,065
If we have an operation that's not doing
much I/O but is taking a long, long time,

43
00:02:42,095 --> 00:02:44,458
That's a clue that there might be something else going on.

44
00:02:44,473 --> 00:02:45,163
Nikolay: right.

45
00:02:45,223 --> 00:02:45,673
Right, right.

46
00:02:45,973 --> 00:02:49,003
So in general, if we run even one query,

47
00:02:49,473 --> 00:02:56,280
theoretically it might make sense to use things like
perf and flame graphs for one query execution.

48
00:02:56,490 --> 00:03:02,391
And it would augment the information that you
can extract from EXPLAIN (ANALYZE, BUFFERS).

49
00:03:02,841 --> 00:03:05,189
But let's just discuss the basics.

50
00:03:05,189 --> 00:03:06,209
Maybe if we were to...

51
00:03:06,744 --> 00:03:13,284
Michael: Sounds good. So possibly even EXPLAIN
versus EXPLAIN ANALYZE is a good place to start.

52
00:03:13,331 --> 00:03:18,262
With EXPLAIN we get the query plan, which
normally returns really quickly,

53
00:03:18,303 --> 00:03:27,993
roughly in the planning time of the query. And then with EXPLAIN ANALYZE
we get the actual... well, it runs the query and returns performance data.

54
00:03:28,023 --> 00:03:30,333
So we can see how much time was spent.

55
00:03:30,333 --> 00:03:38,124
And if we ask for BUFFERS, how much I/O was done, and all sorts
of other things as well. It allows us to compare things like...

56
00:03:38,400 --> 00:03:39,210
How much.

57
00:03:39,270 --> 00:03:45,330
So EXPLAIN might tell us how many rows were
expected to be returned at each stage, and with EXPLAIN ANALYZE

58
00:03:45,330 --> 00:03:50,550
we can get the actual number of rows returned at each
stage, and comparing the two can be really useful.
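
As a minimal illustration of the difference being discussed (table and column names are hypothetical):

```sql
-- Plan only: estimated costs and row counts; the query is NOT executed.
EXPLAIN
SELECT * FROM orders WHERE customer_id = 42;

-- Executes the query; adds actual timings and actual row counts per node,
-- and, with BUFFERS, how many shared/local/temp blocks were touched.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
```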

59
00:03:50,845 --> 00:03:51,835
Nikolay: right, right.

60
00:03:51,835 --> 00:03:52,420
And Yeah.

61
00:03:52,450 --> 00:03:53,050
absolutely.

62
00:03:53,110 --> 00:03:54,285
And also.

63
00:03:54,447 --> 00:03:58,544
We discussed some time ago that EXPLAIN shows only one plan.

64
00:03:59,114 --> 00:04:03,554
Sometimes you want to see multiple plans,
like second best candidate and so on.

65
00:04:04,694 --> 00:04:11,594
Otherwise, you have to do some tricks to try to guess
what plans the planner had on the plate when choosing.

66
00:04:11,968 --> 00:04:12,878
Michael: That's a really good point.

67
00:04:12,938 --> 00:04:16,923
And maybe even a good place to start in terms of using explain.

68
00:04:16,923 --> 00:04:22,369
So the first thing you probably notice when you're looking
at EXPLAIN for the first time is a lot of cost numbers.

69
00:04:22,428 --> 00:04:26,380
These are an arbitrary unit that gives you an idea of how expensive each operation is.

70
00:04:26,380 --> 00:04:33,939
So it's a kind of estimate: as the cost numbers go up,
Postgres thinks it'll take longer to execute, but they're

71
00:04:33,939 --> 00:04:38,441
not an estimate of milliseconds; they're not in any real unit.

72
00:04:38,676 --> 00:04:43,995
You can then use a couple of different... is it enable_seqscan?

73
00:04:44,235 --> 00:04:48,585
There are some parameters you can use to affect those costs.

74
00:04:48,585 --> 00:04:53,415
So you could maybe try and get the second
best plan by making the current plan

75
00:04:53,415 --> 00:04:54,465
very expensive.

76
00:04:54,855 --> 00:05:00,384
So if your query is currently doing the sequential
scan and you want to see if it could use an index and

77
00:05:00,384 --> 00:05:03,594
it's just choosing not to, you can disable seq scans.

78
00:05:03,594 --> 00:05:05,844
Well, it doesn't actually disable seq scans.

79
00:05:05,844 --> 00:05:10,804
It just makes them incredibly expensive, and you can get... exactly.

80
00:05:11,421 --> 00:05:12,801
So you might still see that in the

81
00:05:13,061 --> 00:05:15,791
Nikolay: This trick is very helpful

82
00:05:15,881 --> 00:05:20,141
when two plans are very close to each other in terms of overall cost.

83
00:05:20,201 --> 00:05:27,675
If you disable seq scan, or for example index
scan, and see a different plan whose cost is very

84
00:05:27,680 --> 00:05:31,725
close, it gives you the idea that we are on the edge, right?

85
00:05:31,845 --> 00:05:32,195
Or.

86
00:05:32,745 --> 00:05:38,235
We  either  crossed it recently because data
is changing or we are about to cross it.

87
00:05:38,235 --> 00:05:40,455
And it's kind of dangerous.
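
A sketch of the trick being described (the table name is hypothetical). Setting enable_seqscan = off does not forbid sequential scans; it adds a huge cost penalty, so the planner reveals its next-best plan if one exists:

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- penalize seq scans in this transaction only
EXPLAIN
SELECT * FROM orders WHERE created_at > now() - interval '1 day';
ROLLBACK;
-- If the new plan's total cost is close to the original plan's cost,
-- the two plans are near the tipping point mentioned here.
```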

88
00:05:40,935 --> 00:05:47,065
But first of all, I would like to mention
that a plan is a tree, right?

89
00:05:47,225 --> 00:05:51,921
Cycles are not possible, which is important,
because they could be possible in theory.

90
00:05:52,581 --> 00:05:54,561
But no. I mean...

91
00:05:55,056 --> 00:06:02,706
Michael: I think in the simplest cases, yes, but with
CTEs you can get some strange things; you

92
00:06:02,711 --> 00:06:05,749
can refer to the same CTE more than once, for example.

93
00:06:05,766 --> 00:06:07,771
But yeah, in the simplest cases,

94
00:06:07,831 --> 00:06:08,581
Nikolay: But when it's

95
00:06:08,581 --> 00:06:11,047
already executed, that's still like a tree, right?

96
00:06:11,434 --> 00:06:12,184
Oh, interesting.

97
00:06:12,184 --> 00:06:12,544
By the way.

98
00:06:12,544 --> 00:06:20,246
Yes. Anyway, roughly speaking, it's a tree, and
when it's printed, it's a tree. But, right,

99
00:06:20,576 --> 00:06:22,436
Loops are possible inside, of course.

100
00:06:22,436 --> 00:06:22,766
Right.

101
00:06:22,766 --> 00:06:28,620
An important thing to understand is that
regular metrics, such as cost, rows, and timing,

102
00:06:28,687 --> 00:06:36,083
are shown for one iteration inside a loop, but buffers
are a sum over everything, and it can be confusing

103
00:06:36,168 --> 00:06:36,978
sometimes, right?

104
00:06:37,334 --> 00:06:38,954
Michael: Yeah, well, let's go back to the tree.

105
00:06:38,954 --> 00:06:40,364
I think that's really important.

106
00:06:40,424 --> 00:06:48,054
And a few things that aren't obvious when you're first looking
at them... One is that logically it's happening almost backwards.

107
00:06:48,084 --> 00:06:52,554
So the first node that you see on the tree is  the last one to be executed.

108
00:06:53,154 --> 00:06:53,784
Whereas,

109
00:06:53,904 --> 00:06:55,434
Nikolay: It grows from leaves to root

110
00:06:55,591 --> 00:06:56,341
Michael: Exactly.

111
00:06:56,431 --> 00:07:00,385
So kind of outside in like, kind of right to left a little bit.

112
00:07:00,404 --> 00:07:08,324
And there are also some really important statistics, especially
when you use EXPLAIN ANALYZE, right at the bottom.

113
00:07:08,384 --> 00:07:12,464
So some summary metrics, like execution time, planning time...

114
00:07:12,509 --> 00:07:14,339
Nikolay: Oh, they're printed separately from the tree.

115
00:07:14,429 --> 00:07:14,629
Right?

116
00:07:14,629 --> 00:07:14,829
Right.

117
00:07:14,964 --> 00:07:19,104
Michael: And trigger time, just-in-time compilation... each of these things

118
00:07:19,944 --> 00:07:24,106
can be dominant sometimes; they can be where all of the time's going.

119
00:07:24,316 --> 00:07:29,290
And if you have a really long tree, the general
recommendation is to start kind of right to left.

120
00:07:29,420 --> 00:07:34,250
But I'd also say check that bottom section, because
you might not have to look through the entire

121
00:07:34,250 --> 00:07:38,257
tree if you find out that your query's spending 90%...

122
00:07:38,587 --> 00:07:44,189
Nikolay: An interesting additional point here is
that planning time can sometimes be very, very big.

123
00:07:44,579 --> 00:07:54,746
I had cases when inspection of the paths for a merge
join led to a huge scan during the planning time, and

124
00:07:54,798 --> 00:08:00,311
disabling merge join helped. But it was not obvious
at all in the beginning, because if you don't notice

125
00:08:00,311 --> 00:08:04,701
that planning time is seconds, suddenly insanely huge,

126
00:08:04,751 --> 00:08:06,126
it's a surprise for you.

127
00:08:06,216 --> 00:08:08,856
So checking planning time is also useful.

128
00:08:09,756 --> 00:08:17,383
Michael: Yeah, same for time spent in triggers, and time spent in
just-in-time compilation as well, for some analytical queries.

129
00:08:17,456 --> 00:08:21,416
You might consider it a relatively simple
query that should be quite fast.

130
00:08:22,136 --> 00:08:30,656
But if the costs are overestimated a lot, sometimes just-in-time
compilation kicks in and spends several seconds, thinking

131
00:08:30,656 --> 00:08:37,390
it's saving you time, but then the overall query
is only a few milliseconds, so that's suboptimal.

132
00:08:37,446 --> 00:08:37,896
Nikolay: Right.

133
00:08:37,956 --> 00:08:47,506
And also, since Postgres 13, if you include the BUFFERS
option, it'll show you buffers used for planning as well.

134
00:08:47,511 --> 00:08:47,806
Right?
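
On Postgres 13 and newer, the BUFFERS option adds a planning section to the output, roughly of this shape (table name and values are illustrative):

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
-- ...plan nodes...
-- Planning:
--   Buffers: shared hit=112 read=5
-- Planning Time: 0.450 ms
-- Execution Time: 1.200 ms
```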

135
00:08:48,128 --> 00:08:48,728
Michael: Yes.

136
00:08:49,058 --> 00:08:56,148
And actually one other thing on planning time before we move on
from that, is that  auto_explain  doesn't include planning time.

137
00:08:56,782 --> 00:09:04,402
So you can't spot planning time issues in auto_explain,
other than... I think we discussed this once.

138
00:09:06,292 --> 00:09:06,697
Yeah.

139
00:09:06,697 --> 00:09:15,095
You could... I think we discussed it ages
ago, because it's probably the only use

140
00:09:15,095 --> 00:09:19,625
case for logging the query duration alongside the auto_explain

141
00:09:19,783 --> 00:09:23,473
time, and then you could diff the two, and it's probably planning time

142
00:09:23,473 --> 00:09:24,403
that's the difference.

143
00:09:24,587 --> 00:09:30,500
But yeah, it's a limitation. As you say, it's not
super common that planning time's the dominant issue,

144
00:09:30,500 --> 00:09:33,800
but when it is, it can be 90 plus percent easily.
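
A minimal auto_explain setup sketch (the thresholds are illustrative); note that, as discussed here, its output does not include planning time:

```
# postgresql.conf
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '500ms'  # log plans of statements slower than this
auto_explain.log_analyze = on            # include actual timings and row counts
auto_explain.log_buffers = on            # include buffer usage
```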

145
00:09:33,928 --> 00:09:34,348
Nikolay: Right.

146
00:09:34,348 --> 00:09:35,878
It's gonna be unexpected.

147
00:09:35,883 --> 00:09:38,458
This is the danger of it, right?

148
00:09:39,073 --> 00:09:39,563
Michael: Yeah.

149
00:09:40,173 --> 00:09:41,163
So, right to left...

150
00:09:41,788 --> 00:09:44,727
in, kind of, inside out. Start at the bottom, check

151
00:09:44,727 --> 00:09:50,372
the main statistics. You mentioned briefly
that some of the statistics are per loop.

152
00:09:50,882 --> 00:09:55,032
So loops are, I think, quite a confusing
topic when you're first getting used to it.

153
00:09:55,646 --> 00:09:59,771
Especially if there are, you know, 10,000 loops, you could easily miss that

154
00:09:59,771 --> 00:10:05,441
one of the statistics. It looks quite small, but once
you multiply it by 10,000, it can be really big.

155
00:10:05,861 --> 00:10:11,831
Examples are of course the costs and the
timings, but also things like rows removed by filter.

156
00:10:11,881 --> 00:10:14,171
Sometimes people look out for those numbers.

157
00:10:14,171 --> 00:10:21,678
If it says one, that's a per-loop average, and
actually 10,000 of those is suddenly not insignificant.
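
A plan fragment showing the per-loop averaging described here (the numbers are invented for illustration):

```
Nested Loop  (actual time=0.02..250.00 rows=10000 loops=1)
  -> Seq Scan on a       (actual time=0.01..5.00 rows=10000 loops=1)
  -> Index Scan on b_idx (actual time=0.01..0.02 rows=1     loops=10000)
       Rows Removed by Filter: 1
```

On the inner node, rows=1, the times, and "Rows Removed by Filter" are per-loop averages: multiply by loops=10000 for the totals. Buffers (not shown) would instead be summed across all loops.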

158
00:10:22,563 --> 00:10:22,893
Nikolay: Right.

159
00:10:22,893 --> 00:10:24,723
And those averages can be rough.

160
00:10:24,887 --> 00:10:29,177
There's also an error that
can be present there.

161
00:10:29,520 --> 00:10:34,650
Michael: Well, especially around zero and one,
because some of the numbers are integers.

162
00:10:34,710 --> 00:10:35,010
Yeah.

163
00:10:35,070 --> 00:10:37,060
So anybody that's wondering...

164
00:10:37,645 --> 00:10:38,155
Exactly.

165
00:10:38,155 --> 00:10:39,565
It's rounded to the nearest integer.

166
00:10:39,570 --> 00:10:48,155
And if it's less than 0.5 and gets rounded to zero, it doesn't
necessarily mean that there are zero, which can be problematic.

167
00:10:48,518 --> 00:10:55,350
Nikolay: Also, you know, on the things we discussed, there
are probably many talks and many articles that are useful.

168
00:10:55,410 --> 00:10:59,370
I think this  podcast is not going to replace them.

169
00:10:59,640 --> 00:11:04,765
We are trying to highlight problems which can be tricky in the beginning.

170
00:11:04,912 --> 00:11:05,212
Right.

171
00:11:05,285 --> 00:11:09,912
And one of the things... I think we should mention tools as well.

172
00:11:09,972 --> 00:11:10,212
Right?

173
00:11:10,212 --> 00:11:16,142
So first of all, explain.depesz.com is the oldest one and
still very, very popular, maybe still the most popular.

174
00:11:16,472 --> 00:11:22,135
Then explain.dalibo.com, which is PEV2, greatly improved.

175
00:11:22,435 --> 00:11:22,915
Very good.

176
00:11:23,035 --> 00:11:29,564
And of course, pgMustard, which is commercial, which
you develop, right? Worth checking all of them.

177
00:11:29,744 --> 00:11:30,434
They are good.

178
00:11:30,499 --> 00:11:31,879
Pros and cons for all of them.

179
00:11:31,992 --> 00:11:37,229
But what I think is important to understand is some meta-level thing.

180
00:11:37,259 --> 00:11:44,169
When we talk about single query analysis,
we should say: okay, this is our plan.

181
00:11:44,484 --> 00:11:53,034
Better if it was with execution, and best if the
execution data was collected with BUFFERS; and we

182
00:11:53,034 --> 00:11:55,404
will discuss overhead, by the way, a little bit later.

183
00:11:55,404 --> 00:11:55,674
Right.

184
00:11:56,484 --> 00:12:03,348
But when you ask someone to help with optimization, of
course, the first request will be to show the query itself.

185
00:12:04,023 --> 00:12:05,523
Sometimes show me two plans.

186
00:12:05,523 --> 00:12:09,963
Also, if there was some change and we want to
understand why this change influenced the plan.

187
00:12:09,963 --> 00:12:13,880
So we have two plans, but we need to have
the query; it's a must-have, we should have it.

188
00:12:13,916 --> 00:12:20,114
But additionally, I think it's very important to get the Postgres settings,

189
00:12:20,150 --> 00:12:30,545
like enable_seqscan, or random_page_cost, seq_page_cost, all the costs;
work_mem as well, even though work_mem is not inside the planner group.

190
00:12:30,572 --> 00:12:33,062
Postgres settings are organized in groups.

191
00:12:33,362 --> 00:12:39,842
You can select star from pg_settings and see group
names, and there is a whole group for the planner:

192
00:12:40,217 --> 00:12:46,032
planner settings. And work_mem is not there,
but work_mem influences planner decisions.
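
The grouping referred to here can be inspected directly: planner settings live under the "Query Tuning" categories, while work_mem sits under "Resource Usage / Memory" yet still affects plan choice:

```sql
SELECT name, setting, category
FROM pg_settings
WHERE category LIKE 'Query Tuning%'
   OR name = 'work_mem'
ORDER BY category, name;
```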

193
00:12:46,512 --> 00:12:48,612
If you change work_mem, the plan can be different.

194
00:12:48,672 --> 00:12:48,972
Right.

195
00:12:49,362 --> 00:12:53,676
So right now my rule is: let's take planner settings plus

196
00:12:53,676 --> 00:12:54,876
work_mem, maybe something else.

197
00:12:54,881 --> 00:12:58,326
I'm interested to see if something else should be there as well.

198
00:12:58,716 --> 00:13:04,639
So when we ask someone to help, we need to present the plan, the query,

199
00:13:04,791 --> 00:13:05,871
and planner settings.

200
00:13:05,981 --> 00:13:10,086
And I also believe that schema is important to present.

201
00:13:10,266 --> 00:13:15,814
Like what kind of schema we had: tables, indexes, and probably statistics as well.

202
00:13:15,884 --> 00:13:19,429
This is the whole picture needed to analyze what we had.

203
00:13:19,437 --> 00:13:27,642
Imagine if all these tools collected these things
automatically; for example, how great it would be to

204
00:13:27,678 --> 00:13:32,431
jump in, like: I want to help someone
with optimization and I see the whole picture.

205
00:13:32,491 --> 00:13:40,512
I see the query, plan, settings, the schema... not the whole
schema, only the part of it which is involved,

206
00:13:40,512 --> 00:13:44,194
which the query deals with, and also statistics.

207
00:13:44,194 --> 00:13:49,897
Maybe statistics is kind of tricky, but
this is the basis for planner decisions.

208
00:13:49,897 --> 00:13:51,945
This is what defines which plan will be chosen.

209
00:13:52,019 --> 00:13:52,469
Oh, of course.

210
00:13:52,469 --> 00:13:53,922
Postgres version as well.

211
00:13:54,015 --> 00:14:00,743
Because on a different version the plan can be different; different
nodes in the plan can be present depending on the version.

212
00:14:00,976 --> 00:14:03,016
So what do you think about this?

213
00:14:03,083 --> 00:14:05,003
Like big whole picture.

214
00:14:05,093 --> 00:14:12,280
I understand that none of the tools collect this information,
and none of them require users to present this

215
00:14:12,280 --> 00:14:15,697
information, but it would be great to store it in history.

216
00:14:15,697 --> 00:14:16,387
For example,

217
00:14:16,410 --> 00:14:21,450
Michael: Yeah, there are some really interesting
tools that do some of that, but not all of it.

218
00:14:21,450 --> 00:14:23,340
So there's a tool...

219
00:14:23,392 --> 00:14:29,602
It started as a MySQL tool, but they've added
Postgres support; it's called EverSQL, and it asks for

220
00:14:29,602 --> 00:14:33,292
things like the query and the schema and things like that.

221
00:14:33,297 --> 00:14:36,302
And then does some static analysis, which is super interesting.

222
00:14:36,336 --> 00:14:44,963
There are tools like pganalyze. It's a monitoring tool,
and it's been, for at least a couple

223
00:14:44,963 --> 00:14:52,283
of years now, doing more ad hoc performance work, like query
analysis via EXPLAIN visualizations, and it has access to

224
00:14:52,553 --> 00:14:56,273
a lot of that information already, by the nature of being a monitoring tool.

225
00:14:56,693 --> 00:15:01,309
But I think there's also this natural trade-off, with this macro analysis:

226
00:15:01,869 --> 00:15:09,579
all of the information you can gather and the overhead of doing
so, versus the amount of information you're willing to gather...

227
00:15:09,879 --> 00:15:17,079
Once you know a certain query is a problem, you're willing to pay
a higher overhead, because you only need to gather that once.

228
00:15:17,319 --> 00:15:22,796
Whereas if you want to do this all the time for every
query, I think there's a slightly higher overhead.

229
00:15:22,796 --> 00:15:24,086
So I think there's some

230
00:15:24,086 --> 00:15:24,566
tension.

231
00:15:24,566 --> 00:15:24,686
There's

232
00:15:24,751 --> 00:15:31,646
Nikolay: We could do some hashing, for example, to track that
statistics didn't change and Postgres parameters didn't change.

233
00:15:31,646 --> 00:15:34,046
We could just check it automatically with a hash.

234
00:15:34,187 --> 00:15:41,418
Michael: Yeah, I think there are some super cool things here, but
there are also a couple of different environments, right?

235
00:15:41,418 --> 00:15:47,688
So in production... one key thing is that,
let's say, statistics is a great example.

236
00:15:48,318 --> 00:15:49,788
Production's not the same as staging.

237
00:15:50,118 --> 00:15:52,518
Like, we can make all of the data the same.
We can...
238
00:15:52,518 --> 00:15:52,908
We can,

239
00:15:52,983 --> 00:15:53,763
Nikolay: It depends.

240
00:15:54,408 --> 00:15:54,798
Michael: sorry.

241
00:15:54,888 --> 00:15:55,248
Yeah.

242
00:15:55,608 --> 00:16:01,008
Production might not be the same as staging and
therefore like a statistics problem may not show up.

243
00:16:01,308 --> 00:16:04,188
And let's say you're doing some development work.

244
00:16:04,368 --> 00:16:09,408
It's really tricky to reproduce all of those things.

245
00:16:09,578 --> 00:16:16,848
So, and I think I'd also push back: for some of
the most obvious things that can be a problem,

246
00:16:17,743 --> 00:16:19,963
like, maybe even the schema doesn't matter.

247
00:16:20,280 --> 00:16:29,389
Maybe even the query doesn't matter. If you see that somebody's doing a
sequential scan of 10 million rows, probably it's parallel, and they've

248
00:16:29,389 --> 00:16:32,629
applied a filter and it returns just one of those rows.

249
00:16:32,999 --> 00:16:35,309
Without the query, we can tell that an

250
00:16:35,459 --> 00:16:36,839
Nikolay: ...to suggest an index, right?

251
00:16:37,259 --> 00:16:38,039
Michael: Exactly.

252
00:16:38,309 --> 00:16:38,789
So.

253
00:16:39,469 --> 00:16:47,306
There are a bunch of cases where you can give some pretty
sensible advice without any of that extra information. But

254
00:16:47,325 --> 00:16:53,505
definitely as it gets more complex, I think more of those things
can be more useful. But even in the index case, you know, you still

255
00:16:53,505 --> 00:17:00,675
need context from the customer in terms of
trade-offs. You know, if this is a super high-write

256
00:17:00,825 --> 00:17:05,115
table, you might be less inclined to add an index than if it's not.

257
00:17:05,205 --> 00:17:07,335
Or if the customer has certain requirements.

258
00:17:07,506 --> 00:17:09,936
There's always gonna be an "it depends."

259
00:17:09,936 --> 00:17:10,146
Right?

260
00:17:10,176 --> 00:17:15,386
I think whenever you're giving this kind of advice, you have
to be careful, because for different customers

261
00:17:15,386 --> 00:17:18,566
different things would be sensible. I guess, except for...

262
00:17:18,791 --> 00:17:26,356
Nikolay: Well, I understand that what I just described, the
collection of all these pieces, requires a lot of effort.

263
00:17:26,361 --> 00:17:28,136
So that's why it should be automated.

264
00:17:28,236 --> 00:17:32,500
But imagine if  everything was collected automatically inside some tool.

265
00:17:32,538 --> 00:17:35,998
that we use in an organization, and stored historically. You understand?

266
00:17:36,028 --> 00:17:41,592
Okay, we optimized this query, we tried to optimize
that query, and we know the whole context.

267
00:17:41,597 --> 00:17:48,384
When we ask for help, we have all the pieces, and if
an expert comes to help us, all the pieces are present.

268
00:17:48,384 --> 00:17:48,894
That's it?

269
00:17:48,924 --> 00:17:50,567
It would be much easier to help.

270
00:17:50,567 --> 00:17:50,837
Right.

271
00:17:51,022 --> 00:17:51,322
Michael: Yeah.

272
00:17:51,382 --> 00:17:54,862
And I think there are some, some interesting projects in this area.

273
00:17:54,892 --> 00:18:00,202
I think... have you come across the one by Percona?
They're doing a kind of replacement for...

274
00:18:00,937 --> 00:18:03,577
Nikolay: Well, it's about macro analysis as well.

275
00:18:03,787 --> 00:18:04,717
pg_stat_monitor, right?

276
00:18:05,017 --> 00:18:05,467
pg_stat_monitor.

277
00:18:05,666 --> 00:18:06,086
Michael: Yes.

278
00:18:06,091 --> 00:18:06,926
But I think they

279
00:18:06,926 --> 00:18:08,606
do things like plan.

280
00:18:08,836 --> 00:18:10,247
Nikolay: Based on pg_stat_statements, or no?

281
00:18:10,327 --> 00:18:16,177
Michael: Yes, but with additions, like I think
they let you track query plans per query.

282
00:18:16,207 --> 00:18:20,504
So like, I think you could, for example, see if a plan has changed.

283
00:18:20,534 --> 00:18:22,224
So it's that kind of thing.

284
00:18:22,224 --> 00:18:26,853
With relatively low overhead, I think you
start to get a bit more of that information.

285
00:18:26,853 --> 00:18:31,113
So when an expert comes along, hopefully this
is something already installed and already

286
00:18:31,188 --> 00:18:33,045
Nikolay: This is old big discussion.

287
00:18:33,345 --> 00:18:41,166
There is a current ongoing discussion on the pgsql-hackers
mailing list; someone proposed adding a

288
00:18:41,265 --> 00:18:44,912
plan ID to pg_stat_statements, triggering the discussion one more time.

289
00:18:44,917 --> 00:18:46,562
And this would be great,

290
00:18:46,622 --> 00:18:47,192
Of course.

291
00:18:47,261 --> 00:18:53,702
We know that each query registered in
pg_stat_statements might have multiple plans,

292
00:18:54,202 --> 00:18:56,272
depending on the parameters used.

293
00:18:56,752 --> 00:18:59,722
So when you optimize a query, very important thing.

294
00:18:59,722 --> 00:19:00,502
I missed it

295
00:19:00,502 --> 00:19:03,100
in my list: the parameters you used, right?

296
00:19:03,100 --> 00:19:07,590
Because different parameters may trigger the
planner to choose a different plan.

297
00:19:07,650 --> 00:19:08,210
Right?

298
00:19:08,340 --> 00:19:10,290
So, so it's very, very important.

299
00:19:10,374 --> 00:19:13,147
When optimizing a query, we cannot just say we optimized the query.

300
00:19:13,152 --> 00:19:14,467
We must say:

301
00:19:15,017 --> 00:19:22,895
we optimized a query for some parameters. And we need
to think about variations that we should expect in

302
00:19:22,955 --> 00:19:26,801
production and check them too, not just a single case, right?

303
00:19:26,801 --> 00:19:28,481
And this is tricky, by the way.

304
00:19:28,514 --> 00:19:28,904
Michael: Yeah.

305
00:19:28,994 --> 00:19:37,754
So if anybody's wondering, the simplest example of this is: let's
say you have a column where 99% of the data is a single value,

306
00:19:38,084 --> 00:19:43,171
and the other 1% is millions of unique values.

307
00:19:43,881 --> 00:19:45,651
If you search for one of the unique values,

308
00:19:46,001 --> 00:19:47,345
you might get an index scan.

309
00:19:47,347 --> 00:19:53,437
If you search for the value that 99% of the
table is, then you should get a sequential scan.

310
00:19:53,437 --> 00:19:54,847
That would be the optimal plan.

311
00:19:54,907 --> 00:19:55,357
So that's the

312
00:19:55,397 --> 00:19:55,817
Nikolay: Right.

313
00:19:55,877 --> 00:20:00,587
This is a classic example, and even setting enable_seqscan to off

314
00:20:01,502 --> 00:20:04,232
might not help to avoid a seq scan in some cases.
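
The classic skew example can be reproduced roughly like this (table and column names are hypothetical):

```sql
CREATE TABLE events (kind int);
INSERT INTO events
SELECT CASE WHEN g % 100 = 0 THEN g ELSE 0 END  -- ~99% zeros, ~1% unique values
FROM generate_series(1, 1000000) AS g;
CREATE INDEX ON events (kind);
ANALYZE events;

EXPLAIN SELECT * FROM events WHERE kind = 4200; -- rare value: likely an index scan
EXPLAIN SELECT * FROM events WHERE kind = 0;    -- ~99% of rows: a seq scan is optimal
```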

315
00:20:04,652 --> 00:20:09,700
And also, a couple of times in my optimization activities,

316
00:20:09,700 --> 00:20:17,694
the case when somebody provided me a query without parameters,
and I checked the table and thought, okay, what's the worst case?

317
00:20:17,774 --> 00:20:27,354
And I started to optimize for the worst case and made bad decisions
because this worst case was, was never used in production.

318
00:20:27,921 --> 00:20:30,681
so it's very, very interesting topic.

319
00:20:30,891 --> 00:20:36,406
I definitely want to find some approach when we don't know which parameters.

320
00:20:36,476 --> 00:20:43,466
we have, but we guess somehow. For example, if some
additional tool analyzed statistics, this tool

321
00:20:43,466 --> 00:20:47,136
would say, oh, take this set of parameters, this and this.

322
00:20:47,166 --> 00:20:48,876
Like, this is most typical case.

323
00:20:48,876 --> 00:20:53,173
This is like some kind of worst case and try to optimize for them.

324
00:20:53,233 --> 00:20:53,973
This would be.

325
00:20:54,228 --> 00:20:55,988
Michael: So, so yes, agree.

326
00:20:56,048 --> 00:20:59,258
And I think there are some tools. For example, auto_explain.

327
00:20:59,658 --> 00:21:06,738
Very old tool, but one thing it does really well is it
spits out the exact query that caused that slow plan.

328
00:21:06,918 --> 00:21:11,568
And that's one way of getting at least the
extreme versions of the parameters that

329
00:21:11,591 --> 00:21:12,671
Nikolay: Or just slow log.

330
00:21:12,671 --> 00:21:22,352
If you have log_min_duration_statement at 500 or 100 milliseconds, which is
good, or at a second or two, which is not so good, but also fine.

331
00:21:22,417 --> 00:21:28,799
Uh, you have examples of parameters which
trigger slow execution, but you don't see

332
00:21:29,142 --> 00:21:33,435
the good parameter sets, which are not registered in this slow log.

333
00:21:33,645 --> 00:21:34,605
When I say slow

334
00:21:34,605 --> 00:21:38,859
log, I mean a part of the single Postgres log, because Postgres has just one log.

335
00:21:38,985 --> 00:21:41,055
It's a different, different discussion maybe.

336
00:21:41,109 --> 00:21:50,078
And if log_min_duration_statement is enabled, you see the examples
with duration; but auto_explain is even better, with the plan.
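
A minimal sketch of the two approaches just mentioned, set at session level here (thresholds are illustrative; on a real server auto_explain is usually preloaded via shared_preload_libraries instead of LOAD):

```sql
-- Plain "slow log": statements slower than 500 ms get logged, without plans.
SET log_min_duration_statement = '500ms';

-- auto_explain: also logs the plan of each slow query.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '500ms';
SET auto_explain.log_analyze = on;   -- include run-time stats in the plan
SET auto_explain.log_buffers = on;   -- include buffer counters
```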

337
00:21:50,687 --> 00:21:51,177
Michael: Yeah.

338
00:21:51,687 --> 00:21:53,497
Is this a good time to talk about overhead?

339
00:21:53,720 --> 00:21:54,020
Nikolay: Yeah.

340
00:21:54,020 --> 00:21:55,280
Let's talk about overhead.

341
00:21:55,490 --> 00:22:04,130
So when you run EXPLAIN ANALYZE versus you run
the query without any observability tooling, and

342
00:22:04,130 --> 00:22:07,040
EXPLAIN ANALYZE is observability tooling.

343
00:22:07,040 --> 00:22:12,380
It adds a lot of details about query execution and planner decisions.

344
00:22:12,380 --> 00:22:12,680
Right.

345
00:22:12,946 --> 00:22:18,762
But you can just run the query and see some timing, but
then run EXPLAIN ANALYZE and see different timing.

346
00:22:18,945 --> 00:22:19,275
Right?

347
00:22:19,305 --> 00:22:25,545
You had a blog post on this topic, about auto_explain;
auto_explain is also a related question here.

348
00:22:25,729 --> 00:22:30,319
Michael: Yeah, Andres had a really good blog post on the observer effect.

349
00:22:30,319 --> 00:22:38,429
So I think there are a few cases here. EXPLAIN
ANALYZE can be very accurate.

350
00:22:38,429 --> 00:22:43,469
So it can be that it's roughly the same amount of
time as running the query through your client.

351
00:22:43,859 --> 00:22:48,646
It can be too high where it's adding overhead and it can be too low.

352
00:22:48,807 --> 00:22:52,027
For the case where lots of  data's being transmitted.

353
00:22:52,189 --> 00:22:53,860
It doesn't transmit that data.

354
00:22:53,860 --> 00:22:58,093
So it can even be faster than a query that would return the data.

355
00:22:58,363 --> 00:23:02,593
So there are kind of three cases, two of which are bad.

356
00:23:02,863 --> 00:23:04,973
They're quite rare in my experience.

357
00:23:04,973 --> 00:23:05,413
And they.

358
00:23:06,001 --> 00:23:07,321
especially on modern hardware.

359
00:23:07,321 --> 00:23:08,701
They don't show up that often.

360
00:23:09,061 --> 00:23:12,391
And also they're not that problematic when
you're actually looking for the problem.

361
00:23:12,451 --> 00:23:19,170
If,  there's a relatively universal overhead added and you're
still looking for what's the slowest part, it's still  probably

362
00:23:19,170 --> 00:23:22,378
the same place, but yeah, let's explain why it happens.

363
00:23:22,491 --> 00:23:24,446
Uh, in order to measure timing,

364
00:23:24,446 --> 00:23:25,566
There is some overhead

365
00:23:25,702 --> 00:23:27,562
Nikolay: I would split it to three parts.

366
00:23:27,592 --> 00:23:28,852
So sorry for interrupting.

367
00:23:28,882 --> 00:23:33,382
I would split it into three parts. First, when we
say EXPLAIN, we just see the planner's decision.

368
00:23:33,382 --> 00:23:37,432
We don't execute the query; nothing to discuss in terms of overhead here.

369
00:23:37,432 --> 00:23:37,732
Right?

370
00:23:37,784 --> 00:23:43,713
Well, there's the cost of planning work,
but it's not overhead; anyway, we need it.

371
00:23:44,073 --> 00:23:54,170
But when we add ANALYZE, there is overhead: we really execute
the query, and we need to measure things and see how many rows in each

372
00:23:54,170 --> 00:23:57,890
node were collected, everything like that, and timing as well.

373
00:23:58,400 --> 00:24:00,540
But we also can say BUFFERS.

374
00:24:00,656 --> 00:24:02,303
This, this is additional overhead.

375
00:24:02,316 --> 00:24:05,826
And we also can set track_io_timing, which is a Postgres setting.

376
00:24:05,899 --> 00:24:07,392
You can set it dynamically, I

377
00:24:07,392 --> 00:24:07,782
guess.

378
00:24:07,805 --> 00:24:08,105
Right.

379
00:24:08,105 --> 00:24:13,607
And you can see I/O timing additionally printed
by EXPLAIN ANALYZE here.

380
00:24:14,027 --> 00:24:14,267
Right.

381
00:24:14,357 --> 00:24:17,007
And these are like three pieces of overhead.

382
00:24:17,217 --> 00:24:17,547
Right?

383
00:24:17,547 --> 00:24:18,927
What do you think about each of them?
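
The three layers Nikolay lists, as commands (any query works; pg_class is used here only so the example runs on any database):

```sql
-- 1. Plan only: the query is not executed, so no execution overhead.
EXPLAIN SELECT count(*) FROM pg_class;

-- 2. ANALYZE: executes the query and measures rows and timing per node.
EXPLAIN (ANALYZE) SELECT count(*) FROM pg_class;

-- 3. BUFFERS adds buffer counters; with track_io_timing on,
--    I/O timings are printed as well.
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;
```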

384
00:24:19,665 --> 00:24:19,995
Michael: Yes.

385
00:24:19,995 --> 00:24:25,245
Well, as you mentioned, I did do a blog post on this
because I, I saw quite a few places where people would

386
00:24:25,365 --> 00:24:29,265
really warn against auto_explain with timing on.

387
00:24:29,535 --> 00:24:34,396
There's a really strong warning
against it in the Postgres docs.

388
00:24:34,436 --> 00:24:40,712
There are multiple monitoring tools that tell you, if you
have auto_explain on, make sure you have timing off.

389
00:24:41,580 --> 00:24:48,665
Nikolay: At the same time, I observe very heavily loaded
systems serving, like, a hundred

390
00:24:48,665 --> 00:24:52,775
thousand transactions, very loaded systems where it's enabled.

391
00:24:53,019 --> 00:24:53,469
Michael: Same.

392
00:24:53,469 --> 00:24:55,899
I was, I was coming across customers that had it

393
00:24:56,015 --> 00:24:59,465
Nikolay: Doesn't it depend on the hardware,

394
00:24:59,465 --> 00:25:00,785
on the CPU?

395
00:25:01,639 --> 00:25:02,059
Michael: Yes.

396
00:25:02,064 --> 00:25:02,919
So I did.

397
00:25:03,238 --> 00:25:06,448
There's a tool in Postgres that lets you check.

398
00:25:06,538 --> 00:25:07,588
Uh, I've forgotten what it's called.

399
00:25:07,588 --> 00:25:08,818
Is it pg_test_timing?

400
00:25:08,818 --> 00:25:09,538
It's ah,

401
00:25:09,715 --> 00:25:10,615
Nikolay: Something like that.

402
00:25:10,615 --> 00:25:14,155
It's in binary directory, standard package

403
00:25:14,515 --> 00:25:14,665
of

404
00:25:14,938 --> 00:25:16,918
Michael: yeah, I'll find it and link to it.
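
For reference, the tool is pg_test_timing, shipped in the standard Postgres binary directory; a sample invocation (duration is illustrative):

```shell
# Measure per-loop clock-read overhead on this machine for 5 seconds.
pg_test_timing --duration=5
```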

405
00:25:16,948 --> 00:25:26,244
But yeah, basically my understanding is if you have pretty fast system
clock lookups, the overhead can be hard to measure, but if

406
00:25:26,244 --> 00:25:30,864
you have slow system clock, then it can be extremely easy to measure.

407
00:25:30,864 --> 00:25:38,234
And that's the Andres blog post, I think, deliberately
picking a system that has a slow system clock in order to

408
00:25:38,234 --> 00:25:41,714
show that it can add hundreds of percent of overhead.

409
00:25:42,074 --> 00:25:49,443
But when I was looking at
it on an OLTP workload, a very simple pgbench OLTP

410
00:25:49,443 --> 00:25:53,793
workload, I, I was basically unable to measure it.

411
00:25:53,793 --> 00:25:58,833
I, I got, I think I got a 2% overhead of adding
all of the parameters and it was basically

412
00:25:58,999 --> 00:26:01,714
Nikolay: You know, this is my old idea.

413
00:26:01,714 --> 00:26:05,764
And a couple of times we implemented it in a company.

414
00:26:05,764 --> 00:26:17,347
When we deal with many hosts, it would be good, when we set up a host,
to have a set of micro-benchmarks checking, like, disk limits, CPU.

415
00:26:17,947 --> 00:26:25,644
We can use sysbench for that, or fio for this; in the old
life we used Bonnie++, I remember, and this micro

416
00:26:25,644 --> 00:26:29,644
benchmark checking timing overhead would be also great there.

417
00:26:30,034 --> 00:26:35,524
And sometimes, in the cloud, we might
have two virtual machines of the

418
00:26:36,134 --> 00:26:37,424
same class, same type.

419
00:26:37,964 --> 00:26:40,143
But they behave differently.

420
00:26:40,143 --> 00:26:44,293
So it would be good to check it all the time we set up  some machine.

421
00:26:44,421 --> 00:26:44,661
Michael: Yep.

422
00:26:45,081 --> 00:26:46,491
Well, yeah, this is your age old.

423
00:26:46,584 --> 00:26:50,634
This is what you are dedicating your professional life to: experimentation.

424
00:26:50,694 --> 00:26:56,364
You know, if you are intrigued as to what it would be
on your system, it might be different for you for some

425
00:26:56,364 --> 00:27:03,054
reason, maybe for hardware reasons, maybe for workload reasons,
there might be some, some specific way that it's bad for you.

426
00:27:03,594 --> 00:27:06,984
It's very difficult to provide general advice, and the advice you read online

427
00:27:07,704 --> 00:27:15,294
will generally be cautious, especially the Postgres documentation;
they're gonna be cautious by default because they don't want

428
00:27:15,294 --> 00:27:19,110
to give advice that one person's gonna find horrifically awful.

429
00:27:19,135 --> 00:27:20,545
even if the majority would find

430
00:27:20,567 --> 00:27:20,987
Nikolay: Right.

431
00:27:21,100 --> 00:27:23,870
Back to these three classes of overhead.

432
00:27:23,870 --> 00:27:28,561
I guess the first class is from the ANALYZE in EXPLAIN ANALYZE.

433
00:27:28,561 --> 00:27:30,331
And second is track_io_timing.

434
00:27:30,841 --> 00:27:33,331
Third is BUFFERS; let's postpone it a little bit.

435
00:27:33,332 --> 00:27:39,584
They both are related to this overhead from
how working with the clock is organized.

436
00:27:39,944 --> 00:27:44,864
But the difference is that inside explain, analyze, they are both.

437
00:27:45,004 --> 00:27:51,124
working, but track_io_timing is also working if you have
pg_stat_statements, because it's registered there as well.

438
00:27:51,244 --> 00:27:56,134
So regular query execution, without running EXPLAIN,

439
00:27:56,434 --> 00:27:57,514
is also included.

440
00:27:57,814 --> 00:28:05,675
So if working with the clock is slow, track_io_timing
can add some penalty when you use pg_stat_statements.

441
00:28:05,735 --> 00:28:06,275
Right?
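
For example, with track_io_timing on, pg_stat_statements exposes per-query I/O time (column names as in Postgres 13 to 16; they were renamed, e.g. to shared_blk_read_time, in 17):

```sql
-- Top queries by time spent reading blocks.
SELECT query, calls, total_exec_time, blk_read_time, blk_write_time
FROM pg_stat_statements
ORDER BY blk_read_time DESC
LIMIT 10;
```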

442
00:28:06,293 --> 00:28:07,703
Michael: Same with auto_explain.

443
00:28:07,767 --> 00:28:11,817
auto_explain runs on every... there is a parameter where you

444
00:28:11,919 --> 00:28:14,919
Nikolay: There is sampling and it existed long ago.

445
00:28:14,924 --> 00:28:18,469
I didn't realize it exists. For the slow log,

446
00:28:18,469 --> 00:28:23,139
there are sampling capabilities since Postgres 13, I guess, but for auto_explain,

447
00:28:23,569 --> 00:28:24,469
you told me, right.

448
00:28:24,469 --> 00:28:27,379
it has existed for long, like, many years already.

449
00:28:27,409 --> 00:28:27,949
It's great.

450
00:28:28,129 --> 00:28:31,519
So you can auto_explain only, like, 1% of everything.

451
00:28:31,718 --> 00:28:34,658
Michael: If you want to be cautious at first, yeah.

452
00:28:34,658 --> 00:28:41,228
You can sample a really small percentage, but naturally, yeah, for OLTP,
it's probably fine because you're probably running the same queries

453
00:28:41,228 --> 00:28:44,348
over and over, and you don't need loads of examples of them to optimize.
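
A sampling sketch (values are illustrative): auto_explain.sample_rate explains only a fraction of statements, and since Postgres 13 plain statement logging can be sampled too:

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;   -- consider every statement...
SET auto_explain.sample_rate = 0.01;     -- ...but explain only ~1% of them

-- Slow-log sampling (Postgres 13+): fully log statements over 1 s,
-- and log ~1% of those over 100 ms.
SET log_min_duration_statement = '1s';
SET log_min_duration_sample = '100ms';
SET log_statement_sample_rate = 0.01;
```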

454
00:28:44,448 --> 00:28:47,788
Nikolay: there is also possible observer effect from just logging.

455
00:28:47,793 --> 00:28:50,308
If writing to logs is slow.

456
00:28:50,308 --> 00:28:51,578
For example, the disk is not

457
00:28:52,093 --> 00:28:53,683
very fast where you log it.

458
00:28:53,683 --> 00:28:57,124
So it also can be a problem, but it's a slightly different topic.

459
00:28:57,124 --> 00:29:01,049
The third part is buffers. What do you think about the overhead from buffers?

460
00:29:01,368 --> 00:29:08,053
Michael: Well, you were the first to tell me that it's worth
looking into, but I wasn't able to measure it.

461
00:29:08,113 --> 00:29:08,503
Yeah.

462
00:29:08,563 --> 00:29:09,703
I wasn't able to measure it.

463
00:29:09,753 --> 00:29:10,023
Nikolay: there?

464
00:29:10,203 --> 00:29:13,503
That difference definitely should be there. If you just run EXPLAIN ANALYZE

465
00:29:13,540 --> 00:29:17,920
many times, everything is cached, and then the EXPLAIN
ANALYZE BUFFERS difference should be there.

466
00:29:18,010 --> 00:29:22,429
I'm sure. But still, it's a hundred percent worth having BUFFERS inside

467
00:29:22,579 --> 00:29:26,316
EXPLAIN ANALYZE, as we discussed separately for a whole half an hour, right?

468
00:29:26,484 --> 00:29:27,024
Michael: Yes.

469
00:29:27,047 --> 00:29:34,187
previous episode. I actually think, I don't know
if this is to do with you, but I think

470
00:29:34,187 --> 00:29:36,167
explain.depesz.com deserves some praise.

471
00:29:36,197 --> 00:29:41,069
Cause I noticed today or yesterday that
it now asks for EXPLAIN (ANALYZE, BUFFERS).

472
00:29:41,349 --> 00:29:42,269
So that's quite a

473
00:29:42,550 --> 00:29:47,470
Nikolay: Well, in my opinion, when we
analyze a query, we should not do it on production.

474
00:29:47,560 --> 00:29:52,330
As usual, we should do it on a special
environment, which should be a clone of production.

475
00:29:52,330 --> 00:29:56,858
And the best way to have a clone
is using Database Lab Engine, which we develop.

476
00:29:56,908 --> 00:30:01,028
And there, of course, you are in a slightly different situation.

477
00:30:01,040 --> 00:30:02,450
Maybe hardware is different.

478
00:30:02,780 --> 00:30:06,680
Maybe you have less memory, for example, or a different state of caches.

479
00:30:06,873 --> 00:30:12,309
And maybe a different file system, as in the case of
Database Lab Engine, because it uses ZFS by default.

480
00:30:12,368 --> 00:30:14,713
And there you, you should focus on buffers.

481
00:30:14,713 --> 00:30:24,583
Our final goal is timing, but inside
the process, we focus on buffers and on reducing IO numbers.

482
00:30:25,643 --> 00:30:32,470
Not just buffers; rows is also
an important metric to keep in mind.

483
00:30:32,650 --> 00:30:36,486
And if you reduce IO, you'll reduce timing.

484
00:30:36,546 --> 00:30:38,803
This is the secret of optimization.

485
00:30:38,916 --> 00:30:41,200
Everyone should understand it, in my opinion, right?

486
00:30:41,203 --> 00:30:42,673
Michael: Yes, couldn't agree more.

487
00:30:42,694 --> 00:30:46,008
And if anybody disagrees, we can refer you to an episode.

488
00:30:46,028 --> 00:30:46,898
I'm guessing two.

489
00:30:46,958 --> 00:30:48,248
I, it was quite an early one.

490
00:30:48,342 --> 00:30:49,332
Nikolay: Right, right.

491
00:30:49,752 --> 00:30:49,992
Good.

492
00:30:50,142 --> 00:30:51,162
So what else?

493
00:30:51,312 --> 00:30:55,373
should we discuss in terms of starting to work with EXPLAIN?

494
00:30:55,707 --> 00:31:00,973
Michael: Yeah, well, we might be close to time, you
know. I wonder if we should save it for another time.

495
00:31:01,077 --> 00:31:03,627
Is there anything else that we have to mention?

496
00:31:03,651 --> 00:31:08,091
Nikolay: Well, we didn't discuss particular
nodes, like various types of joins, and so on.

497
00:31:08,391 --> 00:31:11,241
Of course, it's, it requires time to learn.

498
00:31:11,241 --> 00:31:12,771
And of course there is documentation.

499
00:31:12,776 --> 00:31:15,199
There are many; I see different people

500
00:31:15,248 --> 00:31:18,098
present talks named "Explaining Explain".

501
00:31:18,128 --> 00:31:21,998
This is like the default name for such talks.

502
00:31:22,058 --> 00:31:24,667
So not just one person presented it.

503
00:31:24,667 --> 00:31:28,594
So I think all of those talks are worth checking.

504
00:31:28,882 --> 00:31:31,102
Michael: My, yeah, my favorite is one by Josh.

505
00:31:31,102 --> 00:31:32,992
Berkus. I'll make sure to link it up.

506
00:31:33,082 --> 00:31:34,492
He did a really, yeah.

507
00:31:34,522 --> 00:31:39,167
Old, but still I, I listened to it again last
year and it's, it's still perfectly relevant.

508
00:31:39,177 --> 00:31:41,487
there've been some new parameters and sure.

509
00:31:41,487 --> 00:31:42,537
It doesn't cover everything.

510
00:31:42,803 --> 00:31:46,253
Nikolay: New nodes; parallelization was added since then.

511
00:31:46,287 --> 00:31:46,887
Michael: Yeah.

512
00:31:46,947 --> 00:31:47,847
But equally.

513
00:31:48,433 --> 00:31:52,433
Nikolay: JIT compilation, which should be disabled for OLTP.

514
00:31:52,855 --> 00:31:53,125
Michael: Yeah.

515
00:31:53,132 --> 00:31:59,955
I've also done two talks. I think there's some beginner
stuff that he doesn't cover at the beginning.

516
00:32:00,015 --> 00:32:05,355
And there's some more advanced stuff that he doesn't
get to, cuz in an hour you can only do so much.

517
00:32:05,360 --> 00:32:10,499
So I have done two talks trying to cover either side
of that, not doing the explaining, explain part.

518
00:32:10,576 --> 00:32:12,126
so yeah, maybe I'll link those up as well.

519
00:32:12,126 --> 00:32:12,366
Oh.

520
00:32:12,366 --> 00:32:13,366
And also have a glossary

521
00:32:13,622 --> 00:32:13,832
Nikolay: Oh,

522
00:32:13,832 --> 00:32:14,642
A glossary is great.

523
00:32:14,732 --> 00:32:15,032
Yes.

524
00:32:15,092 --> 00:32:15,372
Yes.

525
00:32:15,422 --> 00:32:15,962
So, Yeah.

526
00:32:16,095 --> 00:32:16,725
It's a good thing.

527
00:32:16,805 --> 00:32:17,225
Right?

528
00:32:17,285 --> 00:32:17,525
Good.

529
00:32:17,765 --> 00:32:20,635
So I hope it was helpful for  some folks.

530
00:32:20,692 --> 00:32:21,772
let's wrap it up,

531
00:32:22,060 --> 00:32:23,140
Michael: Yeah, I hope so too.

532
00:32:23,198 --> 00:32:25,628
fingers crossed, and also feel free to reach out.

533
00:32:25,633 --> 00:32:29,108
Like, I think this is the kind of topic
that we love and find it very interesting.

534
00:32:29,348 --> 00:32:32,378
I'm definitely very happy to help people with this kind of thing.

535
00:32:32,578 --> 00:32:35,368
Nikolay: We are asking for topics, and you can ask right here.

536
00:32:35,368 --> 00:32:41,264
Once again, we are open. We have a list
of dozens of ideas, but we react to feedback.

537
00:32:41,264 --> 00:32:45,824
If someone asks for a topic, we will prioritize it in our list.

538
00:32:45,829 --> 00:32:46,394
Definitely.

539
00:32:46,394 --> 00:32:47,924
And we will try to discuss it soon.

540
00:32:48,704 --> 00:32:53,486
And as usual, thank you, everyone who is
providing feedback; it's very, very important.

541
00:32:53,486 --> 00:32:54,836
We receive it quite often.

542
00:32:54,836 --> 00:32:55,106

543
00:32:55,351 --> 00:32:59,086
Like at least once per couple of days; it's a great feeling.

544
00:32:59,091 --> 00:32:59,746
I, I would say.

545
00:33:00,256 --> 00:33:08,861
And also, please, as usual, subscribe everywhere you
can, and please share it in your social networks and working groups.

546
00:33:09,153 --> 00:33:09,723
Michael: Absolutely.

547
00:33:09,843 --> 00:33:10,623
Thank you so much.

548
00:33:10,623 --> 00:33:11,283
Thanks everyone.

549
00:33:11,283 --> 00:33:12,003
And thanks, Nikolay.

550
00:33:12,147 --> 00:33:13,147
Nikolay: Thank you.

551
00:33:13,241 --> 00:33:13,541
Bye bye.