1
00:00:00,024 --> 00:00:04,044
Michael: Hello, and welcome to Postgres FM,
a weekly show about all things PostgreSQL.

2
00:00:04,194 --> 00:00:05,904
I'm Michael, founder of pgMustard.

3
00:00:05,934 --> 00:00:08,514
And this is my co-host, Nikolay, founder of Postgres AI.

4
00:00:08,934 --> 00:00:10,434
Hey Nikolay, what are we gonna be talking about today?

5
00:00:11,244 --> 00:00:12,090
Nikolay: Hi, Michael.

6
00:00:12,090 --> 00:00:16,725
A few weeks ago we discussed what I call query micro analysis.

7
00:00:16,845 --> 00:00:23,355
when we have one query and we want to understand
how it works, how it will be executed in production.

8
00:00:23,835 --> 00:00:32,846
Let's talk today about the big picture: how we can
analyze the whole workload and find the worst parts of it.

9
00:00:33,066 --> 00:00:33,306
Michael: Yeah.

10
00:00:33,306 --> 00:00:37,847
So if that was micro performance analysis,
this, I've heard you call it, yeah,

11
00:00:37,852 --> 00:00:40,037
macro performance analysis.

12
00:00:40,187 --> 00:00:43,427
So maybe there's nothing wrong, but we want to be able to see at a glance.

13
00:00:43,697 --> 00:00:45,107
Are there any big issues?

14
00:00:45,137 --> 00:00:50,159
Is there a spike on some metrics, something
like that? Maybe it's monitoring-related, maybe it's

15
00:00:50,271 --> 00:00:52,641
a review of something in the past, that kind of thing.

16
00:00:53,085 --> 00:00:53,325
Nikolay: Right.

17
00:00:53,325 --> 00:01:00,387
There are many goals that we can have in this analysis.
For example, there are complaints like "the database is slow", right?

18
00:01:00,897 --> 00:01:02,307
We hear it quite often.

19
00:01:02,307 --> 00:01:05,047
"The database is slow" from application developers and others.

20
00:01:05,137 --> 00:01:07,927
I dunno, like SREs, and so on.

21
00:01:08,017 --> 00:01:14,342
And we want to identify the parts of the workload
which behave the worst, and this is one thing.

22
00:01:14,342 --> 00:01:19,862
Or we just want to optimize resource
consumption to prepare for future growth.

23
00:01:20,192 --> 00:01:23,162
And we don't want to spend more on hardware.

24
00:01:23,162 --> 00:01:29,209
So again, we want to find the
worst-behaving parts of the workload, and we optimize them.

25
00:01:29,628 --> 00:01:31,298
There are many different cases.

26
00:01:31,866 --> 00:01:36,887
Michael: Yeah, so, I mean, it could be an application
developer telling us, it could be customers reporting that there

27
00:01:36,887 --> 00:01:42,737
are issues, and maybe we want to find out which part is slow,
or maybe we want to show them that there isn't an issue, you

28
00:01:42,737 --> 00:01:45,527
know, or that we can't see anything on the database side.

29
00:01:45,932 --> 00:01:46,322
Nikolay: All right.

30
00:01:46,322 --> 00:01:54,760
One of the cases I especially like is when we perform
workload analysis as a whole during various kinds of

31
00:01:55,360 --> 00:01:58,900
preparations, various kinds of testing before we deploy.

32
00:01:59,470 --> 00:02:00,580
And this is also interesting.

33
00:02:00,580 --> 00:02:08,895
And there, we can also try to understand: do all parts behave
well, or are there not-so-well-behaving parts we should optimize

34
00:02:08,900 --> 00:02:13,080
before we deploy. But let's just start from maybe the historical

35
00:02:13,080 --> 00:02:22,516
aspects of it. Like 15 years ago or so, we didn't have pg_stat_statements,
which right now is the de facto standard extension for macro analysis.

36
00:02:22,576 --> 00:02:28,432
And there is consensus in the community that
pg_stat_statements should be enabled

37
00:02:28,937 --> 00:02:34,868
in all Postgres installations. By default, this extension
is not installed, but everyone should consider installing

38
00:02:34,868 --> 00:02:38,138
this extension, because it has very, very small overhead.
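
As a quick sketch of what enabling it involves (the usual two steps; exact details can vary, and managed providers often preload it for you):

```sql
-- postgresql.conf: load the module at server start (requires a restart)
--   shared_preload_libraries = 'pg_stat_statements'

-- then, in each database where you want to read the stats:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```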

39
00:02:38,138 --> 00:02:41,330
But it's like the place where you probably want to

40
00:02:41,330 --> 00:02:42,313
start your

41
00:02:42,383 --> 00:02:43,283
Nikolay: Macro analysis.

42
00:02:43,283 --> 00:02:53,183
We're understanding how the workload behaves. But before pg_stat_statements,
we had only logs, and the idea was: okay, we log all slow queries,

43
00:02:53,603 --> 00:02:58,242
for example, queries whose execution takes longer than one second.

44
00:02:58,392 --> 00:03:06,612
And I remember, when I was briefly, just for one week,
a user of MySQL and switched to Postgres, then

45
00:03:06,612 --> 00:03:10,182
it was a confirmation that, like, my choice was right.

46
00:03:10,182 --> 00:03:10,902
It was confirmation.

47
00:03:10,907 --> 00:03:16,397
When I found that in Postgres, I could go down below one second.

48
00:03:16,727 --> 00:03:24,912
I mean log_min_duration_statement, and log queries, for example, which
are above a hundred milliseconds. But in MySQL, it was not possible,
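
A minimal sketch of the setting being discussed (the values are illustrative):

```
# postgresql.conf
log_min_duration_statement = 100   # log statements that run longer than 100 ms
# -1 disables duration-based logging (the default); 0 logs every statement
```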

49
00:03:24,912 --> 00:03:28,138
And one second was the lowest value.

50
00:03:28,138 --> 00:03:35,150
Now they've fixed it already and you can go down below
one second, but at that time, it was 2005,

51
00:03:35,170 --> 00:03:36,520
2006, 2007.

52
00:03:36,783 --> 00:03:42,224
So we log all slow queries, and then we can parse logs.

53
00:03:42,494 --> 00:03:46,520
And I remember we had a tool called pgFouine, written in

54
00:03:47,400 --> 00:03:47,460
PHP.

55
00:03:47,460 --> 00:03:50,160
And then pgBadger was created, written in Perl.

56
00:03:50,430 --> 00:03:53,287
It's much better in terms of performance.

57
00:03:53,292 --> 00:03:58,087
It has many more features and so on, it's
more robust, and it's still being developed.

58
00:03:58,087 --> 00:04:01,447
By the way, yesterday they released version 12.

59
00:04:01,447 --> 00:04:02,857
I think I saw it in the news.

60
00:04:02,857 --> 00:04:03,067
Right?

61
00:04:03,072 --> 00:04:06,337
So pgBadger 12, more and more features right now.

62
00:04:06,337 --> 00:04:09,367
It can work with auto_explain and many other things.

63
00:04:10,244 --> 00:04:16,124
So the idea was: let's parse those
queries and remove parameters from them,

64
00:04:16,184 --> 00:04:23,298
and so aggregate them, the process which, in pg_stat_statements
terminology, is called query normalization.

65
00:04:23,725 --> 00:04:30,927
And then we show the worst
query groups according to some metric.

66
00:04:31,167 --> 00:04:34,647
But the problem with this approach is that this is only the tip of the iceberg.

67
00:04:35,007 --> 00:04:43,930
We might have many more queries which are
not visible in logs, but which produce the most load.

68
00:04:44,280 --> 00:04:50,820
Sometimes like 90% of the load is produced by queries
which are under the log_min_duration_statement threshold.

69
00:04:51,215 --> 00:05:02,447
So some DBAs used an approach like: let's enable all query logging,
with durations, for a few minutes and collect logs. And some, including

70
00:05:02,447 --> 00:05:07,324
myself, found a way to store logs in memory. It was quite risky,

71
00:05:07,474 --> 00:05:09,154
So I used it only a couple of times.

72
00:05:09,514 --> 00:05:18,185
So we create a drive in memory and we put logs there, but
it's only, I don't know, like half a gigabyte, because

73
00:05:18,185 --> 00:05:22,835
memory is expensive, and we do very aggressive rotation.

74
00:05:22,835 --> 00:05:33,251
So we don't let the Postgres log saturate this small disk. But
since it's memory, we can afford logging a lot, and we can set

75
00:05:33,371 --> 00:05:36,495
log_min_duration_statement to zero, meaning: let's log all queries.
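
The setup described above might look roughly like this (paths and sizes are hypothetical; it needs root, and, as noted, it is risky):

```
# create a small RAM-backed disk for logs
mount -t tmpfs -o size=512M tmpfs /pg_logs_ram

# postgresql.conf: send logs there, rotate aggressively so it never fills
log_directory = '/pg_logs_ram'
log_rotation_size = '64MB'
log_truncate_on_rotation = on
log_min_duration_statement = 0   # log everything, briefly
```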

76
00:05:36,619 --> 00:05:44,329
Michael: That's the big downside: you know, logging does have an overhead
if you're logging excessively. So that seems to be

77
00:05:44,329 --> 00:05:47,659
a good argument for not doing it... well, that's why we don't have to anymore.

78
00:05:47,859 --> 00:05:48,099
Right.

79
00:05:48,589 --> 00:05:48,889
Nikolay: Right.

80
00:05:48,949 --> 00:05:49,219
Yes.

81
00:05:49,250 --> 00:05:59,469
Logging overhead may be very big, and it's not noticeable until some point
when the drive where you write logs gets saturated in terms of write

82
00:05:59,537 --> 00:06:00,197
disk IO.

83
00:06:00,527 --> 00:06:03,088
And then everything goes down  and it's not fun.

84
00:06:03,478 --> 00:06:08,536
Like this is one of the worst observer effects I had in big production cases.

85
00:06:08,545 --> 00:06:09,704
And it was very painful.

86
00:06:09,704 --> 00:06:16,664
So I don't recommend going down to zero with
log_min_duration_statement blindly and without proper preparation.

87
00:06:17,144 --> 00:06:22,963
But anyway, right now we consider this an
outdated approach, because we have pg_stat_statements.

88
00:06:23,489 --> 00:06:23,819
Right.

89
00:06:24,023 --> 00:06:26,603
And we don't need to log all queries anymore.

90
00:06:26,799 --> 00:06:35,589
We still need to log some queries, because pg_stat_statements doesn't
have examples; in its reports there are aggregated, normalized queries.

91
00:06:35,619 --> 00:06:36,849
I call them query groups.

92
00:06:37,209 --> 00:06:45,395
pgBadger provides a few examples, and this is very important, because in
each query group you might have different cases. One and the same query,

93
00:06:45,625 --> 00:06:52,135
an abstract query without parameters, might behave very differently
in terms of the execution plan, depending on the parameters.

94
00:06:52,375 --> 00:06:59,635
So when you've already identified the queries
you want to improve, you always need examples.

95
00:06:59,755 --> 00:07:00,775
And this is tricky.

96
00:07:01,255 --> 00:07:05,605
Guessing examples is a big, not yet solved task.

97
00:07:05,982 --> 00:07:08,382
I think it's a good task for machine learning and so on.

98
00:07:08,382 --> 00:07:10,752
I'm very interested in this area.

99
00:07:11,257 --> 00:07:19,551
If someone among our listeners is also interested in this area,
please, let's talk, because I think it's a very interesting area to

100
00:07:19,556 --> 00:07:25,251
automate, to improve, to allow us to improve more queries in less time.

101
00:07:25,251 --> 00:07:25,611
Right.

102
00:07:25,925 --> 00:07:31,967
Michael: But at the moment, we can patch it together
via a mixture of pg_stat_statements and logging.

103
00:07:32,553 --> 00:07:35,463
Nikolay: Yeah, we can combine logging and pg_stat_statements.

104
00:07:35,463 --> 00:07:40,623
Also, query ID helps in very recent Postgres versions. Before query ID,

105
00:07:40,623 --> 00:07:42,280
we used libpg_query,

106
00:07:43,480 --> 00:07:44,440
from Lukas Fittl.

107
00:07:44,770 --> 00:07:45,100
Right?

108
00:07:45,760 --> 00:07:56,789
So it's an additional ID, and the good thing about this
library, or tool, is that if you apply it to an already normalized query, it

109
00:07:56,794 --> 00:08:01,588
will produce the same fingerprint as for the non-normalized raw query.
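
On recent versions, this matching can also be done directly in SQL. A sketch, assuming Postgres 14 or later, where pg_stat_activity exposes query_id (with compute_query_id enabled):

```sql
-- grab live example texts for the normalized entries in pg_stat_statements
SELECT a.pid,
       a.query   AS example_query_text,
       s.calls,
       s.mean_exec_time
FROM pg_stat_activity   a
JOIN pg_stat_statements s ON s.queryid = a.query_id
WHERE a.state = 'active';
```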

110
00:08:02,268 --> 00:08:04,063
But anyway, if you use

111
00:08:04,075 --> 00:08:13,149
logs, you can find examples. But another source of query
examples is pg_stat_activity. However, in most cases where a lack

112
00:08:13,149 --> 00:08:21,156
of DBA involvement happened, I saw that track_activity_query_size, or
how it's called, I always forget, there is a parameter which

113
00:08:21,161 --> 00:08:25,841
sets the maximum length of the pg_stat_activity.query

114
00:08:26,201 --> 00:08:29,304
column, and by default it's only 1024.

115
00:08:29,604 --> 00:08:29,964
Right?

116
00:08:29,964 --> 00:08:30,234
Right.

117
00:08:30,354 --> 00:08:35,863
And it's not enough: ORMs, or humans,
can create much bigger queries these days.

118
00:08:36,100 --> 00:08:39,730
So we want to put like 10k there or something.

119
00:08:40,083 --> 00:08:41,133
Overhead is very small.

120
00:08:41,133 --> 00:08:45,183
So I also always recommend increasing it,
but increasing it requires a restart.

121
00:08:45,183 --> 00:08:46,093
This is the main problem.
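
The parameter in question, with the kind of value being discussed (the number is illustrative):

```
# postgresql.conf — changing this requires a restart
track_activity_query_size = 10240   # default is 1024 bytes
```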

122
00:08:46,665 --> 00:08:46,875
Michael: Yeah.

123
00:08:46,875 --> 00:08:49,875
So it's one of those ones you have to do at the beginning, normally, isn't it?

124
00:08:49,986 --> 00:08:50,226
Nikolay: Yeah.

125
00:08:50,256 --> 00:08:50,856
Yeah, yeah, yeah.

126
00:08:50,886 --> 00:08:59,551
So this parameter should be increased, and if we increase it, we
have more opportunity to get samples of queries from pg_stat_activity.

127
00:08:59,791 --> 00:09:07,763
Then we can join, or like, match them with pg_stat_statements
data, and then probably logs are not that needed, unless you use

128
00:09:07,763 --> 00:09:14,593
auto_explain, because auto_explain is also very useful. And
again, your article about the overhead and how to measure it,

129
00:09:14,593 --> 00:09:17,713
And the idea that sometimes it's not that big is good.

130
00:09:18,137 --> 00:09:27,257
And maybe you want this too, because in this case you see plans
exactly as they were during execution, because plan flips happen as well.

131
00:09:28,187 --> 00:09:33,047
Michael: Yeah, I was gonna ask, actually, we've talked
about the overhead of, of these things a little bit.

132
00:09:33,052 --> 00:09:36,107
You mentioned the overhead of pg_stat_statements is low.

133
00:09:36,437 --> 00:09:42,015
I've seen some people mention that they've tried to
benchmark and struggled, but have you seen anybody

134
00:09:42,200 --> 00:09:43,441
Nikolay: Struggled in which

135
00:09:43,710 --> 00:09:47,477
Michael: struggled to measure the overhead on a, on a normal workload of,

136
00:09:47,632 --> 00:09:48,592
Nikolay: It's, it's very

137
00:09:48,986 --> 00:09:49,066
Michael: Have you

138
00:09:49,066 --> 00:09:50,306
seen any benchmarks of it?

139
00:09:50,452 --> 00:09:55,192
Nikolay: I haven't, I haven't. I trust
existing collective experience, but we can do it.

140
00:09:55,246 --> 00:09:59,542
This is not the most difficult benchmark in the Postgres ecosystem.

141
00:09:59,752 --> 00:10:02,682
So we can just do it. Of course, it's good

142
00:10:02,682 --> 00:10:05,112
if you can benchmark your own workload.

143
00:10:05,112 --> 00:10:09,162
And the question is how to reproduce a workload in a reliable way.

144
00:10:09,162 --> 00:10:10,422
So each run is the same,

145
00:10:11,082 --> 00:10:12,492
or very close to the others.

146
00:10:12,942 --> 00:10:14,696
And then we can just use.

147
00:10:14,712 --> 00:10:19,312
We can use various metrics which the database has even without pg_stat_statements.

148
00:10:19,312 --> 00:10:27,259
But, for example, from pg_stat_database, if we enable
track_io_timing, and by the way, usually we should enable it.

149
00:10:27,829 --> 00:10:31,424
Of course, we also discussed that there are rare cases on some hardware.

150
00:10:31,424 --> 00:10:37,505
It can be expensive, so it's worth checking. But if we
enable it, we can run workloads and check these numbers.

151
00:10:38,302 --> 00:10:43,866
Then there is that approach; or we can also check
throughput and especially latency from the application side.

152
00:10:43,866 --> 00:10:52,589
If we use the application called pgbench, which, I don't know why, Ubuntu
ships in the server package, not the client package. But if

153
00:10:52,589 --> 00:10:56,939
we use that application, it reports all latencies; or sysbench, anything.

154
00:10:57,751 --> 00:10:58,801
Michael: Yeah, what I meant.

155
00:10:58,861 --> 00:11:06,331
I think when I said struggled, what I meant
was that the variance of each run is larger than any overhead.

156
00:11:06,391 --> 00:11:10,861
So you're saying, you can't say the overhead is zero, right?

157
00:11:10,861 --> 00:11:14,581
Because it's definitely doing some work, but it's not necessarily measurable.

158
00:11:14,644 --> 00:11:16,024
Nikolay: It should be a few percent,

159
00:11:16,310 --> 00:11:20,013
Michael: I don't think it is. I think it
might even be lower than that for some OLTP workloads.

160
00:11:20,043 --> 00:11:24,374
Nikolay: Maybe. Well, benchmarking is an area

161
00:11:24,434 --> 00:11:26,534
We probably need to discuss one day

162
00:11:26,594 --> 00:11:32,384
separately. But my general advice is: by default, many benchmark tools,

163
00:11:32,939 --> 00:11:34,799
and pgbench is no exception here,

164
00:11:35,189 --> 00:11:39,078
they don't do load testing, regular load testing.

165
00:11:39,078 --> 00:11:44,112
They do an edge case of load testing called stress
testing: like, let's load it to a hundred percent.

166
00:11:44,632 --> 00:11:45,262
By default.

167
00:11:45,532 --> 00:11:54,974
That's why I usually suggest finding the spot in terms of TPS,
you can control TPS in pgbench, finding a spot loading

168
00:11:54,991 --> 00:11:59,101
your system, in terms of, for example, CPU or disk, to like 25%,

169
00:11:59,446 --> 00:12:04,156
between 25 and 50%, emulating normal days of your production.

170
00:12:04,186 --> 00:12:09,847
Because if it's above that, you should already be thinking
about upgrading or serious optimization.

171
00:12:09,912 --> 00:12:14,082
And in this case, you should check latencies and compare variance.

172
00:12:14,372 --> 00:12:16,622
And in this case, variance should be the same.

173
00:12:16,622 --> 00:12:18,722
I don't know why they would be different.

174
00:12:18,782 --> 00:12:19,712
Something is wrong.

175
00:12:19,712 --> 00:12:23,582
I would like to see the concrete case.

176
00:12:24,642 --> 00:12:24,972
Michael: Yeah.

177
00:12:25,062 --> 00:12:31,642
Well, so back onto... now that we have pg_stat_statements,
there's a few things to mention.

178
00:12:31,702 --> 00:12:31,882
Yeah.

179
00:12:31,882 --> 00:12:37,012
It's not on by default, unless you're on a cloud
provider; they often do have it on by default.

180
00:12:37,103 --> 00:12:38,628
So people do need to load it.

181
00:12:38,648 --> 00:12:43,360
If they don't already. I come across quite a lot of
customers who don't; if they're self-managing,

182
00:12:43,540 --> 00:12:45,580
they're not even aware it's a thing.

183
00:12:45,610 --> 00:12:46,030
So.

184
00:12:46,980 --> 00:12:51,570
There probably are a bunch of people out there who don't
have it on, even though the experienced people

185
00:12:51,575 --> 00:12:51,935
Nikolay: have it.

186
00:12:52,350 --> 00:12:53,250
Michael: Yeah, I think so.

187
00:12:53,274 --> 00:12:57,548
But then again, there are a few default
settings there that I think can be improved.

188
00:12:57,548 --> 00:13:01,868
There's like, is it 5,000 statements by default, or 5,000 unique...

189
00:13:01,942 --> 00:13:08,572
Nikolay: Yeah, you are talking about the pg_stat_statements.max
parameter, which, as I remember, is 5,000 by default. Usually it's enough.

190
00:13:09,012 --> 00:13:15,109
But in some cases, it's not enough, when your queries are very
volatile in terms of structure, not in terms of parameters, because

191
00:13:15,109 --> 00:13:18,969
pg_stat_statements removes parameters during query normalization.

192
00:13:19,209 --> 00:13:26,480
But in terms of structure, if you just swap two columns
in your query, it's already considered two different cases,

193
00:13:26,480 --> 00:13:30,154
two different entries in the pg_stat_statements view. And pg_stat_statements.

194
00:13:30,154 --> 00:13:31,174
Max is 5,000.

195
00:13:31,174 --> 00:13:32,674
So you can increase it, but,

196
00:13:32,884 --> 00:13:33,634
as I remember, only up to

197
00:13:34,139 --> 00:13:35,369
10,000 maximum.

198
00:13:35,676 --> 00:13:42,770
I don't remember exactly. But why do we care? Because
pg_stat_statements has metrics which just grow over time,

199
00:13:43,016 --> 00:13:47,251
incremental metrics like total time: total exec time, total

200
00:13:47,470 --> 00:13:50,230
plan time, because they were split in Postgres 13.
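
As a sketch of reading these metrics on Postgres 13 or later, where timing is split into execution and planning parts:

```sql
SELECT queryid,
       calls,
       total_exec_time + total_plan_time AS total_ms,
       mean_exec_time,
       left(query, 60) AS query_group
FROM pg_stat_statements
ORDER BY total_ms DESC
LIMIT 20;
```

Note that total_plan_time is only populated when pg_stat_statements.track_planning is enabled (it is off by default).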

201
00:13:50,230 --> 00:13:54,292
As I remember. In this case, we need two snapshots to analyze.

202
00:13:54,922 --> 00:14:01,227
We need those two snapshots, two numbers, and then the
difference between the two numbers is what we have during our

203
00:14:01,679 --> 00:14:04,497
period of observation. Snapshotting is absolutely needed.
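
The two-snapshot arithmetic being described can be sketched in a few lines (the data shapes are hypothetical: each snapshot maps a queryid to a cumulative total_exec_time in milliseconds):

```python
# Counters in pg_stat_statements only grow, so the work done in a window
# is the difference between two snapshots of the cumulative counters.
def diff_snapshots(snap1, snap2):
    """Return per-queryid time spent between the snapshots, largest first."""
    deltas = {}
    for queryid, total in snap2.items():
        before = snap1.get(queryid, 0.0)   # absent before: entry is new
        delta = total - before
        if delta < 0:
            # counter went backwards: stats were reset (or the entry was
            # evicted and re-added), so count the whole new value
            delta = total
        deltas[queryid] = delta
    return dict(sorted(deltas.items(), key=lambda kv: kv[1], reverse=True))
```

Real monitoring systems do essentially this, per metric, for every stored snapshot pair.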

204
00:14:04,497 --> 00:14:07,737
Otherwise it's... well, there is the manual approach:

205
00:14:07,947 --> 00:14:10,407
Let's reset statistics often.

206
00:14:10,737 --> 00:14:17,225
And then use only the final snapshot, thinking that we
started everything from zero. It has downsides; for

207
00:14:17,225 --> 00:14:20,550
example, filling it with new entries has overhead as well.

208
00:14:21,060 --> 00:14:25,137
If you check the source code: when we add an entry, there is a lock.

209
00:14:25,843 --> 00:14:30,917
And it might be noticeable, lasting dozens
or sometimes like a hundred milliseconds.

210
00:14:30,917 --> 00:14:33,407
It can be noticeable for the whole workload.

211
00:14:33,647 --> 00:14:39,790
So it's better not to reset very often, in my experience.
And anyway, it's not practical to reset them often;

212
00:14:39,790 --> 00:14:41,290
You lose information as well.

213
00:14:41,760 --> 00:14:48,359
Then the question is, like, how often we have
evictions of rare queries, and what is the

214
00:14:48,829 --> 00:14:53,233
drift of our query set. And you can see it:

215
00:14:53,233 --> 00:14:58,270
you can compare the difference between two
snapshots, with, for example, one hour between them.

216
00:14:59,145 --> 00:15:03,765
And you can see which queries are new
and which queries disappeared from the list.

217
00:15:04,125 --> 00:15:05,488
And this difference

218
00:15:06,085 --> 00:15:15,729
indicates the eviction speed. And I've noticed that some applications,
for example Java applications, use SET application_name to something

219
00:15:16,514 --> 00:15:26,477
very unique, including maybe some process ID or something. And pg_stat_statements
cannot normalize so-called utility commands, and SET is a utility command.

220
00:15:26,477 --> 00:15:29,957
So these queries are all considered separate,

221
00:15:29,987 --> 00:15:37,338
all of them, all SETs. In this case, you might want to
turn off pg_stat_statements.track_utility, which is on by default,

222
00:15:37,974 --> 00:15:38,274
right.

223
00:15:38,274 --> 00:15:45,807
And in this case you don't track them, because I haven't had
cases where we do need to analyze the speed of SET commands.

224
00:15:45,837 --> 00:15:49,260
Well, maybe it might happen, but in my experience, not yet.

225
00:15:49,740 --> 00:15:51,390
So it's better to just turn it off.

226
00:15:51,390 --> 00:15:52,410
It's on by default.
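
The two settings just mentioned (the values are illustrative; note that pg_stat_statements.max requires a restart to change):

```
# postgresql.conf
pg_stat_statements.max = 10000           # default 5000 normalized entries
pg_stat_statements.track_utility = off   # default on; stops SET and friends
                                         # from flooding the view
```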

227
00:15:53,006 --> 00:15:54,102
Michael: That makes loads of sense.

228
00:15:54,114 --> 00:15:58,952
I think, talking about the snapshot comparisons,
that must be how the cloud providers all do it.

229
00:15:58,952 --> 00:16:06,720
And a lot of the dashboards that you'll see in RDS,
Google Cloud SQL; there's a bunch of other ones as well.

230
00:16:06,720 --> 00:16:11,223
Or even open source ones. Talking of
tools that have had releases recently,

231
00:16:11,223 --> 00:16:13,743
PgHero came out with version three, I think,

232
00:16:13,928 --> 00:16:14,258
Nikolay: Wow.

233
00:16:14,258 --> 00:16:15,128
And I

234
00:16:15,273 --> 00:16:15,693
Michael: recent.

235
00:16:15,788 --> 00:16:16,628
Nikolay: Good, interesting.

236
00:16:17,258 --> 00:16:19,658
It's a very, very lightweight and good tool,

237
00:16:19,658 --> 00:16:21,248
like for small teams.

238
00:16:21,348 --> 00:16:22,748
I, I enjoy

239
00:16:23,493 --> 00:16:23,823
Michael: Yeah.

240
00:16:24,303 --> 00:16:26,923
And it's based on pg_stat_statements again.

241
00:16:26,931 --> 00:16:34,347
So yeah, it's the basis for lots of these. But back
to the cloud providers: the way they get historic

242
00:16:34,347 --> 00:16:40,999
data is by taking these snapshots, rolling them, and comparing
them to each other, not by rolling it forever on

243
00:16:40,999 --> 00:16:43,602
the same one, you know; they won't want to reset once a year,

244
00:16:43,607 --> 00:16:44,682
for example. If that makes sense.

245
00:16:45,627 --> 00:16:46,047
Nikolay: Right.

246
00:16:46,257 --> 00:16:50,087
Well, so why do we care about this eviction speed

247
00:16:50,237 --> 00:16:54,695
and the transition to a new list?
Because we want to analyze the whole workload.

248
00:16:55,178 --> 00:16:55,508
Right.

249
00:16:55,578 --> 00:17:04,269
And in this case, of course, if we disable track_utility, we won't see
this utility part, but if we consider it small, we can do it,

250
00:17:04,689 --> 00:17:09,189
and we will have more real queries in our pg_stat_statements.

251
00:17:10,029 --> 00:17:12,249
And we will have 5,000 by default.

252
00:17:12,249 --> 00:17:13,419
It's quite a big number.

253
00:17:13,829 --> 00:17:15,389
But the second place

254
00:17:16,017 --> 00:17:18,927
where a cutoff can happen is the monitoring system, or cloud.

255
00:17:19,077 --> 00:17:19,557
I don't know.

256
00:17:19,557 --> 00:17:22,737
Like, they usually store like 500 or a thousand.

257
00:17:22,827 --> 00:17:28,287
They don't take everything because it's expensive
to store everything in monitoring system.

258
00:17:28,292 --> 00:17:38,398
If you want snapshots, samples of pg_stat_statements, every
minute for example, imagine how many records you need to store,

259
00:17:38,458 --> 00:17:43,588
if every minute you store 5,000 entries from pg_stat_statements.

260
00:17:43,858 --> 00:17:48,654
So usually they also cut off, making
decisions about what to remove and what to leave.

261
00:17:48,964 --> 00:17:52,714
We usually think about which metrics are the most important.

262
00:17:53,068 --> 00:17:53,398
Michael: yeah.

263
00:17:53,638 --> 00:18:00,628
So actually, I think you already mentioned it
briefly, but total time is my favorite.

264
00:18:00,688 --> 00:18:01,498
I know, I know we've

265
00:18:01,648 --> 00:18:08,128
Nikolay: Mine as well, but I mine as well, but I saw
people which prefer not total time, actually in my team.

266
00:18:08,488 --> 00:18:17,377
there are such people, who prefer, for example, average time:
mean exec time, or mean exec time plus mean plan time, because we

267
00:18:17,902 --> 00:18:24,712
probably want to combine them, because the execution
includes both planning and execution...

268
00:18:24,712 --> 00:18:26,842
I mean, okay, tautology, sorry.

269
00:18:28,702 --> 00:18:32,722
Michael: We call it total
time, by summing the two, but there's no such...

270
00:18:33,022 --> 00:18:35,362
Nikolay: Total is already used in a different context.

271
00:18:36,232 --> 00:18:36,682
Well,

272
00:18:36,965 --> 00:18:42,755
Michael: But the problem, and this goes back to our
conversation about logs versus pg_stat_statements, the

273
00:18:42,755 --> 00:18:46,145
reason, I guess the reason for you as well, but I'd be interested:

274
00:18:46,535 --> 00:18:50,885
The reason I prefer total time is you
could easily have your biggest performance

275
00:18:51,155 --> 00:18:51,365
Nikolay: Yeah.

276
00:18:51,485 --> 00:18:54,875
So, sorry, you understand why total is used twice here, right?

277
00:18:55,055 --> 00:19:02,675
Because total is the sum of all timings.
I mean, there is total exec time and total...

278
00:19:02,840 --> 00:19:03,620
Michael: And total planning time.

279
00:19:03,770 --> 00:19:04,100
Yep.

280
00:19:04,595 --> 00:19:05,345
Nikolay: Total, total.

281
00:19:05,615 --> 00:19:06,575
No, it's not good.

282
00:19:06,575 --> 00:19:09,728
Like "total whole"... how to name it?

283
00:19:09,728 --> 00:19:13,541
Michael: Yeah, well, they don't,
but we can sum those at the query level.

284
00:19:13,541 --> 00:19:13,721
Right.

285
00:19:13,721 --> 00:19:16,601
We can sum the two of them if that's what we care about.

286
00:19:16,662 --> 00:19:20,599
But yeah, sorry, what I guess I was trying to say was:

287
00:19:21,232 --> 00:19:27,872
our biggest performance problem could easily be a
relatively fast query which has a really low average, mean time.

288
00:19:27,914 --> 00:19:30,466
Sorry, average as in mean time.

289
00:19:30,556 --> 00:19:34,506
So it could be on average 20 milliseconds, but it's getting run

290
00:19:35,136 --> 00:19:38,136
so many times. And maybe it's still, like, not optimal.

291
00:19:38,166 --> 00:19:40,716
Maybe it could be running in sub one millisecond.

292
00:19:41,136 --> 00:19:43,956
And that could be our biggest performance opportunity.

293
00:19:44,526 --> 00:19:48,498
And by looking at total time, total execution time plus total planning time,

294
00:19:48,798 --> 00:19:52,338
we could see that it could rise to the top of our query list.

295
00:19:52,368 --> 00:19:53,748
It could be line number one.

296
00:19:53,928 --> 00:19:59,598
Whereas if we're looking at average time, there could easily
be so many queries that only run a couple of times and

297
00:19:59,628 --> 00:20:03,468
take a few seconds each; they'd be long above it.
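
This point can be illustrated with made-up numbers: a query that is slow per call can matter far less than a fast query that runs constantly.

```python
# Ranking by mean time vs ranking by total time (illustrative numbers only).
queries = [
    {"name": "nightly report",  "calls": 10,        "mean_ms": 3000.0},
    {"name": "hot-path select", "calls": 5_000_000, "mean_ms": 20.0},
]
for q in queries:
    q["total_ms"] = q["calls"] * q["mean_ms"]

top_by_mean = max(queries, key=lambda q: q["mean_ms"])["name"]
top_by_total = max(queries, key=lambda q: q["total_ms"])["name"]
# The nightly report wins by mean time, but the fast, frequent query
# dominates total time and is the bigger optimization target.
```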

298
00:20:03,948 --> 00:20:11,633
Nikolay: This is an interesting topic, which metric is more important. By the
way, the lack of words here indicates that the topic is quite complex, right?

299
00:20:11,933 --> 00:20:17,183
I mean, English doesn't have enough words to
provide... I'm joking, of course. But the

300
00:20:17,423 --> 00:20:25,515
interesting thing: total what time, if you combine both exec and plan? Some
word should exist, and we probably already use some word somewhere.

301
00:20:25,755 --> 00:20:29,115
So, total versus average, or mean time?

302
00:20:29,132 --> 00:20:31,790
I came to a conclusion like this.

303
00:20:32,099 --> 00:20:42,801
If our primary goal is resource optimization, if we want to prepare for
future growth, to pay less for cloud resources or hardware,

304
00:20:43,251 --> 00:20:50,431
total time is our friend. Because, this is...
well, of course it includes some wait time as well.

305
00:20:50,431 --> 00:20:56,251
For example, if we have a lot of contention,
some sessions are blocked by other sessions.

306
00:20:56,581 --> 00:21:02,961
It also contributes to total time, but to resource consumption
probably not that much, because waiting is usually quite cheap.

307
00:21:02,961 --> 00:21:03,231
Right.

308
00:21:03,651 --> 00:21:08,354
But if we forget about this a little bit: total time, both plan and exec time.

309
00:21:08,354 --> 00:21:12,884
If we combine them, this is the time spent to process our workload.
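The ordering Nikolay describes can be sketched in a few lines. This is a minimal, hypothetical example ranking pg_stat_statements-style rows in Python; total_plan_time and total_exec_time (in milliseconds) are the real column names from Postgres 13 onward, but the sample rows and numbers are made up for illustration:

```python
# Sketch: rank query groups by combined planning + execution time.
# Column names follow pg_stat_statements (Postgres 13+); sample data is made up.

def top_by_total_time(rows, n=10):
    """Return the n query groups with the highest plan + exec time."""
    return sorted(rows,
                  key=lambda r: r["total_plan_time"] + r["total_exec_time"],
                  reverse=True)[:n]

rows = [
    # A very frequent, individually fast query...
    {"queryid": 1, "calls": 1_000_000, "total_plan_time": 500.0,
     "total_exec_time": 90_000.0},
    # ...and a rare, individually slow one.
    {"queryid": 2, "calls": 3, "total_plan_time": 10.0,
     "total_exec_time": 9_000.0},
]

# The frequent query wins by total time, even though its mean time is tiny.
print(top_by_total_time(rows)[0]["queryid"])  # -> 1
```

Ordering by mean time instead would put queryid 2 on top, which is exactly the trade-off discussed here.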

310
00:21:13,184 --> 00:21:18,292
If we know that we analyzed everything,
this is how much work Postgres did. We can even

311
00:21:18,468 --> 00:21:22,277
take total time and divide it by the observation duration.

312
00:21:23,027 --> 00:21:26,417
And we will understand how much time we spend every second.

313
00:21:26,447 --> 00:21:30,077
I call it a metric measured in seconds per second.

314
00:21:30,527 --> 00:21:31,487
My favorite metric.

315
00:21:31,707 --> 00:21:35,387
If, for example, it's one second per second, it means that

316
00:21:35,910 --> 00:21:38,282
roughly one

317
00:21:39,142 --> 00:21:39,632
Michael: Cool.

318
00:21:39,782 --> 00:21:41,492
Nikolay: core could process this.

319
00:21:41,492 --> 00:21:48,989
It's very rough; we forget about the context here, of
course, and so on, but it gives someone a feeling of our workload.

320
00:21:49,199 --> 00:21:51,089
If we have 10 seconds per second

321
00:21:51,689 --> 00:21:52,979
needed to process,

322
00:21:52,979 --> 00:21:54,659
it's quite a good workload already.

323
00:21:54,659 --> 00:21:56,429
We probably need some beefy server here.
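The "seconds per second" metric can be computed from two pg_stat_statements snapshots. A minimal sketch, assuming we have already summed total_plan_time + total_exec_time (milliseconds, as pg_stat_statements reports them) across all entries at each snapshot; the numbers are invented:

```python
def seconds_per_second(total_ms_t0, total_ms_t1, observation_sec):
    """DB time accumulated per wall-clock second between two snapshots.

    total_ms_t0/total_ms_t1: sums of total_plan_time + total_exec_time
    over all pg_stat_statements entries, in milliseconds.
    """
    return (total_ms_t1 - total_ms_t0) / 1000.0 / observation_sec

# Over a 60-second window the server accumulated 600 s of query time:
load = seconds_per_second(0.0, 600_000.0, 60)
print(load)  # -> 10.0, i.e. roughly ten cores' worth of work
```

A value of 1.0 means roughly one core kept busy; 10.0 is the "we probably need a beefy server" territory mentioned above.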

324
00:21:56,579 --> 00:22:07,060
As for average time, these numbers are most useful when we
have a goal like: let's optimize for the best user experience.

325
00:22:07,860 --> 00:22:11,040
Michael: Yeah, so I guess that's our 50th percentile.

326
00:22:11,045 --> 00:22:12,984
Isn't it? Well, no, it's not, it's not.

327
00:22:13,044 --> 00:22:19,224
So even then, I don't even like the
mean for those, because I'd much rather look at a p95 or

328
00:22:19,224 --> 00:22:22,944
something, and look at it client-side, not database-side.

329
00:22:23,559 --> 00:22:32,183
Nikolay: Well, it depends. As you
said, sometimes we have very rarely executed queries,

330
00:22:32,243 --> 00:22:39,353
but quite important ones. For example, it can be some kind
of analytics, not analytics, but some aggregation and so on.

331
00:22:39,383 --> 00:22:42,113
And the average is terrible, and we do want to optimize it,

332
00:22:43,018 --> 00:22:50,458
Because we know that users who look at those numbers
who use these queries, these users are important.

333
00:22:50,458 --> 00:22:58,108
For example, our internal team analyzing
something, or, I dunno, finance people or something.

334
00:22:58,168 --> 00:23:05,723
Some kind of more analytical workload, not
necessarily analytical, but I hope you understand.

335
00:23:05,803 --> 00:23:06,043
Right.

336
00:23:06,253 --> 00:23:06,603
So.

337
00:23:06,873 --> 00:23:07,383
Michael: I understand.

338
00:23:07,383 --> 00:23:09,273
So, to give you an example:

339
00:23:09,315 --> 00:23:13,545
When I was at a payments company, we had
a batch job, daily batch payments.

340
00:23:13,550 --> 00:23:15,669
We had a deadline to submit a file.

341
00:23:15,669 --> 00:23:17,876
It was like a 10:00 PM UK time deadline.

342
00:23:18,206 --> 00:23:20,846
And the job literally had to finish before then.

343
00:23:20,966 --> 00:23:27,170
And as this job got longer and longer, it got closer
to that deadline, and then it forced some work.

344
00:23:27,170 --> 00:23:32,059
So maybe that wouldn't have shown up if
we'd looked at duration or total time.

345
00:23:32,059 --> 00:23:39,075
But yeah, I did. But I also think those kinds of issues often
crop up without you having to do this macro

346
00:23:39,075 --> 00:23:42,585
analysis work, because somebody's telling you about them.

347
00:23:42,905 --> 00:23:43,235
Nikolay: Yeah.

348
00:23:43,235 --> 00:23:51,635
So if we decided to order by mean time, sometimes
at the top we see something and we say, oh, it's fine

349
00:23:51,635 --> 00:23:55,505
that it executes for a minute, because it's some crunch job.

350
00:23:55,685 --> 00:24:02,735
And nobody cares if it's just a SELECT, for
example, with no locks involved, and it lasts one minute.

351
00:24:02,765 --> 00:24:03,695
It's not a big deal.

352
00:24:03,785 --> 00:24:09,653
So we probably want to exclude some queries
from the top when we order by mean time.

353
00:24:10,009 --> 00:24:11,179
For total time, it's not so.

354
00:24:11,179 --> 00:24:18,508
I'm usually really interested in each entry from the top; that's
why I also prefer total time. But I see people use mean time

355
00:24:18,508 --> 00:24:21,641
successfully, caring mostly about users, not about servers.

356
00:24:21,701 --> 00:24:26,534
So roughly, total time is for infrastructure teams, to optimize for servers,

357
00:24:26,954 --> 00:24:33,014
while mean time is probably interesting to
application development teams, optimizing for humans.

358
00:24:33,104 --> 00:24:33,344
Right?

359
00:24:33,404 --> 00:24:36,224
So, very roughly. And

360
00:24:36,948 --> 00:24:39,258
there is also calls, an important metric, right?

361
00:24:39,410 --> 00:24:46,462
Why do we discuss which metric to choose? Because when you
build good monitoring, you need to choose several metrics and build

362
00:24:46,462 --> 00:24:54,795
a dashboard consisting of multiple charts: top-N charts, top-N
by total time, top-N by mean time, top-N by calls, for example.

363
00:24:54,825 --> 00:24:55,305
Why calls?

364
00:24:55,902 --> 00:24:58,092
Probably, for the database itself,

365
00:24:58,092 --> 00:24:59,337
It's not that important.

366
00:24:59,697 --> 00:25:07,036
The most frequent queries might not produce the
biggest load, of course. If there are a lot of very, very fast

367
00:25:07,036 --> 00:25:10,513
queries, I would check context switches, for example, and so on.

368
00:25:10,993 --> 00:25:11,263
Right.

369
00:25:11,263 --> 00:25:15,053
And think about how busy the CPUs are in this area.

370
00:25:15,293 --> 00:25:24,743
But I've noticed that sometimes we want to reduce the frequency of some of the
most frequent queries, because the overhead on the application side is terrible.

371
00:25:24,793 --> 00:25:32,219
This is an unusual approach, because people optimizing a
workload or database sometimes think only about the database. But

372
00:25:32,219 --> 00:25:40,305
I had cases when, by taking, for example, the
top three ordered by calls and just reducing their frequency,

373
00:25:40,305 --> 00:25:44,475
we could throw out 50% of our application nodes.

374
00:25:44,794 --> 00:25:47,354
Can you imagine the benefit of it?

375
00:25:47,694 --> 00:25:48,654
Michael: The cost saving.

376
00:25:48,654 --> 00:25:48,984
Right.

377
00:25:48,984 --> 00:25:56,189
But, just to give an example from the application
side, I guess that would be one way of spotting potential N plus

378
00:25:56,189 --> 00:26:02,399
one issues, where if it's the same query getting executed over and
over again, that's the kind of thing it could point to.

379
00:26:02,642 --> 00:26:03,002
Nikolay: Right.

380
00:26:03,032 --> 00:26:03,422
Right.

381
00:26:03,496 --> 00:26:04,456
So it's interesting.

382
00:26:04,617 --> 00:26:11,763
I think I don't understand all the aspects here, and I think
we lack good documentation on how to use pg_stat_statements.

383
00:26:11,763 --> 00:26:16,183
So many angles, so many derivatives as well. But

384
00:26:17,003 --> 00:26:19,933
I would like to finalize the discussion of metrics.

385
00:26:19,933 --> 00:26:25,847
I also wanted to mention the I/O metrics:
shared buffer hits and shared blocks.

386
00:26:25,907 --> 00:26:29,003
They're called shared blocks read and hit, right?

387
00:26:29,063 --> 00:26:33,903
Let me check: shared blocks hit, shared blocks read, also dirtied and written.

388
00:26:34,093 --> 00:26:35,603
But if we consider only

389
00:26:35,683 --> 00:26:35,968
Michael: temp.

390
00:26:37,003 --> 00:26:42,553
Nikolay: Well, local and temp additionally. But
if we discuss everything, we'll need too much time.

391
00:26:43,783 --> 00:26:44,083
Right.

392
00:26:44,593 --> 00:26:51,878
So many aspects, but I wanted to mention only a few, like hit
and read, shared_blks_hit and shared_blks_read.

393
00:26:51,968 --> 00:27:01,433
The only interesting thing here is that sometimes a monitoring
system thinks that read is enough, because it's the slowest operation.

394
00:27:01,433 --> 00:27:07,453
Well, as we already discussed a few
times, Postgres doesn't see the actual disk.

395
00:27:08,278 --> 00:27:11,608
Michael: Like the operating system cache versus the

396
00:27:11,673 --> 00:27:12,113
Nikolay: Yeah.

397
00:27:12,113 --> 00:27:12,233
Yeah.

398
00:27:12,233 --> 00:27:12,353
Yeah.

399
00:27:12,358 --> 00:27:14,510
So this read is from the page cache.

400
00:27:14,750 --> 00:27:16,850
Maybe it's a disk read, but maybe not.

401
00:27:16,850 --> 00:27:26,952
We don't know. But usually a monitoring system says, okay, ordering by shared
blocks read is the most interesting. But I had cases, at least two times, when I

402
00:27:27,572 --> 00:27:36,065
really needed just pg_stat_statements' shared blocks hit, finding the top
queries, because working with the buffer pool, with

403
00:27:36,065 --> 00:27:39,828
Postgres shared buffers, was so intensive for some query group.

404
00:27:39,828 --> 00:27:40,428
So, so.
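The ad hoc query being described might look something like this sketch. The column names (queryid, calls, shared_blks_hit, shared_blks_read) are real pg_stat_statements columns; the exact shape and the LIMIT are illustrative:

```python
# Sketch: rank query groups by buffer-pool hits rather than reads,
# for when the monitoring system only orders by shared_blks_read.
TOP_BY_SHARED_HITS = """
SELECT queryid,
       calls,
       shared_blks_hit,
       shared_blks_read
FROM pg_stat_statements
ORDER BY shared_blks_hit DESC
LIMIT 10;
"""
print(TOP_BY_SHARED_HITS.strip())
```

Run ad hoc (e.g. in psql), this finds the query groups hammering shared buffers even when nothing hits the disk.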

405
00:27:40,848 --> 00:27:47,838
If you don't have it in monitoring, you need to start sampling
pg_stat_statements yourself, writing some scripts on the fly.

406
00:27:47,838 --> 00:27:48,888
It's not fun at all.

407
00:27:49,738 --> 00:27:59,279
So I think most experienced DBAs have something
in their toolset. But, for example, pgcenter can sample it.

408
00:27:59,766 --> 00:28:04,950
You can use it as an ad hoc tool if you don't have it in
monitoring and you have a problem right now, for example.

409
00:28:05,130 --> 00:28:08,363
But I also suspect we'd like top-N by hit number.

410
00:28:08,368 --> 00:28:19,730
So, these angles. And the new WAL metrics: let's find
the queries which generate the most WAL data, and order by that.

411
00:28:19,778 --> 00:28:20,408
How are they called?

412
00:28:20,408 --> 00:28:21,248
Let's check.

413
00:28:21,248 --> 00:28:22,388
I have the list here.

414
00:28:22,448 --> 00:28:23,108
It's called

415
00:28:23,157 --> 00:28:23,847
Michael: full page.

416
00:28:24,266 --> 00:28:24,476
Nikolay: yeah.

417
00:28:24,476 --> 00:28:24,686
Yeah.

418
00:28:24,686 --> 00:28:29,816
wal_records, wal_fpi, full page images, and wal_bytes.

419
00:28:30,361 --> 00:28:30,761
Michael: Yeah.

420
00:28:31,256 --> 00:28:33,476
Nikolay: Three metrics added in Postgres 13.
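The three columns just named (wal_records, wal_fpi, wal_bytes) are real pg_stat_statements columns in Postgres 13+. A sketch of "order by WAL generation"; the LIMIT and selected columns are illustrative:

```python
# Sketch: find the query groups generating the most WAL,
# using the WAL columns added to pg_stat_statements in Postgres 13.
TOP_BY_WAL = """
SELECT queryid,
       calls,
       wal_records,
       wal_fpi,
       wal_bytes
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 10;
"""
print(TOP_BY_WAL.strip())
```

Reducing the top entries here pays off on both the backup subsystem and replication, as discussed below.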

421
00:28:33,739 --> 00:28:36,989
I didn't see them yet in any monitoring.

422
00:28:37,379 --> 00:28:42,479
I hope... oh, maybe our pgwatch2 Postgres.ai edition.

423
00:28:42,539 --> 00:28:43,439
It has it already.

424
00:28:43,439 --> 00:28:43,709
Right.

425
00:28:44,003 --> 00:28:48,073
Michael: Yeah, I remember this was added to EXPLAIN in version 13.

426
00:28:48,078 --> 00:28:50,263
Was it added to pg_stat_statements at the same

427
00:28:50,553 --> 00:28:51,568
Nikolay: At the same time.

428
00:28:51,568 --> 00:28:51,808
Yes.

429
00:28:51,808 --> 00:28:52,138
Yes.

430
00:28:52,828 --> 00:28:54,568
Mm-hmm and it's so good.

431
00:28:54,628 --> 00:28:55,138
It's so good.

432
00:28:55,138 --> 00:28:58,898
Like, order by it when we want to reduce WAL generation.

433
00:28:59,182 --> 00:29:07,045
Definitely, because reducing it will have a very
positive effect, both on our backup subsystem and replication,

434
00:29:07,602 --> 00:29:09,282
Both logical and physical.

435
00:29:09,282 --> 00:29:15,965
So we do want to produce fewer WAL records, or fewer WAL bytes, or full page

436
00:29:16,355 --> 00:29:17,015
writes as well.

437
00:29:17,405 --> 00:29:17,855
Yeah, yeah.

438
00:29:17,855 --> 00:29:18,095
Yeah.

439
00:29:18,485 --> 00:29:19,895
So, I've never used it yet.

440
00:29:19,895 --> 00:29:21,995
I hope I will use it someday soon.

441
00:29:22,145 --> 00:29:25,642
I mean, I know that it's there, but I've never used it myself yet.

442
00:29:25,867 --> 00:29:26,287
Michael: Yeah.

443
00:29:26,347 --> 00:29:31,087
Talking about this has given me an idea as
well. We talked a while back about buffers.

444
00:29:31,117 --> 00:29:36,847
And one of the things we do on a per query basis
is look at the total sum of all of the buffers.

445
00:29:36,852 --> 00:29:43,257
And I know that doesn't make tons of sense, summing
dirtied buffers plus temp buffers plus local plus shared.

446
00:29:43,507 --> 00:29:44,167
Nikolay: we can call a

447
00:29:44,467 --> 00:29:47,047
Michael: But yeah, exactly.

448
00:29:47,317 --> 00:29:51,817
Or some kind of measure of work done,
actually summing all of those,

449
00:29:51,817 --> 00:29:56,137
and then ordering by that and looking at the top 10 queries by total I/O.
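Michael's idea above can be sketched as one rough "total buffers touched" number per query group, summing the pg_stat_statements buffer counters. The column names listed are real pg_stat_statements columns; mixing hits, reads, dirtied, temp and local buffers is deliberately crude, as acknowledged, and the sample row is made up:

```python
# Sketch: a single "work done" axis, summing all buffer counters
# from a pg_stat_statements row (column names are the real ones).
BUFFER_COLUMNS = [
    "shared_blks_hit", "shared_blks_read",
    "shared_blks_dirtied", "shared_blks_written",
    "local_blks_hit", "local_blks_read",
    "local_blks_dirtied", "local_blks_written",
    "temp_blks_read", "temp_blks_written",
]

def total_buffers(row):
    """Sum every buffer counter present in the row; missing ones count as 0."""
    return sum(row.get(col, 0) for col in BUFFER_COLUMNS)

row = {"shared_blks_hit": 900, "shared_blks_read": 100, "temp_blks_written": 50}
print(total_buffers(row))  # -> 1050
```

Sorting query groups by this sum gives the "top 10 by total I/O" view, at the cost of treating a cheap cache hit the same as an expensive temp-file write.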

450
00:29:56,407 --> 00:29:57,397
Nikolay: It's a smart idea.

451
00:29:57,457 --> 00:29:59,347
Each I/O has some cost.

452
00:30:00,157 --> 00:30:04,475
And if we find the queries which involve the most I/O operations, of course,

453
00:30:04,748 --> 00:30:07,165
It's a good angle for our analysis.

454
00:30:08,005 --> 00:30:08,215
Yeah.

455
00:30:08,725 --> 00:30:09,505
What, what else?

456
00:30:09,715 --> 00:30:18,103
We mentioned that we deal with the page cache when we look at I/O,
but sometimes we do want to order by real physical disk I/O.

457
00:30:18,523 --> 00:30:18,853
Right.

458
00:30:19,123 --> 00:30:20,503
And there is such opportunity.

459
00:30:20,688 --> 00:30:30,035
For those who manage Postgres themselves, it's called pg_stat_kcache, an additional
extension to pg_stat_statements, an extension to an extension, I would say.

460
00:30:30,117 --> 00:30:37,531
And it provides you very good things like disk reads and
writes, real physical disk reads and writes, and also CPU.

461
00:30:37,536 --> 00:30:45,588
Sometimes you want to find queries that generate the most load
on your CPU, and it even distinguishes system and user CPU.

462
00:30:46,065 --> 00:30:47,655
It's good.

463
00:30:47,655 --> 00:30:47,835
Yeah.

464
00:30:47,835 --> 00:30:48,105
Yeah.

465
00:30:48,405 --> 00:30:50,415
And also context switches.

466
00:30:50,888 --> 00:30:52,958
So it's a very useful extension.

467
00:30:52,958 --> 00:31:01,385
If you care about resource consumption and you want to prepare for growth, and
you want to do some capacity planning, and before that you want to optimize.

468
00:31:01,771 --> 00:31:06,601
Michael: And I think you've said before, but did
you say it's not available on most managed services?

469
00:31:07,151 --> 00:31:09,861
Nikolay: No, I only know Yandex managed services.

470
00:31:09,902 --> 00:31:15,323
They install it by default, but
I'm not aware of any others, so yeah.

471
00:31:16,148 --> 00:31:18,638
Also, there is another way to analyze workload.

472
00:31:18,638 --> 00:31:21,760
We didn't cover it today at all: wait event analysis.

473
00:31:21,760 --> 00:31:27,368
This is what RDS, for example, provides us as a
starting point, actually, for workload analysis.

474
00:31:27,421 --> 00:31:31,119
I think it came from the Oracle world: active session history analysis.

475
00:31:31,664 --> 00:31:38,097
So someday let's discuss it and
compare it with the traditional analysis we discussed.

476
00:31:39,072 --> 00:31:39,462
Michael: Yes.

477
00:31:39,462 --> 00:31:45,972
And for anybody that is aware of ASH and wants it for
Postgres, there is a tool, I've heard it called ASH.

478
00:31:46,062 --> 00:31:46,362
Yeah.

479
00:31:46,442 --> 00:31:47,762
PASH as well.

480
00:31:47,822 --> 00:31:48,002
Yeah,

481
00:31:48,227 --> 00:31:54,371
Nikolay: But it's only a Java client application,
which will do sampling from pg_stat_activity.

482
00:31:54,761 --> 00:31:58,597
But it can only be used as an ad hoc tool if you're in

483
00:31:58,972 --> 00:31:59,422
Michael: Same,

484
00:31:59,887 --> 00:32:00,157
Nikolay: Yeah.

485
00:32:00,622 --> 00:32:01,252
Michael: same as Ash.

486
00:32:01,257 --> 00:32:01,552
Right?

487
00:32:01,822 --> 00:32:02,382
Isn't it.

488
00:32:02,745 --> 00:32:10,224
Nikolay: Well, you can install, for example, the pg_wait_sampling
extension, and immediately in our pgwatch2 Postgres.ai edition,

489
00:32:10,224 --> 00:32:14,591
you will have graphs similar to Performance Insights in RDS.

490
00:32:15,608 --> 00:32:17,198
I think Google also implemented it.

491
00:32:17,558 --> 00:32:20,578
I'm not a hundred percent sure, but I think they did.

492
00:32:21,938 --> 00:32:23,168
Also, pgcenter, which

493
00:32:23,168 --> 00:32:25,478
I mentioned earlier, is also a good ad hoc tool.

494
00:32:25,478 --> 00:32:31,083
It also has wait event sampling.
But let's discuss it some other day.

495
00:32:31,193 --> 00:32:36,040
Michael: We also have an episode on monitoring that people
can go check out if they want a deeper discussion on that.

496
00:32:36,190 --> 00:32:37,330
Nikolay: And micro analysis.

497
00:32:37,330 --> 00:32:38,800
It's good to distinguish the things.

498
00:32:38,800 --> 00:32:44,450
Sometimes you already have a query; you just need
to go inside it and understand what's happening.

499
00:32:44,450 --> 00:32:45,560
Why is this?

500
00:32:45,620 --> 00:32:46,520
Is it so slow?

501
00:32:46,790 --> 00:32:49,740
But sometimes you have no idea where to start.

502
00:32:50,063 --> 00:32:51,143
Database is slow.

503
00:32:51,203 --> 00:32:52,073
Everything is bad.

504
00:32:52,237 --> 00:32:56,645
In this case, query analysis, macro analysis, is definitely worth conducting.

505
00:32:56,801 --> 00:32:57,651
So yeah.

506
00:32:58,386 --> 00:33:00,426
Okay, sorry, about 40 minutes again.

507
00:33:00,696 --> 00:33:01,356
So it's again,

508
00:33:01,596 --> 00:33:03,966
it's again longer than we wanted,

509
00:33:04,836 --> 00:33:06,966
as usual.

510
00:33:06,966 --> 00:33:12,224
Let's thank all our listeners who provided
feedback; this week was excellent as well.

511
00:33:12,269 --> 00:33:13,008
A lot of, a lot of

512
00:33:13,253 --> 00:33:13,463
Michael: Yeah.

513
00:33:13,463 --> 00:33:15,143
We had a lot of great suggestions.

514
00:33:15,225 --> 00:33:15,975
It's been really good.

515
00:33:15,975 --> 00:33:16,515
Thank you.

516
00:33:16,855 --> 00:33:17,345
Nikolay: Yeah.

517
00:33:17,345 --> 00:33:18,395
This drives us.

518
00:33:18,395 --> 00:33:19,265
Thank you so much.

519
00:33:19,426 --> 00:33:20,745
Michael: Yeah, really appreciate it.

520
00:33:20,775 --> 00:33:21,735
Well, thanks again, Nikolay.

521
00:33:21,735 --> 00:33:23,745
I hope you have a good week and see you next

522
00:33:24,000 --> 00:33:26,498
Nikolay: As final words: like, share, share,

523
00:33:26,558 --> 00:33:27,008
share.

524
00:33:27,343 --> 00:33:27,458
Thank you.

525
00:33:27,605 --> 00:33:27,815
Bye.

526
00:33:28,233 --> 00:33:28,773
Michael: Take care. Bye.