1
00:00:00,060 --> 00:00:02,660
Michael: Hello and welcome to PostgresFM,
a weekly show about

2
00:00:02,660 --> 00:00:03,580
all things PostgreSQL.

3
00:00:03,840 --> 00:00:07,480
I am Michael, founder of pgMustard,
and today I am joined by

4
00:00:07,480 --> 00:00:11,500
Alicja Kucharczyk, Program Manager
for Azure Database for PostgreSQL

5
00:00:11,920 --> 00:00:16,300
at Microsoft and organizer of the
Warsaw PostgreSQL user group.

6
00:00:16,720 --> 00:00:19,600
Thank you so much for joining me
today Alicja, it's an honor

7
00:00:19,600 --> 00:00:20,580
to have you here.

8
00:00:21,040 --> 00:00:22,620
Alicja: Thank you for having me
Michael.

9
00:00:23,600 --> 00:00:27,720
Michael: Wonderful, well I proposed
a topic this week but mainly

10
00:00:27,720 --> 00:00:31,560
because you're one of the biggest
fans or at least advocates of

11
00:00:31,560 --> 00:00:33,500
this tool, I think in the world.

12
00:00:34,540 --> 00:00:38,160
But it's probably an underused
tool, in my opinion anyway.

13
00:00:38,320 --> 00:00:42,100
I definitely don't use it at times
when I could or should, and

14
00:00:42,100 --> 00:00:45,540
I know most people that could be
using it have never even heard

15
00:00:45,540 --> 00:00:47,360
of it, at least newer users.

16
00:00:47,360 --> 00:00:49,420
So what are we going to be talking
about today?

17
00:00:50,320 --> 00:00:54,440
Alicja: Yeah, about my favorite
tool, which is pgBadger, which is

18
00:00:54,440 --> 00:00:59,060
totally open source, which is really
great. It allows you

19
00:00:59,060 --> 00:01:03,540
to do really comprehensive
troubleshooting, and I believe

20
00:01:03,540 --> 00:01:06,260
it's top of its
class.

21
00:01:07,200 --> 00:01:09,480
Michael: Yeah absolutely do you
want to give us a little bit

22
00:01:09,480 --> 00:01:12,980
of background on the tool or how
people use it?

23
00:01:13,100 --> 00:01:16,360
Alicja: Yes absolutely so maybe
I will start with my own story

24
00:01:16,360 --> 00:01:20,940
because I learned that the tool
existed, I believe, over 10

25
00:01:20,940 --> 00:01:23,540
years ago when I was working for
a Polish bank.

26
00:01:24,160 --> 00:01:28,320
And the process in there was like
each time something was happening,

27
00:01:28,380 --> 00:01:31,360
they started to generate a pgBadger
report.

28
00:01:31,580 --> 00:01:34,480
And that was just always the
first step that they've been

29
00:01:34,480 --> 00:01:34,780
doing.

30
00:01:34,780 --> 00:01:38,820
So that was the place where I learned
that this tool exists.

31
00:01:39,480 --> 00:01:43,320
And over the years, actually,
I was moving from company

32
00:01:43,340 --> 00:01:46,940
to company, and I started to be
a Postgres consultant.

33
00:01:47,680 --> 00:01:52,960
And back then, that started to
be my main tool for doing the

34
00:01:52,960 --> 00:01:53,460
troubleshooting.

35
00:01:53,800 --> 00:01:56,820
Because as a consultant, you don't
really know the system.

36
00:01:56,820 --> 00:02:00,560
Like you're going to the customer,
you know something has happened

37
00:02:00,680 --> 00:02:05,300
and the customer is saying, for
instance, CPU is very high or

38
00:02:05,320 --> 00:02:09,800
latency of the query suddenly has
grown, but you still don't

39
00:02:09,800 --> 00:02:13,020
really have any knowledge about
the system.

40
00:02:13,620 --> 00:02:17,420
And pgBadger is really one of the
best tools to quickly

41
00:02:17,440 --> 00:02:21,600
get knowledge about the workload
that is happening within your

42
00:02:21,600 --> 00:02:22,100
database.

43
00:02:23,040 --> 00:02:28,180
And I also really like that pgBadger
is showing you a

44
00:02:28,180 --> 00:02:32,300
really comprehensive picture, because
you might know other,

45
00:02:32,300 --> 00:02:34,260
similar tools, like pg_stat_statements.

46
00:02:35,460 --> 00:02:41,300
And it's not that I want to say
that this is a bad tool or anything.

47
00:02:41,400 --> 00:02:42,600
I do use pg_stat_statements.

48
00:02:43,640 --> 00:02:48,740
I know that many people do, but
pg_stat_statements is actually just

49
00:02:48,740 --> 00:02:51,260
one tab in the pgBadger report.

50
00:02:51,660 --> 00:02:55,460
And next to that, you've got also
like report about the temporary

51
00:02:55,460 --> 00:02:55,960
files.

52
00:02:56,360 --> 00:02:59,720
You've got really nicely aggregated
errors.

53
00:03:00,220 --> 00:03:03,840
So, everything that you've got
in the logs that are errors, fatals,

54
00:03:03,940 --> 00:03:07,940
warnings, it's really nicely aggregated
on the events tab.

55
00:03:08,180 --> 00:03:13,220
You would also have the locks, the
locks with CK, right?

56
00:03:13,220 --> 00:03:14,980
Not logs, but locks.

57
00:03:15,180 --> 00:03:17,720
I know it's easy to mix these two.

58
00:03:18,080 --> 00:03:21,640
So you also have these locks, you've
got the number of connections,

59
00:03:21,780 --> 00:03:23,140
number of sessions.

60
00:03:23,760 --> 00:03:26,640
So you really have a comprehensive
view.

61
00:03:26,880 --> 00:03:30,020
For instance, if you've got
pg_stat_statements, you see, okay,

62
00:03:30,020 --> 00:03:34,760
this query was executed 1,000 times,
and the total time it was

63
00:03:34,760 --> 00:03:37,580
this and like the average time
was this, right?

64
00:03:37,640 --> 00:03:41,680
You would have like the same thing
in pgBadger, but the difference

65
00:03:41,680 --> 00:03:46,040
is in pgBadger, you can quickly
go to another tab, for instance,

66
00:03:46,300 --> 00:03:50,660
temporary files tab and see that,
okay, this execution time was

67
00:03:50,660 --> 00:03:54,180
caused actually because this query
was generating temporary files

68
00:03:54,400 --> 00:03:56,600
or maybe it was waiting for a lock.

69
00:03:56,820 --> 00:04:02,220
So that's why I do prefer pgBadger
over pg_stat_statements and

70
00:04:02,220 --> 00:04:06,360
other tools, because it's giving
me this comprehensive view.

71
00:04:07,360 --> 00:04:08,300
Michael: Yeah, I like it.

72
00:04:08,300 --> 00:04:10,340
Shots fired right up top.

73
00:04:10,440 --> 00:04:14,480
I'm a big fan of pg_stat_statements
as well, so I'm hoping to

74
00:04:14,480 --> 00:04:16,040
be somewhat converted here.

75
00:04:16,300 --> 00:04:18,120
I definitely think there's pros
of both.

76
00:04:18,120 --> 00:04:21,240
So with pgBadger, it's a log analysis tool.

77
00:04:21,340 --> 00:04:23,400
So it's a standalone tool as well,
right?

78
00:04:23,400 --> 00:04:28,480
So open source, as you said, we
can install it wherever and analyze

79
00:04:28,980 --> 00:04:33,000
our log files for anything that
we can spit out to the log.

80
00:04:33,000 --> 00:04:37,260
So as you mentioned, locks, yeah,
so wait events, errors as well,

81
00:04:37,260 --> 00:04:39,640
that's a big missing part of pg_stat_statements.

82
00:04:39,640 --> 00:04:43,580
So sometimes we have, it's quite
common advice to have a timeout,

83
00:04:43,580 --> 00:04:43,860
right?

84
00:04:43,860 --> 00:04:46,960
If you have queries that are taking
more than, I don't know,

85
00:04:46,960 --> 00:04:50,500
some, on some systems might be
30 seconds, time them out instead

86
00:04:50,500 --> 00:04:52,840
of measuring, instead of letting
them complete.

87
00:04:52,840 --> 00:04:55,580
In pg_stat_statements, we don't
see those at all, but in pgBadger,

88
00:04:55,760 --> 00:04:56,820
we can see those because

89
00:04:56,820 --> 00:04:57,740
Alicja: of the errors, right?

90
00:04:57,740 --> 00:04:58,240
Absolutely.

91
00:04:58,500 --> 00:04:59,640
Yes, yes, yes, yes, yes.

92
00:04:59,640 --> 00:05:04,040
And the errors actually, you know,
it's helping with other problems.

93
00:05:04,120 --> 00:05:08,400
Like recently I was troubleshooting
logical replication errors

94
00:05:08,720 --> 00:05:13,080
and if you just have the plain text
log and you've got just

95
00:05:13,080 --> 00:05:17,800
1 error, it's really not easy
to go up thousands of

96
00:05:17,800 --> 00:05:21,960
lines and see what was the root
cause error, right?

97
00:05:21,960 --> 00:05:25,020
Because sometimes you've got 1
error and this is causing the

98
00:05:26,380 --> 00:05:27,840
whole other errors, right?

99
00:05:27,840 --> 00:05:28,980
Like in this case.

100
00:05:29,100 --> 00:05:31,660
And in pgBadger, you can see
exactly that.

101
00:05:31,880 --> 00:05:35,100
That's great because of the aggregation
that pgBadger is doing.

102
00:05:36,160 --> 00:05:36,660
Michael: Nice.

103
00:05:36,960 --> 00:05:41,440
So, because we're reliant on the
logs here, I would like to ask,

104
00:05:41,520 --> 00:05:45,040
a lot of people won't have these, a lot
of the default settings are

105
00:05:45,040 --> 00:05:47,260
to not log a lot of important information.

106
00:05:47,360 --> 00:05:50,720
So a lot of people's logs, if they're
new to Postgres or if they're

107
00:05:50,720 --> 00:05:54,720
using a managed provider and haven't
set any settings, they won't

108
00:05:54,720 --> 00:05:56,040
have many of the things.

109
00:05:56,040 --> 00:05:59,640
So if they ran pgBadger now, I'm
guessing a lot of them wouldn't

110
00:05:59,640 --> 00:06:01,960
get some of the reports you're
talking about because they wouldn't

111
00:06:01,960 --> 00:06:03,540
have the right settings on.

112
00:06:03,680 --> 00:06:04,580
Is that fair?

113
00:06:04,740 --> 00:06:05,940
Alicja: Yes, yes, absolutely.

114
00:06:06,100 --> 00:06:10,260
Like with default settings, you
would probably have a very

115
00:06:10,260 --> 00:06:14,820
poor pgBadger report because, as
you said, pgBadger is just a

116
00:06:14,820 --> 00:06:19,720
Perl script that is taking
the text files, the text logs,

117
00:06:19,780 --> 00:06:24,400
as the input and then generating
an HTML report as the output, right?

118
00:06:24,840 --> 00:06:29,680
So whatever is in the logs will
be processed, like it will

119
00:06:29,680 --> 00:06:33,340
be parsed and it will be, you know,
aggregated in this really

120
00:06:33,340 --> 00:06:34,980
nice pgBadger format.

121
00:06:35,500 --> 00:06:38,560
But if something is not in the
logs, then obviously you will

122
00:06:38,560 --> 00:06:39,860
not see that, right?

123
00:06:40,320 --> 00:06:45,100
So for instance, if you want to
see the locks this time,

124
00:06:45,100 --> 00:06:49,180
you need to have log_lock_waits
on.

125
00:06:49,700 --> 00:06:50,200
On.

126
00:06:50,580 --> 00:06:54,560
There is 1 setting in Postgres;
you need to have lock waits

127
00:06:54,680 --> 00:06:57,320
logged to be able to generate that.

128
00:06:57,380 --> 00:06:59,600
Michael: Yeah, log_lock_waits,
I think it is.

129
00:06:59,600 --> 00:07:00,740
Alicja: Log_lock_waits, yes.

130
00:07:01,440 --> 00:07:03,200
I believe this is the one.

131
00:07:03,200 --> 00:07:07,780
And this is responsible for the
locks tab in the pgBadger.

132
00:07:08,320 --> 00:07:11,080
So if you've got this off, you
will not see anything.

133
00:07:11,540 --> 00:07:15,280
Also like the log_line_prefix is
super important.

134
00:07:15,860 --> 00:07:20,200
So, you know, the default prefix,
it's pretty much empty.

135
00:07:20,900 --> 00:07:24,740
And I'm using the one that is actually
recommended by pgBadger,

136
00:07:25,280 --> 00:07:29,440
and it contains a lot of stuff
like database name, user name,

137
00:07:29,480 --> 00:07:33,960
application name, of course, like
the timestamp, and it also

138
00:07:33,960 --> 00:07:35,080
has like a host.

139
00:07:35,080 --> 00:07:38,340
So the IP of the server that the connection
is coming from.

140
00:07:38,560 --> 00:07:42,380
And this allows you to have a really,
you know, rich pgBadger report,

141
00:07:42,540 --> 00:07:47,200
because pgBadger is also able to
split the queries into different

142
00:07:47,200 --> 00:07:49,480
tabs per database, per user.

143
00:07:49,480 --> 00:07:53,020
So it's able like to show you for
instance how many select queries

144
00:07:53,080 --> 00:07:57,840
were like on a particular database
or were issued by a particular

145
00:07:58,260 --> 00:07:58,760
user.

146
00:07:59,540 --> 00:08:01,720
But you need log_line_prefix for
that.

147
00:08:01,720 --> 00:08:05,240
Otherwise, if you've got a really
short default, you will not see

148
00:08:05,240 --> 00:08:06,620
a lot of useful information.

149
00:08:06,900 --> 00:08:11,980
But I believe in here, the pgBadger
documentation is pretty

150
00:08:11,980 --> 00:08:12,480
good.

151
00:08:12,600 --> 00:08:16,300
You've got the entire list of the
settings that you need to set.
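
For reference, the prefix the pgBadger documentation suggests for stderr logging looks roughly like this (a sketch; check the current pgBadger README for the exact recommended value):

```ini
# postgresql.conf
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
```

This carries the timestamp, process ID, user, database, application name, and client host that the per-database and per-user tabs rely on.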

152
00:08:16,840 --> 00:08:20,920
And most of that, like most of
that, is safe.

153
00:08:20,920 --> 00:08:25,620
I would say most, because
you've got, of course, 1 setting,

154
00:08:25,680 --> 00:08:29,860
log_min_duration_statement, which
might be super

155
00:08:29,860 --> 00:08:31,040
dangerous, right?

156
00:08:31,380 --> 00:08:34,700
Of course, maybe just to explain
log_min_duration_statement, it's

157
00:08:34,700 --> 00:08:40,520
the setting that is regulating
how many queries we want to log

158
00:08:40,520 --> 00:08:41,780
into the text logs.

159
00:08:42,560 --> 00:08:44,640
And by default, this is minus 1.

160
00:08:44,640 --> 00:08:45,640
It means none.

161
00:08:46,460 --> 00:08:48,920
And 0 means log everything.

162
00:08:49,080 --> 00:08:53,260
So everything means even select
1, like something that is under

163
00:08:53,260 --> 00:08:54,640
a millisecond, right?

164
00:08:55,080 --> 00:09:01,100
With 0, you will see like a huge,
huge impact on your database.

165
00:09:01,860 --> 00:09:08,520
So, you know, you always need to
kind of adjust this to your

166
00:09:08,520 --> 00:09:11,940
system, to the workload you've
got, how intensive, you know,

167
00:09:11,940 --> 00:09:14,080
your database is processing the
queries.

168
00:09:15,040 --> 00:09:19,640
If you've got an analytical database,
right, like super huge data

169
00:09:19,640 --> 00:09:23,640
amounts and, yeah,
5 queries per hour, probably not

170
00:09:23,640 --> 00:09:26,840
many of them, but yeah, let's say
log_min_duration_statement

171
00:09:26,840 --> 00:09:28,020
0 is okay then.

172
00:09:28,200 --> 00:09:32,380
But with millions, thousands of
queries per second, this is,

173
00:09:32,380 --> 00:09:33,940
yes, this is a killer.

174
00:09:34,080 --> 00:09:38,900
Like I saw even like 40% overhead
of the overall performance

175
00:09:39,240 --> 00:09:41,820
with log_min_duration_statement
0. 

176
00:09:41,820 --> 00:09:43,340
So this is dangerous.

177
00:09:43,380 --> 00:09:48,580
I usually start with something
around between 1 minute to 10

178
00:09:48,580 --> 00:09:50,140
minutes, depending on the system.
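
The approach described here corresponds to something like this in postgresql.conf (the value is an illustrative starting point, not a universal recommendation):

```ini
# -1 (the default) disables duration logging; 0 logs every statement,
# which on a busy system can add very heavy overhead.
# A cautious starting point, to be lowered once the overhead is known:
log_min_duration_statement = '1min'
```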

179
00:09:50,280 --> 00:09:50,960
Oh, wow.

180
00:09:51,580 --> 00:09:52,080
Yeah.

181
00:09:52,300 --> 00:09:55,740
So, you know, depending on the
particular intensity, like,

182
00:09:55,740 --> 00:09:59,900
let's say, of the system, depending on
what the customer is telling me,

183
00:09:59,900 --> 00:10:00,920
okay, it's super intense,

184
00:10:00,920 --> 00:10:02,060
or it's not, right?

185
00:10:02,240 --> 00:10:07,080
So let's say we are starting from
1 minute and then we see the

186
00:10:07,080 --> 00:10:07,580
overhead.

187
00:10:07,900 --> 00:10:09,860
If it's huge, we are going up.

188
00:10:10,440 --> 00:10:14,980
If it's not huge, then we try and
go a bit down with the setting.

189
00:10:15,480 --> 00:10:18,620
Michael: When you say 1 minute,
you mean only log the duration

190
00:10:18,640 --> 00:10:21,840
of statements that take longer
than 60 seconds?

191
00:10:22,280 --> 00:10:24,300
Alicja: Yes, yes, yes, yes.

192
00:10:25,760 --> 00:10:28,040
Michael: I thought you're going
to go much lower, but that makes

193
00:10:28,040 --> 00:10:30,260
a lot of sense, just in case there's
a huge overhead.

194
00:10:30,780 --> 00:10:31,840
Alicja: Exactly, exactly.

195
00:10:31,840 --> 00:10:34,900
This is like to be on the safe
side, right?

196
00:10:34,900 --> 00:10:39,140
Because otherwise if you've got
a production system, then you

197
00:10:39,140 --> 00:10:42,040
go too low and then you've got
an outage.

198
00:10:43,660 --> 00:10:45,860
Michael: And I don't think this
requires a restart, right?

199
00:10:45,860 --> 00:10:50,740
So you can reload the config, and
it's not a huge overhead to

200
00:10:50,740 --> 00:10:53,300
change that one, but yeah, that's
a really good tip.
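
As a sketch, the change can be applied without a restart along these lines (assumes a role with ALTER SYSTEM privileges; the value is illustrative):

```sql
ALTER SYSTEM SET log_min_duration_statement = '1min';
SELECT pg_reload_conf();  -- re-read the config without restarting
```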

201
00:10:53,360 --> 00:10:57,660
Without that, we get almost nothing,
like we don't get any reporting

202
00:10:57,660 --> 00:11:02,540
on the slow statements but I guess
the trade-off is if our system's

203
00:11:02,540 --> 00:11:06,020
biggest bottleneck is CPU,
let's say the customer mentioned

204
00:11:06,020 --> 00:11:09,320
they have high CPU, and that's
mostly being caused by millions

205
00:11:09,320 --> 00:11:12,980
of executions of very fast queries,
then we don't spot them.

206
00:11:12,980 --> 00:11:16,780
So it's that trade-off between
slow queries being the issue versus

207
00:11:16,780 --> 00:11:18,340
lots of fast queries being the
issue.

208
00:11:18,400 --> 00:11:21,340
Alicja: It is, but you know in
this kind of, this is a very good

209
00:11:21,340 --> 00:11:26,120
point because you are right, like
I saw outages when we had,

210
00:11:26,120 --> 00:11:33,020
yes, a lot of super small queries,
right, that were executed often,

211
00:11:33,060 --> 00:11:38,720
like a thousand times a second, and
you know, we've got 2 ways actually

212
00:11:38,720 --> 00:11:39,600
to do that.

213
00:11:39,720 --> 00:11:43,320
One way would be like to have some
kind of cron job that would

214
00:11:43,320 --> 00:11:45,860
change log_min_duration_statement
to 0.

215
00:11:45,860 --> 00:11:50,580
And after some short period of
time, like switch back, right?

216
00:11:50,740 --> 00:11:51,600
That would be one.
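
A hypothetical crontab sketch of that first approach; the connection details, schedule, and capture window are all illustrative:

```shell
# Log everything for 5 minutes starting at 03:00, then switch back.
0 3 * * * psql -c "ALTER SYSTEM SET log_min_duration_statement = 0; SELECT pg_reload_conf();"
5 3 * * * psql -c "ALTER SYSTEM RESET log_min_duration_statement; SELECT pg_reload_conf();"
```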

217
00:11:51,600 --> 00:11:57,560
And yeah, and the second one would
be, there is like the new method

218
00:11:57,560 --> 00:12:00,800
that I haven't tried yet on production,
but there is the sampling.

219
00:12:01,860 --> 00:12:05,660
So, you know, in Postgres, like,
I haven't tried it yet, but

220
00:12:05,660 --> 00:12:07,440
it might be something to consider.
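
The sampling mentioned here is available in newer Postgres versions; a sketch (verify the parameter names and semantics against your version's documentation):

```ini
# Statements above the sample threshold but below
# log_min_duration_statement are logged at the sample rate.
log_min_duration_sample    = '100ms'
log_statement_sample_rate  = 0.1     # roughly 10% of sampled statements
log_min_duration_statement = '1min'  # above this, always logged
```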

221
00:12:07,440 --> 00:12:11,420
So, for now, I was just doing cron jobs,
but to be very honest

222
00:12:11,420 --> 00:12:17,140
with you, it's a problem when you
are analyzing the report, right?

223
00:12:17,140 --> 00:12:21,000
Because, yeah, you see really weird,
like, charts.

224
00:12:21,940 --> 00:12:25,760
So, you always need to keep that
in mind that you've got this

225
00:12:25,760 --> 00:12:27,740
switch in the cron job.

226
00:12:28,280 --> 00:12:31,320
Michael: Yeah, or for the fast
queries, rely on something like

227
00:12:31,320 --> 00:12:35,340
pg_stat_statements if you have
that on already, for example.

228
00:12:35,340 --> 00:12:40,280
But back to pgBadger, I looked
at the docs and some of the other

229
00:12:40,280 --> 00:12:43,120
ones that people might want to
switch on that have low overhead.

230
00:12:43,140 --> 00:12:45,780
Stop me if I mention one that is
dangerous.

231
00:12:46,020 --> 00:12:49,460
But we have log_checkpoints, log_connections, log_disconnections,

232
00:12:50,320 --> 00:12:53,040
log_lock_waits you mentioned already,
log_temp_files.

233
00:12:53,200 --> 00:12:54,140
Do you do any...

234
00:12:54,140 --> 00:12:56,880
So the recommendation in the pgBadger
Docs is to set that to

235
00:12:56,880 --> 00:12:57,380
0?

236
00:12:57,740 --> 00:13:01,160
Alicja: No, I just set it to 0,
you know.

237
00:13:01,560 --> 00:13:04,180
Michael: Okay, great, so that's good,
that you don't see

238
00:13:04,180 --> 00:13:06,940
that as being anywhere near as
dangerous. I have seen some recommendations

239
00:13:07,120 --> 00:13:09,860
to start with a higher setting
of that and go downwards as well,

240
00:13:09,860 --> 00:13:12,800
which I guess people can do if
they want to be extra cautious,

241
00:13:12,800 --> 00:13:16,400
but if you've tuned work_mem at
all already, then I guess you should

242
00:13:16,400 --> 00:13:18,100
be pretty safe to put that to 0.

243
00:13:18,180 --> 00:13:21,580
Alicja: I've never seen any problem
when I tweaked with temp

244
00:13:21,580 --> 00:13:24,120
files or checkpoints to be very
honest.

245
00:13:24,520 --> 00:13:28,180
Like maybe there are some narrow
use cases, but for, you know,

246
00:13:28,180 --> 00:13:32,040
over the years, I've never seen
any problems with other settings.

247
00:13:32,040 --> 00:13:35,680
Like the same with autovacuum,
like people also were a bit afraid

248
00:13:35,680 --> 00:13:39,140
about the autovacuum, but it's
not, it's really just a couple

249
00:13:39,140 --> 00:13:43,500
of lines like per minute and yeah,
it will not hit your system

250
00:13:43,500 --> 00:13:44,200
that much.

251
00:13:44,820 --> 00:13:45,140
Michael: Nice.

252
00:13:45,140 --> 00:13:47,420
And yeah, you've got one that I forgot
there, which is

253
00:13:47,420 --> 00:13:48,400
log_autovacuum_min_duration.

254
00:13:48,420 --> 00:13:51,720
Set that to 0 as well so that we
get the important vacuum stuff.

255
00:13:51,720 --> 00:13:52,220
Great.
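
Pulled together, the low-overhead settings discussed in this exchange look roughly like this in postgresql.conf (close to what the pgBadger docs recommend; double-check the current README):

```ini
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0               # log every temporary file, with its size
log_autovacuum_min_duration = 0  # log every autovacuum run
```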

256
00:13:53,140 --> 00:13:55,320
So that lets people get this set
up.

257
00:13:55,320 --> 00:13:56,740
And I've checked some hosting.

258
00:13:57,040 --> 00:14:01,220
So, seeing as you work on a managed
service provider, do you have

259
00:14:01,220 --> 00:14:04,200
some of these set by default or
is this something that people

260
00:14:04,200 --> 00:14:05,660
need to go in and change?

261
00:14:06,140 --> 00:14:10,120
Alicja: Yes, for instance, yeah,
because I work at Microsoft

262
00:14:10,120 --> 00:14:14,140
it's the easiest for me to talk
about how we've got the setup

263
00:14:14,140 --> 00:14:14,600
done.

264
00:14:14,600 --> 00:14:18,480
I don't know that much about other cloud
providers, but we've got

265
00:14:18,480 --> 00:14:22,960
log_connections and log_disconnections
set to on, and users cannot change

266
00:14:22,960 --> 00:14:23,200
it.

267
00:14:23,200 --> 00:14:24,260
It's read only.

268
00:14:24,480 --> 00:14:28,620
And it's because we need it for
the internal telemetry of ours.

269
00:14:28,620 --> 00:14:30,480
So this is just on.

270
00:14:30,720 --> 00:14:34,620
Also, you know, pgBadger needs
to have logs in English.

271
00:14:35,240 --> 00:14:37,520
So also this is by default on Azure.

272
00:14:37,760 --> 00:14:43,060
So a lot of them are already enabled,
already on, like on Azure.

273
00:14:43,380 --> 00:14:47,040
So this is good news, but still
you need to enable lock logging, you

274
00:14:47,040 --> 00:14:51,420
need to enable autovacuum logging, you
need to enable logging

275
00:14:51,420 --> 00:14:53,660
for other stuff, and log_line_prefix.

276
00:14:54,720 --> 00:14:59,100
I know that not on all cloud providers
are you able to change

277
00:14:59,100 --> 00:15:02,120
log_line_prefix, as far as I know
at least.

278
00:15:02,660 --> 00:15:04,900
Yes, we do allow that.

279
00:15:04,960 --> 00:15:09,120
So this is good news, but I've
heard that not on all, like probably

280
00:15:09,120 --> 00:15:11,700
AWS doesn't allow that, as I remember.

281
00:15:12,260 --> 00:15:15,640
At least it wasn't a couple of
years ago, you know.

282
00:15:16,000 --> 00:15:19,660
I remember writing a script,
like to parse the logs, to

283
00:15:19,660 --> 00:15:23,080
be able to generate like pgBadger,
because we were not able to

284
00:15:23,080 --> 00:15:24,480
change log_line_prefix.

285
00:15:25,440 --> 00:15:29,860
And yes, they also had this
really weird format.

286
00:15:29,860 --> 00:15:32,780
So we needed a script that would
parse that.

287
00:15:33,040 --> 00:15:38,200
But good news is that pgBadger
introduced support for this log

288
00:15:38,200 --> 00:15:38,620
format.

289
00:15:38,620 --> 00:15:41,140
So you've got a separate format,
which is RDS.

290
00:15:41,240 --> 00:15:43,600
So you don't need to have custom
scripts anymore.

291
00:15:43,600 --> 00:15:45,260
So this is good news.
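
A sketch of pointing pgBadger at RDS-style logs with its dedicated format (file names are illustrative):

```shell
pgbadger --format rds postgresql.log.* -o report.html
```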

292
00:15:45,940 --> 00:15:48,360
Michael: Yeah, I was so impressed
reading through the pgBadger

293
00:15:48,380 --> 00:15:51,920
docs, all the different formats
they support. It's an incredibly

294
00:15:52,540 --> 00:15:55,840
flexible tool, almost as if it's
designed by people that are

295
00:15:55,840 --> 00:15:58,520
using it and built by people that
use it all the time.

296
00:15:58,520 --> 00:16:04,240
But definitely tons of options
and lots of auto recognizing formats

297
00:16:04,440 --> 00:16:06,500
and file types and things like
that.

298
00:16:06,500 --> 00:16:09,760
So for a lot of people I would
guess it would just work and for

299
00:16:09,760 --> 00:16:13,380
people on cloud providers you've
written a great post for using

300
00:16:13,380 --> 00:16:16,720
it with Azure but there's also
I found a couple of posts from

301
00:16:16,720 --> 00:16:20,460
people from the RDS team and from
Google CloudSQL teams that

302
00:16:20,740 --> 00:16:23,200
explain how to use it with their
systems as well.

303
00:16:23,200 --> 00:16:26,040
If any other cloud providers are
listening and have guides, please

304
00:16:26,040 --> 00:16:28,580
send them to me and I'll include
them in the show notes.

305
00:16:28,580 --> 00:16:31,560
But yeah, it feels like this is
a tool people could be using

306
00:16:31,560 --> 00:16:33,580
even with managed Postgres
providers.

307
00:16:33,900 --> 00:16:37,240
Alicja: Exactly, it makes me really
happy to see the blog posts

308
00:16:37,240 --> 00:16:40,800
from AWS and Google Cloud because
exactly as you said it means

309
00:16:40,800 --> 00:16:45,780
that they're using pgBadger,
like the users of the managed

310
00:16:46,100 --> 00:16:50,260
services are using pgBadger,
so it was probably needed.

311
00:16:51,060 --> 00:16:53,260
Michael: Nice, yep and also it
means this thing can live on,

312
00:16:53,260 --> 00:16:53,760
right?

313
00:16:53,800 --> 00:16:57,580
We don't have to completely replace
it with cloud native tools,

314
00:16:57,660 --> 00:16:59,340
it still should work, right?

315
00:16:59,480 --> 00:17:02,160
Still Postgres, the only place
it currently puts some of this

316
00:17:02,160 --> 00:17:04,960
information is in the logs so at
some point we're gonna have

317
00:17:04,960 --> 00:17:08,380
to look at the logs and having
a tool to do so often helps.

318
00:17:09,140 --> 00:17:12,780
Cool, all right so we've mentioned
the min duration being a potential

319
00:17:12,780 --> 00:17:15,920
gotcha are there any others that
we need to be cautious of when

320
00:17:15,920 --> 00:17:16,760
using this?

321
00:17:17,360 --> 00:17:22,140
Alicja: About the parameter settings,
actually this is like the

322
00:17:22,140 --> 00:17:23,040
most dangerous.

323
00:17:23,120 --> 00:17:29,280
So not that much, I would say. The
other thing you might

324
00:17:29,280 --> 00:17:32,360
have a problem with is the
size of the log itself.

325
00:17:32,360 --> 00:17:35,740
So I had sometimes like a problem,
you know, when I was like

326
00:17:35,740 --> 00:17:39,400
receiving from the customer just
terabytes of logs and it was

327
00:17:39,400 --> 00:17:43,500
okay, here are my logs, just generate
whatever you want from

328
00:17:43,500 --> 00:17:44,200
that, right?

329
00:17:44,200 --> 00:17:47,320
And yeah, that also might be a
problem.

330
00:17:47,320 --> 00:17:51,420
And although, you know, pgBadger
has this -j option, like

331
00:17:51,420 --> 00:17:53,460
most of Postgres tools, right?

332
00:17:53,560 --> 00:17:57,900
Still, on my own laptop,
it was taking days.

333
00:17:58,780 --> 00:18:04,460
So, yes, still, if you've
got a really, you know, huge

334
00:18:04,460 --> 00:18:09,220
amount of logs, you probably still
need a powerful VM just to parse

335
00:18:09,220 --> 00:18:09,800
the logs.

336
00:18:09,800 --> 00:18:12,780
Otherwise, yeah, it will not
work.

337
00:18:13,860 --> 00:18:17,980
Michael: Yeah, I saw it has
both -j lowercase and -J uppercase,

338
00:18:17,980 --> 00:18:21,680
so it has two different
ways of handling it.

339
00:18:22,740 --> 00:18:26,500
I think the first one is for parallelizing,
like having multiple

340
00:18:26,980 --> 00:18:29,240
CPUs handling a single large log
file.

341
00:18:29,240 --> 00:18:30,420
I didn't hear of terabytes.

342
00:18:30,420 --> 00:18:33,440
All the examples I saw were in
gigabytes, so that's quite scary.

343
00:18:33,640 --> 00:18:39,020
But I think -J was even faster
if you had, let's say, if

344
00:18:39,020 --> 00:18:43,100
you were given 20 large log files
and you wanted to give it 20 CPUs

345
00:18:43,100 --> 00:18:46,920
and handle them one at a time,
one CPU per file. So that's quite

346
00:18:46,920 --> 00:18:50,820
cool that it has multiple ways
of handling huge workloads,

347
00:18:51,340 --> 00:18:53,460
but terabytes of logs is terrifying.
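
A sketch of the two parallelism options being described; file names and CPU counts are illustrative:

```shell
# -j: use 8 processes to parse one large file in parallel
pgbadger -j 8 postgresql-big.log -o report.html

# -J: parse up to 20 files at once, one process per file
pgbadger -J 20 postgresql-*.log -o report.html
```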

348
00:18:56,000 --> 00:18:56,880
Did they have to?

349
00:18:56,880 --> 00:18:59,580
So I saw in that, it has so
many options, by the way. I have

350
00:18:59,580 --> 00:19:01,980
a, in fact, a quick trivia question
for you.

351
00:19:02,320 --> 00:19:07,080
It has so many dash-single-letter
options that I had noticed

352
00:19:07,080 --> 00:19:09,960
there are only 2 letters in the
English alphabet where it doesn't

353
00:19:09,960 --> 00:19:16,040
have a dash with that letter. Can you
guess which letters they are?

354
00:19:20,660 --> 00:19:25,540
Only K and Y. It's not fair, only
K and Y, and there is

355
00:19:25,540 --> 00:19:28,540
even, they even have an option
that's keep comments.

356
00:19:28,600 --> 00:19:32,840
So my recommendation to the pgBadger
team is to make that the

357
00:19:32,840 --> 00:19:35,500
K, and then they only have to find
1 more, and they have the whole

358
00:19:35,500 --> 00:19:40,580
alphabet. Anyway, sorry, that's
a bit of an aside. But

359
00:19:40,580 --> 00:19:43,280
Alicja: No, no, this is actually a
great point, because sometimes

360
00:19:43,280 --> 00:19:47,860
you know, the customers have got
some uncommon questions,

361
00:19:48,420 --> 00:19:48,760
right?

362
00:19:48,760 --> 00:19:52,500
For instance, I had a customer
that had a problem, like performance

363
00:19:52,540 --> 00:19:55,560
problem, and they wanted to share
pgBadger with me, but they

364
00:19:55,560 --> 00:19:57,980
were not able to because of the
data, right?

365
00:19:57,980 --> 00:20:02,300
Like, because they were not able
to show me the queries.

366
00:20:02,520 --> 00:20:06,600
And I didn't even know, but yes,
pgBadger has so many options

367
00:20:06,600 --> 00:20:10,220
that I quickly checked, and there
is this anonymize option.

368
00:20:10,280 --> 00:20:15,040
So yeah, you can find
a lot of stuff, like really

369
00:20:15,040 --> 00:20:17,940
a lot of options in pgBadger.

370
00:20:18,280 --> 00:20:20,220
So this is actually a great point.
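That anonymization workflow can be sketched like this (hedged: the path is hypothetical; `--anonymize` replaces literal values in the reported queries so the report can be shared):

```shell
# Build a report whose queries have their literals obscured,
# so no customer data leaks into the shared HTML (path hypothetical).
# pgbadger --anonymize /var/log/postgresql/postgresql.log -o shareable-report.html
```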

371
00:20:20,280 --> 00:20:23,520
Michael: I was gonna ask about
2 of them actually, B and E, so

372
00:20:23,520 --> 00:20:24,520
beginning and end.

373
00:20:24,520 --> 00:20:28,240
If we have a huge log file, but
we only, let's say it's from

374
00:20:28,520 --> 00:20:33,540
a month of logs or something, and
we only care about, let's say

375
00:20:33,540 --> 00:20:37,920
a day or 2 that we had problems,
if we set beginning and end,

376
00:20:37,960 --> 00:20:41,820
would that drastically speed things
up as well or are we still

377
00:20:41,820 --> 00:20:42,480
going to...?

378
00:20:43,700 --> 00:20:46,320
Alicja: It depends, right, on the size
of the file.

379
00:20:46,320 --> 00:20:50,320
So it's more like an operating system
question, I would say, because,

380
00:20:50,320 --> 00:20:53,420
you know, sometimes if you've got
really huge file, like it,

381
00:20:53,420 --> 00:20:57,180
you need to read your operating
system, you need to read it anyways,

382
00:20:57,180 --> 00:20:57,440
right?

383
00:20:57,440 --> 00:21:00,460
So in this case, it might not help
that much.

384
00:21:00,780 --> 00:21:04,900
So, you know, it's a long story,
but, you know, with really huge

385
00:21:05,220 --> 00:21:06,840
logs, like huge files.

386
00:21:07,400 --> 00:21:11,820
I've been trying a lot of
stuff, and I've been trying

387
00:21:11,820 --> 00:21:15,560
to split the files, you know, using
just the split command.

388
00:21:15,560 --> 00:21:19,400
Like, I was trying a lot of
different ways, but finally

389
00:21:19,400 --> 00:21:21,620
what works best is just a huge
VM.

390
00:21:23,560 --> 00:21:27,180
Like this is the easiest way and
you just put that on a huge VM

391
00:21:27,180 --> 00:21:28,720
and you've got it solved.
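A sketch of both approaches (hedged: the `-b`/`-e` dates and the log path are hypothetical; the `split` part is plain coreutils and runs as shown on a stand-in file):

```shell
# Only parse a two-day window out of a month of logs (hypothetical dates):
# pgbadger -b "2024-06-01 00:00:00" -e "2024-06-02 23:59:59" postgresql.log -o out.html

# Pre-splitting a huge file before parsing, using the split command:
seq 1 25000 > fake.log            # stand-in for a big log file (25000 lines)
split -l 10000 fake.log chunk_    # 10000-line pieces: chunk_aa, chunk_ab, chunk_ac
ls chunk_* | wc -l                # counts the resulting chunks
```

As discussed, whether `-b`/`-e` actually helps depends on the operating system still having to read through the file to find that window.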

392
00:21:29,720 --> 00:21:30,220
Michael: Nice.

393
00:21:30,540 --> 00:21:31,000
Cool.

394
00:21:31,000 --> 00:21:31,500
Okay.

395
00:21:31,600 --> 00:21:35,840
I saw you hosted one of the PGSQL
Phriday events.

396
00:21:35,840 --> 00:21:38,400
We've done a couple of
podcast submissions to that

397
00:21:38,400 --> 00:21:42,760
event, and yours was on pgBadger,
but I saw that Lætitia Avrot submitted

398
00:21:42,780 --> 00:21:46,880
a post, and one of the things they
mentioned was flexibility. So,

399
00:21:46,880 --> 00:21:51,220
like a typical
Postgres consultant, or not

400
00:21:51,220 --> 00:21:56,000
typical but extra on the Postgres
side, they parse the logs into

401
00:21:56,000 --> 00:21:59,240
a Postgres database in order to
be able to query them,

402
00:21:59,240 --> 00:22:03,040
giving them a little bit more flexibility.
So I guess we are somewhat

403
00:22:03,040 --> 00:22:06,600
limited by the reports that pgBadger
gives us, but in the real

404
00:22:06,600 --> 00:22:09,880
world, it's giving us so much useful
information that I guess

405
00:22:09,920 --> 00:22:12,720
those cases are a bit more like
few and far between.

406
00:22:12,720 --> 00:22:15,580
And we get most of the way there
just with the pgBadger report.

407
00:22:15,580 --> 00:22:16,360
Is that fair?

408
00:22:16,360 --> 00:22:19,700
Alicja: You know, yeah, I saw
the answer from Lætitia, and

409
00:22:19,820 --> 00:22:23,660
you know, this is actually interesting,
like parsing this into the

410
00:22:23,680 --> 00:22:27,740
database, especially as I've had
a lot of asks about that from

411
00:22:27,800 --> 00:22:30,860
ex-Oracle DBAs, right? They are always asking,
you know,

412
00:22:30,860 --> 00:22:33,300
if there is a way to do that, right?
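One common way to do that (a hedged sketch, not something from the episode): with `log_destination = 'csvlog'`, the CSV log files can be loaded into a table with `COPY`, using the `postgres_log` table layout given in the PostgreSQL documentation:

```shell
# Hypothetical database and path; the postgres_log table DDL is in the
# PostgreSQL docs under "Using CSV-Format Log Output".
# psql -d logs -c "\copy postgres_log FROM '/var/log/postgresql/postgresql.csv' WITH csv"
```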

413
00:22:33,460 --> 00:22:37,900
So it might be useful, but when
I was a consultant, because I'm

414
00:22:37,900 --> 00:22:42,740
not that much anymore, I do
prefer to use something that

415
00:22:42,740 --> 00:22:47,180
I can just send a link to, like
a tool that is already there

416
00:22:47,200 --> 00:22:50,820
and it works, and just send the
link, like, generate me the data,

417
00:22:51,100 --> 00:22:54,220
you know, rather than focusing on writing my
own tools.

418
00:22:54,620 --> 00:22:58,220
But OK, I don't say it's wrong
or anything, but

419
00:22:58,440 --> 00:22:59,800
it's just easier for me.

420
00:22:59,800 --> 00:23:04,000
And also, what I like is the
visual part, because the

421
00:23:04,000 --> 00:23:08,400
visual part is really something
that allows you super quickly

422
00:23:09,160 --> 00:23:10,760
to go to the root cause.

423
00:23:11,140 --> 00:23:15,920
It's not that easy with text:
if you're running a query

424
00:23:15,920 --> 00:23:20,080
and you've got the result, okay,
but you need to focus a bit more.

425
00:23:20,080 --> 00:23:22,840
And you know, in pgBadger,
you've got this chart and you

426
00:23:22,840 --> 00:23:27,160
just see everything straight
away. So it's way easier,

427
00:23:27,160 --> 00:23:30,560
way faster for me, and you've got
all the answers straight

428
00:23:30,560 --> 00:23:35,180
away. So, you know, it's just a matter
of preferences, I would say;

429
00:23:35,180 --> 00:23:38,240
my preference is to have a graphical
presentation.

430
00:23:39,400 --> 00:23:42,480
Michael: Yeah, that's a huge pro
for a lot of people, me included.

431
00:23:43,520 --> 00:23:47,120
You've also touched on another thing
there which I don't think we've

432
00:23:47,120 --> 00:23:51,780
mentioned, which is how tried and tested
this is. I think I

433
00:23:51,780 --> 00:23:56,740
looked it up, and pgBadger v1
came out around April or May

434
00:23:56,740 --> 00:24:02,080
2012, and it's version 12.4 now,
with a ton of bug fixes and all

435
00:24:02,080 --> 00:24:03,900
sorts of feature improvements since
then.

436
00:24:03,900 --> 00:24:07,420
So, compared to writing your own tool:
we've already mentioned some

437
00:24:07,420 --> 00:24:11,700
of the features we get, but there are
honestly dozens of settings

438
00:24:12,560 --> 00:24:14,980
that it has, more than the ones
I mentioned.

439
00:24:15,420 --> 00:24:18,340
Not just every letter covered,
but multiple letters have lowercase

440
00:24:18,340 --> 00:24:21,860
and uppercase, and there are quite
a few options that don't have shortcuts,

441
00:24:22,000 --> 00:24:25,520
like there's another few dozen
that you have to

442
00:24:25,520 --> 00:24:29,700
set separately. So yes, it's incredibly
flexible and tested, like

443
00:24:29,700 --> 00:24:33,780
it's been tested for many, many
years, which is worth

444
00:24:33,780 --> 00:24:34,460
a lot.

445
00:24:35,280 --> 00:24:38,440
Alicja: Exactly, exactly. You know,
to be honest, to write

446
00:24:38,440 --> 00:24:41,200
your own tool that would be that
comprehensive, that would have

447
00:24:41,200 --> 00:24:47,060
so many dashboards, so much
information in it, it would take years,

448
00:24:47,180 --> 00:24:53,260
you know. So yeah, I do prefer to
have this tool instead of, you

449
00:24:53,260 --> 00:24:58,740
know, writing my own, but I totally
get people that prefer to

450
00:24:58,740 --> 00:24:59,860
write their own tools.

451
00:25:00,520 --> 00:25:01,020
Michael: Nice.

452
00:25:01,220 --> 00:25:04,120
And is there anything that they
could be doing?

453
00:25:04,540 --> 00:25:06,080
I think, is it Gilles Darold?

454
00:25:06,080 --> 00:25:09,060
I don't know quite how to pronounce
their name, but the maintainer,

455
00:25:09,320 --> 00:25:11,480
is there anything they could be
doing or is there anything you'd

456
00:25:11,480 --> 00:25:13,520
like to see in pgBadger in the
future?

457
00:25:14,060 --> 00:25:16,540
Alicja: I would like to see the
support for Azure.

458
00:25:17,640 --> 00:25:19,440
That would be my main thing.

459
00:25:19,540 --> 00:25:24,260
And you know, probably that also
like the thing about the locks,

460
00:25:24,960 --> 00:25:29,600
and I know this is not easy because
it's not that easy like to

461
00:25:29,600 --> 00:25:31,400
get that from the logs.

462
00:25:31,900 --> 00:25:35,580
But you know, in the locks tab,
you would see only the queries

463
00:25:35,600 --> 00:25:41,080
that were blocked, not like the
queries that caused the lock.

464
00:25:41,740 --> 00:25:45,300
And of course, there is a simple
reason because of that, because

465
00:25:45,300 --> 00:25:48,480
it's how Postgres is doing this,
right?

466
00:25:48,960 --> 00:25:52,440
But sometimes, if you've got
log_min_duration_statement set to 0,

467
00:25:52,440 --> 00:25:58,240
right, and you've got the PID in the
log prefix, you can just correlate,

468
00:25:58,520 --> 00:25:59,020
right?

469
00:26:00,140 --> 00:26:04,620
Because Postgres would normally
tell you like, okay, this PID

470
00:26:04,740 --> 00:26:06,500
is blocking your query, right?

471
00:26:06,500 --> 00:26:10,200
So then you can search for this
PID and maybe you've got like

472
00:26:10,200 --> 00:26:14,340
the query and try to correlate
that because it's, you know, it's

473
00:26:14,340 --> 00:26:15,580
not that useful.

474
00:26:15,960 --> 00:26:20,380
Like, if you just see the queries
that are blocked and not

475
00:26:20,380 --> 00:26:24,020
what is actually blocking the queries,
right?

476
00:26:24,020 --> 00:26:29,020
So you need to have an additional
cron job with the lock query to

477
00:26:29,020 --> 00:26:29,940
compare with.

478
00:26:30,060 --> 00:26:33,640
So yeah, that for sure would
be an ask, but I'm not sure if

479
00:26:33,640 --> 00:26:37,400
this is actually the ask to Gilles.

480
00:26:37,580 --> 00:26:42,320
I also am not sure how to pronounce
his name, so I'm saying Gilles.

481
00:26:42,820 --> 00:26:46,840
I'm not sure if this is an ask
to Gilles or more to Postgres contributors,

482
00:26:46,840 --> 00:26:49,340
to be honest, just to have that
in the logs.
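The correlation step described above can be sketched like this (hedged: the log lines are fabricated for illustration; the "Process holding the lock" DETAIL is what Postgres emits with `log_lock_waits = on`, and the `[PID]` comes from a `%p` in `log_line_prefix`):

```shell
# Fabricated excerpt: PID 5150 is blocked, PID 4242 holds the lock.
cat > sample.log <<'EOF'
2024-06-01 12:00:01 UTC [4242] LOG:  duration: 5001.2 ms  statement: UPDATE accounts SET balance = 0 WHERE id = 7
2024-06-01 12:00:06 UTC [5150] LOG:  process 5150 still waiting for ShareLock on transaction 901 after 1000.1 ms
2024-06-01 12:00:06 UTC [5150] DETAIL:  Process holding the lock: 4242. Wait queue: 5150.
EOF

# Pull the blocker's PID out of the DETAIL line, then find its statements.
blocker=$(grep -o 'Process holding the lock: [0-9]*' sample.log | grep -o '[0-9]*$')
grep "\[$blocker\]" sample.log
```

With log_min_duration_statement = 0 the blocker's statements are in the same log, so the final grep shows what PID 4242 was running.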

483
00:26:50,140 --> 00:26:51,560
Michael: Yeah, to log more information.

484
00:26:51,600 --> 00:26:54,260
So yeah, that makes sense that
it might be there.

485
00:26:54,960 --> 00:26:55,460
Cool.

486
00:26:55,580 --> 00:26:58,520
And yeah, it's released under the
PostgreSQL license, which is

487
00:26:58,520 --> 00:27:03,000
one of the most permissive licenses
I've ever seen, which is awesome.

488
00:27:03,260 --> 00:27:04,780
So thank you for doing that.

489
00:27:04,780 --> 00:27:06,540
Thanks for all your hard work on
it.

490
00:27:06,760 --> 00:27:09,220
Is there anything else you'd like
people to know?

491
00:27:10,240 --> 00:27:13,360
Alicja: Nothing's coming to my mind,
to be honest.

492
00:27:14,660 --> 00:27:18,420
Michael: Well actually, because
you started with throwing shade

493
00:27:18,420 --> 00:27:21,240
at pg_stat_statements, I did wonder
if there were any...

494
00:27:21,280 --> 00:27:25,300
So, like, for example, with pg_stat_statements,
we can see things

495
00:27:25,320 --> 00:27:26,760
like buffers.

496
00:27:27,040 --> 00:27:30,900
We can see the amount of work done
by each query.

497
00:27:30,900 --> 00:27:34,400
So reads, writes, temp, shared,
that kind of thing.

498
00:27:34,400 --> 00:27:37,700
I don't think, is there a way to
see that kind of information?

499
00:27:38,760 --> 00:27:42,080
I guess we could look at the auto_explain
logs but I don't think

500
00:27:42,180 --> 00:27:45,800
pgBadger is doing anything at the
moment to aggregate those per

501
00:27:45,800 --> 00:27:46,660
query group.

502
00:27:46,860 --> 00:27:49,960
Alicja: No, I've never seen that.
Just thinking if we would have

503
00:27:49,960 --> 00:27:56,840
more, like if we changed the error
verbosity, whether that would

504
00:27:56,840 --> 00:28:00,320
be logged and then presented, but
I haven't seen that.

505
00:28:00,740 --> 00:28:05,580
So I don't want to guess, but you
know, like for me, actually,

506
00:28:05,580 --> 00:28:09,500
with pgBadger, it's like I'm going straight
to the top queries.

507
00:28:09,800 --> 00:28:14,440
And when I see top query, like,
yeah, maybe let's go first through

508
00:28:14,440 --> 00:28:15,120
the process.

509
00:28:15,420 --> 00:28:16,420
That will be easier.

510
00:28:16,420 --> 00:28:19,860
So when I see the top queries,
sometimes it's just, you know,

511
00:28:19,860 --> 00:28:21,360
you just see what's wrong.

512
00:28:21,420 --> 00:28:26,320
Like, it's really pretty often,
like, much more often than, you

513
00:28:26,320 --> 00:28:29,720
know, you would guess; you just
see, okay, this is a NOT

514
00:28:29,720 --> 00:28:34,900
IN, right, or this is something
similar.

515
00:28:34,940 --> 00:28:36,620
So, you just see that, right?

516
00:28:36,620 --> 00:28:40,700
So, you don't even need to run
EXPLAIN ANALYZE. But if it's not obvious,

517
00:28:40,760 --> 00:28:45,060
then I just take the query
and run EXPLAIN ANALYZE on

518
00:28:45,060 --> 00:28:50,220
that, and yeah, that's the next step
I would do. But yes, you

519
00:28:50,220 --> 00:28:50,880
are right.

520
00:28:50,940 --> 00:28:52,520
Michael: And then you can ask for
buffers.

521
00:28:53,040 --> 00:28:56,900
Alicja: Exactly, and then like buffers,
yes, absolutely, I would

522
00:28:56,980 --> 00:28:58,040
use that option.
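That follow-up step might look like this (hedged sketch: the database and query are hypothetical; `EXPLAIN (ANALYZE, BUFFERS)` is standard PostgreSQL syntax):

```shell
# Re-run a top query from the pgBadger report with timing plus
# buffer statistics (shared/local/temp blocks hit, read, written):
# psql -d mydb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM accounts WHERE balance < 0"
```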

523
00:28:58,260 --> 00:29:00,760
Michael: So just to clarify on
that, I think this is great, so

524
00:29:00,760 --> 00:29:04,740
top queries is a really good point
and it orders those by total

525
00:29:04,740 --> 00:29:05,820
execution time.

526
00:29:06,060 --> 00:29:07,420
So, across all calls, right?

527
00:29:07,420 --> 00:29:12,180
So, you see, it could be a slow
query that's run a few times that's

528
00:29:12,180 --> 00:29:15,600
taking up the most resources or
it could be a fairly fast query

529
00:29:15,600 --> 00:29:18,320
that's being run thousands of times
per second, as you mentioned,

530
00:29:18,320 --> 00:29:22,560
and they're ordered by, I believe
the main report is by total

531
00:29:22,560 --> 00:29:23,300
execution time.

532
00:29:23,300 --> 00:29:24,060
Is that right?

533
00:29:24,440 --> 00:29:27,400
Alicja: You've got like a couple of
sections on this page.

534
00:29:27,400 --> 00:29:30,040
So you've got the most important
one that I'm using.

535
00:29:30,040 --> 00:29:31,640
It's exactly the one that you mentioned.

536
00:29:31,640 --> 00:29:35,080
So the total execution time,
this is always like the first

537
00:29:35,080 --> 00:29:38,100
thing I'm looking at, like, you
know, what took the most time

538
00:29:38,100 --> 00:29:41,000
of this, you know, from the system,
which queries. And it doesn't

539
00:29:41,000 --> 00:29:44,700
matter for me if it was just one query, or
multiple executions

540
00:29:45,060 --> 00:29:46,820
of other queries, right?

541
00:29:46,820 --> 00:29:50,580
So this is like probably the third
section from the top, but

542
00:29:50,580 --> 00:29:56,420
you also have other aggregations,
like the slowest individual

543
00:29:56,720 --> 00:30:00,600
queries, other stuff, but I usually
don't look at that, to be

544
00:30:00,600 --> 00:30:01,420
very honest.

545
00:30:02,180 --> 00:30:06,060
I just, you know, take
that one; this is absolutely my favorite.

546
00:30:06,060 --> 00:30:09,960
If I need to, then yeah, I go further,
but it's absolutely my main point,

547
00:30:09,960 --> 00:30:11,880
like to go there and see.

548
00:30:12,040 --> 00:30:12,440
Michael: Nice.

549
00:30:12,440 --> 00:30:15,360
That's what I like to recommend
people do on pg_stat_statements

550
00:30:15,360 --> 00:30:15,800
as well.

551
00:30:15,800 --> 00:30:19,300
But I guess with pgBadger, we get
something more opinionated that

552
00:30:19,840 --> 00:30:22,800
shows people what to look at, whereas with
pg_stat_statements,

553
00:30:22,800 --> 00:30:25,080
people can query and order by whatever
they want.

554
00:30:25,080 --> 00:30:29,020
So I see a lot more people making
that mistake of just looking

555
00:30:29,020 --> 00:30:31,240
at their slowest queries rather
than the totals.

556
00:30:31,780 --> 00:30:32,280
Amazing.

557
00:30:32,400 --> 00:30:34,900
Alicja, thank you so much for your
time today.

558
00:30:34,960 --> 00:30:38,200
Is there anything you want to plug
or anything you want to share

559
00:30:38,200 --> 00:30:38,860
with people?

560
00:30:39,020 --> 00:30:44,120
Alicja: Actually, if I may plug
my songs, because I just realized

561
00:30:44,140 --> 00:30:46,220
not many people know my songs.

562
00:30:46,220 --> 00:30:50,440
I produced two songs: the first was
about vacuum, and for the second

563
00:30:50,440 --> 00:30:53,500
one, actually, you were the author
of the lyrics, right?

564
00:30:54,140 --> 00:30:59,980
So, you know, there are like two Postgres
songs out there, and yes,

565
00:31:00,360 --> 00:31:03,080
we are co-producers of the second
one.

566
00:31:03,080 --> 00:31:06,320
So yeah, that would be absolutely
my, my plug.

567
00:31:06,540 --> 00:31:07,440
Michael: That's kind of you.

568
00:31:07,440 --> 00:31:09,640
The lyrics are extremely basic
for that second one.

569
00:31:09,640 --> 00:31:12,540
So I apologize to people in advance,
but you did an awesome job

570
00:31:12,540 --> 00:31:16,100
with the vocals and the, that was
not me, the vocals and the

571
00:31:16,100 --> 00:31:16,770
video as well.

572
00:31:16,770 --> 00:31:19,840
It was a nice one of all the Postgres
events, so yeah, you did an

573
00:31:19,840 --> 00:31:23,500
awesome job I will happily link
those up in the show notes.

574
00:31:24,240 --> 00:31:24,940
Alicja: Thank you.

575
00:31:25,420 --> 00:31:28,340
Michael: Thank you so much, Alicja,
it's been a pleasure. Take care.