1
00:00:01,060 --> 00:00:04,940
Hello, hello, this is
PostgresFM episode number 78,

2
00:00:06,040 --> 00:00:09,020
live, because I'm alone today again.

3
00:00:09,720 --> 00:00:15,360
Michael is on vacation for the holidays,
but I cannot allow us to miss

4
00:00:15,900 --> 00:00:21,000
any weeks, because we have been doing
it for a year and a half already.

5
00:00:21,280 --> 00:00:24,640
Episode 78, so no weeks are missed.

6
00:00:24,840 --> 00:00:27,540
So this week we'll do it as well.

7
00:00:27,600 --> 00:00:29,080
But it will be a small episode.

8
00:00:29,160 --> 00:00:31,600
First of all, happy holidays everyone!

9
00:00:33,100 --> 00:00:39,640
It's December 29th, so it's holiday
season.

10
00:00:40,840 --> 00:00:43,700
And let's have some small episode
about work_mem.

11
00:00:43,780 --> 00:00:46,740
Somebody asked me to cover this
topic.

12
00:00:47,780 --> 00:00:51,240
And I actually wrote a how-to,
I just haven't published it yet.

13
00:00:51,820 --> 00:00:57,180
I'm in my Postgres Marathon series,
where I publish how-tos every

14
00:00:57,180 --> 00:00:57,680
day.

15
00:00:58,080 --> 00:01:01,680
Almost, I'm lagging as well a little
bit, because of holiday

16
00:01:01,680 --> 00:01:03,660
season, but I'm going to catch
up.

17
00:01:04,460 --> 00:01:05,200
So work_mem.

18
00:01:05,460 --> 00:01:08,740
work_mem is something that everyone uses,
right?

19
00:01:08,740 --> 00:01:10,060
We all run queries.

20
00:01:11,760 --> 00:01:14,020
We need to use work_mem.

21
00:01:15,240 --> 00:01:21,340
But this is a super tricky and, how to
say, super basic setting, because

22
00:01:21,340 --> 00:01:22,440
everyone needs it.

23
00:01:23,800 --> 00:01:28,820
And it's also super tricky because
every statement can utilize up to

24
00:01:30,800 --> 00:01:32,000
work_mem, or less.

25
00:01:33,160 --> 00:01:38,240
It's a limit; it defines an upper
limit, right?

26
00:01:38,240 --> 00:01:41,940
So if we need less, we use less.

27
00:01:42,120 --> 00:01:46,540
It's not like an allocated amount
of memory, like shared buffers,

28
00:01:46,960 --> 00:01:48,840
the size of the buffer pool.

29
00:01:49,440 --> 00:01:53,100
So we can use less; any query
can use less.

30
00:01:53,500 --> 00:01:57,900
But the trickiest part is that
we don't know in advance; any statement can

31
00:01:57,900 --> 00:01:59,120
use work_mem multiple times.

32
00:02:00,020 --> 00:02:04,400
Each use may be up to work_mem,
but multiple times, because this

33
00:02:04,400 --> 00:02:08,980
is the amount of memory needed
for hashing, sorting operations.

34
00:02:10,120 --> 00:02:14,160
So if we have, for example, multiple
hashing operations inside

35
00:02:14,160 --> 00:02:19,860
one query, for example, multiple
hash joins, we can use it multiple

36
00:02:19,860 --> 00:02:20,220
times.

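To illustrate this point, here is a hypothetical query (table names are made up): each hash or sort node in a single plan can claim up to work_mem on its own.

```sql
-- Hypothetical example: one statement, several memory-hungry plan nodes.
-- Each hash join builds its own hash table (up to work_mem each), and the
-- sort may take up to work_mem as well, so the whole statement can use
-- roughly 3x work_mem here.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.region_id, sum(o.total)
FROM orders o
JOIN customers c ON c.id = o.customer_id   -- hash join 1
JOIN regions r   ON r.id = c.region_id     -- hash join 2
GROUP BY c.region_id
ORDER BY sum(o.total) DESC;                -- sort
```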
37
00:02:20,220 --> 00:02:29,760
And this adds a kind of
unpredictability; it's hard

38
00:02:29,760 --> 00:02:35,160
to define a good algorithm to tune
it clearly, because we don't

39
00:02:35,160 --> 00:02:35,960
know, right?

40
00:02:36,020 --> 00:02:41,180
For example, we have some amount
of memory and we know our buffer

41
00:02:41,180 --> 00:02:45,600
pool size, it's simple and it's
another topic, but you define

42
00:02:45,600 --> 00:02:48,340
it once and you cannot change it
without restart.

43
00:02:48,340 --> 00:02:51,720
So we use some rule; most
people use 25%.

44
00:02:52,580 --> 00:02:56,700
Okay, we allocate 25% for the buffer
pool.

45
00:02:57,660 --> 00:03:00,940
What's left we can use, and also
the operating system can use for

46
00:03:00,940 --> 00:03:02,620
its own page cache.

47
00:03:04,200 --> 00:03:07,940
We should also not forget that
there is maintenance_work_mem,

48
00:03:07,940 --> 00:03:09,440
which is more predictable.

49
00:03:10,260 --> 00:03:14,940
It has some trickiness as well
because there is an autovacuum work_mem

50
00:03:14,940 --> 00:03:16,480
which is by default minus 1.

51
00:03:16,480 --> 00:03:21,380
It means maintenance work_mem will
be used and we have multiple

52
00:03:21,380 --> 00:03:23,940
workers for autovacuum.

53
00:03:24,340 --> 00:03:30,040
So if we set it, for example, to
2 gigabytes, as I see people do,

54
00:03:30,940 --> 00:03:34,540
that's quite
a lot.

55
00:03:34,540 --> 00:03:38,360
You need to ensure that, for example,
when you create an index,

56
00:03:38,880 --> 00:03:42,540
2 gigabytes is indeed helpful.

57
00:03:43,580 --> 00:03:48,100
But we can have multiple autovacuum
workers, and I usually advocate

58
00:03:48,160 --> 00:03:52,740
to raise the autovacuum workers a lot
so we can have many of them

59
00:03:52,740 --> 00:03:57,460
say 10 if you have a lot of CPUs
or maybe 20 even.

60
00:03:57,780 --> 00:04:03,060
It means if you set maintenance
work_mem to 2 gigabytes, autovacuum

61
00:04:03,060 --> 00:04:07,200
work_mem is -1, which means
it inherits from maintenance

62
00:04:07,200 --> 00:04:07,700
work_mem.

63
00:04:08,560 --> 00:04:11,300
Autovacuum alone can use up to
20 gigabytes.

64
00:04:11,640 --> 00:04:16,960
It also depends, because it won't
use that much for everything, but

65
00:04:16,960 --> 00:04:22,460
anyway, we need to subtract these
20GB from the remaining memory.

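A quick sketch of this arithmetic, with illustrative values rather than recommendations:

```sql
-- postgresql.conf style fragment (illustrative values only):
--   maintenance_work_mem   = '2GB'
--   autovacuum_work_mem    = -1    -- -1 means: inherit maintenance_work_mem
--   autovacuum_max_workers = 10
--
-- Worst case for autovacuum alone: 10 workers x 2GB = 20GB.
SHOW maintenance_work_mem;
SHOW autovacuum_work_mem;
SHOW autovacuum_max_workers;
```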
66
00:04:22,900 --> 00:04:25,760
We also have some overhead for
additional processes.

67
00:04:26,820 --> 00:04:30,940
And then what's left, we can just
say, okay, we have max connections,

68
00:04:30,980 --> 00:04:32,160
say, 200.

69
00:04:32,560 --> 00:04:39,440
So we just divide the remaining
memory by 200 and this is roughly

70
00:04:39,480 --> 00:04:41,200
what we can use per backend.

71
00:04:41,200 --> 00:04:46,200
But we don't know how many times
backends will use work_mem, right?

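The naive budgeting described here can be sketched as pure arithmetic (all numbers are illustrative):

```sql
-- Illustrative: 64GB server, 16GB shared_buffers (25%), 20GB reserved for
-- autovacuum workers, ~3GB overhead for other processes; divide the rest
-- by max_connections = 200. Result: a rough per-backend ceiling, ignoring
-- that one backend may use work_mem several times.
SELECT pg_size_pretty(
         (64 - 16 - 20 - 3) * 1024::bigint * 1024 * 1024 / 200
       ) AS naive_per_backend_budget;   -- roughly 128MB
```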
72
00:04:47,060 --> 00:04:52,280
So let's discuss the approach I
kind of developed just

73
00:04:53,000 --> 00:04:53,980
writing this how-to.

74
00:04:54,860 --> 00:04:59,180
First, my favorite rule is the
Pareto principle, the 80/20 rule.

75
00:05:00,560 --> 00:05:06,180
So we take, for example, PGTune by
le0pard, I forgot the exact name, but

76
00:05:06,180 --> 00:05:13,580
this is a very simple tuning heuristic
based tool, which is quite

77
00:05:13,580 --> 00:05:14,060
good.

78
00:05:14,060 --> 00:05:15,100
I mean, it's good.

79
00:05:15,100 --> 00:05:16,560
Good enough in many cases.

80
00:05:16,560 --> 00:05:21,900
You just use it, and it will give
you some value for work_mem.

81
00:05:22,060 --> 00:05:23,740
Quite a safe value.

82
00:05:25,360 --> 00:05:30,860
Again, the main thing with work_mem
is to check it against your max_connections,

83
00:05:30,860 --> 00:05:33,640
because if you increase
max_connections, you need

84
00:05:33,640 --> 00:05:39,640
to understand that we don't
want to run out of memory.

85
00:05:39,680 --> 00:05:45,120
This is the main thing to be careful
with.

86
00:05:45,580 --> 00:05:50,860
So, okay, it will give you some
rough value, and I think let's

87
00:05:50,860 --> 00:05:52,540
go with this value, that's it.

88
00:05:53,040 --> 00:05:59,540
Then we run it for some time in
production, and the second important

89
00:05:59,540 --> 00:06:01,820
step is to have very good monitoring.

90
00:06:02,800 --> 00:06:09,100
Monitoring can provide you with some
very useful insights about temporary

91
00:06:09,100 --> 00:06:09,860
file creation.

92
00:06:10,120 --> 00:06:13,580
When work_mem is not enough,
it doesn't mean Postgres cannot

93
00:06:13,580 --> 00:06:14,440
execute the query.

94
00:06:14,440 --> 00:06:17,320
Postgres will execute your query,
but it will involve temporary

95
00:06:17,320 --> 00:06:21,680
file creation, meaning that it
will use disk as extra memory.

96
00:06:22,640 --> 00:06:25,800
And this is of course very slow,
it will slow down query execution

97
00:06:25,800 --> 00:06:33,120
a lot, but it will eventually finish
unless statement_timeout

98
00:06:33,120 --> 00:06:34,180
is reached, right?

99
00:06:34,780 --> 00:06:40,580
So, okay, we applied this rough
tuning.

100
00:06:40,840 --> 00:06:44,360
We started monitoring work_mem,
oh by the way, how to monitor

101
00:06:45,180 --> 00:06:46,320
these temporary files.

102
00:06:47,460 --> 00:06:51,540
I see three sources of information
about temporary file creation.

103
00:06:52,300 --> 00:06:55,380
The first, at a very high level, is
pg_stat_database.

104
00:06:55,400 --> 00:06:59,800
For each database, you can see the
number of temporary files already

105
00:06:59,800 --> 00:07:04,020
created and their total size,

106
00:07:04,020 --> 00:07:05,780
in the columns temp_files and temp_bytes.

107
00:07:06,480 --> 00:07:12,100
So if your monitoring is good,
or if you can extend it to have

108
00:07:12,100 --> 00:07:16,460
this, you will see the rates of
temp file creation and also size,

109
00:07:16,460 --> 00:07:17,720
the size is also interesting.

110
00:07:18,340 --> 00:07:23,260
We can talk about average size
or maybe maximum size for each

111
00:07:23,260 --> 00:07:23,760
file.

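For reference, the counters mentioned here can be read with a query along these lines:

```sql
-- Cumulative temp file stats per database (since last stats reset):
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_total,
       CASE WHEN temp_files > 0
            THEN pg_size_pretty(temp_bytes / temp_files)
       END AS avg_temp_file_size
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY temp_bytes DESC;
```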
112
00:07:25,640 --> 00:07:31,000
Well, we can probably play with
this data more, but it's only

113
00:07:31,000 --> 00:07:31,820
two numbers, right?

114
00:07:31,820 --> 00:07:34,180
So number of files and number of
bytes.

115
00:07:34,540 --> 00:07:39,340
It's not a lot, we cannot have
p95, for example, here, right?

116
00:07:39,860 --> 00:07:43,840
So next, more detailed information
is from Postgres logs.

117
00:07:44,620 --> 00:07:50,640
If we adjust log_temp_files setting,
we can have details about

118
00:07:50,860 --> 00:07:54,520
every occurrence of temporary file
creation in the Postgres logs.

119
00:07:54,520 --> 00:07:57,940
Of course, we need to be careful
with observer effect because

120
00:07:58,040 --> 00:08:02,180
if we set it, for example, to 0 and
for example our work memory

121
00:08:02,180 --> 00:08:06,720
is very small and a lot of queries
need to create temporary files.

122
00:08:07,440 --> 00:08:10,760
Not only will temporary files slow
us down, but we will also

123
00:08:10,760 --> 00:08:12,020
produce a lot of logging.

124
00:08:12,840 --> 00:08:15,380
Observer effect can be bad here.

125
00:08:15,380 --> 00:08:19,440
So probably we should be careful
and not set it immediately to

126
00:08:19,440 --> 00:08:23,000
0, but to some sane value first,
and go down a little bit and

127
00:08:23,000 --> 00:08:23,500
see.

128
00:08:24,060 --> 00:08:28,620
But eventually, if we know that
temporary files are not created

129
00:08:28,620 --> 00:08:31,060
often, we can go even to 0.

130
00:08:31,100 --> 00:08:32,620
Again, we should be careful.

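A careful rollout of log_temp_files might look like this; the 10MB starting threshold is just an example:

```sql
-- Start with a conservative threshold and lower it step by step:
ALTER SYSTEM SET log_temp_files = '10MB';
SELECT pg_reload_conf();  -- applies without a restart

-- Later, if temp files turn out to be rare, log every occurrence:
-- ALTER SYSTEM SET log_temp_files = 0;
```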
131
00:08:33,080 --> 00:08:37,860
And finally, the third source of
important monitoring data here

132
00:08:37,900 --> 00:08:39,060
is pg_stat_statements.

133
00:08:40,240 --> 00:08:46,880
It has a couple of columns: temp_blks_read

134
00:08:46,880 --> 00:08:47,380
and temp_blks_written.

135
00:08:47,940 --> 00:08:52,420
So we can understand for each normalized
query, I call it query

136
00:08:52,420 --> 00:08:52,920
group.

137
00:08:53,440 --> 00:08:57,980
For each query group, we can see
again, like same as for database

138
00:08:57,980 --> 00:09:02,220
level, we can see a number of,
oh no, not the same.

139
00:09:02,540 --> 00:09:04,400
We don't have a number of files
here.

140
00:09:04,400 --> 00:09:08,840
Instead, we have blocks read
and written.

141
00:09:08,840 --> 00:09:11,900
So written blocks are interesting
here.

142
00:09:13,180 --> 00:09:19,540
But the good thing here is that
we can identify the parts of our

143
00:09:19,540 --> 00:09:26,280
whole workload and understand which
queries are most active in

144
00:09:26,280 --> 00:09:28,040
terms of temporary file creation.

145
00:09:28,040 --> 00:09:33,380
That means they need more work
memory, right?

146
00:09:33,380 --> 00:09:36,560
They lack work memory.

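A query in this spirit, assuming the default 8kB block size:

```sql
-- Query groups ranked by temp file writes (pg_stat_statements required):
SELECT queryid,
       calls,
       pg_size_pretty(temp_blks_written * 8192) AS temp_written_total,
       pg_size_pretty(temp_blks_written * 8192 / calls) AS temp_written_per_call,
       left(query, 60) AS query_excerpt
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 20;
```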
147
00:09:37,500 --> 00:09:42,680
So once we build our monitoring,
or we already have it, maybe.

148
00:09:43,140 --> 00:09:45,180
I'm not sure everyone has very
good monitoring.

149
00:09:45,180 --> 00:09:48,340
As usual, I'm very skeptical in
terms of the current state of

150
00:09:48,340 --> 00:09:50,040
Postgres monitoring in general.

151
00:09:51,100 --> 00:09:56,040
But assuming we have this covered
in our monitoring tools, and

152
00:09:56,040 --> 00:10:00,700
we have some details probably in
logs, the next thing, of course,

153
00:10:00,940 --> 00:10:06,280
we can identify parts of our code
and we can think about optimization

154
00:10:06,540 --> 00:10:06,960
first.

155
00:10:06,960 --> 00:10:11,880
Instead of raising our work_mem,
we can have an idea, let's try

156
00:10:11,880 --> 00:10:16,020
to reduce, let's be less hungry
for work_mem, right?

157
00:10:16,080 --> 00:10:19,900
Let's reduce the memory usage.

158
00:10:21,260 --> 00:10:24,720
Sometimes it's quite straightforward,
sometimes it's tricky.

159
00:10:24,960 --> 00:10:29,440
Again, here I recommend using the
Pareto principle and not to

160
00:10:29,440 --> 00:10:32,660
spend too much effort on this optimization.

161
00:10:32,900 --> 00:10:36,820
We just try, if it takes too much
time, too much effort, we just

162
00:10:36,820 --> 00:10:38,200
proceed to the next step.

163
00:10:38,680 --> 00:10:40,680
Next step is raising work_mem.

164
00:10:41,780 --> 00:10:47,260
From this, monitoring can already
suggest what the average

165
00:10:47,480 --> 00:10:51,800
temporary file size is and what the
maximum temporary file size is.

166
00:10:52,040 --> 00:10:55,880
And from that information we can
understand how much we need

167
00:10:55,880 --> 00:10:56,540
to raise.

168
00:10:56,940 --> 00:11:03,260
Of course, jumping straight
to this new value

169
00:11:03,260 --> 00:11:04,120
may be risky.

170
00:11:04,820 --> 00:11:06,260
Sometimes I see people do it.

171
00:11:06,260 --> 00:11:09,140
I mean, we know our max_connections
value.

172
00:11:09,140 --> 00:11:13,680
We know that each statement can
consume multiple times up to

173
00:11:13,860 --> 00:11:17,580
work_mem size because of multiple
operations; that's the approach.

174
00:11:17,780 --> 00:11:23,080
Also, since Postgres 13, there
is a new setting, which is...

175
00:11:24,240 --> 00:11:26,960
I always forget this name, but
there is a setting that tells

176
00:11:26,960 --> 00:11:33,280
you the multiplier for hash operations.

177
00:11:33,740 --> 00:11:37,360
And as I remember, by default it's
2, meaning that you have work_mem,

178
00:11:37,360 --> 00:11:42,880
but hash operations can use up
to 2 work_mem, which adds complexity

179
00:11:43,040 --> 00:11:44,460
in the logic and tuning.

180
00:11:45,780 --> 00:11:49,740
And again, it makes it even trickier
to tune.

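The setting alluded to here is presumably hash_mem_multiplier, added in PostgreSQL 13; its default was 1.0 at first and became 2.0 in PostgreSQL 15, which matches the "2" recalled above:

```sql
SHOW hash_mem_multiplier;  -- 2.0 by default since PostgreSQL 15
-- The effective ceiling for a hash node is work_mem * hash_mem_multiplier,
-- e.g. work_mem = '100MB' lets a hash operation grow to ~200MB.
```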
181
00:11:51,820 --> 00:11:54,680
So, if you
want to be on the safe side,

182
00:11:54,680 --> 00:11:57,900
you understand the available memory,
you understand your max

183
00:11:57,900 --> 00:12:02,540
connections, and you add some multiplier,
like 2, 3, maybe 4,

184
00:12:02,720 --> 00:12:07,460
but usually this will lead us to
very low work_mem.

185
00:12:07,640 --> 00:12:12,540
So this is why we take this iterative
approach, maybe raising gradually, understanding

186
00:12:12,780 --> 00:12:16,340
that our workload usually won't
change suddenly tomorrow as

187
00:12:16,340 --> 00:12:17,500
a whole.

188
00:12:17,540 --> 00:12:21,240
In our existing project, usually
we understand, okay, realistic

189
00:12:21,600 --> 00:12:23,340
consumption of memory is this.

190
00:12:23,540 --> 00:12:24,580
So we are fine.

191
00:12:24,580 --> 00:12:26,900
We can start raising this work_mem.

192
00:12:27,840 --> 00:12:32,080
But if you apply the
formula, you will see: oh, we

193
00:12:32,080 --> 00:12:33,480
have risks of out of memory.

194
00:12:33,480 --> 00:12:36,980
But no, no,
we know our workload, right?

195
00:12:36,980 --> 00:12:40,960
Of course, when we release
changes in applications,

196
00:12:41,480 --> 00:12:44,200
often workloads can change as well,
right?

197
00:12:44,200 --> 00:12:46,380
So we should be careful with it.

198
00:12:46,560 --> 00:12:49,840
Especially we should be careful
raising max_connections after

199
00:12:50,140 --> 00:12:55,120
this tuning of work_mem because
this can lead us to higher out

200
00:12:55,120 --> 00:12:56,120
of memory risks.

201
00:12:57,040 --> 00:13:01,720
So instead of raising globally,
I recommend trying to think about

202
00:13:01,720 --> 00:13:02,660
raising locally.

203
00:13:02,720 --> 00:13:06,500
For example, you can say, I want
to raise for a specific session

204
00:13:06,500 --> 00:13:09,060
because I know this is a heavy
report.

205
00:13:09,060 --> 00:13:10,740
It needs more memory.

206
00:13:10,800 --> 00:13:12,740
I want to avoid temporary files.

207
00:13:12,740 --> 00:13:16,480
I just set work_mem to a higher value
in this session and that's

208
00:13:16,480 --> 00:13:16,960
it.

209
00:13:16,960 --> 00:13:22,480
Other sessions still use the global
setting of work_mem.

210
00:13:22,820 --> 00:13:27,320
We can even, say, SET
LOCAL work_mem in a transaction,

211
00:13:27,440 --> 00:13:31,580
so when the transaction finishes, work_mem
kind of resets in the same session.

212
00:13:31,960 --> 00:13:35,940
Or we can identify some parts of
the workload and this is good practice

213
00:13:35,940 --> 00:13:42,180
to split the workload by users and,
for example, we have a special

214
00:13:42,180 --> 00:13:47,360
user that runs heavier queries
like analytical queries, maybe,

215
00:13:48,180 --> 00:13:52,720
and we know this user needs a higher
work_mem, so we can alter the user's

216
00:13:52,720 --> 00:13:53,220
work_mem.

217
00:13:54,160 --> 00:13:57,760
And this is also a good practice
to avoid global raises.

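The three scopes just described, sketched in SQL (the values and the role name are made-up examples):

```sql
-- Session scope: only this connection is affected.
SET work_mem = '256MB';

-- Transaction scope: SET LOCAL resets at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL work_mem = '1GB';
-- ... heavy report query here ...
COMMIT;

-- Role scope: a dedicated analytics user gets a higher default.
ALTER ROLE analytics_reader SET work_mem = '512MB';
```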
218
00:13:58,080 --> 00:14:03,760
But of course, this will make the
logic complex.

219
00:14:04,400 --> 00:14:05,940
We need to document it properly.

220
00:14:06,620 --> 00:14:12,100
So, if we have a bigger team,
we need to think that other people

221
00:14:12,100 --> 00:14:13,100
will deal with it.

222
00:14:13,100 --> 00:14:16,900
Of course, this needs proper documentation.

223
00:14:17,660 --> 00:14:21,060
A setting doesn't have a comment,
unlike database objects.

224
00:14:21,280 --> 00:14:25,520
So maybe, by the way, I just realized
maybe it's a good idea to

225
00:14:25,520 --> 00:14:30,200
have some commenting capabilities
in Postgres for configuration

226
00:14:30,820 --> 00:14:31,920
settings, right?

227
00:14:32,860 --> 00:14:37,700
So anyway, as a final step, of
course, we consider raising it

228
00:14:37,700 --> 00:14:38,200
globally.

229
00:14:38,240 --> 00:14:39,520
And we do it all the time.

230
00:14:39,520 --> 00:14:45,600
I mean, we see max_connections
quite high, and we raise work_mem,

231
00:14:45,880 --> 00:14:49,640
such that if you multiply max_connections
by work_mem, you see that

232
00:14:49,640 --> 00:14:54,520
you already exceed the
available memory.

233
00:14:55,160 --> 00:15:00,360
But this is tricky, I mean, this
is risky of course, but if we

234
00:15:00,360 --> 00:15:04,400
observe our workload for a very
long time, and we know we don't

235
00:15:04,400 --> 00:15:09,560
change everything drastically,
but we change only parts of the workload,

236
00:15:09,960 --> 00:15:11,360
sometimes it's okay.

237
00:15:11,680 --> 00:15:16,020
But of course, we understand there
are risks here, right?

238
00:15:16,020 --> 00:15:20,640
So raising work_mem is kind
of risky and should be done with

239
00:15:20,640 --> 00:15:22,700
an understanding of the details I just described.

240
00:15:23,440 --> 00:15:26,680
Okay, I think maybe that's it.

241
00:15:26,820 --> 00:15:31,020
Oh, also, since Postgres
14, there is a function pg_get_backend_memory_contexts.

242
00:15:34,900 --> 00:15:36,000
It's very useful.

243
00:15:37,120 --> 00:15:40,760
I mean, I don't use it myself yet,
because it's quite new.

244
00:15:40,760 --> 00:15:43,000
Postgres 14 is only a couple of years old.

245
00:15:44,540 --> 00:15:49,140
But there's a drawback to
it.

246
00:15:49,740 --> 00:15:51,860
It can be applied only to the current
session.

247
00:15:51,960 --> 00:15:55,580
So this is only for troubleshooting,
detailed troubleshooting.

248
00:15:55,760 --> 00:16:00,520
If you deal with some queries,
you can see what's happening with

249
00:16:00,520 --> 00:16:02,180
memory for a particular session.

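For reference, in PostgreSQL 14 this function backs the pg_backend_memory_contexts view, and both indeed show only the current session:

```sql
-- Largest memory contexts of the current backend (PostgreSQL 14+):
SELECT name, parent, level,
       pg_size_pretty(total_bytes) AS total,
       pg_size_pretty(used_bytes)  AS used
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
-- Since PostgreSQL 14, pg_log_backend_memory_contexts(pid) can at least
-- dump another backend's contexts to the server log.
```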
250
00:16:03,360 --> 00:16:08,720
I saw discussions to extend this
function to be able to use it

251
00:16:09,520 --> 00:16:14,480
for other backends, for any session,
and when I was preparing

252
00:16:15,320 --> 00:16:22,000
my how-to. These days I use our
new AI bot, and of course it hallucinated,

253
00:16:22,360 --> 00:16:24,740
thinking: oh, you can just pass
a PID to it.

254
00:16:24,740 --> 00:16:26,340
No, it doesn't have any parameters.

255
00:16:26,360 --> 00:16:29,360
You cannot pass anything to it,
though I would expect that.

256
00:16:29,440 --> 00:16:31,720
So I will probably hallucinate
as well.

257
00:16:31,720 --> 00:16:34,680
But the reality is it supports
only the current session, that's

258
00:16:34,680 --> 00:16:34,920
it.

259
00:16:34,920 --> 00:16:37,160
Maybe in the future it will be
extended.

260
00:16:37,880 --> 00:16:43,160
So that discussion, as I understand,
didn't lead to patches accepted

261
00:16:43,180 --> 00:16:43,680
yet.

262
00:16:44,100 --> 00:16:47,120
But anyway, this is additional,
like, an extra.

263
00:16:48,180 --> 00:16:51,500
I think what I just described is
already quite practical.

264
00:16:52,240 --> 00:16:56,540
Just remember that any session,
can use, any query can use multiple

265
00:16:57,100 --> 00:16:59,320
work_mems, but usually it's not
so.

266
00:17:00,060 --> 00:17:05,680
And so the approach based on temporary
files is the way to go

267
00:17:05,680 --> 00:17:06,400
these days.

268
00:17:06,820 --> 00:17:08,700
Just monitor temporary files.

269
00:17:09,620 --> 00:17:14,240
It's not a big deal if we
have a few of them happening sometimes,

270
00:17:15,200 --> 00:17:17,380
especially for queries, analytical
queries.

271
00:17:17,520 --> 00:17:20,440
They are probably slow anyway.

272
00:17:21,000 --> 00:17:26,640
And okay, temporary files, we can
check how much we can win if

273
00:17:26,640 --> 00:17:27,600
we raise work_mem.

274
00:17:28,860 --> 00:17:32,520
But anyway, for OLTP, of course,
you want to avoid temporary

275
00:17:32,520 --> 00:17:33,160
file creation.

276
00:17:33,160 --> 00:17:38,900
And by default, I forgot to mention,
work memory is just 4 megabytes.

277
00:17:38,940 --> 00:17:39,900
It's quite low.

278
00:17:40,080 --> 00:17:41,620
These days, it's quite low.

279
00:17:42,040 --> 00:17:47,760
In practice, for mobile and web
apps on bigger servers with

280
00:17:47,760 --> 00:17:51,500
hundreds of gigabytes, we usually
raise it to 100 megabytes,

281
00:17:52,160 --> 00:17:57,180
having a few hundred max_connections
and connection poolers, we

282
00:17:57,180 --> 00:18:00,540
usually tend to have like 100 megabytes
work memory.

283
00:18:01,260 --> 00:18:04,100
Maybe even more sometimes, again,
depends.

284
00:18:05,980 --> 00:18:07,280
I think that's it.

285
00:18:08,560 --> 00:18:13,400
So hello chat, I see several people
joined, thank you for joining.

286
00:18:13,940 --> 00:18:17,480
Honestly, I recorded live just
because this is more convenient

287
00:18:17,480 --> 00:18:18,260
for me.

288
00:18:18,520 --> 00:18:23,500
This is a podcast anyway; it
will be distributed as usual.

289
00:18:24,760 --> 00:18:32,560
I want to again say thank you for
being a listener, happy holidays,

290
00:18:33,540 --> 00:18:40,180
and I hope we will have very good
topics in the new year, and I hope

291
00:18:40,380 --> 00:18:45,160
every Postgres production server
is up and running with very

292
00:18:45,160 --> 00:18:53,220
good uptime and with as few failovers
as possible and with as

293
00:18:53,720 --> 00:18:56,600
low temporary file numbers as possible
as well.

294
00:18:57,180 --> 00:19:02,780
So this is my wish for you in the
new year, and thank you for listening

295
00:19:02,780 --> 00:19:03,460
and watching.