[Babase] Re: Database RAM
Karl O. Pinc
babase@www.eco.princeton.edu
Thu, 14 Oct 2004 12:57:41 -0500
On 2004.10.13 16:13 Hunter Matthews wrote:
> I've mucked about a little (today aint a great day) and have quickly
> decided to grant 256MB to postgres exclusive use.
>
> So, given that PG has or will have 256MB for its sole use, how should
> it
> be divided up? (IE, send me a postgresql.conf file that you want).
# Disk setting, this extremely large number reduces
# disk contention slightly. Set back to default when
# babase conversion finishes. (2004-10-14)
checkpoint_segments = 24
# 16MB ram, in units of 4K page sizes. If the kernel
# has a different page size this should change.
# This can be reduced to 1MB, (wal_buffers=256)
# when the babase conversion finishes.
wal_buffers = 4096
# 10MB, in 1KB units.
sort_mem = 10240
# In 8KB units.
# 256MB minus 40MB (4 * sort_mem, which assumes two (medium heavy)
# concurrent users) minus 16MB (for wal_buffers) - 1632KB
# (for the RSS of the pg postmaster process, sans any activity)
# = 198.4MB.
shared_buffers = 25396
# In 8KB units.
# This is (adjusting for units) shared_buffers + wal_buffers
# + however much other RAM the kernel uses for postgresql disk
# buffering. I've guessed this to be 50MB. ??
# Note that this is used only by the planner and in no way impacts
# the actual amount of RAM used; it's just an estimate of the RAM
# available for caching.
effective_cache_size = 31796
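Once the new postgresql.conf is in place and the postmaster has been
restarted, you can confirm the live values from psql.  A sketch; I'm
only guessing at "babase" as the database name:

  # Each prints the value the running server is actually using.
  psql -c "SHOW checkpoint_segments" babase
  psql -c "SHOW wal_buffers" babase
  psql -c "SHOW sort_mem" babase
  psql -c "SHOW shared_buffers" babase
  psql -c "SHOW effective_cache_size" babase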
Notes:
This attempts to allow for the memory used by the postgresql code
itself, but you may want to adjust shared_buffers based on the RSS of
your postgresql.
The only way I know to find this is to look at the RSS (via ps)
of the postmaster process right after starting it, then
subtract from that the 10MB of shared_buffers in your current config
to get the actual amount pg uses to do nothing.
The figure I used, 204 8KB blocks (1632KB), is from my system.
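For example, on Linux something along these lines does it (treat the
exact ps flags as a sketch, they vary by platform):

  # RSS, in KB, of the postmaster right after startup.
  ps -C postmaster -o pid,rss,cmd

Subtract the configured shared_buffers from the RSS shown to get the
baseline overhead.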
There is no way I know of to set a maximum on the amount of
RAM used by postgresql, short of using 'ulimit'.  The amount
of RAM used varies with both the number of users and the
complexity of the task each user is doing.  sort_mem is
the relevant parameter: each concurrent sort or join is
allowed to use up to that much RAM, but there's nothing
that prevents lots of users from doing lots of concurrent
activity.
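As an aside, if one particular job (say, the conversion run) needs
bigger sorts, sort_mem can be raised for just that session instead of
globally.  A sketch, run inside psql, with the 64MB figure pulled out
of the air:

  -- This session only; it reverts when the session disconnects.
  SET sort_mem = 65536;  -- 64MB, in 1KB units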
There is no way to limit postgresql's use of the RAM the
kernel uses for disk buffering, AFAIK.
The only way to tune performance and resource consumption
is to monitor both during periods when the load is known,
or typical, or extreme, or whatever else you're tuning for.
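When you do watch it under load, plain vmstat and free on the box are
enough to see whether it's swapping or disk-bound.  Again, just a
sketch (Linux flavored):

  vmstat 5   # watch si/so (swap in/out) and bi/bo (disk blocks in/out)
  free -m    # overall RAM use, including the kernel's buffers/cache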
Please let me know ASAP after you do this. Thanks.
>
> Please note that was a quick decision. I'm gonna see about booting ace
> off of dbserve and onto another machine. When that happens, I'll give
> PG
> 384 or 512.
Would it help if we bought RAM for the box?  If there's enough RAM
then disk throughput should not be an issue.  Really, I don't expect
our users to put much load on the box during the regular course
of events.  However, the conversion that loads the data from
the old database will wail on the box until it's done.  We'll be
re-running this repeatedly for up to a few weeks and then
should never do it again.  Of course, I can't promise that
creative users won't find a way to load down the database, but when
it comes down to it there just isn't all that much data.
>
> --
> Hunter Matthews Unix / Network Administrator
> Office: BioScience 145/244 Duke Univ. Biology Department
> Key: F0F88438 / FFB5 34C0 B350 99A4 BB02 9779 A5DB 8B09 F0F8 8438
> Never take candy from strangers. Especially on the internet.
>
>
Karl <kop@meme.com>
Free Software: "You don't pay back, you pay forward."
-- Robert A. Heinlein