Sunday, May 01, 2005

First Analysis of Swap Space

When I was learning to configure my first Dynix/Ptx system, a senior team member told me to ensure that swap is at least twice the size of memory. When I asked him about the logic behind that, he said it was a rule of thumb. In time I would learn some of the reasoning behind it. At this point I would recommend “Modern Operating Systems” by Andrew S. Tanenbaum. He uses Don Knuth’s Fifty Percent Rule as the basis for his analysis. Let me explain the Fifty Percent Rule first.

A simple explanation of the rule follows. In a state of equilibrium:

  1. Half of all memory operations are allocations; the other half are frees
  2. Of the frees, half result in adjacent (contiguous) holes being merged

Roughly speaking, because half of the frees merge into an existing hole rather than creating a new one, holes accumulate at half the rate of allocated blocks. This tells us that the ratio of holes to allocated blocks is fifty percent.

In a state of equilibrium, then, there are half as many holes as allocated blocks. A hole is simply a region of available (free) memory. So if the allocated memory consists of n blocks, there are n/2 holes.
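To make this concrete, here is a toy simulation, entirely my own sketch rather than anything from Tanenbaum or Knuth, of a first-fit allocator that repeatedly frees a random block and then refills memory with random-sized requests. The function name, the request-size distribution, and the parameters are all illustrative assumptions; the printed ratio can be compared against the 1/2 the rule predicts.

    import random

    def fifty_percent_sim(total=100_000, max_req=2_000, steps=20_000, seed=1):
        # Memory is a list of [size, allocated?] segments, starting
        # as one big hole.
        rng = random.Random(seed)
        segs = [[total, False]]

        def allocate():
            want = rng.randint(1, max_req)
            for i, (size, used) in enumerate(segs):
                if not used and size >= want:       # first fit
                    if size == want:
                        segs[i][1] = True           # exact fit
                    else:
                        segs[i] = [want, True]      # split the hole
                        segs.insert(i + 1, [size - want, False])
                    return True
            return False                            # request failed

        def free_random_block():
            used = [i for i, s in enumerate(segs) if s[1]]
            if not used:
                return
            i = rng.choice(used)
            segs[i][1] = False
            # Merge the new hole with its right, then left, neighbour.
            if i + 1 < len(segs) and not segs[i + 1][1]:
                segs[i][0] += segs.pop(i + 1)[0]
            if i > 0 and not segs[i - 1][1]:
                segs[i - 1][0] += segs.pop(i)[0]

        # Keep the system busy: free one block, then allocate until a
        # request fails, so memory stays close to full.
        for _ in range(steps):
            free_random_block()
            while allocate():
                pass

        holes = sum(1 for s in segs if not s[1])
        blocks = len(segs) - holes
        print(f"holes={holes} blocks={blocks} ratio={holes/blocks:.2f}")

    fifty_percent_sim()

Keeping memory close to full matters here: the rule describes a busy system in equilibrium, not a mostly empty one.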

The total number of blocks is then n + n/2 = 3n/2, of which n/2 are free. Assuming all blocks are the same size, the ratio of free memory to total memory is (n/2) / (3n/2) = 1/3. So if you have 256MB of RAM on your system and want that much memory to be free for the next task you run, you need 768MB of total memory, which means a swap size of 512MB, so that the 1/3 ratio is held.
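The same arithmetic in a few lines of Python (the function name and the fixed 1/3 ratio are illustrative, not an established formula):

    def swap_size(ram_mb, free_ratio=1/3):
        # We want ram_mb of memory free at equilibrium. The rule says
        # free/total = 1/3, so total = 3 * ram_mb and
        # swap = total - ram_mb = 2 * ram_mb.
        total = ram_mb / free_ratio
        return int(total - ram_mb)

    print(swap_size(256))   # -> 512, i.e. swap = twice RAM

This is where the old "swap should be twice RAM" rule of thumb falls out of the analysis.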

There are, of course, complications to the simple rule and calculation I explained above. I will try to explain some of those complexities in the next articles in this series on swapping.

6 comments:

Anonymous said...

So are you saying that the rule of thumb applies irrespective of the use of the machine? In other words, would the same rule apply to an HTTP server, a MySQL server, or a development machine I use once in a while to test a compile on an errant distribution? (all being completely separate physical machines)

Balbir said...

This would be true for any system in a state of equilibrium, i.e., one that is recycling memory evenly (freeing as much as it allocates).

Anonymous said...

I understand what you are saying, but can't you also see that what you have said has not provided me with a tangible solution? If your advice is primarily based on theory without practice (which is not a bad thing), then there is a level of risk that I must assume. You said 'any system in a state of equilibrium'; my first question would be 'what kind of a system is that?'. If I turn off the computer that runs my HTTP server, then it is surely in a state of equilibrium, yet no longer able to service requests on the HTTP port. Even if a system is up and running, for how long does it stay in a state of equilibrium? The very definition of a system is to facilitate disorder (requests and receipts), and yet it is the job of the OS to bring order. I think your assumption (I have to believe you made an assumption) that 'it is recycling memory equally' is an error. That might need to be thought through again.

Balbir said...

The idea of equilibrium is that when a system runs for a long time, the rates of requests coming in and being served reach a certain level of stability based on the capability of the system.

However, with a less capable system you are unlikely to ever reach an equilibrium state.

I guess any non-stressed system is in a state of balance.

Anonymous said...

Ha, you must be playing games with me. :-)
Don't equilibrium and balance mean the same thing?

Balbir said...

Yes, they do. I should try to be more consistent with the words I use.
