
JFS cache sizing, and system "speed-up"


Dariusz Piatkowski:
Alright, so for what it's worth, I had recently made some changes to the JFS cache sizes and thought I would share my results.

8Gig machine here: at bootup 4Gig is used for a RAM drive, and the remainder is seen by OS/2 as workable memory. Not all of it, of course, but you know the standard limitations that apply.

OK, so I've had my VIRTUALADDRESSLIMIT=3072 set up in CONFIG.SYS for quite some time, and utilized a JFS cache of 256M. Seemed to work fine, no issues. Keep in mind the underlying disk is an SSD (Samsung 850 EVO).
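For reference, that setup boils down to CONFIG.SYS lines roughly like the following (a sketch only; the /CACHE value is in KB, and the other JFS.IFS switches, like AUTOCHECK, will vary per install):

VIRTUALADDRESSLIMIT=3072
IFS=G:\OS2\JFS.IFS /CACHE:262144 /AUTOCHECK:*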

Given the above hardware details and my application use/mix, I have consistently found that at least 1.5Gig of that upper memory would regularly go unused. It would simply sit there, and if anything it was always the Shared Memory that would get exhausted first. So that got me thinking: cache is cache, and so if the RAM is there, why not make better use of the darn thing?  ;)

If you look at what the other OSes are running, their cache sizes are typically much larger.

So off I went experimenting a little bit:

1) I bumped the cache up to 512M and set VIRTUALADDRESSLIMIT=2048, but kept the MIN and MAX buffer settings the same as with the prior, smaller cache, that being:

G:\OS2\CACHEJFS.EXE /LW:8,30,6 /MINBUFFER:4000 /MAXBUFFER:21000

Umm...nice, the system just seems faster...apps respond quicker. Obviously not on the first attempt, but opening OO the 2nd, 3rd and 4th time is faster.
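As an aside, running CACHEJFS.EXE with no switches should simply report the current settings, which makes it easy to confirm a change took effect:

G:\OS2\CACHEJFS.EXE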

2) Now, since I had freed up some upper memory due to the VIRTUALADDRESSLIMIT change, why not go bigger (or go home, as the saying goes...LOL)? So I bumped the cache up to 768M.

Following this change I started to see a weird 'stagger' in the system. I went back to some published presentation notes on JFS and realized that I was saturating the cache and pretty much exhausting the free buffer space, so I implemented the following:

G:\OS2\CACHEJFS.EXE /LW:8,30,6 /MINBUFFER:16000 /MAXBUFFER:84000

...basically I increased my MIN and MAX buffer sizes by a factor of 4.
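One note: as far as I know CACHEJFS.EXE only adjusts the lazy-write and free-buffer settings on the fly; the cache size itself comes from the /CACHE parameter (in KB) on the JFS.IFS line, so the 768M test also meant a CONFIG.SYS change along these lines (again a sketch, other switches per your own install):

IFS=G:\OS2\JFS.IFS /CACHE:786432 /AUTOCHECK:*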

Those system 'staggers' I mentioned above appear to be gone now. The apps are loading consistently faster, on multiple attempts.

All in all, tripling the JFS cache has provided a good, positive result.

Of course not everyone will be able to max out their JFS cache this way, and even if you have the room, your apps may limit how much of that upper memory can be used for the cache itself.

Doug Bissett:

--- Quote ---OK, so I've had my VIRTUALADDRESSLIMIT=3072 setup in CONFIG.SYS for quite some time, and utilized a JFS cache of 256M. Seemed to work fine, no issues. Keep in mind the underlying disk is a SSD (Samsung 850Evo).
--- End quote ---

Personally, I always use VAL=2560, which seems to be a sweet spot for my usage. At least two of my systems get very unstable if I use more than 2900 (and don't run very long if I go to 3000).

I do monitor the available lower and upper, private and shared, memory (using Above512, and a logging script). I find that the lowest I ever get with upper shared memory is somewhere in the 800K range. Lower shared memory does get down to the 35K range (when I run VBox), but most of the time it is above 100K. When it does get below 100K, I try to prevent further programs from running, because it sometimes does go to zero, with bad results (it seems that some programs never check to see if they got lower shared memory before they try to use it, and then they can't recover).

Of course, I do use HIGHMEM to set as much as possible to use high shared memory. I find that it is necessary to run HIGHMEM again after some updates. It doesn't always find something that got changed, but sometimes it does.
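As for the logging script, it is nothing fancy; a minimal sketch of the idea (the file name is made up, and it assumes above512.exe is somewhere on the PATH) would be something like:

--- Code: ---/* memlog.cmd - minimal sketch: append a timestamp and Above512's report to a log */
logFile = 'C:\memlog.txt'
call lineout logFile, date('S') time()   /* stamp this sample */
call lineout logFile                     /* close the log before the shell appends to it */
'above512 >>' logFile                    /* let Above512 write its report to the same file */
--- End code ---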

For those who don't know, VAL has absolutely nothing to do with available real memory. I use VAL=2560 on my antique ThinkPad, which only has 256 MB of real memory. What it seems to do is increase the amount of upper shared memory that is available.

I find it interesting that private memory (upper or lower) rarely changes value. Almost everything allocates shared memory, but that could be because Above512 is lying to me.


--- Quote ---So off I went experimenting a little bit:
--- End quote ---

Been there, done some of it. I usually use a JFS cache of 132K (on machines with more than 1 GB of real memory), which seems to be a bit of a sweet spot (64K, or 10% of real memory, is the default). Using more tended not to help much (it has been a few years since I tried that), and I found that the systems became somewhat less stable. Usually the problem was an unexplained hang, although I seem to be able to reproduce a similar problem by writing huge (20 GB, on JFS) files to USB devices (JFS), while using the default cache size.

I never played with buffer sizes, and cache size doesn't seem to matter whether it is a spinning disk or an SSD. An SSD, of course, has no seek or spin delay, so it is faster.

I gave up using HPFS years ago, because the HPFS cache goes into lower shared memory, which is already in very short supply. JFS does have other significant advantages over HPFS.


--- Quote ---The apps are loading consistently faster, on multiple attempts.
--- End quote ---

A lot of that "improvement" depends on timing. If the program parts have not been flushed from the cache, the load time is faster (even when using a small cache). Using a larger cache does increase the probability that most of it will still be there, but it depends on what you do between starts.


--- Quote ---Obviously not on the first attempt, but openning OO the 2nd and 3rd and 4th time is faster.
--- End quote ---

If you are using the AOO QuickStart feature, that may be what you are seeing. From what I see, QuickStart keeps parts of the program available in memory after you close it, but it looks to me like the first program load still needs to load everything. QuickStart does work around DLL unload problems when using upper shared memory, without the kernel fixes that ArcaOS has.

FWIW, I have turned off all Lazy Write settings (FAT32, and JFS). That doesn't seem to affect system response much, but it does seem to make the system less likely to hang, especially when writing large files to USB.

Dariusz Piatkowski:
Hi Doug,


--- Quote from: Doug Bissett on October 21, 2021, 07:16:45 pm ---...

--- Quote ---The apps are loading consistently faster, on multiple attempts.
--- End quote ---

A lot of that "improvement" depends on timing. If the program parts have not been flushed from the cache, the load time is faster (even when using a small cache). Using a larger cache does increase the probability that most of it will still be there, but it depends on what you do between starts...
--- End quote ---

So this point here is actually the crux of what I was trying to improve upon.

Consider a set of applications that you normally run, in my case that normally is a mix of:

1) FF
2) Thunderbird
3) Lotus 1-2-3
4) AOO
5) VSE & IBMCPP & GCC (compilation)
6) PMView

Therefore, increasing the size of the cache (if the upper memory is rarely used otherwise) allows me to pull more of that working set into the cache, and hopefully convert that into a quicker-responding system.

In my case, that appears to have been the result of tripling the size of the cache, with the matching increase in the buffer settings.

Of course that does not mean ALL of these apps and their data (heck, FF has a 512M disk cache) are always there; the cache will of course always continue to be shuffled around as needed.

roberto:
Hello
I've been doing cache testing lately, with interesting results. But the difference from you is that I am testing with this:
rem DISKCACHE=D,LW
DISKCACHE=4096000,LW

swappath=c:\ 0 4096


Regards

Doug Bissett:

--- Quote from: roberto on October 28, 2021, 06:55:06 pm ---Hello
I've been doing cache testing lately, with interesting results. But the difference from you is that I am testing with this:
rem DISKCACHE=D,LW
DISKCACHE=4096000,LW

swappath=c:\ 0 4096


Regards

--- End quote ---

Uhmm. That is totally wrong. From Help DISKCACHE:

--- Code: ---DISKCACHE Command: n Parameter

Specifies a number from 48 through 14400 that indicates the number of 1024-byte blocks (or 1KB blocks) of storage to be used for control information and programs in the disk cache buffer. The default value is d, which is set during installation and based upon the amount of system memory. You can reset this value to a numerical value.

To set your disk cache size to 128KB, type the following in the CONFIG.SYS file:

DISKCACHE=128

--- End code ---
The n parameter is where you have "4096000". So 14400 is the maximum. If your number works at all, it is only because it wraps to a smaller number, somewhere between 48 and 14400.

DISKCACHE is for FAT anyway, so it is probably not at all helpful to play with the default.
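If someone really does want to experiment with a bigger FAT cache, with Lazy Write, the most the documentation allows would be something like this (keeping the LW option, with 14400 being the documented maximum):

DISKCACHE=14400,LW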
