OS2 World Community Forum

OS/2, eCS & ArcaOS - Technical => Storage => Topic started by: Dariusz Piatkowski on October 21, 2021, 05:43:04 pm

Title: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on October 21, 2021, 05:43:04 pm
Alright, so for what it's worth, I had recently made some changes to the JFS cache sizes and thought I would share my results.

An 8Gig machine here; at bootup 4Gig is used for a RAM drive, and the remainder is seen by OS/2 as workable memory. Not all of it, but you know the standard limitations that apply.

OK, so I've had my VIRTUALADDRESSLIMIT=3072 set up in CONFIG.SYS for quite some time, and utilized a JFS cache of 256M. Seemed to work fine, no issues. Keep in mind the underlying disk is an SSD (Samsung 850 Evo).

Given the above hardware details and my application use/mix, I have consistently found that at least 1.5Gig of that upper memory would regularly go unused. It would simply sit there and, if anything, it was always the Shared Memory that would get exhausted first. So that got me thinking: cache is cache, and if the RAM is there, why not make better use of the darn thing?  ;)

If you look at what the other OSes are running, their cache sizes are typically much larger.

So off I went experimenting a little bit:

1) I bumped the cache up to 512M and set VIRTUALADDRESSLIMIT=2048, but I kept the lazy-write settings (the three /LW values being, as I understand them, sync time, max age, and buffer idle time, in seconds) and the MIN and MAX buffer settings the same as with the prior smaller cache, that being:

G:\OS2\CACHEJFS.EXE /LW:8,30,6 /MINBUFFER:4000 /MAXBUFFER:21000

Umm...nice, the system just seems faster...apps respond quicker. Obviously not on the first attempt, but opening OO the 2nd and 3rd and 4th time is faster.

2) Now, since I freed up some upper memory due to the VIRTUALADDRESSLIMIT change, why not go bigger (or go home, as the saying goes...LOL)? So I bumped the cache up to 768M.

Following this change I started to see a weird 'stagger' in the system...I went back to some published presentation notes on JFS and realized that I was saturating the cache and pretty much exhausting the free buffer space, so I implemented the following:

G:\OS2\CACHEJFS.EXE /LW:8,30,6 /MINBUFFER:16000 /MAXBUFFER:84000

...basically, I increased my MIN and MAX buffer sizes by a factor of 4 (4000 -> 16000 and 21000 -> 84000).

Those system 'staggers' I mentioned above appear to be gone now. The apps are loading consistently faster, on multiple attempts.

All in all, tripling the JFS cache has produced a clearly positive result.

Of course not everyone will be able to max out their JFS cache this way, and even if you have the room, your apps may limit how much of that upper memory can be utilized for the cache itself.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on October 21, 2021, 07:16:45 pm
Quote
OK, so I've had my VIRTUALADDRESSLIMIT=3072 set up in CONFIG.SYS for quite some time, and utilized a JFS cache of 256M. Seemed to work fine, no issues. Keep in mind the underlying disk is an SSD (Samsung 850 Evo).

Personally, I always use VAL=2560, which seems to be a sweet spot, for my usage. At least two of my systems get very unstable if I use more than 2900 (and don't run very long, if I go to 3000). I do monitor the available lower, and upper, private, and shared, memory (using Above512, and a logging script). I find that the lowest I ever get with upper shared memory, is somewhere in the 800K range. Lower shared memory does get down to the 35K range (when I run VBox), but most of the time, it is above 100K. When it does get below 100K, I try to prevent further programs from running, because it sometimes does go to zero, with bad results (it seems that some programs never check to see if they got lower shared memory, before they try to use it, and then they can't recover). Of course, I do use HIGHMEM to set as much as possible to use high shared memory. I find that it is necessary to run HIGHMEM again, after some updates. It doesn't always find something that got changed, but sometimes it does.
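
A minimal REXX sketch of the kind of logging loop Doug mentions (hypothetical, not his actual script; it assumes Above512.exe is on the PATH and simply appends its raw output with a timestamp, so it makes no assumptions about Above512's output format):

Code: [Select]
/* memlog.cmd - hypothetical memory-logging sketch */
call RxFuncAdd 'SysLoadFuncs', 'RexxUtil', 'SysLoadFuncs'
call SysLoadFuncs
logfile = 'C:\memlog.txt'
do forever
   /* timestamp each sample, then append Above512's raw output */
   call lineout logfile, '---' date('S') time()
   call stream logfile, 'c', 'close'   /* close so the redirection below can append */
   'above512 >>' logfile
   call SysSleep 600                   /* sample every 10 minutes */
end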

For those who don't know, VAL has absolutely nothing to do with available real memory. I use VAL=2560 on my antique ThinkPad, which only has 256 MB of real memory. What it seems to do is increase the amount of upper shared memory that is available.

I find it interesting, that private memory (upper, or lower) rarely changes value. Almost everything allocates shared memory, but that could be because Above512 is lying to me.

Quote
So off I went experimenting a little bit:

Been there, done some of it. I usually use a JFS cache of 132M (on machines with more than 1 GB of real memory), which seems to be a bit of a sweet spot (64M, or 10% of real memory, is the default). Using more tended to end up not helping much (it has been a few years since I tried that), and I found that the systems became somewhat less stable. Usually the problem was an unexplained hang, although I seem to be able to reproduce a similar problem by writing huge (20 GB) files to USB devices (JFS), while using the default cache size.

I never played with buffer sizes, and cache size doesn't seem to matter whether it is a spinning disk or an SSD. An SSD, of course, has no seek or spin delay, so it is faster.

I gave up using HPFS, years ago, because the HPFS cache goes into lower shared memory, which is already in very short supply. JFS does have other, significant, advantages over HPFS.

Quote
The apps are loading consistently faster, on multiple attempts.

A lot of that "improvement" depends on timing. If the program parts have not been flushed from the cache, the load time is faster (even when using a small cache). Using a larger cache does increase the probability that most of it will still be there, but it depends on what you do between starts.

Quote
Obviously not on the first attempt, but opening OO the 2nd and 3rd and 4th time is faster.

If you are using the AOO QuickStart feature, that may be what you are seeing. From what I see, QuickStart keeps parts of the program available in memory, after you close it, but it looks, to me, like the first program load still needs to load everything. QuickStart does work around DLL unload problems when using upper shared memory, without the kernel fixes that ArcaOS has.

FWIW, I have turned off all Lazy Write settings (FAT32, and JFS). That doesn't seem to affect system response much, but it does seem to make the system less likely to hang, especially when writing large files to USB.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on October 21, 2021, 09:58:16 pm
Hi Doug,

...
Quote
The apps are loading consistently faster, on multiple attempts.

A lot of that "improvement" depends on timing. If the program parts have not been flushed from the cache, the load time is faster (even when using a small cache). Using a larger cache does increase the probability that most of it will still be there, but it depends on what you do between starts...

So this point here is actually the crux of what I was trying to improve upon.

Consider a set of applications that you normally run, in my case that normally is a mix of:

1) FF
2) Thunderbird
3) Lotus 1-2-3
4) AOO
5) VSE & IBMCPP & GCC (compilation)
6) PMView

Therefore, increasing the size of the cache (if the upper memory is rarely used otherwise) allows me to pull more of that working set into the cache, and hopefully convert that into a quicker-responding system.

In my case, that appears to have been the result of tripling the size of the cache, with the matching increase in the buffer settings.

Of course that does not mean ALL of these apps and their data (heck, FF has a 512M disk cache) are always there; the cache will of course always continue to be shuffled around as needed.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: roberto on October 28, 2021, 06:55:06 pm
Hello
I have been doing cache testing lately, with interesting results. But the difference from you is that I am testing with this:
rem DISKCACHE=D,LW
DISKCACHE=4096000,LW

swappath=c:\ 0 4096


Regards
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on October 28, 2021, 08:24:45 pm
Hello
I have been doing cache testing lately, with interesting results. But the difference from you is that I am testing with this:
rem DISKCACHE=D,LW
DISKCACHE=4096000,LW

swappath=c:\ 0 4096


Regards

Uhmm. That is totally wrong. From Help DISKCACHE:
Code: [Select]
DISKCACHE Command: n Parameter

Specifies a number from 48 through 14400 that indicates the number of 1024-byte blocks (or 1KB blocks) of storage to be used for control information and programs in the disk cache buffer. The default value is d, which is set during installation and based upon the amount of system memory. You can reset this value to a numerical value.

To set your disk cache size to 128KB, type the following in the CONFIG.SYS file:

DISKCACHE=128
The n parameter is what you say is "4096000". So, 14400 is the maximum. If your number works at all, it is only because it wraps to a smaller number, somewhere between 48 and 14400.

DISKCACHE is for FAT anyway, so it is probably not, at all, helpful to play with the default.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: roberto on October 28, 2021, 09:25:14 pm
You are right about everything, but my question is: have you tried it?
As you said, it affects FAT.
I have tried to copy a folder over USB 2.0 to FAT32 and it took me 24 minutes at 1120 bps (FC 2.40), and to a USB 3.0 device with the same card it takes 33 minutes at 780 bps.

Regards
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on October 29, 2021, 02:44:05 am
FAT is not FAT32. They are different drivers, with different parameters, and different caches. Using an incorrect cache size probably just uses the default size, but it could also just drop the higher bits, and use whatever is left over. If that turns out to be something usable, it will likely work.

No, I haven't tried it. I no longer use FAT, except for the ArcaOS USB stick installer. Working with USB is affected by far too many outside factors to even consider doing a copy to test speed. Do it 1,000 times, after rebooting each time, with consistent results, and I may find a reason to try it.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Roderick Klein on October 29, 2021, 01:06:28 pm
Hi Doug,

...
Quote
The apps are loading consistently faster, on multiple attempts.

A lot of that "improvement" depends on timing. If the program parts have not been flushed from the cache, the load time is faster (even when using a small cache). Using a larger cache does increase the probability that most of it will still be there, but it depends on what you do between starts...

So this point here is actually the crux of what I was trying to improve upon.

Consider a set of applications that you normally run, in my case that normally is a mix of:

1) FF
2) Thunderbird
3) Lotus 1-2-3
4) AOO
5) VSE & IBMCPP & GCC (compilation)
6) PMView

Therefore, increasing the size of the cache (if the upper memory is rarely used otherwise) allows me to pull more of that working set into the cache, and hopefully convert that into a quicker-responding system.

In my case, that appears to have been the result of tripling the size of the cache, with the matching increase in the buffer settings.

Of course that does not mean ALL of these apps and their data (heck, FF has a 512M disk cache) are always there; the cache will of course always continue to be shuffled around as needed.

Why is standard Sysbench not sufficient? It's much more consistent. It tests small-file I/O and large-file I/O. Also, it measures the timing very accurately!
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on October 29, 2021, 03:34:13 pm
Hi Roderick,

Why is standard Sysbench not sufficient? It's much more consistent. It tests small-file I/O and large-file I/O. Also, it measures the timing very accurately!

...because SysBench and diskio measure the raw speed of the hardware and mostly bypass the cache, I believe (there is a single cache/bus transfer test in both).

So the attempt here is to optimize the 'runtime' environment by balancing the raw speed of the storage device (an SSD in my case) against the system resources (4Gig of RAM), in order to avoid constant storage-device seeks for the pertinent data (increasing the cache should allow the most recently used stuff to be available much more quickly).

In other words: it's not about sheer performance, rather it's about how all the various controls available to us can produce better application performance.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: roberto on October 30, 2021, 05:14:51 pm
Do it 1,000 times, after rebooting each time, with consistent results, and I may find a reason to try it.
You will have to wait about six months for me to try it a thousand times, but for everyone else I leave you a file with instructions on how you can make it consistent on each boot.

-Dariusz, I agree 100% with your comments about Sysbench...

Regards
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on October 30, 2021, 06:24:24 pm
Hi Roderick,

Why is standard Sysbench not sufficient? It's much more consistent. It tests small-file I/O and large-file I/O. Also, it measures the timing very accurately!

...because SysBench and diskio measure the raw speed of the hardware and mostly bypass the cache, I believe (there is a single cache/bus transfer test in both).

So the attempt here is to optimize the 'runtime' environment by balancing the raw speed of the storage device (an SSD in my case) against the system resources (4Gig of RAM), in order to avoid constant storage-device seeks for the pertinent data (increasing the cache should allow the most recently used stuff to be available much more quickly).

In other words: it's not about sheer performance, rather it's about how all the various controls available to us can produce better application performance.

The file I/O speeds should show the file system's speed, as well as making it obvious when there are increased cache misses, at least if they test large enough files.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on October 30, 2021, 07:08:14 pm
Quote
In other words: it's not about sheer performance, rather it's about how all the various controls available to us can produce better application performance.

There are a lot of things, that can affect application performance. Most of them make so little difference that they are not possible to measure. For instance, it takes less time to repaint the desktop, if it is a single, solid, color, than if it is a complicated pattern (like a photo). A small picture paints faster than a large picture. A large screen resolution takes more time, than a small screen resolution. Two screens take a lot longer than one screen. This becomes much more obvious, if you are using a program, like VNC,  to operate another computer over the internet.

You can also position data on your disk, so it is easier to get to it (more effective on a spinning disk, than a SSD). The outer edge of the disk spins faster than the inner edge (modern disks take advantage of that, so it doesn't make as much difference). Once you get there, the data transfer is a little faster (although modern disks also use internal cache, so it probably doesn't make any difference, as long as it is enabled).  File fragmentation also enters this equation.

If you use a RAMDISK, it is faster to read/write directly, than to need two steps to write to cache, then to the program. This may also apply to devices like NVME drives.

Various formats operate at different speeds. FAT is probably the fastest. FAT32 is likely the slowest (not counting optical devices). Enabling Lazy Write (all formats) often slows things down, especially when the cache fills up. Of course, each format has its own uses.

Eliminating background processes can speed things up, until you need one of them.

There are many more things that can affect application performance. One, that most people forget about, is that it takes a lot longer to run a program, when something crashes, and you need to restart the program, or, worse, reboot the computer. Overall stability is one of the most important performance considerations. I always found that making the cache size too big, contributes to instability.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on October 31, 2021, 01:56:43 am
Various formats operate at different speeds. FAT is probably the fastest. FAT32 is likely the slowest (not counting optical devices). Enabling Lazy Write (all formats) often slows things down, especially when the cache fills up. Of course, each format has its own uses.

Err, FAT is often slow, and in theory FAT32 should be about the same speed, as they both use similar structures. My phone is blazing fast at writing and reading a FAT32 stick, whereas OS/2 is amazingly slow. The big problems with FAT and FAT32 are fragmentation, and the fact that the FAT and directory structure are at the beginning of the disk.
OS/2 is faster than DOS and Windows mostly due to HPFS (and now JFS), which lay out the disk structure in a much better way: both use extents, so files are stored as one or two contiguous groups of sectors (assuming little fragmentation), and directories sit in the middle of the disk, arranged as a B-tree, which at least HPFS also optimizes in the background. And even the HPFS cache was much better than the FAT cache.
In comparisons of Linux file systems, JFS scores pretty well as an all-around file system. Of course, Linux (and even Windows now) has dynamic caches, so whatever memory you are not using is used as cache.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 01, 2021, 08:25:47 pm
Quote
so I bumped the cache up to 768M

Since I haven't tried this for a long time, I decided to give it a shot. How did you get it to take 768M? (I assume that you used 768000 (KB) as the cache size.) When I try that, CACHEJFS shows me:
Cache Size:  131072 kbytes
which I believe is the allowed max now.

Quote
/MINBUFFER:16000 /MAXBUFFER:84000

Did you find a description of what these actually do? I have run out of places to look, and can't find anything.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Andreas Schnellbacher on November 01, 2021, 08:50:38 pm
Yes, 'M' stands for mega, 'k' for kilo, but 'm' for milli. See https://en.wikipedia.org/wiki/Unit_prefix (https://en.wikipedia.org/wiki/Unit_prefix).
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 22, 2021, 04:56:54 pm
So I thought I would provide a bit of an update on my experimenting with the JFS cache settings.

I have been using the OpenJFS utilities to gather the internal JFS metrics (ftp://ftp.netlabs.org/pub/snapshots/openjfs/).

Based on various on-line references I modified an existing REXX script (which in its on-line form did not actually work) to do real-time logging of the 'cstats' output to a CSV file.
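
The gist of it is something like this minimal sketch (illustrative only, not the actual script; it assumes cstats.exe is on the PATH and prints name/value pairs two to a line, as in the listing further down, and writes one CSV row per sample):

Code: [Select]
/* cstatlog.cmd - hypothetical cstats-to-CSV logging sketch */
call RxFuncAdd 'SysLoadFuncs', 'RexxUtil', 'SysLoadFuncs'
call SysLoadFuncs
csv = 'cstats.csv'
do forever
   row = date('S') time()
   'cstats | rxqueue'              /* capture the utility's output on the REXX queue */
   do while queued() > 0
      parse pull line
      parse var line . v1 . v2 .   /* words 2 and 4 are the two values on each line */
      row = row','v1','v2
   end
   call lineout csv, row
   call stream csv, 'c', 'close'
   call SysSleep 300               /* sample every 5 minutes */
end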

Anyways, through multiple cycles I have been trying to understand how the various parameters affect what JFS does on my system (of course, mine is an end-user type of usage, so very different from, let's say, a server - web or file, for that matter).

Regardless, I am now running a 1G JFS cache, and in the current iteration of parameters I have to say the responsiveness of my system is a "day & night" difference from where I started. In particular, the latest move from a 768M cache to a 1G cache introduced a significant improvement that I'm not sure I can quite explain.

Specifically, using the cstats utility the 1G cache shows the following:

Code: [Select]
cachesize    262141   cbufs_protected       58960
hashsize     131072   cbufs_probationary    30920
nfreecbufs    75323   cbufs_inuse               0
minfree        6000   cbufs_io                  0
maxfree       60000   jbufs_protected       96286
numiolru          0   jbufs_probationary      642
slrun        155246   jbufs_inuse               0
slruN        174760   jbufs_io                  0
Other            10   jbufs_nohomeok            0

Meanwhile, the os2stats utility shows this:

Code: [Select]
NCache: lookup: 262141
        hit: 131072
        miss: 75323
        enter: 6000
        delete: 60000
        name2long: 0
JCache: reclaim: 262141
        read: 131072
        recycle: 75323
        lazywrite.awrite: 6000
        recycle.awrite: 60000
        logsync.awrite: 0
        write: 155246
LCache: commit: 262141
        page.init: 131072
        page.done: 75323
        sync: 6000
        maxbufcnt: 60000
ICache: n.inode: 262141
        reclaim: 131072
        recycle: 75323
        release: 6000

The cachesize field implies that the actual data cache is only 256M, which is what I'm struggling with. Keep in mind, I came from the HPFS386 world where, seemingly at least, it was pretty clear how big your cache pool was and what your cache hits/misses were. JFS, being a journaling FS, also deals with metadata, so it's not quite as clear-cut as having a single cache for content data; at least that's my current understanding.

In fact I suspect my interpretation of what these fields mean may not be correct. The '262141' may not be a k-byte count; instead it may be a metadata unit count, I think? LOL

I am therefore curious if anyone else understands this better?

My next step is to look at the sources to see if I can decipher this.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 22, 2021, 05:15:23 pm
Hi Doug,

Quote
so I bumped the cache up to 768M

Since I haven't tried this for a long time, I decided to give it a shot. How did you get it to take 768M? (I assume that you used 768000 (KB) as the cache size.) When I try that, CACHEJFS shows me:
Cache Size:  131072 kbytes
which I believe is the allowed max now.
...

So my JFS cache parameters are all set in CONFIG.SYS with:

IFS=G:\OS2\JFS.IFS /CACHE:1048567 /LW:8,30,6 /AUTOCHECK:*
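
(A quick way to confirm what actually took effect after a reboot is to run CACHEJFS.EXE with no parameters; as Doug's output above shows, it reports the active cache size, so a silent fall-back to the default is easy to spot:)

Code: [Select]
REM Verify the active JFS cache settings after booting:
G:\OS2\CACHEJFS.EXE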

The absolute prerequisite to increasing the JFS cache size was freeing up the high memory area (see the tail end of the 'QT5 browser' thread discussion, where OS4User provides a pretty good explanation of how OS/2 allocates memory => https://www.os2world.com/forum/index.php/topic,2627.msg32933.html#new (https://www.os2world.com/forum/index.php/topic,2627.msg32933.html#new)).

Anyways, previously I had found my system to run best with VAL=3072. However, that meant my JFS cache would only go to about 64M; any attempt at a bigger value would produce the "out of memory" boot message, with a default cache size being substituted.

Well, I was getting a little annoyed at seeing a quite sizeable NTFS cache allocation on my WIN7 boxes and started to dig into why our OS/2 JFS cache size was seemingly so limited. As I was reading up on this, it dawned on me that setting the VAL lower should allow me to free up 'system' memory, which would then become available to various device drivers, and JFS is certainly one of these.

I am now running with VAL=2048, and given that I have an 8G box with 4G allocated to the RAMDISK and 3.2G recognized by OS/2 as "accessible memory", that allowed me to allocate more memory to the FS cache, where I think the most "bang for the buck" exists today (given our platform and its limitations).

...
Quote
/MINBUFFER:16000 /MAXBUFFER:84000

Did you find a description of what these actually do? I have run out of places to look, and can't find anything.

Yes and no. There were some Warpstock presentations that touched on this. See "Dynamically Tuning the JFS Cache for Your Job" by Sjoerd Visser from 2009. P22 of that deck starts getting into the details of the JFS cache design, which is really about the logic of how the different buffers are handled, and the differences between actual data and metadata.

Bottom line in all this is: tune the settings to your system usage patterns.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 22, 2021, 08:56:42 pm
Quote
The cachesize field implies that the actual data cache is only 256M, which is what I'm struggling with.

That would appear to be the number of 4K buffers (but that is only a guess). If true, it matches your defined cache size: 262141 x 4K = 1048564K, almost exactly the 1048567 you set.

Quote from: Doug Bissett on November 01, 2021, 08:25:47 pm
Quote
/MINBUFFER:16000 /MAXBUFFER:84000

Did you find a description of what these actually do? I have run out of places to look, and can't find anything.

Yes and no. There were some Warpstock presentations that touched on this. See "Dynamically Tuning the JFS Cache for Your Job" by Sjoerd Visser from 2009. P22 of that deck starts getting into the details of the JFS cache design, which is really about the logic of how the different buffers are handled, and the differences between actual data and metadata.

Interesting. It seems that they are all numbers referring to 4K data blocks.

Quote
Anyways, previously I had found my system to run best with VAL=3072. However, that meant my JFS cache would only go to about 64M; any attempt at a bigger value would produce the "out of memory" boot message, with a default cache size being substituted.

This doesn't make any sense. It implies that the cache, over 64M, goes into unreserved upper shared memory space (above what VAL reserves). Could be possible, I suppose, and it might explain some of the weird crashes that I see when I try to use larger values for VAL.

I think we can safely assume that it does not use (or even know about) PAE memory, so any memory above about 3.5G is likely out of the picture, although it is quite likely that it could use PAE memory, if somebody programmed it to use it.

I use VAL=2560, and going larger causes instability in my system (don't know why). The biggest JFS cache, that I can use, seems to be 132M, no matter what larger value I set in the IFS startup line. However, I just tried setting it to 256M, in a new install that defaults VAL to 1536 (way too small for actual use), and it did take it. The Sentinel memory watcher (XCenter widget) appears to show that the memory was allocated (from somewhere). I tried 512M (then 384M, then your number 1048567), VAL is still 1536, but now cachejfs shows only 132M, and Sentinel seems to confirm that. I never see an "out of memory" boot message.

So, it seems that 256M is the largest value that doesn't default to 132M, for me (that is probably a bug, I expect that 132M is the maximum acceptable). I don't see any indication of where the cache memory is allocated (private low, private high, shared low, or shared high).

Since I seem to be able to use 8 times the default cache size, I would think that changing MAX and MIN to 8 times their default value, would make sense, but that is only a guess. I need to do more reading. Thanks for the reference.

Which version of JFS are you using? The one that I am using is v1.9.9 from AN.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 22, 2021, 10:31:07 pm
Like Doug, I am testing on a new install with VAL set at 1536. Using the IFS=M:\OS2\JFS.IFS /CACHE:1048567 /LW:8,30,6 /AUTOCHECK:* setting in my CONFIG.SYS failed, leaving a small cache. I then tried a 256M cache and that worked.
I then thought of using Theseus to check the kernel's system object summary, which showed that I had just over 500M of free system memory, so I jacked up the cache size to 700M and that took. Looking at the free system memory, I now have 67.172M free, with the largest block being 44.871M; I now have 810.458M of system memory committed, 2492.579M allocated.
This raises the question of how Dariusz is getting so much system memory with VAL set at 2048 that he can use a GB for cache.
Using Theseus: System-->Kernel Information-->System Object Summary. At the bottom are the totals.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 23, 2021, 06:58:16 am
The system seems to have ended up unstable after that change, with both SM and TB crashing frequently. That leaves me wondering if the kernel commits more memory at times. A different partition with VAL set to 3072 gives me 153M free system memory, 123M largest block.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: OS4User on November 23, 2021, 07:34:20 am
The system seems to have ended up unstable after that change, with both SM and TB crashing frequently. That leaves me wondering if the kernel commits more memory at times. A different partition with VAL set to 3072 gives me 153M free system memory, 123M largest block.

I have  107M free system memory, 92M largest block.  FF is quite stable.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 23, 2021, 07:40:58 am
The system seems to have ended up unstable after that change, with both SM and TB crashing frequently. That leaves me wondering if the kernel commits more memory at times. A different partition with VAL set to 3072 gives me 153M free system memory, 123M largest block.

I have  107M free system memory, 92M largest block.  FF is quite stable.

This was the latest beta with the updated NSPR and NSS so that may be part of it.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 24, 2021, 04:05:52 am
Quote
This was the latest beta with the updated NSPR and NSS so that may be part of it.

Yeah. That version is not doing well. This is a typical ExceptQ report:
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 24, 2021, 05:04:39 am
Yeah, I started to reply and the same thing happened. I am now using a build with in-tree NSPR4 and NSS, the same versions as the latest YUM versions; they seemed stable when I built them as part of the Mozilla build previously. Same code, possibly different optimizations, and now the newer GCC.
I guess I'll have to upload new builds if this stays stable.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 24, 2021, 04:01:31 pm
Doug, Dave, everyone....

Quote
Anyways, previously I had found my system to run best with VAL=3072. However, that meant my JFS cache would only go to about 64M; any attempt at a bigger value would produce the "out of memory" boot message, with a default cache size being substituted.

This doesn't make any sense. It implies that the cache, over 64M, goes into unreserved upper shared memory space (above what VAL reserves). Could be possible, I suppose, and it might explain some of the weird crashes that I see when I try to use larger values for VAL.

This is precisely how it seems to work on my machine. The memory locations above VAL become available for SYSTEM use, and that appears to be what the JFS cache is using.

...I use VAL=2560, and going larger causes instability in my system (don't know why). The biggest JFS cache, that I can use, seems to be 132M, no matter what larger value I set in the IFS startup line. However, I just tried setting it to 256M, in a new install that defaults VAL to 1536 (way too small for actual use), and it did take it. The Sentinel memory watcher (XCenter widget) appears to show that the memory was allocated (from somewhere). I tried 512M (then 384M, then your number 1048567), VAL is still 1536, but now cachejfs shows only 132M, and Sentinel seems to confirm that. I never see an "out of memory" boot message.

So, it seems that 256M is the largest value that doesn't default to 132M, for me (that is probably a bug, I expect that 132M is the maximum acceptable). I don't see any indication of where the cache memory is allocated (private low, private high, shared low, or shared high).

So here is what I think has a very direct impact on what you are seeing. Given your last response, which shows the EXCEPTQ report, I find the following:

Code: [Select]
Hostname:         IREBBS7
 OS2/eCS Version:  2.45
 # of Processors:  2
 Physical Memory:  2793 mb
 Virt Addr Limit:  2560 mb

...however the matching result I see in any of my EXCEPTQ reports are:

Code: [Select]
Hostname:         NEUROBOX
 OS2/eCS Version:  2.45
 # of Processors:  6
 Physical Memory:  3199 mb
 Virt Addr Limit:  2048 mb

Notice the difference in the amount of 'Physical Memory' being reported?

My box shows about 400M more; this is without a doubt allowing me to run a larger cache (in combination with the appropriate VAL setting).

Something on my part that helps me get that is limiting the amount of video memory mapping that SNAP does. I have that set to 24M, since that is all I need to support my two panels, each running at 1920x1200 resolution @ 24-bit colour.

Which version of JFS are you using? The one that I am using is v1.9.9 from AN.

Yup, same here, latest AN version.

I will get a better summary of all this following my next re-boot, which will give a clean-slate snapshot. Right now I'm on day 3 of my JFS cache stats-gathering cycle, and if there is anything I can learn from that, I'll happily toss a deck together and share it with you guys.

That will include the Theseus values that Dave mentioned as I happen to actually track my memory consumption using Theseus.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 24, 2021, 04:20:53 pm
OTOH, here, where I could only get 700MB of cache:
Code: [Select]
Hostname:         4C4C454
OS2/eCS Version:  2.45
# of Processors:  4
Physical Memory:  3240 mb
Virt Addr Limit:  1536 mb

Even more memory accessible to the system.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 24, 2021, 07:24:04 pm
Quote
Physical Memory:  2793 mb

Yeah. That seems to be common on newer machines (I have heard of one, that only leaves about 1 GB for the user). They fill up memory with stuff, and that leaves less room for the user. I don't think that has anything to do with what we are talking about though (I could be wrong). I should check to see what is left in UEFI mode.

In any case, it seems that all of this is very machine dependent, and the results of making changes can vary widely. The main problem is to determine if it is actually worth the effort.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 24, 2021, 07:37:56 pm
Dave,

OTOH, here, where I could only get 700MB of cache:
Code: [Select]
Hostname:         4C4C454
OS2/eCS Version:  2.45
# of Processors:  4
Physical Memory:  3240 mb
Virt Addr Limit:  1536 mb

Even more memory accessible to the system.

Hmm...good point, which makes me think you are using up that upper memory with other device drivers.

For what it's worth, here is my Theseus=>System=>Nonswappable Memory Analysis output:

Code: [Select]
Nonswappable Memory analysis:
Apps & DLLs      = 00024000 ->     144K -> 0.141M
Process overhead = 004F2000 ->    5064K -> 4.945M
DD allocated     = 47799000 -> 1171044K -> 1143.598M
DOS              = 0001E000 ->     120K -> 0.117M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00064000 ->     400K -> 0.391M
Kernel code      = 000B3000 ->     716K -> 0.699M
Kernel data      = 017F9000 ->   24548K -> 23.973M
Kernel heap      = 00497000 ->    4700K -> 4.590M

Total            = 49A74000 -> 1206736K -> 1178.453M

That massive 1143.598M number in the 'DD allocated' field is primarily the result of my 1G JFS cache...sure, other things are in there, but that's the big boy amongst them.

I'm curious what you guys see for your running systems?
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 24, 2021, 09:50:54 pm
Quote
I'm curious what you guys see for your running systems?

This is from my main system, with cache:132000:
Code: [Select]
Nonswappable Memory analysis:
Apps & DLLs      = 0003C000 ->     240K -> 0.234M
Process overhead = 002F1000 ->    3012K -> 2.941M
DD allocated     = 0D453000 ->  217420K -> 212.324M
DOS              = 0001B000 ->     108K -> 0.105M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00051000 ->     324K -> 0.316M
Kernel code      = 000B1000 ->     708K -> 0.691M
Kernel data      = 01166000 ->   17816K -> 17.398M
Kernel heap      = 00190000 ->    1600K -> 1.563M

Total            = 0EB93000 ->  241228K -> 235.574M

I am a bit puzzled about the DOS entry. DOS/WINOS2 is not installed on this system.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 25, 2021, 01:21:49 am
@Doug, it was a UEFI install I was testing on. Within 1 MB of the same as this MBR system. The DOS thing is a VDM that runs really early in the boot; I can't remember its purpose right now.

@Dariusz, that was a new install of the latest 5.1 beta, so only the stock device drivers, including the RAM disk driver. Panorama; I'll have to compare to a SNAP system later.
Here's my nonswappable memory on my regular system.
Code: [Select]
Nonswappable Memory analysis:
Apps & DLLs      = 0002B000 ->     172K -> 0.168M
Process overhead = 001EC000 ->    1968K -> 1.922M
DD allocated     = 083F2000 ->  135112K -> 131.945M
DOS              = 0001D000 ->     116K -> 0.113M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00051000 ->     324K -> 0.316M
Kernel code      = 000B1000 ->     708K -> 0.691M
Kernel data      = 014D0000 ->   21312K -> 20.813M
Kernel heap      = 003F8000 ->    4064K -> 3.969M

Total            = 09FF0000 ->  163776K -> 159.938M
< End of THESEUS4 (v 4.001.00) output @ 16:18:23 on 24/11/2021 >
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 25, 2021, 05:54:06 pm
Alright guys...I've got DATA!!!

Well, data is good, but visuals make it a tad easier to understand.

OK, so take a look at about 4 days of runtime stats. This is the capture of cstats that I mentioned earlier, and what I'm focusing on is the relationship between the metrics.

The thing to note is the nightly spikes...these are my RSYNC runs, which copy my data to the NAS and also do a full-disk RSYNC of the OS/2 partition to a local backup partition.

So I thought it would be interesting to deep-dive some more into these because, if anything, they are going to more directly show the cache behaviour, given that an RSYNC copy requires a full disk scan, and therefore one would expect the metadata to be heavily hit in the JFS cache.

This is shown in the 2nd capture, where I basically narrowed things down to just these nightly spikes. For that reason, ignore the 'plunge' you see in between the runs; that's just a place marker that I used to indicate the manual cut-off between each day.

I have gathered 7 different logs so far since I started experimenting with the larger JFS cache settings, each spanning about 3-4 days. The earlier logs have different JFS cache sizes along with different parameters.

Having said that, I honestly will tell you the following (keep in mind, this is just based on a single 4-day 1G cache cycle):

1) FF is just flat-out faster; pages render more quickly, which tells me that FF's cache (set to 350M max) persists in the JFS cache for longer
2) the faster application load is present for all other apps that I regularly use: Thunderbird, PMMail, Lotus 1-2-3, OpenOffice, Lucide, PMView, VSE and a few others

Now here is the kicker, which I did not expect to see: my shared memory over these past 4 days never dropped below 55M. This is VERY different from my prior experience, where routinely after 3-4 days of normal system use (which typically includes daily FF re-starts - something I do once I see FF get to about 1G memory consumption as shown in Theseus) I basically exhaust all available shared memory, and even shutting down all apps will not bring things back (because of the well-known segmentation problem).

Sooo....I obviously need to re-do this cycle multiple times and attempt to better understand the cstats information.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 25, 2021, 06:35:45 pm
Here are the Theseus metrics from a Clean Boot (so basically a re-boot with my default application/utility mix: UPS stuff, Xit, XWP, etc...none of the other major apps):

1) System => Nonswappable Memory

Code: [Select]
Nonswappable Memory analysis:
Apps & DLLs      = 00024000 ->     144K -> 0.141M
Process overhead = 0025B000 ->    2412K -> 2.355M
DD allocated     = 45615000 -> 1136724K -> 1110.082M
DOS              = 0001E000 ->     120K -> 0.117M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00030000 ->     192K -> 0.188M
Kernel code      = 000B3000 ->     716K -> 0.699M
Kernel data      = 01292000 ->   19016K -> 18.570M
Kernel heap      = 003B9000 ->    3812K -> 3.723M

Total            = 46FE0000 -> 1163136K -> 1135.875M

2) System => Free, Idle and Locked Memory

Code: [Select]
Free, Idle, and Locked Memory:
Free                RAM = 7A902000 bytes (2008072K) (1961.008M)
Idle                RAM = 0001A000 bytes (  104K) ( 0.102M).
        (Dirty idle RAM = 00015000 bytes (   84K) ( 0.082M)).
Long  Term Locked   RAM = 0005E000 bytes (  376K) ( 0.367M).
Short Term Locked   RAM = 00000000 bytes (    0K) ( 0.000M).
Short & Long Locked RAM = 00000000 bytes (    0K) ( 0.000M).

3) System => Kernel Information => System Object Summary

Code: [Select]
  Object Allocated Committed   Present   Swapped
 address    memory    memory    memory    memory  Description
          --------  --------  --------  --------
Totals:   619D9000  47B62000  47AD4000  00000000  (in bytes)
           1599332   1174920   1174352         0  (in Kbytes)
          1561.848  1147.383  1146.829     0.000  (in Mbytes)
Number of objects = 1024.

Analysis of 'Free' areas:
There are 398 free blocks which total 1E5E7000 (497564K or 485.902M)
The largest 10 free areas are:
 address      size
8FC10000  1D016000 (475224K or 464.086M)

4) System => General System => General System Information

Code: [Select]
General System Information:

OS/2 version        = 2.45, revision = 0.
Os2krnl build level = 14.203

SYSLEVEL.OS2 information
OS/2 Component ID   = 5639A6101
CSD GA level        = XR04503
CSD Previous level  = XR0C006

SYSLEVEL.FPK information
OS/2 Component ID   = 566933010
CSD Current level   = XR0C006
CSD Previous level  = XR0C006

Theseus4 Version    = 4.001.00
Machine information: Model = 252 (0xFC),
                     Submodel = 1 (0x01),
                     Revision = 0 (0x00),
                     ABIOS = 0 (0x00).
BIOS date = 10/31/12.
RAM available to OS/2 = C7F0B000 bytes (3199.043M).
It appears that all of it is being used as 'paging space' by OS/2.
  (This is the 'proper' usage of the memory.)

Following are the values from DosQuerySysInfo:
 1. QSV_MAX_PATH_LENGTH      = 260.
 2. QSV_MAX_TEXT_SESSIONS    = 16.
 3. QSV_MAX_PM_SESSIONS      = 16.
 4. QSV_MAX_VDM_SESSIONS     = 128.
 5. QSV_BOOT_DRIVE           = 7.
 6. QSV_DYN_PRI_VARIATION    = 1.
 7. QSV_MAX_WAIT             = 1.
 8. QSV_MIN_SLICE            = 32.
 9. QSV_MAX_SLICE            = 32.
10. QSV_PAGE_SIZE            = 4096.
11. QSV_VERSION_MAJOR        = 20.
12. QSV_VERSION_MINOR        = 45.
13. QSV_VERSION_REVISION     = 0.
14. QSV_MS_COUNT             = 263519 (0:04:23).
15. QSV_TIME_LOW             = 1637843077.
16. QSV_TIME_HIGH            = 0.
17. QSV_TOTPHYSMEM           = -940527616 (3275820K -> 3199.043M).
18. QSV_TOTRESMEM            = 1198608384 (1170516K -> 1143.082M).
19. QSV_TOTAVAILMEM          = 2082484224 (2033676K -> 1986.012M).
20. QSV_MAXPRMEM             = 370147328 (361472K -> 353.000M).
21. QSV_MAXSHMEM             = 313262080 (305920K -> 298.750M).
22. QSV_TIMER_INTERVAL       = 310.
23. QSV_MAX_COMP_LENGTH      = 255.
24. QSV_FOREGROUND_FS_SESSION = 36.
25. QSV_FOREGROUND_PROCESS    = 37.
26. QSV_NUMPROCESSORS        = 6.
27. QSV_MAXHPRMEM            = 1409286144 (1376256K -> 1344.000M).
28. QSV_MAXHSHMEM            = 1404301312 (1371388K -> 1339.246M).
29. QSV_MAXPROCESSES         = 128.
30. QSV_VIRTUALADDRESSLIMIT  = 80000000
System Anchor Segment (SAS) selector = 0070.
Size of PTDA                 = 0768 bytes.
Size of TCB                  = 0304 bytes.
Size of Alias        record  = 0008 bytes.
Size of Arena        record  = 0016 bytes.
Size of Object       record  = 0010 bytes.
Size of Context      record  = 0005 bytes.
Size of Page Frame   record  = 000C bytes.
Size of Virtual Page record  = 000A bytes.
Size of SFT entry            = 00A2 bytes.
har of System Arena Sentinel = 0004.
har of Shared Arena Sentinel = 0006.
har above 512m Shr Arena Sen = 0005.
har of Page Frame table      = 001C.
har of Virtual Page table    = 001F.
System Page Directory        @ FE48A000.
System Page Tables start     @ FE200000.
DLL code Page Tables start   @ FE1F8000.
Start address shared global  @ 1E000000.
Alias Record Table           @ FCB16020.

Let me know if there is anything else you want me to capture.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 25, 2021, 08:05:44 pm
For the Mozilla cache, I use the ramdisk; it's fairly easy to set up with SM. For Firefox you need to use about:config to create a preference, something like this (H: is my ramdisk): "browser.cache.disk.parent_directory;H:\mozilla\firefox"
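
(For reference, a hypothetical user.js line in the profile directory expressing the same thing as the about:config name;value string above:)

Code: [Select]
// user.js in the Firefox profile directory - hypothetical example;
// points the disk cache at a directory on the H: ramdisk
user_pref("browser.cache.disk.parent_directory", "H:\\mozilla\\firefox");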
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 26, 2021, 03:08:30 pm
Hi Dave!

For the Mozilla cache, I use the ramdisk; it's fairly easy to set up with SM. For Firefox you need to use about:config to create a preference, something like this (H: is my ramdisk): "browser.cache.disk.parent_directory;H:\mozilla\firefox"

Hmm, from my perspective I'm not sure what the longer-term performance gain would be from a FF ramdisk cache; after all, I would want the cache to persist in order to speed up the load of the bigger web page elements the next time they are requested. I suppose if you want to keep the profile clean (and smaller), and do not care to carry the cache over past a re-boot, that would do it.

HOWEVER

...for the Thunderbird client, that is absolutely what I'm going to take a look at. I see no point in keeping that cache persistent, since emails will always change, and I would think that causes the normal disk cache to just fill up with stuff that rarely ever gets re-used, with the exception of very few standard emails: bills from the same companies, etc.

I do not have SM installed here so I'm not sure what you mean by "fairly easy to set up", but would that still hold true for TB? If so, can you point me in the right direction before I attempt to create that FF-like preference?

In the meantime I'm off to research the TB cache settings a bit...

Thanks!
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 26, 2021, 04:54:18 pm
My ramdisk does persist over warm reboots (last computer, it even persisted over a couple of seconds of no power) and every time the browser crashes, which is too often lately, the whole cache gets invalidated.
As for Thunderbird, I'd think the cache is mostly used when displaying web pages, which it is quite capable of doing. I even have an add-on, ThunderBrowse, which exposes that web-browsing capability.
For SM, you can point the cache in the Preferences under Advanced-->Cache. I assume that for TB, you have to open Options-->Advanced-->Config Editor and create the same preference as for Firefox.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 26, 2021, 05:58:22 pm
Quote
"browser.cache.disk.parent_directory;H:\mozilla\firefox"

I tried that, with the appropriate changes. I really don't notice any difference, but I do have Firefox set to clear everything at shutdown anyway. My RAMDISK doesn't retain information over a reboot (my choice; I never tried it), but that shouldn't change anything. I didn't spend much time with it, but now, how do I remove that entry properly?

I upped my cache to 256000 on my main machine. It created the cache, but then Firefox wouldn't start, complaining that XUL is defective. I put it back to 132000 and it is working.

Then, I have been playing with the LW, MIN, and MAX buffers. That is possibly making some difference. I will know more on Monday morning, when I do my backups. There is an indication that the single-processor machine doesn't like it much, so that will go back to what it was (the LW part, anyway).
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 26, 2021, 07:04:56 pm
My ramdisk does persist over warm reboots (last computer, it even persisted over a couple of seconds of no power) and every time the browser crashes, which is too often lately, the whole cache gets invalidated.

Ahh...got ya! My machine does not retain the ramdisk contents; I had originally tried to do what you are talking about because I rarely physically shut OFF the machine. Instead it's almost always a re-boot.

...As for Thunderbird, I'd think the cache is mostly used when displaying web pages, which it is quite capable of doing. I even have an add-on, ThunderBrowse, which exposes that web-browsing capability.
For SM, you can point the cache in the Preferences under Advanced-->Cache. I assume that for TB, you have to open Options-->Advanced-->Config Editor and create the same preference as for Firefox.

Well, it didn't take long for me to read up on the TB specifics. The FF parameter setting is exactly what TB works with. I initially shifted my TB cache to the ramdisk, but then noticed something; see further details below...

Now there is something that seems different, although maybe that's just because I previously did NOT pay attention to it: in TB each time I click on a different folder I now have the clock icon pop up for just a couple of seconds. I honestly do NOT remember seeing this before, and of course I have no idea if this is a result of moving the cache to the ramdisk.

So to test this out, I've actually gone back to a disk-based cache; I will give it a few days' worth of use and see if that's what's causing it.

EDIT
====
Each time the clock icon pops up I do see TCP/IP traffic...so TB is certainly fetching something. I haven't captured that and pumped it through Wireshark yet to see what it's actually doing, but I suspect it's going to the server to read something.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dariusz Piatkowski on November 26, 2021, 07:11:36 pm
Hi Doug,

...I upped my cache to 256000 on my main machine. It created the cache, but then Firefox wouldn't start, complaining that XUL is defective. I put it back to 132000 and it is working.

Then, I have been playing with the LW, MIN, and MAX buffers. That is possibly making some difference. I will know more on Monday morning, when I do my backups. There is an indication that the single-processor machine doesn't like it much, so that will go back to what it was (the LW part, anyway).

Is XUL marked to load high? Otherwise, the only follow-up is to see if it's been LXLITE-compressed (not sure if DLLs get processed by LXLITE; I think they do...?).

Re: LW, MIN, and MAX: same here. As I change these, I continue to log to see the impact they are having.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Dave Yeo on November 26, 2021, 08:09:46 pm
Is XUL marked to load high? Otherwise, the only follow-up is to see if it's been LXLITE-compressed (not sure if DLLs get processed by LXLITE; I think they do...?).

During 'make package', all the DLLs and the EXE get LXLITE'd. Raw xul.dll: 71,299,848 bytes.

@Doug,
Right-clicking on the preference and choosing Reset would probably work. Otherwise, with the browser closed, edit prefs.js and delete the line, after backing up prefs.js.
Title: Re: JFS cache sizing, and system "speed-up"
Post by: Doug Bissett on November 26, 2021, 11:10:59 pm
Quote
Is XUL marked to load high? Otherwise, the only follow-up is to see if it's been LXLITE-compressed (not sure if DLLs get processed by LXLITE; I think they do...?).

They are all marked for high code. LXLITE is whatever they were shipped as. I am going to stay with /cache:132000 on my main machine. That seems to work as well as anything else. Changing lazy write to /LW:16,60,12 doesn't seem to increase performance much, but it does seem to smooth out some of the peaks and valleys. I think I will put that back to /LW:8,30,6 and see what happens. /MINBUFFER:4500 /MAXBUFFER:15000 seem to be good numbers for me.

Quote
@Doug,
Right-clicking on the preference and choosing Reset would probably work. Otherwise, with the browser closed, edit prefs.js and delete the line, after backing up prefs.js.

Okay, that works. It is not exactly obvious what Reset is going to do.