Author Topic: JFS cache sizing, and system "speed-up"

Dariusz Piatkowski

Re: JFS cache sizing, and system "speed-up"
« Reply #15 on: November 22, 2021, 04:56:54 pm »
So I thought I would provide a bit of an update, given my experimenting with the JFS cache settings.

I have been using the OpenJFS utilities to gather the internal JFS metrics (ftp://ftp.netlabs.org/pub/snapshots/openjfs/).

Based on various on-line references, I modified an existing REXX script (which in its on-line form did not actually work) to do real-time logging of the 'cstats' output to a CSV file.
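
For anyone wanting to roll their own, here is a minimal sketch of the idea (not my actual script; it assumes cstats.exe is on the PATH and prints name/value pairs in the layout shown below, and the field selection and one-minute interval are just illustrative):

Code:
/* cstatlog.cmd - minimal sketch: log selected cstats counters to CSV */
call RxFuncAdd 'SysSleep', 'RexxUtil', 'SysSleep'
csv = 'cstats.csv'
do forever
   stats. = 0
   'cstats | rxqueue'                /* capture the utility's output  */
   do while queued() > 0
      parse upper pull n1 v1 n2 v2 . /* two name/value pairs per line */
      stats.n1 = v1
      if n2 \= '' then stats.n2 = v2
   end
   call lineout csv, date('S') time()','stats.CACHESIZE','stats.NFREECBUFS','stats.SLRUN
   call lineout csv                  /* close the file after each sample */
   call SysSleep 60                  /* sample once a minute */
end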

Anyways, through multiple cycles I have been trying to understand how the various parameters affect what JFS does on my system (of course, mine is end-user type usage, so very different from, let's say, a web or file server).

Regardless, I am now running a 1G JFS cache, and with the current iteration of parameters I have to say the responsiveness of my system is a "day & night" difference from where I started. In particular, though, the latest move from a 768M cache to a 1G cache introduced a significant improvement that I'm not sure I can quite explain.

Specifically, using the cstats utility the 1G cache shows the following:

Code:
cachesize    262141   cbufs_protected       58960
hashsize     131072   cbufs_probationary    30920
nfreecbufs    75323   cbufs_inuse               0
minfree        6000   cbufs_io                  0
maxfree       60000   jbufs_protected       96286
numiolru          0   jbufs_probationary      642
slrun        155246   jbufs_inuse               0
slruN        174760   jbufs_io                  0
Other            10   jbufs_nohomeok            0
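
One relationship jumps out of those numbers (my own arithmetic, not from any docs): slrun is exactly the sum of the two protected counts, and slruN looks like a two-thirds ceiling on cachesize:

Code:
slrun = cbufs_protected + jbufs_protected = 58960 + 96286 = 155246
slruN ~ cachesize * 2/3 = 262141 * 2/3 ~ 174760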

Meanwhile, the os2stats utility shows this:

Code:
NCache: lookup: 262141
        hit: 131072
        miss: 75323
        enter: 6000
        delete: 60000
        name2long: 0
JCache: reclaim: 262141
        read: 131072
        recycle: 75323
        lazywrite.awrite: 6000
        recycle.awrite: 60000
        logsync.awrite: 0
        write: 155246
LCache: commit: 262141
        page.init: 131072
        page.done: 75323
        sync: 6000
        maxbufcnt: 60000
ICache: n.inode: 262141
        reclaim: 131072
        recycle: 75323
        release: 6000

The cachesize field implies that the actual data cache is only 256M, which is what I'm struggling with. Keep in mind, I came from the HPFS386 world where, seemingly at least, it was pretty clear how big your cache pool was and what your cache hits/misses were. JFS, being a journaling FS, also deals with metadata, so it's not quite as simple as having a single cache for content data; at least that's my current understanding.

In fact, I suspect my interpretation of what these fields mean may not be correct. The '262141' may not be a k-byte count; instead it may be a metadata unit count, I think? LOL

I am therefore curious whether anyone else understands this better.

My next step is to look at the sources to see if I can decipher this.

Dariusz Piatkowski

Re: JFS cache sizing, and system "speed-up"
« Reply #16 on: November 22, 2021, 05:15:23 pm »
Hi Doug,

Quote
so I bumped the cache up to 768M

Since I haven't tried this for a long time, I decided to give it a shot. How did you get it to take 768M? (I would assume that you used 768000 (K) as the cache size.) When I try that, CACHEJFS shows me:
Cache Size:  131072 kbytes
which I believe is the allowed max now.
...

So my JFS cache parameters are all set in CONFIG.SYS with:

IFS=G:\OS2\JFS.IFS /CACHE:1048567 /LW:8,30,6 /AUTOCHECK:*

The absolute prerequisite to increasing the JFS cache size was freeing up the high memory area (see the tail end of the 'QT5 browser' thread discussion, where OS4User provides a pretty good explanation of how OS/2 allocates memory => https://www.os2world.com/forum/index.php/topic,2627.msg32933.html#new).

Anyways, previously I had found my system to run best with VAL=3072. However, that meant my JFS cache would only go to about 64M; any attempt at a bigger value would produce that "out of memory" boot message and a default cache size being substituted.

Well, I was getting a little annoyed at seeing a quite sizeable NTFS cache allocation on my WIN7 boxes, and started to dig into why our OS/2 JFS cache size was seemingly so limited. As I was reading up on this, it dawned on me that setting VAL lower should free up 'system' memory, which would then become available to the various device drivers, and JFS is certainly one of those.

I am now running with VAL=2048. Given that I have an 8G box with 4G allocated to RAMDISK, and 3.2G recognized by OS/2 as "accessible memory", that allows me to allocate more memory to the FS cache, which is where I think the most "bang for the buck" exists today (given our platform and its limitations).
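
So the relevant CONFIG.SYS pair on my system looks like this (the comments are my reading of the parameters, not official documentation):

Code:
REM Cap per-process virtual address space at 2G; the range above it
REM stays available to the system arena (drivers, JFS cache, etc.)
VIRTUALADDRESSLIMIT=2048
REM /CACHE: cache size in KB, /LW: lazy-write timing values,
REM /AUTOCHECK:* autocheck all JFS volumes at boot
IFS=G:\OS2\JFS.IFS /CACHE:1048567 /LW:8,30,6 /AUTOCHECK:*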

...
Quote
/MINBUFFER:16000 /MAXBUFFER:84000

Did you find a description of what these actually do? I have run out of places to look, and can't find anything.

Yes and no. There were some Warpstock presentations that touched on this. See "Dynamically Tuning the JFS Cache for Your Job" by Sjoerd Visser from 2009. P22 of that deck starts getting into the details of the JFS cache design, which is really about the logic of how the different buffers are handled, and the differences between actual data and metadata.

Bottom line in all this is: tune the settings to your system usage patterns.

Doug Bissett

Re: JFS cache sizing, and system "speed-up"
« Reply #17 on: November 22, 2021, 08:56:42 pm »
Quote
The cachesize field implies that actual data cache is only 256M, which is what I'm struggling with.

That would appear to be the number of 4K buffers (but that is only a guess). If true, it matches your defined cache size.
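A quick sanity check of that guess against your numbers:

Code:
262141 buffers x 4 KB = 1048564 KB, vs. the requested /CACHE:1048567 KB

The 3 KB difference would simply be the request rounded down to a whole number of 4K buffers.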

Quote
Quote from: Doug Bissett on November 01, 2021, 08:25:47 pm
Quote

        /MINBUFFER:16000 /MAXBUFFER:84000

    Did you find a description of what these actually do? I have run out of places to look, and can't find anything.

Yes and no. There were some Warpstock presentations that touched on this. See "Dynamically Tuning the JFS Cache for Your Job" by Sjoerd Visser from 2009. P22 of that deck starts getting into the details of the JFS cache design, which is really about the logic of how the different buffers are handled, and the differences between actual data and metadata.

Interesting. It seems that they are all numbers referring to 4K data blocks.
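If that reading is right, the values in the quote above would translate to:

Code:
/MINBUFFER:16000 -> 16000 x 4 KB =  64000 KB ~  62.5 MB
/MAXBUFFER:84000 -> 84000 x 4 KB = 336000 KB ~ 328.1 MB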

Quote
Anyways, previously I had found my system to run best with VAL=3072. However, that meant my JFS cache would only go to about 64M; any attempt at a bigger value would produce that "out of memory" boot message and a default cache size being substituted.

This doesn't make any sense. It implies that the cache, over 64M, goes into unreserved upper shared memory space (above what VAL reserves). Could be possible, I suppose, and it might explain some of the weird crashes that I see when I try to use larger values for VAL.

I think we can safely assume that it does not use (or even know about) PAE memory, so any memory above about 3.5G is likely out of the picture, although it could probably use PAE memory if somebody programmed it to do so.

I use VAL=2560, and going larger causes instability in my system (I don't know why). The biggest JFS cache that I can use seems to be 132M, no matter what larger value I set in the IFS startup line. However, I just tried setting it to 256M, in a new install that defaults VAL to 1536 (way too small for actual use), and it did take it. The Sentinel memory watcher (XCenter widget) appears to show that the memory was allocated (from somewhere). I tried 512M (then 384M, then your number 1048567); VAL is still 1536, but now CACHEJFS shows only 132M, and Sentinel seems to confirm that. I never see an "out of memory" boot message.

So, it seems that 256M is the largest value that doesn't default to 132M, for me (that is probably a bug; I expect that 132M is the maximum acceptable). I don't see any indication of where the cache memory is allocated (private low, private high, shared low, or shared high).

Since I seem to be able to use 8 times the default cache size, I would think that changing MAX and MIN to 8 times their default values would make sense, but that is only a guess. I need to do more reading. Thanks for the reference.

Which version of JFS are you using? The one that I am using is v1.9.9 from AN.

Dave Yeo

Re: JFS cache sizing, and system "speed-up"
« Reply #18 on: November 22, 2021, 10:31:07 pm »
Like Doug, I am testing on a new install with VAL set at 1536. Using the IFS=M:\OS2\JFS.IFS /CACHE:1048567 /LW:8,30,6 /AUTOCHECK:* setting in my CONFIG.SYS failed with a small cache. I then tried a 256M cache and that worked.
I then thought of using Theseus to check the kernel's system object summary, which showed that I had just over 500M of free system memory, so I jacked the cache size up to 700M and that took. Looking at the free system memory, I now have 67.172M free, with the largest block being 44.871M, and I now have 810.458M of system memory committed, 2492.579M allocated.
This raises the question of how Dariusz is getting so much system memory with VAL set at 2048 that he can use a GB for cache.
Using Theseus: System-->Kernel Information-->System Object Summary. At the bottom are the totals.

Dave Yeo

Re: JFS cache sizing, and system "speed-up"
« Reply #19 on: November 23, 2021, 06:58:16 am »
System seems to have ended up unstable after that change, both SM and TB crashing frequently. Leaves me wondering if the kernel commits more memory at times. Different partition with VAL set to 3072 gives me 153M free system memory, 123M largest block.

OS4User

Re: JFS cache sizing, and system "speed-up"
« Reply #20 on: November 23, 2021, 07:34:20 am »
Quote
System seems to have ended up unstable after that change, both SM and TB crashing frequently. Leaves me wondering if the kernel commits more memory at times. Different partition with VAL set to 3072 gives me 153M free system memory, 123M largest block.

I have  107M free system memory, 92M largest block.  FF is quite stable.

Dave Yeo

Re: JFS cache sizing, and system "speed-up"
« Reply #21 on: November 23, 2021, 07:40:58 am »
Quote
System seems to have ended up unstable after that change, both SM and TB crashing frequently. Leaves me wondering if the kernel commits more memory at times. Different partition with VAL set to 3072 gives me 153M free system memory, 123M largest block.
Quote
I have 107M free system memory, 92M largest block. FF is quite stable.

This was the latest beta, with the updated NSPR and NSS, so that may be part of it.

Doug Bissett

Re: JFS cache sizing, and system "speed-up"
« Reply #22 on: November 24, 2021, 04:05:52 am »
Quote
This was the latest beta with the updated NSPR and NSS so that may be part of it.

Yeah. That version is not doing well. This is a typical ExceptQ report:

Dave Yeo

Re: JFS cache sizing, and system "speed-up"
« Reply #23 on: November 24, 2021, 05:04:39 am »
Yea, I started to reply and the same thing happened. I am now using a build with in-tree NSPR4 and NSS, the same versions as the latest YUM versions; they seemed stable when I built them as part of the Mozilla build previously. Same code, possibly different optimizations, and now the newer GCC.
I guess I'll have to upload new builds if this stays stable.

Dariusz Piatkowski

Re: JFS cache sizing, and system "speed-up"
« Reply #24 on: November 24, 2021, 04:01:31 pm »
Doug, Dave, everyone....

Quote
Anyways, previously I had found my system to run best with VAL=3072. However, that meant my JFS cache would only go to about 64M; any attempt at a bigger value would produce that "out of memory" boot message and a default cache size being substituted.

This doesn't make any sense. It implies that the cache, over 64M, goes into unreserved upper shared memory space (above what VAL reserves). Could be possible, I suppose, and it might explain some of the weird crashes that I see when I try to use larger values for VAL.

This is precisely how it seems to work on my machine: the memory above VAL becomes available for SYSTEM use, and that appears to be what the JFS cache is using.

Quote
...I use VAL=2560, and going larger causes instability in my system (I don't know why). The biggest JFS cache that I can use seems to be 132M, no matter what larger value I set in the IFS startup line. However, I just tried setting it to 256M, in a new install that defaults VAL to 1536 (way too small for actual use), and it did take it. The Sentinel memory watcher (XCenter widget) appears to show that the memory was allocated (from somewhere). I tried 512M (then 384M, then your number 1048567); VAL is still 1536, but now CACHEJFS shows only 132M, and Sentinel seems to confirm that. I never see an "out of memory" boot message.

So, it seems that 256M is the largest value that doesn't default to 132M, for me (that is probably a bug; I expect that 132M is the maximum acceptable). I don't see any indication of where the cache memory is allocated (private low, private high, shared low, or shared high).

So here is what I think has a very direct impact on what you are seeing. Given your last response, which shows the EXCEPTQ report, I find the following:

Code:
Hostname:         IREBBS7
OS2/eCS Version:  2.45
# of Processors:  2
Physical Memory:  2793 mb
Virt Addr Limit:  2560 mb

...however, the matching result I see in any of my EXCEPTQ reports is:

Code:
Hostname:         NEUROBOX
OS2/eCS Version:  2.45
# of Processors:  6
Physical Memory:  3199 mb
Virt Addr Limit:  2048 mb

Notice the difference in the amount of 'Physical Memory' being reported?

My box shows about 400M more, and that is without a doubt what allows me to run a larger cache (in combination with the appropriate VAL setting).

Something on my part that helps me get that is limiting the amount of video memory mapping that SNAP does. I have that set to 24M, since that is all I need to support my two panels, each running at 1920x1200 resolution @ 24-bit colour.
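
The 24M figure is just framebuffer arithmetic for my setup (assuming a worst case of 4 bytes per pixel):

Code:
1920 x 1200 x 4 bytes ~ 8.8M per panel
2 panels ~ 17.6M, so a 24M mapping leaves comfortable headroom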

Quote
Which version of JFS are you using? The one that I am using is v1.9.9 from AN.

Yup, same here, latest AN version.

I will get a better summary of all this following my next re-boot, which will give a clean-slate snapshot. Right now I'm on DAY 3 of my JFS cache stats gathering cycle, and if there is anything I can learn from that, I'll happily toss a deck together and share it with you guys.

That will include the Theseus values that Dave mentioned as I happen to actually track my memory consumption using Theseus.

Dave Yeo

Re: JFS cache sizing, and system "speed-up"
« Reply #25 on: November 24, 2021, 04:20:53 pm »
OTOH, here, where I could only get a 700MB cache:
Code:
Hostname:         4C4C454
OS2/eCS Version:  2.45
# of Processors:  4
Physical Memory:  3240 mb
Virt Addr Limit:  1536 mb

Even more memory accessible to the system.

Doug Bissett

Re: JFS cache sizing, and system "speed-up"
« Reply #26 on: November 24, 2021, 07:24:04 pm »
Quote
Physical Memory:  2793 mb

Yeah. That seems to be common on newer machines (I have heard of one that only leaves about 1 GB for the user). They fill up memory with stuff, and that leaves less room for the user. I don't think that has anything to do with what we are talking about, though (I could be wrong). I should check to see what is left in UEFI mode.

In any case, it seems that all of this is very machine dependent, and the results of making changes can vary widely. The main problem is determining whether it is actually worth the effort.

Dariusz Piatkowski

Re: JFS cache sizing, and system "speed-up"
« Reply #27 on: November 24, 2021, 07:37:56 pm »
Dave,

Quote
OTOH, here, where I could only get a 700MB cache:
Code:
Hostname:         4C4C454
OS2/eCS Version:  2.45
# of Processors:  4
Physical Memory:  3240 mb
Virt Addr Limit:  1536 mb

Even more memory accessible to the system.

Hmm...good point, which makes me think you are using up that upper memory with other device drivers.

For what it's worth, here is my Theseus=>System=>Nonswappable Memory Analysis output:

Code:
Nonswappable Memory analysis:
Apps & DLLs      = 00024000 ->     144K -> 0.141M
Process overhead = 004F2000 ->    5064K -> 4.945M
DD allocated     = 47799000 -> 1171044K -> 1143.598M
DOS              = 0001E000 ->     120K -> 0.117M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00064000 ->     400K -> 0.391M
Kernel code      = 000B3000 ->     716K -> 0.699M
Kernel data      = 017F9000 ->   24548K -> 23.973M
Kernel heap      = 00497000 ->    4700K -> 4.590M

Total            = 49A74000 -> 1206736K -> 1178.453M

That massive 1143.598M number in the 'DD allocated' field is primarily the result of my 1G JFS cache... sure, other things are in there, but that's the big boy amongst them.
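
Quick arithmetic to back that up:

Code:
/CACHE:1048567 KB is ~1024.0M
1143.598M (DD allocated) - 1024.0M (JFS cache) ~ 119.6M for everything else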

I'm curious: what do you guys see on your running systems?

Doug Bissett

Re: JFS cache sizing, and system "speed-up"
« Reply #28 on: November 24, 2021, 09:50:54 pm »
Quote
I'm curious what you guys see for your running systems?

This is from my main system, with /CACHE:132000:
Code:
Nonswappable Memory analysis:
Apps & DLLs      = 0003C000 ->     240K -> 0.234M
Process overhead = 002F1000 ->    3012K -> 2.941M
DD allocated     = 0D453000 ->  217420K -> 212.324M
DOS              = 0001B000 ->     108K -> 0.105M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00051000 ->     324K -> 0.316M
Kernel code      = 000B1000 ->     708K -> 0.691M
Kernel data      = 01166000 ->   17816K -> 17.398M
Kernel heap      = 00190000 ->    1600K -> 1.563M

Total            = 0EB93000 ->  241228K -> 235.574M

I am a bit puzzled about the DOS entry. DOS/WINOS2 is not installed on this system.

Dave Yeo

Re: JFS cache sizing, and system "speed-up"
« Reply #29 on: November 25, 2021, 01:21:49 am »
@Doug, it was a UEFI install I was testing on. It is within 1 MB of the same as this MBR system. The DOS thing is a VDM that runs really early in the boot; I can't remember its purpose right now.

@Dariusz, that was a new install of the latest 5.1 beta, so only the stock device drivers, including the RAM disk driver. Panorama here; I'll have to compare to a SNAP system later.
Here's my nonswappable memory on my regular system.
Code:
Nonswappable Memory analysis:
Apps & DLLs      = 0002B000 ->     172K -> 0.168M
Process overhead = 001EC000 ->    1968K -> 1.922M
DD allocated     = 083F2000 ->  135112K -> 131.945M
DOS              = 0001D000 ->     116K -> 0.113M
VDisk            = 00000000 ->       0K -> 0.000M
File system      = 00051000 ->     324K -> 0.316M
Kernel code      = 000B1000 ->     708K -> 0.691M
Kernel data      = 014D0000 ->   21312K -> 20.813M
Kernel heap      = 003F8000 ->    4064K -> 3.969M

Total            = 09FF0000 ->  163776K -> 159.938M
< End of THESEUS4 (v 4.001.00) output @ 16:18:23 on 24/11/2021 >