OK, so I've had my VIRTUALADDRESSLIMIT=3072 setup in CONFIG.SYS for quite some time, together with a JFS cache of 256 MB. It seemed to work fine, with no issues. Keep in mind that the underlying disk is an SSD (Samsung 850 EVO).
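For reference, that amounts to something like this in CONFIG.SYS (the JFS.IFS path is the usual one but may differ on your install, and /CACHE takes the size in kilobytes, so 256 MB is 262144):

  VIRTUALADDRESSLIMIT=3072
  IFS=C:\OS2\JFS.IFS /CACHE:262144 /AUTOCHECK:*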
Personally, I always use VAL=2560, which seems to be a sweet spot for my usage. At least two of my systems get very unstable if I use more than 2900 (and don't run very long if I go to 3000). I do monitor the available lower and upper, private and shared memory (using Above512 and a logging script). The lowest I ever see for upper shared memory is somewhere in the 800K range. Lower shared memory does get down to the 35K range (when I run VBox), but most of the time it stays above 100K. When it does get below 100K, I try to prevent further programs from running, because it sometimes does go to zero, with bad results (it seems that some programs never check whether they actually got lower shared memory before they try to use it, and then they can't recover). Of course, I use HIGHMEM to mark as much as possible to use high shared memory. I find that it is necessary to run HIGHMEM again after some updates; it doesn't always find something that got changed, but sometimes it does.
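The logging script itself is nothing fancy. A minimal REXX sketch of the idea, assuming Above512 is on the PATH and prints its report to stdout (the log file name is just an example):

  /* memlog.cmd - append a timestamped Above512 report to a log file */
  logfile = 'C:\memlog.txt'
  call lineout logfile, '----' date() time() '----'
  call lineout logfile            /* close the file so the redirection below can append */
  'above512 >>' logfile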
For those who don't know, VAL has absolutely nothing to do with available real memory. I use VAL=2560 on my antique ThinkPad, which only has 256 MB of real memory. What it seems to do is increase the amount of upper shared memory that is available.
I find it interesting that private memory (upper or lower) rarely changes value. Almost everything allocates shared memory, but that could be because Above512 is lying to me.
> So off I went experimenting a little bit:
Been there, done some of it. I usually use a JFS cache of 132K (on machines with more than 1 GB of real memory), which seems to be a bit of a sweet spot (the default is 64K, or 10% of real memory). Using more tended not to help much (it has been a few years since I tried that), and I found that the systems became somewhat less stable. Usually the problem was an unexplained hang, although I seem to be able to reproduce a similar problem by writing huge files (20 GB, JFS to JFS) to USB devices while using the default cache size.
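In CONFIG.SYS terms, that is the /CACHE parameter on the JFS.IFS line. As far as I know it is given in kilobytes, so I read "132K" as /CACHE:132000, roughly 132 MB; something like this (the path and /AUTOCHECK switch are just the usual examples):

  IFS=C:\OS2\JFS.IFS /CACHE:132000 /AUTOCHECK:*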
I never played with buffer sizes, and the cache size question doesn't seem to change whether the underlying disk is a spinning disk or an SSD. An SSD, of course, has no seek or rotational delay, so it is faster.
I gave up using HPFS years ago, because the HPFS cache goes into lower shared memory, which is already in very short supply. JFS does have other significant advantages over HPFS.
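For comparison, the HPFS cache is also set on its IFS line, and if I remember correctly plain HPFS.IFS caps /CACHE at 2048 KB, with that cache coming out of the lower arena (switches from memory, so double-check):

  IFS=C:\OS2\HPFS.IFS /CACHE:2048 /CRECL:4 /AUTOCHECK:*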
> The apps are loading consistently faster, on multiple attempts.
A lot of that "improvement" depends on timing. If the program parts have not been flushed from the cache, the load time is faster (even with a small cache). Using a larger cache does increase the probability that most of it will still be there, but it depends on what you do between starts.
> Obviously not on the first attempt, but opening OO the 2nd, 3rd, and 4th time is faster.
If you are using the AOO QuickStart feature, that may be what you are seeing. From what I see, QuickStart keeps parts of the program available in memory after you close it, but it looks to me like the first program load still needs to load everything. QuickStart does work around DLL unload problems when using upper shared memory, without the kernel fixes that ArcaOS has.
FWIW, I have turned off all lazy write settings (FAT32 and JFS). That doesn't seem to affect system response much, but it does seem to make the system less likely to hang, especially when writing large files to USB.
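If memory serves, lazy write for JFS is controlled by the /LW switch on the IFS line, so turning it off looks something like the line below. For FAT32 it is handled by the cache program's options instead, and those have changed between builds, so check the readme that came with your FAT32.IFS:

  IFS=C:\OS2\JFS.IFS /CACHE:132000 /LW:OFF /AUTOCHECK:*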