OS2 World Community Forum
OS/2, eCS & ArcaOS - Technical => Applications => Topic started by: Doug Clark on September 25, 2018, 06:43:44 pm
-
I have read that the AOS RAM disk uses memory above the 4 MB limit - meaning on an 8 MB machine I could assign 4 MB to a RAM disk and not impact any OS/2 applications or the operating system itself.
Yet the Low and High memory check boxes and available amounts don't seem to indicate that is true.
Does anyone know where the RAM disk takes memory? And if it does use memory above the 4 MB limit, how does it do that?
-
I always thought, it is a 4 GB, not 4 MB limit. Am I not right?
-
Yes Valery, you are correct. My mistake. 4 GB limit.
-
Feel free to correct me if I am wrong. I believe one needs the OS/4 kernel and the QSINIT loader to utilize RAM above 4 GB.
-
Feel free to correct me if I am wrong. I believe one needs the OS/4 kernel and the QSINIT loader to utilize RAM above 4 GB.
OS/4 kernel (and emsFS.ifs) _OR_ the QSINIT (and some .ADD driver)
-
I always thought, it is a 4 GB, not 4 MB limit. Am I not right?
You may not be right indeed: https://en.wikipedia.org/wiki/Gibibyte ... :P
-
Does anyone know where the RAM disk takes memory? And if it does use memory above the 4 GB limit, how does it do that?
According to Section 6.4.3 of Arca Noae's README (https://www.arcanoae.com/wp-content/uploads/wiki/ReadMe-ArcaOS-2.txt):
6.4.3 RAM Disk
The new kernel supports a RAM disk (portion of memory which acts like a
physical disk and is assigned a drive letter). This RAM disk may be configured
to use memory above the 4GB boundary, normally limiting 32-bit operating
systems. Thus, on newer systems with large amounts of memory (8, 16, 24, 32, or
even 64GB), the space above 4GB may be configured for use as a RAM disk.
Arca Noae provides a utility for configuring up to two such RAM disks. The RAM
Disk applet located in the System Setup folder provides guidance and
configuration of such memory constructs.
32-bit x86-based CPUs are able to access up to 64 GB of RAM using something called Physical Address Extension (PAE), which adds extra bits to the page table entries for 36 bits of physical address space rather than just 32. Each process still only accesses 4 GB of virtual addresses, but those can be mapped anywhere in the 64 GB supported.
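To put numbers to it: 32-bit addresses reach 2^32 bytes = 4 GiB, while PAE's 36-bit physical addresses reach 2^36 bytes = 64 GiB. The page tables then map each process's 4 GiB of virtual addresses onto frames anywhere in that 64 GiB physical range.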
-
Each process still only accesses 4 GB of virtual addresses
Not anymore, see the previous link to Wikipedia.
-
Each process still only accesses 4 GB of virtual addresses
Not anymore, see the previous link to Wikipedia.
Do you mean it should be 4 GiB?
-
Feel free to correct me if I am wrong. I believe one needs the OS/4 kernel and the QSINIT loader to utilize RAM above 4 GB.
ArcaOS comes with a modified QSINIT as os2ldr, which helps with memory holes that the IBM os2ldr can't handle and gives access to the high memory through PAE.
-
I have read that the AOS RAM disk uses memory above the 4 MB limit - meaning on an 8 MB machine I could assign 4 MB to a RAM disk and not impact any OS/2 applications or the operating system itself.
Yet the Low and High memory check boxes and available amounts don't seem to indicate that is true.
Does anyone know where the RAM disk takes memory? And if it does use memory above the 4 MB limit, how does it do that?
I found that I had to enable some BIOS setting here before I could access the high memory. I forget the exact one, but it wasn't obviously related; it might have been VT-x for virtual machines.
It is actually memory above about 3.5 GB, depending on the motherboard. Currently, with 4 GB of RAM, I have 768 MB that is only usable as a RAM disk. It is also possible to use lower memory in the RAM disk as well.
Make sure you enable it and reboot so it can figure out your memory as well.
-
Feel free to correct me if I am wrong. I believe one needs the OS/4 kernel and the QSINIT loader to utilize RAM above 4 GB.
ArcaOS comes with a modified QSINIT as os2ldr, which helps with memory holes that the IBM os2ldr can't handle and gives access to the high memory through PAE.
I have 16 GiB, and 12 GiB of RAM are used as a RAM disk.
-
I guess I am confused by the Ram Disk setup screen. I am using AOS.
The screen shows two check boxes, with available memory next to them.
One check box says Low memory.
The other says High memory.
My laptop has 8 GB of RAM.
If "low memory" means everything below the 4 GB boundary, and "high memory" means above the 4GB boundary, then the available memory shown next to the "high memory" check box should show 4 GBs.
If the "low memory" check box means conventional memory, and "high memory" means the memory between conventional and 4GB, then how do you select the memory above 4GB
Do anyone know how the Ram Disk setup thingy work?
-
OK, to answer my own question - based on the Ram Disk Memory Limits dialog:
The low memory check box must mean all memory below 4 GB.
The high memory check box must mean all memory above 4 GB.
Something (BIOS video memory?) must be reserving a chunk of the below-4 GB memory, i.e. "low memory".
No matter what you do to the check boxes, the next time you open the Ram Disk setup thingy, both check boxes are checked. So the only way to know or check where the RAM disk is being created is with a memory analyzer, or to look at os2ldr.cfg.
Thanks guys for the answer about PAE.
I guess this means, in theory, it is possible for some of those memory-hungry applications (Firefox, VBox) to also use memory above 4 GB?
-
It's still weird that you only show having 56 MB of low memory available. Here I have 3250 MB low available and 768 MB high memory, which is close to your 848 MB. Why the memory above 4 GB isn't showing I don't know - probably something to do with your BIOS. I take it that other operating systems see close to 8 GB?
About the 4 GB: while 32-bit x86 can access 4 GB, some of it is mapped into system hardware space; PCI cards, video memory, and such claim addresses at the top of the 4 GB address space - in your case 848 MB. The low memory should show the rest of the 4 GB. All I can think of is an address hole (common on newer systems) low down in memory that is confusing HiDisk.
Note also that you can use low memory for the RAM disk as well, but then OS/2 can't use it.
Unfortunately, without a lot more work, which will probably never happen, Firefox etc. still can't use the high memory. You can put things like %TEMP% on the RAM disk, which will speed things up, and it is self-cleaning. You can put the swap file there too and in theory run a bunch of processes that use lots of memory and it would swap. In practice, IBM seems to have used 32-bit variables in too many places for this to work.
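For anyone wanting to try that, a minimal CONFIG.SYS sketch (assuming the RAM disk comes up as drive Z: - adjust the letter to your setup, and note that Z:\TEMP has to be created at startup, since the RAM disk begins empty):
SET TEMP=Z:\TEMP
SET TMP=Z:\TEMP
SWAPPATH=Z:\ 2000 32768
The SET lines move the usual scratch directories there, and the SWAPPATH line starts a 32 MB swap file on the RAM disk - subject to the 32-bit limits mentioned above.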
-
You can put the swap file there too and in theory run a bunch of processes that use lots of memory and it would swap. In practice, IBM seems to have used 32-bit variables in too many places for this to work.
I put my swapper.dat on the RAM disk just so that it wasn't writing to the SSD every boot, but given we can't use over 4 GB, I don't see how the swapper could ever be used.
-
What happens if you start up 5 processes that each use a GB of memory?
In theory you should just get some swapping, but as I mentioned, OS/2 seems to use 32-bit variables that limit the size of virtual memory. The swap file itself appears limited to a signed 32-bit size: based on experience, growing it past 2 GB behaves the same as running out of disk space. It makes sense, as all file systems had a 2 GB file limit before JFS.
The i386 is quite capable of handling 16 TB of virtual memory IIRC, but only 4 GB of address space
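For reference, a signed 32-bit byte count tops out at 2^31 - 1 bytes, just under 2 GiB, which matches the observed swap file limit; 2^32 bytes is the 4 GiB address space ceiling.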
-
I always thought, it is a 4 GB, not 4 MB limit. Am I not right?
You may not be right indeed: https://en.wikipedia.org/wiki/Gibibyte ... :P
I don't use the term "gibibyte". It is not a "real" term. I use only the real term "gigabyte". 1000 MB is not a gigabyte. A gigabyte was always 1024 megabytes.
-
Is the AOS RAM disk bound by the same limitations as file systems on "normal" disks, or is there some magic going on in this app?
Without thinking, I set my machine up some months ago with an 8 GB HPFS RAM disk and have been operating that way for months - although I don't think I have ever copied more than 2 GB to the RAM disk.
But what would happen if I did?
If I remove the line
IFS=C:\OS2\HPFS.IFS /CACHE:2048 /CRECL:4 /AUTOCHECK:*
from CONFIG.SYS I get the error
The specified disk or diskette cannot be accessed
C:\MPTN\BIN\VDOSCTL.EXE
The help for RAM disk says, in part
"The AOS loader can format the drive(s) it creates using FAT, FAT32, or HPFS; it can also leave them unformatted. Both drives will be formatted the same way. Note that if you choose FAT, any drive over 2gb will be left unformatted."
but it doesn't say anything about other file system types.
I tried using FAT32, but that makes the RAM drive VERY SLOW when copying files to and from the disk.
Finally - I tried setting the "Format partitions using" option to none and using the WPS disk object to format the RAM disk as JFS, and the application I was running (VLC), which was reading files from the RAM disk, started behaving badly.
And the final question: if I don't have any other partitions, other than the RAM disk, formatted as HPFS, do I really need the /CACHE:2048 clause on the IFS statement in CONFIG.SYS? It doesn't seem to make sense to cache a RAM disk in RAM.
-
I think you could remove the cache statement, or at worst shrink it to 64 if removing it seems to screw things up.
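For example, the shrunken statement (based on the IFS line quoted earlier, assuming the boot drive is C:) would be:
IFS=C:\OS2\HPFS.IFS /CACHE:64 /CRECL:4 /AUTOCHECK:*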
Personally, I reformat as JFS, see my other reply.
-
I've set up the RAM disk but don't use it for anything except testing. I tried formatting it with HPFS and JFS, but no matter what, it's slower than any of my SSDs. I also tried HPFS (or FAT?) strategy 1 (?), which should be faster, but that does not work with the applications I tried. FAT32 is of course the slowest file system for use on OS/2, so don't even think it could be a good option for anything other than data interchange with other OSes. IMHO when you have an SSD in your system it's not worth playing with the RAM disk.
-
Yes, I've been fairly disappointed with the speed of the RAM disk. Still, it is fast enough that I use it for %TEMP% etc., the Mozilla cache, and as a scratch work area. The speed isn't too bad when formatted with JFS, and I need JFS for temporary files, as building Mozilla and likely other stuff can result in temp files over 2 GB.
The problem with SSDs is that they have a limited lifetime, measured by the number of writes. This is more of an issue on OS/2: we're missing a TRIM command, so we can't tell the SSD which blocks we're finished with, which makes garbage collection etc. harder for the SSD. You can mount the file systems under Linux and trim them, though. Or back up, do a secure erase, and then restore. The secure erase will write zeros to the whole device, including the spare blocks.
I also find that once the DRAM cache fills (1 GB on my 1 TB SSD), things slow down, and sometimes garbage collection or such happens and the SSD stalls. With things like deleting a large directory, part of the way through it stalls, you think something has failed, and eventually away it goes again.
Another problem is that it is very hard to align the JFS 4K blocks with the SSD 4K blocks with CHS partitioning, so even writing a 1-byte file might see 2 blocks used on the SSD. A good reason to use GPT partitioning.
JFS is also not a very good file system for SSDs. The journal is always getting written, and just looking at a file causes its atime (I think that's the one) to be updated. Linux has fixes for both: the journal can be on a different device, and updating the atime can be disabled. How often do you care when a file was last read?
-
I always thought, it is a 4 GB, not 4 MB limit. Am I not right?
You may not be right indeed: https://en.wikipedia.org/wiki/Gibibyte ... :P
I don't use the term "gibibyte". It is not a "real" term. I use only the real term "gigabyte". 1000 MB is not a gigabyte. A gigabyte was always 1024 megabytes.
Yes, it was.
For a long time now, GB = gigabytes = 1000 MB,
while GiB (gibibytes) = 1024 MiB, the "old GB".
-
.... IMHO when you have an SSD in your system it's not worth playing with the RAM disk.
To correct myself: this seems not to hold for other/newer systems. The numbers Doug posted on the other thread are much better than on my system. On my old system I can get transfer rates up to 160-180 MB/s for SSDs and conventional spinning disks, compared to the slow ~60 MB/s to the RAM disk. Maybe I will change my mind when I build a new system with NVMe and faster RAM sometime.
-
Andi,
Your post made me curious about the speed of my 11-year-old ThinkPad T530 laptop - i5-3320M, 2.6 GHz CPU.
For that machine my SSD to RAM drive transfer speeds are pretty close to yours.
SSD to RAM
----------
70,153K
74,140K
73,819K
The SSD to SSD speeds are from copying a file from one directory to another on the same drive - so it is probably limited by the write speed, the age of the disk, and the conflict of reading/writing to the same disk.
SSD to SSD
----------
13,669K
I don't use my laptop all that often. But an ankle injury forced me onto the laptop these last few weeks, and I am surprised how well it performs - even now, compared to my newer Ryzen 5. The interesting thing is how differently the same applications perform on the laptop and the desktop (Ryzen) - even though both are running AOS 5.1.
On the new desktop (and on my previous desktop) I experience something similar to what Dave sees when he is compiling - the system seems to freeze for 4-8 seconds and then resume. It is almost like it is taking a short rest. I see it with VLC when watching movies. This doesn't happen with the laptop. On the new desktop, Win-OS2 applications freeze the machine when running in seamless mode - on the laptop they run fine.
Just a reminder I guess how complex machines are now and how difficult it is to support all the hardware that is out there.
Take care.
-
How, exactly, are you measuring speed? Read? Read/write? Write? What are the cache parameters? Which file system?
There are too many variables to even start to estimate speed, never mind try to compare it.
Using the DFSee read/write speed test on my old Lenovo T510, with an AHCI-type SSD, I get about 41 MiB/s. The same machine with the ArcaOS RAM disk tells me about 397 MiB/s. No doubt about which is faster.
I don't have exact numbers for my new Asus X570, but the RAM disk and NVMe SSD are pretty close to the same speed. The advantage of using the RAM disk is that it doesn't cycle the memory cells in the NVMe SSD as much. Since memory above 4 GB isn't used much anyway, it doesn't really matter if those cells get cycled. An interesting thing is that a 3.1 GB system dump to the RAM disk takes about 2 seconds. Then after I boot, that dump gets copied to more permanent storage, and that takes about 5 minutes (it is a FAT32 dump partition). FAT32 is VERY slow, even on an NVMe-type SSD.
-
Yeah - the tests aren't all that scientific. Each machine had a different processor, DRAM speed and type, and bus for the M.2/SATA SSD/SATA spinning disk.
The RAM disk is HPFS. All others are JFS.
The cache on HPFS (RAM disk) is
IFS=C:\OS2\HPFS.IFS /CACHE:2048 /CRECL:4 /AUTOCHECK:*
For JFS
IFS=C:\OS2\JFS.IFS /LW:5,20,4 /AUTOCHECK:*
The spinning disks have an internal cache of 32 MB. One drive is Western Digital, the other Toshiba.
All tests were copying a file from one disk to another using Larsen Commander; the speed reported comes from Larsen's reported average.
I used various file sizes when copying across a network. The file size, destination drive and source drive didn't make much difference in the times. I think the network was the limiting factor.
On the Ryzen machine (RAM to M.2, RAM to RAM, Spinning to Ram, Ram to Spinning) the file size was 2 GB, which should have been large enough to swamp any cache.
On the Lenovo t530 laptop the file size was a little smaller. HPFS and JFS on that machine are setup the same as the Ryzen desktop.
I also tried to test SATA SSD to RAM on the WSeB/eComStation machine. But the RAM drive on that machine is only 64 MB in size, and a 64 MB file copied so fast I could not get the time off the Larsen file copy screen. The file system for that disk says "RAMFS", and I don't think you can make it larger than 64 MB. But you can share it across a network with WSeB. I think it also runs in lower memory only.
-
I am not going to try to duplicate your configuration; there is really no point in doing that. I will comment on your configuration (which is similar to what I used a number of years ago):
RAMFS is not a good choice: it takes up lower shared memory space, which is in critically short supply. Use the ArcaOS RAMDISK instead (or the QSINIT loader, available from the Hobbes archive, which provides the same RAM disk). It can use memory above what OS/2 can use, meaning that only a small driver uses system memory. A simple script in STARTUP.CMD, like the sketch below, can format it to JFS (start by selecting FAT32 as the format), which has full support for EAs and things like the SWAPPER (which doesn't seem to be used anyway when you have more than 2 GB of memory).
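A minimal sketch of such a script (assuming the loader creates the RAM disk as drive Z: and formats it FAT32; FORMAT prompts for confirmation before wiping a hard-disk volume, so the echo feeds it an answer - check what your FORMAT actually asks and adjust):
rem In STARTUP.CMD: reformat the FAT32 RAM disk to JFS, then recreate scratch dirs
echo y | format Z: /FS:JFS
md Z:\TEMP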
HPFS is no longer a good choice. It also uses critical lower memory space. You likely use JFS anyway, which, on its own, performs better than HPFS (especially when a CHKDSK is required at boot time).
If you can, use VIRTUALADDRESSLIMIT=3072. If that doesn't work, try 2560, which should work on all ArcaOS systems (and probably eCS and earlier versions of OS/2).
-
Not quite RAM disk related, but for the folks who are experiencing that "...system seems to freeze for 4 - 8 seconds and then resume..." behaviour: if you are using JFS, this may be a symptom of your JFS settings causing it to run out of free buffers and needing to purge the cache(s) to free up the buffers.
This is well documented by Sjoerd Visser in his "Dynamically Tuning the JFS Cache for Your job" presentation deck from way back in 2009.
Bottom line: this can be brought on by several 'system use' activities, but it basically causes the JFS code to write out dirty buffers to disk. The key to dealing with this on my rather large JFS cache (1 GB) was to watch the typical system use (log cstats output) and adjust the MIN & MAX free buffer settings in tandem with the overall lazy write setup.
This is a trial-and-error thing, as your machine's behaviour will be heavily driven by your usage patterns.
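If you want to log cstats output over time like this, a small REXX script is one way to do it (a sketch: it assumes cstats is on the PATH; SysSleep comes from the standard RexxUtil library):
/* logcstats.cmd - append a timestamped cstats snapshot every 60 seconds */
call RxFuncAdd 'SysSleep', 'RexxUtil', 'SysSleep'
do forever
  'echo ---' date() time() '>> cstats.log'
  'cstats >> cstats.log'
  call SysSleep 60
end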
For what it's worth, here is what I have:
CONFIG.SYS:
IFS=G:\OS2\JFS.IFS /CACHE:1048576 /LW:32,128,8 /AUTOCHECK:*
CALL=G:\OS2\CMD.EXE /Q /C G:\OS2\CACHEJFS.EXE /LW:32,128,8 /MINBUFFER:8000 /MAXBUFFER:24000 >NUL
Five days into using my box (last time it was up for 27 days; normal desktop stuff, nothing fancy), cstats shows:
[G:\]cstats
cachesize 262144 cbufs_protected 35795
hashsize 131072 cbufs_probationary 22467
nfreecbufs 101902 cbufs_inuse 0
minfree 8000 cbufs_io 0
maxfree 24000 jbufs_protected 101075
numiolru 0 jbufs_probationary 894
slrun 136870 jbufs_inuse 0
slruN 174762 jbufs_io 0
Other 11 jbufs_nohomeok 0
...with the nfreecbufs never dropping so low that they show zero (0) as slruN approaches slrun value.
I started with MIN=8000 and MAX=16000, and that gave a pretty solid system, although sometimes I would get that tell-tale "hang" feeling. So I moved to MIN=4000, thinking that would free up the buffers for caching duties... welllll... no sir... wrong move... that resulted in a solid and repeatable "system hang". So back to the drawing board, so to speak: I set my MIN=8000 and increased MAX=24000. RESULT => SOLID, the most solid system I have had for years.
The thing that 'ruins' my JFS cache (spoils it, actually) is the nightly disk copy (rcopy) run. If it wasn't for that activity, my cbufs_protected would stay very large, which means I have a good amount of content that's being successfully cached. Of course, the jbufs_protected is equally important, as that allows JFS to quickly figure out where to "go" to retrieve the content, as opposed to having to read the data from the disk itself. Again, which one should be the focus for you entirely depends on what disk access patterns you see.
Anyways...balance, somewhere out there are the right settings for your machine.
Last but not least, my JFS formatted RAM DISK results are (diskio):
Drive cache/bus transfer rate: 629714 k/sec
Data transfer rate on cylinder 0 : 680303 k/sec
Data transfer rate on cylinder 634 : 680021 k/sec
meanwhile the SSD results are:
Drive cache/bus transfer rate: 125467 k/sec
Data transfer rate on cylinder 0 : 286615 k/sec
Data transfer rate on cylinder 30399: 253408 k/sec
The ram disk is 2-3x faster here.
EDIT
====
One more thing to add to this, albeit this is easily identifiable and most likely NOT the situation everyone else is seeing: I tried running the AHCI driver here, and on my hardware that would result in what seemed like a complete HARD lock for about 4-8 secs at a time. Once that "event" passed, the system was available for use once again. I tried a boatload of different setups and configs, but none of it helped. Subsequently I went away from AHCI.
-
Dariusz, did you ever publish your updates to diskio? Also, have you considered adding them to Sysbench, which basically uses diskio for disk benchmarking?
-
...
There are too many variables to even start to estimate speed, never mind try to compare it.
...
True, but not fully. You can compare your own SSDs and RAM disk and cache settings very well when you use the same test method. You can also give others a clue about what seems to be possible on other systems compared to yours. For instance, before the posts here I never saw any system with a RAM disk nearly that fast. The same goes for NVMe compared to SSD, which is interesting to me. I didn't bother playing with NVMe before and so don't have any experience with it myself until now.
On the other hand, I don't rely on values from LarsenCommander UNTIL I've
- rechecked the same copy operation from the command line, and
- repeated the test numerous times, and
- the whole copy process (number of files and/or size of files) takes more than 15-30 seconds.
Remember, I'm the one who tweaked the copy algorithm in LCMD :-) Although I trust the values LCMD sums up, you have to be careful about what you're really measuring (file system cache performance, SSD write/read performance, or only SSD cache performance, ...).
-
hey Dave!
Dariusz, did you ever publish your updates to diskio? Also, have you considered adding them to Sysbench, which basically uses diskio for disk benchmarking?
Nope... because I started work on converting it to handle SSDs' 4K sector sizes and ran into some problems with DosDevIOCtl32 returning RC=87, which I wasn't able to resolve.
However, for what it's worth, here is the DISKIO version that I previously published on a separate thread in the forum => https://www.os2world.com/forum/index.php/topic,2676.0.html
-
Hello Dariusz,
I found your config.sys very interesting!
At first I had no problems, but it wasn't going well either. I have the feeling that I was missing something, and maybe you are missing something too.
I pass on my modifications to config.sys, incorporating your slightly modified values.
IFS=C:\OS2\JFS.IFS /CACHE:1048576 /LW:32,128,8 /AUTOCHECK:*
CALL=C:\OS2\CMD.EXE /Q /C C:\OS2\CACHEJFS.EXE /LW:32,128,8 /MINBUFFER:8000 /MAXBUFFER:15000 >NUL
buffers=20
SWAPPATH=C:\OS2\SYSTEM 0 512000
SWAPPATH=C:\OS2\SYSTEM 2000 1045000
And a surprise with the memory: I also attach the memory results with different values. And I really like what I see.
[C:\]mem /v
Total physical memory: 8 142 MB
Accessible to system: 3 054 MB
Additional (PAE) memory: 5 088 MB
Resident memory: 187 MB
Available virtual memory: 3 833 MB
Available process memory:
Private low memory: 340 MB
Private high memory: 2 240 MB
Shared low memory: 276 MB
Shared high memory: 2 227 MB
[C:\]cstats
cachesize 32768 cbufs_protected 10302
hashsize 16384 cbufs_probationary 2585
nfreecbufs 18938 cbufs_inuse 0
minfree 8000 cbufs_io 0
maxfree 13000 ** jbufs_protected 681
numiolru 0 jbufs_probationary 255
slrun 10983 jbufs_inuse 0
slruN 21844 jbufs_io 0
Other 7 jbufs_nohomeok 0
[C:\]mem /v
Total physical memory: 8 142 MB
Accessible to system: 3 054 MB
Additional (PAE) memory: 5 088 MB
Resident memory: 187 MB
Available virtual memory: 3 833 MB
Available process memory:
Private low memory: 342 MB *****
Private high memory: 2 240 MB
Shared low memory: 279 MB ******
Shared high memory: 2 227 MB
[C:\]cstats
cachesize 32768 cbufs_protected 10294
hashsize 16384 cbufs_probationary 2594
nfreecbufs 18941 cbufs_inuse 0
minfree 8000 cbufs_io 0
maxfree 14000 ** jbufs_protected 747
numiolru 0 jbufs_probationary 185
slrun 11041 jbufs_inuse 0
slruN 21844 jbufs_io 0
Other 7 jbufs_nohomeok 0
[C:\]mem /v
Total physical memory: 8 142 MB
Accessible to system: 3 054 MB
Additional (PAE) memory: 5 088 MB
Resident memory: 186 MB
Available virtual memory: 3 833 MB
Available process memory:
Private low memory: 353 MB *****
Private high memory: 2 240 MB
Shared low memory: 290 MB *****
Shared high memory: 2 227 MB
[C:\]cstats
cachesize 32768 cbufs_protected 10293
hashsize 16384 cbufs_probationary 2590
nfreecbufs 18944 cbufs_inuse 0
minfree 8000 ** cbufs_io 0
maxfree 15000 ** jbufs_protected 750
numiolru 0 jbufs_probationary 184
slrun 11043 jbufs_inuse 0
slruN 21844 jbufs_io 0
Other 7 jbufs_nohomeok 0
Note that I'm using two SWAPPATH lines; although only the last one is used, I haven't really tried removing the first one to see whether it makes a difference.
Don't put the SWAPPATH on a RAM disk; it won't work well for you.
I get these memory values after restarting the computer.
Saludos
-
Hello roberto!
...
IFS=C:\OS2\JFS.IFS /CACHE:1048576 /LW:32,128,8 /AUTOCHECK:*
CALL=C:\OS2\CMD.EXE /Q /C C:\OS2\CACHEJFS.EXE /LW:32,128,8 /MINBUFFER:8000 /MAXBUFFER:15000 >NUL
...
[C:\]cstats
cachesize 32768 cbufs_protected 10302
hashsize 16384 cbufs_probationary 2585
nfreecbufs 18938 cbufs_inuse 0
minfree 8000 cbufs_io 0
maxfree 13000 ** jbufs_protected 681
numiolru 0 jbufs_probationary 255
slrun 10983 jbufs_inuse 0
slruN 21844 jbufs_io 0
Other 7 jbufs_nohomeok 0
So here is the thing: my JFS cache is really large, 1 GB, hence the '/CACHE:1048576' setting.
When you attempted to run as big a cache on your system, JFS wasn't able to allocate that much RAM, so it actually defaulted to a fraction of your accessible RAM (10%? Maybe 20%? I don't remember the default at the moment), which gave you a JFS cache of about 128 MB.
This is confirmed by your cstats value of cachesize=32768, where each such buffer is 4096 bytes: 32768 x 4096 bytes = 128 MB. You can get the true JFS cache size by running the following:
[G:\os2]cachejfs
SyncTime: 32 seconds
MaxAge: 128 seconds
BufferIdle: 8 seconds
Cache Size: 1048576 kbytes
Min Free buffers: 8000 ( 32000 K)
Max Free buffers: 24000 ( 96000 K)
Lazy Write is enabled
Therefore, if you are interested in increasing your JFS cache size, you may want to inch up on the 1 GB slowly... maybe go to 256 MB, then 512 MB, etc.
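In CONFIG.SYS terms, since the /CACHE value is in KB: 256 MB is /CACHE:262144, 512 MB is /CACHE:524288, and 1 GB is the /CACHE:1048576 shown above.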
-
Hi Dariusz,
You are right about many things; in fact, at the beginning cachejfs only showed me 131 MB.
But after reducing VIRTUALADDRESSLIMIT to 1024, it already shows me 1 GB of cache.
The price is reduced memory.
I've now set up my ArcaOS NAS like this, but I don't see any performance improvement in file transfer.
I'll have to do more tests.
Your setup is very stable; I just reduced the value from 24000 to 15000.
[C:\]mem /v
Total physical memory: 8 142 MB
Accessible to system: 3 054 MB
Additional (PAE) memory: 5 088 MB
Resident memory: 1 143 MB
Available virtual memory: 3 809 MB
Available process memory:
Private low memory: 344 MB
Private high memory: 448 MB
Shared low memory: 280 MB
Shared high memory: 435 MB
[C:\]cstats
cachesize 262144 cbufs_protected 10288
hashsize 131072 cbufs_probationary 2594
nfreecbufs 248292 cbufs_inuse 0
minfree 8000 cbufs_io 0
maxfree 15000 jbufs_protected 776
numiolru 0 jbufs_probationary 187
slrun 11064 jbufs_inuse 0
slruN 174762 jbufs_io 0
Other 7 jbufs_nohomeok 0
[C:\]cachejfs
SyncTime: 32 seconds
MaxAge: 128 seconds
BufferIdle: 8 seconds
Cache Size: 1048576 kbytes
Min Free buffers: 8000 ( 32000 K)
Max Free buffers: 15000 ( 60000 K)
Lazy Write is enabled
[C:\]
Saludos
-
I've now set up my ArcaOS NAS like this, but I don't see any performance improvement in file transfer.
I'll have to do more tests.
Well, I answered my own question about the speed tests, and the improvement is noticeable.
A folder of 11 GB that normally took 23 minutes to copy is copied in 1 minute 37 seconds.
This is from a 12 TB GPT disk with 6 x 2 TB partitions, to a 240 GB GPT SSD on the same computer.
But yesterday from the network the result was poor, because the network is my bottleneck.
Saludos
-
Hello Roberto,
... But after reducing VIRTUALADDRESSLIMIT to 1024, it already shows me 1 GB of cache.
The price is reduced memory...
Yup, this JFS thing is all about balancing the hardware resources in a manner most beneficial to the way YOU use YOUR machine. So there really are no generalizations, other than perhaps some starting points?
With my particular hardware combo I'm able to run this large of a JFS cache with 'VIRTUALADDRESSLIMIT=2048' setup. I found that going lower than that will start causing problems with our current applications, as others have pointed out as well.
So the 'balance' thing here is probably more heavily skewed towards 'being able to run an app' as opposed to 'having a fast FS cache'.
...I've now set up my ArcaOS NAS like this, but I don't see any performance improvement in file transfer.
I'll have to do more tests...
I would NOT expect any file transfer performance improvement there at all, other than, I suppose, the fact that the larger cache allows your target write operation to be cached.
...
[C:\]cachejfs
SyncTime: 32 seconds
MaxAge: 128 seconds
BufferIdle: 8 seconds
Cache Size: 1048576 kbytes
Min Free buffers: 8000 ( 32000 K)
Max Free buffers: 15000 ( 60000 K)
Lazy Write is enabled...
One last comment is re: SyncTime, MaxAge and BufferIdle. Be careful with these. I have my settings as large as they are because my machine is hooked up to a UPS full-time. Therefore, the chance of the power going out and the JFS cache NOT getting flushed out is very minimal. Still, a hard TRAP could still happen, so there is always some risk there.
Just for reference, here are my notes on this topic:
/LAZY:synctime,maxage,bufferidle
enables write cache with the following parameters:
- synctime : the interval at which the sync thread runs
default = 16
- maxage : is the longest time that a frequently modified file is kept in cache
default = synctime * 4
- bufferidle : is the time indicating a "recent" change. Changes newer than this
value are not written unless the last write was older than maxage.
default = MIN(1,synctime/8)
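As a worked example, the /LW:32,128,8 setting used above gives synctime=32 seconds, maxage=128 seconds (the default synctime * 4 ratio), and bufferidle=8 seconds.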
-
Hi Dariusz
Thank you for sharing your experience and effort.
Experimenting with the LW settings about 15 or 20 years ago sent one of my computers to disaster, and I was left with the idea of not touching them. That's why I know it's difficult.
But your values seem very good to me; the processors on the server are at ZERO or almost zero, which is very good.
Now I'm trying to set up my test machine with twice the standard cache without losing memory, using VIRTUALADDRESSLIMIT=3072.
And this looks great, even with resource-depleting applications, but... it requires more testing.
Saludos
-
Lines to modify in CONFIG.SYS to get a JFS cache twice the current one (131072 = 2 x 65536) while maintaining or even increasing the available memory:
-------------------------
IFS=C:\OS2\JFS.IFS /CACHE:131072 /LW:32,128,8 /AUTOCHECK:*
...
CALL=C:\OS2\CMD.EXE /Q /C C:\OS2\CACHEJFS.EXE /LW:32,128,8 /MINBUFFER:655 /MAXBUFFER:3679 >NUL
...
BUFFERS=16
...
SWAPPATH=C:\OS2\SYSTEM 2000 1045000
...
VIRTUALADDRESSLIMIT=3072
...
--------------------------
Some notes and caveats from my experience:
How to calculate MINBUFFER: with /MINBUFFER:0, the system will calculate the new minimum automatically, returning the value 655 (see the note after this list).
How to calculate MAXBUFFER: I kept increasing the value while Private low memory or Shared low memory increased, and reduced it if they decreased, looking for the maximum available memory.
I believe that, once you have the minimum, the important thing is the difference between these two values.
BUFFERS=16: according to EDM/2, when configuring the cache you should reduce the BUFFERS number; 20 is also a good value.
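(A quick way to check that, if CACHEJFS behaves here as described: run C:\OS2\CACHEJFS.EXE /MINBUFFER:0, then run CACHEJFS with no parameters and read back the Min Free buffers value it computed.)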
I would contribute a link, but now I can't find it.
Regarding SWAPPATH, I remind you of this link for calculating a valid SWAPPATH; whether the values work as they should, or whether some values work better than others, is another matter:
https://www.os2world.com/forum/index.php/topic,3232.msg38051.html?PHPSESSID=4clqlps3of12k9kp8tc7rmr3cc#msg38051
In fact, if someone intends to try a SWAPPATH of 2048 1045000, the hard drive can fail, because the 2048 value is not valid for the swapper.
Also note that activating Hyper-Threading (HT) in many BIOSes costs 2 MB of Private low memory and another 2 MB of Shared low memory, but the processors work better. Or could it be the opposite? I don't know.
And that's it.
That's not all: I understand that for a simple NAS-type server with few applications it can be interesting to have less memory and more JFS cache, to give better service. In this case I have:
BUFFERS at 20,
SWAPPATH=C:\OS2\SYSTEM 2000 2000000,
Cache Size: 1048576 kbytes,
MINBUFFER 0 or 655,
MAXBUFFER 7655,
with VIRTUALADDRESSLIMIT at 1024. Only this value lines up with the page boundary; others do not - I checked.
This combination works quite well, but there may be better ones out there.
Saludos