
LarsenCommander - new test version


Andi B.:
Just a quick note about -

--- Quote ---1) from SAMBA share to ramdisk
OLD:  Peak=58,270K, Avg=43,550K
NEW: Peak=56,220K, Avg=46,516K

...so I'm thinking this is more about being network bound (1G LAN, but of course the Samba share box - NAS is the limitation here)

2) from ramdisk to SSD drive (Samsung 860 Evo, SATA)
OLD:  Peak= 82,788K, Avg=  45,545K
NEW: Peak=310,118K, Avg=220,910K
--- End quote ---

On Samba shares I cannot get more than about 36 MBytes/s, no matter what the other system is running. I think this is limited by our netdrive/samba client setup, not the GBit network. You can check with ftp to the same system and will get something much closer to 1 GBit (80-90 MByte/s IIRC).

Ramdisk - writing to our RAMDISK is not more than about 55 MBytes/s here. It would be faster when using strategy 1, but that would limit it to <2 GB and has other big problems, which is why I don't use /1.

I wonder about your really fast SATA performance. I've tested here with a Samsung 850 Pro, Samsung EVO 870, Samsung 860 QVD and various HDDs including my WD Black. It seems my 9-year-old 4-core system cannot handle more than about 160 MBytes/s, so currently I can't test with systems as fast as yours.
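
For cross-checking such throughput numbers outside of LCMD, a quick and dirty REXX timing sketch along these lines is enough - the file and drive names below are only placeholders, and the result is only as exact as the REXX elapsed timer:

--- Code: ---/* copyrate.cmd - rough copy throughput check (paths are placeholders) */
source = 'D:\tmp\bigfile.bin'              /* any large test file           */
target = 'Y:\bigfile.bin'                  /* e.g. the ramdisk or SSD drive */

size = stream(source, 'C', 'QUERY SIZE')   /* file size in bytes */

call time 'R'                              /* reset the elapsed timer */
'copy' source target '>nul'
elapsed = time('E')                        /* elapsed seconds */

if elapsed > 0 then
   say format(size / elapsed / 1048576, , 1) 'MByte/s'

--- End code ---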

Dariusz Piatkowski:
Andi,


--- Quote from: Andi B. on December 03, 2023, 11:13:01 am ---...
On Samba shares I cannot get more than about 36 MBytes/s, no matter what the other system is running. I think this is limited by our netdrive/samba client setup, not the GBit network. You can check with ftp to the same system and will get something much closer to 1 GBit (80-90 MByte/s IIRC).

--- End quote ---

Totally agree, here is what Kai Uwe Rommel's excellent NETIO utility reports ('orclsrvr' is my Win7Pro box on the LAN):


--- Code: ---[G:\]netio -t orclsrvr

NETIO - Network Throughput Benchmark, Version 1.30
(C) 1997-2008 Kai Uwe Rommel

TCP connection established.
Packet size  1k bytes:  108.65 MByte/s Tx,  82.77 MByte/s Rx.
Packet size  2k bytes:  109.66 MByte/s Tx,  108.81 MByte/s Rx.
Packet size  4k bytes:  108.43 MByte/s Tx,  111.03 MByte/s Rx.
Packet size  8k bytes:  106.60 MByte/s Tx,  105.70 MByte/s Rx.
Packet size 16k bytes:  104.48 MByte/s Tx,  9.06 MByte/s Rx.
Packet size 32k bytes:  110.89 MByte/s Tx,  111.15 MByte/s Rx.

--- End code ---

So the pipe is big enough; the severe drop at the 16k packet size is always there, but otherwise it's pretty much using the full Gig.

Just to stay consistent in my testing (given that the network throughput is being tested against a different box - not the NAS one), I re-ran the same test, but just with the NEW LCMD version:

1) from SAMBA WIN7 share to ramdisk
NEW: Peak=74,012K, Avg=50,996K

The NAS box is just a little slower, although I do NOT see this when moving stuff between the Win7 and NAS boxes, so that tells me it is indeed our Samba client.


--- Quote from: Andi B. on December 03, 2023, 11:13:01 am ---...Ramdisk - writing to our RAMDISK is not more than about 55 MBytes/s here. It would be faster when using strategy 1, but that would limit it to <2 GB and has other big problems, which is why I don't use /1.

I wonder about your really fast SATA performance. I've tested here with a Samsung 850 Pro, Samsung EVO 870, Samsung 860 QVD and various HDDs including my WD Black. It seems my 9-year-old 4-core system cannot handle more than about 160 MBytes/s, so currently I can't test with systems as fast as yours.

--- End quote ---

For my RAMDISK I just have the following in my CONFIG.SYS:

'BASEDEV=HD4DISK.ADD /V'

...and my QSSETUP.CMD shows:

'ramdisk Y: jfs'

It's been a while since I last tested the performance of that configuration... probably not since I moved from HPFS386 to JFS, actually.
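
And as a quick sanity check that the ramdisk actually came up, a small RexxUtil call is enough - the Y: drive letter is of course just my setup:

--- Code: ---/* chkram.cmd - confirm the ramdisk drive is attached (Y: is just my setup) */
call RxFuncAdd 'SysLoadFuncs', 'RexxUtil', 'SysLoadFuncs'
call SysLoadFuncs

info = SysDriveInfo('Y:')       /* empty string if the drive is not ready */
if info = '' then
   say 'Y: is not ready'
else
   say 'Y: free/total bytes:' subword(info, 2, 2)

--- End code ---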

Here are the DISKIO metrics for my SSDs:

1) Evo 850

--- Code: ---[G:\]diskio -hd 2
DISKIO - Fixed Disk Benchmark, Version 1.20

Dhrystone 2.1 C benchmark routines (C) 1988 Reinhold P. Weicker
Dhrystone benchmark for this CPU: 6089057 runs/sec

Hard disk 2: 255 sides, 30401 cylinders, 63 sectors per track = 238472 MB
Drive cache/bus transfer rate: 129526 k/sec
Data transfer rate on cylinder 0   : 349038 k/sec
Data transfer rate on cylinder 30399: 318076 k/sec
CPU usage by full speed disk transfers: 6%
Average latency time: 0.1 ms
Average data access time: Disk read error.

Multithreaded disk I/O (4 threads):
 124476 k/sec, 4% CPU usage

--- End code ---

and here is the Evo 860

--- Code: ---[G:\]diskio -hd 4
DISKIO - Fixed Disk Benchmark, Version 1.20

Dhrystone 2.1 C benchmark routines (C) 1988 Reinhold P. Weicker
Dhrystone benchmark for this CPU: 6047660 runs/sec

Hard disk 4: 255 sides, 15960 cylinders, 240 sectors per track = 476929 MB
Drive cache/bus transfer rate: 151264 k/sec
Data transfer rate on cylinder 0   : 373430 k/sec
Data transfer rate on cylinder 15958: 384912 k/sec
CPU usage by full speed disk transfers: 21%
Average latency time: 0.0 ms
Average data access time: 0.3 ms

Multithreaded disk I/O (4 threads):
 216995 k/sec, 29% CPU usage

--- End code ---

My box is pretty ancient by today's standards as well (MSI 890FXA-GD70), but it has a SATA3 SB850 controller, which did make a difference when I moved to that motherboard from my old SATA2 one.

...but anyway, that's just a tangent to the core LCMD changes feedback.

Andy Willis:
Thank you for your work.
I pulled the source from git and found there were build changes:
call envtk45.cmd -noansi
call envicc40.cmd $ -noansi
These cmd files are not in the repository and I do not have them in my build environment. Are these files you built, or should I have them in my environment?

Andi B.:

--- Quote from: Andy Willis on December 05, 2023, 06:45:03 am ---Thank you for your work.
I pulled the source from git and found there were build changes:
call envtk45.cmd -noansi
call envicc40.cmd $ -noansi
These cmd files are not in the repository and I do not have them in my build environment. Are these files you built, or should I have them in my environment?

--- End quote ---
Thanks for testing the new repository. Obviously I've checked in some local changes I made long ago which I never updated in the svn.
envtk45.cmd and envicc40.cmd are my local files, similar to the setenv* commands for toolkit 45 and vacpp40. But instead of switching to fixed inc/lib paths, they first check whether the needed files are already accessible and only add the new paths when they are not already set. So this is only to ensure that the proper toolkit and compiler environment is set.

I wonder if I should add these files to the repository, because they are kind of quick and dirty REXX scripts, and they include my fixed drive/path settings. Maybe I should rem those out in the repo. If you wonder about the -noansi: this is for running them within SlickEdit, to suppress the colored output which is nice when running from 4os2. I think in the Linux world a -color option would be the more common term for such behavior. You may have found out that I usually compile within Visual SlickEdit by calling vs_make.cmd, simply pressing <F5> (Project properties - Build options...).
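
Roughly, the check in those scripts is nothing more than this kind of REXX fragment - the toolkit path and the os2.h probe file here are just placeholders for my local setup:

--- Code: ---/* envtk45.cmd (sketch) - only prepend toolkit paths if they are not usable yet */
call RxFuncAdd 'SysLoadFuncs', 'RexxUtil', 'SysLoadFuncs'
call SysLoadFuncs

tkpath = 'D:\toolkit45'          /* placeholder for the local toolkit location */

/* if a known toolkit header is already found via INCLUDE, leave things alone */
if SysSearchPath('INCLUDE', 'os2.h') = '' then
do
   inc = value('INCLUDE', , 'OS2ENVIRONMENT')
   call value 'INCLUDE', tkpath'\h;'inc, 'OS2ENVIRONMENT'
   /* same pattern for LIB, PATH, HELP, ... */
end

--- End code ---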

Alfredo Fernández Díaz:
I'm glad to see LC actively developed once again. Thank you, Andi - will be giving it a spin : )
Quick question - is NLS functional?
