Andi,
...
On Samba shares I cannot get more than about 36MBytes/s, no matter what the other system is running. I think this is limited by our netdrive/samba client setup. It's not the GBit network. You can check with ftp to the same system and will get something much closer to 1GBit (80-90MByte/s IIRC).
Totally agree; here is what Kai Uwe Rommel's excellent NETIO utility reports ('orclsrvr' is my Win7Pro box on the LAN):
[G:\]netio -t orclsrvr
NETIO - Network Throughput Benchmark, Version 1.30
(C) 1997-2008 Kai Uwe Rommel
TCP connection established.
Packet size 1k bytes: 108.65 MByte/s Tx, 82.77 MByte/s Rx.
Packet size 2k bytes: 109.66 MByte/s Tx, 108.81 MByte/s Rx.
Packet size 4k bytes: 108.43 MByte/s Tx, 111.03 MByte/s Rx.
Packet size 8k bytes: 106.60 MByte/s Tx, 105.70 MByte/s Rx.
Packet size 16k bytes: 104.48 MByte/s Tx, 9.06 MByte/s Rx.
Packet size 32k bytes: 110.89 MByte/s Tx, 111.15 MByte/s Rx.
So the pipe is big enough; although the severe drop at the 16k packet size is always there, otherwise it's pretty much using the full Gig (1 Gbit/s works out to 125 MByte/s raw, so ~110 MByte/s after protocol overhead is about as good as it gets).
Just to stay consistent in my testing (given that the network throughput above was measured against a different box, not the NAS one), I re-ran the same test, but just with the NEW LCMD version:
1) from SAMBA WIN7 share to ramdisk
NEW: Peak=74,012K, Avg=50,996K
The NAS box is just a little slower, although I do NOT see this when moving stuff between the Win7 and NAS boxes, so that tells me it is indeed our Samba client.
...Ramdisk - writing to our RAMDISK is not more than about 55MBytes/s here. It would be faster when using strategy 1, but that would limit it to <2GB and has other big problems, which is why I stick without /1.
I wonder about your really fast SATA performance. I've tested here with a Samsung 850 Pro, Samsung 870 EVO, Samsung 860 QVO and various HDDs including my WD Black. It seems my 9-year-old 4-core system cannot handle more than about 160MBytes/s, so currently I can't test with systems as fast as yours.
For my RAMDISK I just have the following in my CONFIG.SYS:
'BASEDEV=HD4DISK.ADD /V'
...and my QSSETUP.CMD shows:
'ramdisk Y: jfs'
It's been a while since I tested the performance of that configuration... probably not since I moved from HPFS386 to JFS, actually.
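If I get around to re-baselining it, a quick-and-dirty REXX timing script is probably the easiest way - something along these lines (the file names/paths are just placeholders, and it assumes the source file sits on a fast local drive so the ramdisk write is the bottleneck):

/* ramtime.cmd - rough write-throughput check for the Y: ramdisk */
src = 'C:\TEMP\BIGFILE.BIN'     /* placeholder: any large local file   */
dst = 'Y:\BIGFILE.BIN'

call time 'R'                   /* reset the elapsed-time clock        */
'copy' src dst '>NUL'
secs = time('E')                /* elapsed seconds since the reset     */

bytes = stream(dst, 'C', 'QUERY SIZE')
say 'Copied' bytes 'bytes in' secs 's =',
    format(bytes / secs / 1048576, , 1) 'MByte/s'
'del' dst '>NUL'

Nothing fancy, but it should be enough to see whether the ~55MByte/s ceiling Andi mentions shows up here too.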
Here are the DISKIO metrics for my SSDs:
1) Evo 850
[G:\]diskio -hd 2
DISKIO - Fixed Disk Benchmark, Version 1.20
Dhrystone 2.1 C benchmark routines (C) 1988 Reinhold P. Weicker
Dhrystone benchmark for this CPU: 6089057 runs/sec
Hard disk 2: 255 sides, 30401 cylinders, 63 sectors per track = 238472 MB
Drive cache/bus transfer rate: 129526 k/sec
Data transfer rate on cylinder 0 : 349038 k/sec
Data transfer rate on cylinder 30399: 318076 k/sec
CPU usage by full speed disk transfers: 6%
Average latency time: 0.1 ms
Average data access time: Disk read error.
Multithreaded disk I/O (4 threads):
124476 k/sec, 4% CPU usage
2) Evo 860
[G:\]diskio -hd 4
DISKIO - Fixed Disk Benchmark, Version 1.20
Dhrystone 2.1 C benchmark routines (C) 1988 Reinhold P. Weicker
Dhrystone benchmark for this CPU: 6047660 runs/sec
Hard disk 4: 255 sides, 15960 cylinders, 240 sectors per track = 476929 MB
Drive cache/bus transfer rate: 151264 k/sec
Data transfer rate on cylinder 0 : 373430 k/sec
Data transfer rate on cylinder 15958: 384912 k/sec
CPU usage by full speed disk transfers: 21%
Average latency time: 0.0 ms
Average data access time: 0.3 ms
Multithreaded disk I/O (4 threads):
216995 k/sec, 29% CPU usage
My box is pretty ancient by today's standards as well (MSI 890FXA-GD70), but it has the SATA3 SB850 controller, which did make a difference when I moved to that board from my old SATA2 one.
...but anyway, that's just a tangent to the core LCMD changes feedback.