
LarsenCommander - new test version


Andi B.:
Speed tests with SSDs vary on my system. I've tested with -
Samsung SSD 850 PRO 256GB
Samsung SSD 870 EVO 2TB
Samsung SSD 860 QVO 1TB
Crucial CT500MX500SSD1

It's sometimes not clear why copying from one SSD to another is faster with one setting while it's a bit slower with a different target or source. I always thought the maximum throughput you can get on an OS/2 system is a copy from the CLI, because a GUI program with a progress indicator will always lower performance (the progress indicator needs to be updated, while a CLI copy only waits for the disk to accept new data; it does not even check for Ctrl-C). From my findings this is still true for disks, although LCMD now comes very close to the CLI. But in the last months I've even found scenarios where copying with LCMD is much faster than a CLI copy (which, TTBOMK, uses a fixed 4 KB buffer).
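
For illustration, a plain CLI-style copy with a fixed 4 KB buffer boils down to a loop like the sketch below. This is only a sketch of the idea, not the actual CMD.EXE or LCMD code; the file names and the reduced error handling are assumptions.

--- Code: ---/* Minimal fixed-buffer copy loop (sketch only, not the real CLI or LCMD code). */
#define INCL_DOSFILEMGR
#define INCL_DOSERRORS
#include <os2.h>

int main(void)
{
    HFILE  hSrc = NULLHANDLE, hDst = NULLHANDLE;
    ULONG  ulAction = 0, cbRead = 0, cbWritten = 0;
    static BYTE buf[4096];              /* fixed 4 KB buffer, as assumed for the CLI copy */
    APIRET rc;

    rc = DosOpen("source.dat", &hSrc, &ulAction, 0L, FILE_NORMAL,
                 OPEN_ACTION_OPEN_IF_EXISTS | OPEN_ACTION_FAIL_IF_NEW,
                 OPEN_ACCESS_READONLY | OPEN_SHARE_DENYWRITE, NULL);
    if (rc != NO_ERROR) return 1;

    rc = DosOpen("target.dat", &hDst, &ulAction, 0L, FILE_NORMAL,
                 OPEN_ACTION_CREATE_IF_NEW | OPEN_ACTION_REPLACE_IF_EXISTS,
                 OPEN_ACCESS_WRITEONLY | OPEN_SHARE_DENYREADWRITE, NULL);
    if (rc != NO_ERROR) { DosClose(hSrc); return 1; }

    /* No progress indicator: the loop only waits for the disk to accept new data. */
    do {
        rc = DosRead(hSrc, buf, sizeof(buf), &cbRead);
        if (rc != NO_ERROR || cbRead == 0) break;
        rc = DosWrite(hDst, buf, cbRead, &cbWritten);
    } while (rc == NO_ERROR && cbWritten == cbRead);

    DosClose(hDst);
    DosClose(hSrc);
    return 0;
}
--- End code ---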

You may think copying with LCMD should be faster when not much else is running at the same time. But I've even seen scenarios where copying is actually a bit faster when LCMD has lost the focus and some other heavy tasks are running. Once I even needed a reboot to get the copy speed from/to a specific source/target back to the level I'm used to; before that it was about 20% slower for unknown reasons.

The bottom line is that it takes a lot of tests to get reproducible results. In most scenarios you can't get the same full speed with LCMD as with the CLI, but we are very close to it. Findings from my tests with single big files are -
- Samba is way slower than FTP (more than 1.5 times slower here), both via NetDrive *)
- for SSDs and hard disks, LCMD is not exactly as fast as a CLI copy, but very close to it
- the RAM disk is way slower than SSDs or hard disks (JFS is faster than HPFS, but still 3 times slower)
- peak values may be very high because of the SSD (and JFS) cache

*) Samba is way faster than FTP with a lot of small files, I think because of its directory caching, which FTP is missing

Andi B.:
I've just released v1.09.00. See https://sourceforge.net/projects/lcmd-git/files/

For me this is a major improvement, as the EA logic now finally works without memory leaks, at least in my (extensive) testing. The focus-switching problem also improved a lot; maybe not fully solved, but at least for the test cases I had. The copy buffer size limit setting is now honored again. Mind that it is now in MBytes, not KBytes. Not sure if the translations in all languages are correct.
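
As background on that buffer setting: a copy buffer in the high memory arena is typically obtained with DosAllocMem and the OBJ_ANY flag. The sketch below only illustrates that idea; the function name, the fallback, and the size passed in are assumptions, not LCMD's actual code.

--- Code: ---/* Sketch: allocate a copy buffer in the high memory arena via OBJ_ANY.
   cbMB is the user setting in MBytes; falls back to low memory if OBJ_ANY
   is not supported. Not LCMD's actual code, just an illustration. */
#define INCL_DOSMEMMGR
#define INCL_DOSERRORS
#include <os2.h>

static PVOID allocCopyBuffer(ULONG cbMB)
{
    PVOID  pBuf = NULL;
    ULONG  cb   = cbMB * 1024UL * 1024UL;   /* setting is in MBytes now */
    APIRET rc   = DosAllocMem(&pBuf, cb,
                              PAG_COMMIT | PAG_READ | PAG_WRITE | OBJ_ANY);
    if (rc != NO_ERROR)                      /* older kernels without high memory support */
        rc = DosAllocMem(&pBuf, cb, PAG_COMMIT | PAG_READ | PAG_WRITE);
    return (rc == NO_ERROR) ? pBuf : NULL;
}
--- End code ---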

Unfortunately there is still one problem I couldn't solve. After extensive testing and debugging I'm pretty sure it is a file system driver (JFS) problem: copying a huge number of files can lead to file system hangs, sometimes after 80'000 files, sometimes after 800'000 files. At first I thought it was a problem with the log file overflowing (beyond 2 GB). But now that I have changed the logging to LFS support as well, and after numerous tests with and without the log file enabled, I'm pretty sure it's something very low level in the JFS or NVMe (or ?) driver. Given that no one else has mentioned this problem so far, and given that 1.09 can copy at least 10 times more files without problems than any version before, I think it's time to make it public. Enjoy v1.09.00.


--- Code: ---Change History:
---------------
20250222 v1.9.0
- Improved Extended Attribute handling again. Now it should handle all sizes of EAs and it checks and limits them
  to <64k
- EA handling memory leaks are now really fixed
- Re-enabled the max copy buffer size setting, but now in MB instead of KB, because the buffer lives in the high
  memory arena since 1.8.0 (?) anyway
- Improved focus switching logic. Do not set focus on unrelated windows after copy/delete operations.
- Skip sending lots of GUI (PM) messages for updating the progress dialog with fast disks while copying and
  deleting files. This works around PM memory leaks (crashes) when it is hammered with very many messages in a
  short period of time. With fast disks, copying or deleting a small file takes a few ms or even less. The previous
  logic, which updated the progress bar after each file (and during a file when it is bigger), is changed to not
  send more than about one message every 125 ms. PM does not seem to safely ignore too many unhandled messages and
  starts to eat up shared memory; eventually this crashes the whole system if the process is not closed first. For
  instance, this skips more than 69000 messages when copying the ApacheOpenOffice source code tree. The same
  skipping logic improves speed when deleting lots of files, e.g. 33 s instead of 53 s for the AOO source code tree
  (about 70k files). More than 76000 messages were skipped with the new delete logic.
- Changed LOG file handling to LargeFileSupport to allow log files larger than 2 GB. This is needed because with
  very heavy logging, previous versions hung the whole file system (starting with the log file directory) once the
  log file reached 2 GB (> 100k files copied with DEBUG)

--- End code ---
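
To picture the ~125 ms progress-update throttle described in the change history above, a minimal sketch could gate WinPostMsg calls with the millisecond counter from DosQuerySysInfo. The window handle, message ID, and function name below are made-up placeholders, not LCMD's internals.

--- Code: ---/* Sketch of a ~125 ms progress-update throttle; hwndProgress and
   WM_UPDATE_PROGRESS are hypothetical names, not LCMD's. */
#define INCL_DOSMISC
#define INCL_WIN
#include <os2.h>

#define WM_UPDATE_PROGRESS (WM_USER + 1)   /* assumed private message */

static ULONG ulLastPost = 0;               /* ms counter at the last posted update */

void maybePostProgress(HWND hwndProgress, ULONG filesDone)
{
    ULONG ulNow = 0;

    /* QSV_MS_COUNT returns the milliseconds elapsed since boot */
    DosQuerySysInfo(QSV_MS_COUNT, QSV_MS_COUNT, &ulNow, sizeof(ulNow));

    /* Post at most roughly one message every 125 ms; skip the rest so PM's
       queue and shared memory are not hammered by thousands of updates. */
    if (ulNow - ulLastPost >= 125) {
        ulLastPost = ulNow;
        WinPostMsg(hwndProgress, WM_UPDATE_PROGRESS,
                   MPFROMLONG(filesDone), MPVOID);
    }
}
--- End code ---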

Mentore:

--- Quote from: Andi B. on February 23, 2025, 04:22:34 pm ---I've just released v1.09.00. See https://sourceforge.net/projects/lcmd-git/files/


--- End quote ---

Hi Andi, I'm about to test it right now. I use LC heavily, since I'm so used to the old Norton Commander interface. I would also like to ask whether it's possible to add an "Open With" option for files.
Since I use it a lot when I develop or port OS/2 software, something like that would be a great help, letting me, say, open a source code file with an appropriate editor like VisualSlickEdit, EPM, plain old E, FTE, or maybe a more complex IDE like VisualAge or OpenWatcom.
Was that already on your mind?
In any case, thanks for your good work on this file manager, I feel really at home using it.

Mentore

TeLLie:
Hi Andi,

Thankz for this new version :)

greetz TeLLie

Lars:

--- Quote from: Andi B. on February 23, 2025, 04:22:34 pm ---I've just released v1.09.00. See https://sourceforge.net/projects/lcmd-git/files/

For me this is a major improvement, as the EA logic now finally works without memory leaks, at least in my (extensive) testing. The focus-switching problem also improved a lot; maybe not fully solved, but at least for the test cases I had. The copy buffer size limit setting is now honored again. Mind that it is now in MBytes, not KBytes. Not sure if the translations in all languages are correct.

Unfortunately there is still one problem I couldn't solve. After extensive testing and debugging I'm pretty sure it is a file system driver (JFS) problem: copying a huge number of files can lead to file system hangs, sometimes after 80'000 files, sometimes after 800'000 files. At first I thought it was a problem with the log file overflowing (beyond 2 GB). But now that I have changed the logging to LFS support as well, and after numerous tests with and without the log file enabled, I'm pretty sure it's something very low level in the JFS or NVMe (or ?) driver. Given that no one else has mentioned this problem so far, and given that 1.09 can copy at least 10 times more files without problems than any version before, I think it's time to make it public. Enjoy v1.09.00.

--- End quote ---

I would suspect this to be a JFS problem: https://github.com/bitwiseworks/libcx/issues/36

Unfortunately, I think "breaking up a large transfer" would mean that the copying application has to do it (unless you are using libcx and the gcc compiler and not calling the OS/2 API directly). If possible, and as a test case, you could create an HPFS partition, copy from that, and see if that avoids the problem.
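
One way a copying application could "break up a large transfer" is to cap the size of each individual write request, roughly as in the sketch below. The 8 MB chunk size and the function name are assumptions for illustration, not values taken from the libcx issue.

--- Code: ---/* Sketch: cap the size of each DosWrite so no single request exceeds CHUNK
   bytes, as one way an application could break up a large transfer.
   The 8 MB limit is an arbitrary assumption. */
#define INCL_DOSFILEMGR
#define INCL_DOSERRORS
#include <os2.h>

#define CHUNK (8UL * 1024UL * 1024UL)

APIRET writeInChunks(HFILE hFile, const BYTE *pData, ULONG cbTotal)
{
    ULONG  cbDone = 0, cbWritten = 0;
    APIRET rc = NO_ERROR;

    while (cbDone < cbTotal && rc == NO_ERROR) {
        ULONG cbThis = cbTotal - cbDone;
        if (cbThis > CHUNK)
            cbThis = CHUNK;                 /* never hand the file system more than CHUNK at once */
        rc = DosWrite(hFile, (PVOID)(pData + cbDone), cbThis, &cbWritten);
        if (rc == NO_ERROR)
            cbDone += cbWritten;
    }
    return rc;
}
--- End code ---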
