Drawbacks of HPFS: long chkdsk times. I hate to think how long a chkdsk /f:3 on a 2 TiB partition would take.
Lack of large file support: the 2 GiB file size limit is a problem. It is possible to work around this with the DEADBEEF flag, which changes seeking from per-byte to per-sector, with the caller having to manage the sub-sector positioning itself.
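To make the arithmetic concrete, here is a minimal sketch of the sector-granular positioning that workaround implies. The names, the 512-byte sector size, and the split offset representation are all my own assumptions for illustration; the point is just the math: a 32-bit value counting sectors instead of bytes can address up to 2^32 * 512 bytes = 2 TiB, rather than 2 GiB.

    /* Illustrative only: HPFS sector size assumed to be 512 bytes. */
    #include <stdint.h>

    #define SECTOR_SIZE 512u

    /* Hypothetical split representation: the filesystem seeks by
       sector, and the caller tracks its position within the sector. */
    typedef struct {
        uint32_t sector;       /* what the 32-bit seek value now counts */
        uint16_t byte_in_sec;  /* caller-managed remainder, 0..511      */
    } sector_pos_t;

    /* Split a 64-bit byte offset into sector + remainder. */
    static sector_pos_t to_sector_pos(uint64_t byte_offset)
    {
        sector_pos_t p;
        p.sector      = (uint32_t)(byte_offset / SECTOR_SIZE);
        p.byte_in_sec = (uint16_t)(byte_offset % SECTOR_SIZE);
        return p;
    }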
I don't know much else about the structures.
I've been reading about this tonight. It looks like the large file limitation could be overcome, as the theoretical maximum is already over 7 GB. And with a tweak, that 7 GB limitation could itself be defeated, extending the limit out as far as is needed.
It seems the only hard limit right now on very large files in HPFS is in the legacy API, which only accepts 32-bit values for file offsets. That could also be overcome with our own open source kernel and a new API that supports 64-bit file operations.
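For what a 64-bit replacement call might look like, here is a rough sketch. The name and signature below are invented for illustration (loosely modeled on the shape of DosSetFilePtr), not an existing OS/2 API:

    #include <stdint.h>

    typedef uint32_t APIRET;   /* OS/2-style return code */
    typedef uint32_t HFILE;    /* OS/2-style file handle */

    /* Hypothetical 64-bit seek: same shape as DosSetFilePtr, but the
       offset and the returned position are 64-bit, so files beyond
       2 GiB can be addressed byte-for-byte with no sector games. */
    APIRET Dos64SetFilePtr(HFILE    hFile,     /* open file handle        */
                           int64_t  llOffset,  /* signed 64-bit offset    */
                           uint32_t ulOrigin,  /* begin/current/end       */
                           uint64_t *pullNew); /* resulting position, out */

Legacy 32-bit callers could keep using the old entry point unchanged, while new code takes the wide one.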
-----
If I had money, I would devote a team of developers to creating our own open source OS/2 clone from the ground up, kind of like what Mark Shuttleworth did with Ubuntu Linux (although he still built atop an existing community base, whereas ours would be a full ground-up replacement effort). I would support the full legacy OS/2 API, so all of our code could be compiled on existing OS/2 code bases for testing, and would also compile on the new OS/2.
Ah to dream...
Thank you,
Rick C. Hodgin