Hi All
This started as a side discussion to
http://www.os2world.com/forum/index.php?topic=1026
A brief recap: After installing JFS-1.09.06 from Arca Noae my system can "lose" a volume at boot. The first time this happened I only noticed after the Desktop had loaded and I opened the Drives folder to use the "missing" drive.
Running LVM it looked as if the volume had not yet been added to the list of available drives; it was not in the column with the other drives but off to the left - as though the volume had only just been created.
I made absolutely NO changes in LVM but used the Save and Exit option when quitting.
The "missing" drive appeared in the Drives folder.
This is something that happened frequently while I had this release of JFS installed - but not with previous builds, i.e. 1.09.05 and earlier.
It happens on 2 AMD based desktop systems and 2 Intel based laptops.
I am assured there is nothing wrong with the driver... Strange that it happens across a range of hardware though.
A suggestion was put forward that it could be some sort of shutdown problem; possibly the disk cache is not being fully flushed before the system powers off or reboots.
Yet again, not a problem prior to JFS-1.09.06.
I was encouraged to install the latest XWP beta, which has some updates that may help, and to retest the JFS-1.09.06 driver. The XWP beta does not make any difference. There does not seem to be anything wrong with the XWP beta itself - it works smoothly and is here if anyone is interested:
ftp://ftp.netlabs.org/pub/wlan/xwp-3.unofficial-1-0-10.zip
What can I be doing wrong on my range of hardware?
All systems have eCS2.2beta2 installed; the desktops also have eCS2.1+ with updates from eCS2.2beta2 plus Arca Noae.
Prior to installing JFS-1.09.06 I run chkdsk against all drives and check that they report "clean".
I then install the JFS package and reboot.
I had not performed this next step until yesterday, when I reinstalled JFS-1.09.06 for testing with the XWP beta: once the Desktop had loaded and the system had "settled down" I ran chkdsk against all the JFS formatted volumes that were *known* to be clean immediately before the JFS package was installed.
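For reference, those checks were just the default read-only run against each volume, something along these lines (the drive letter is only an example):
chkdsk i: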
Several of the volumes reported this problem:-
CHKDSK Incorrect data detected in disk allocation structures.
CHKDSK Incorrect data detected in disk allocation control structures.
.
.
.
CHKDSK File system is dirty.
CHKDSK File system is dirty but is marked clean. In its present state, the
results of accessing i: (except by this utility) are undefined.
Strange... why should these drives be "dirty but is marked clean"? I guess that is where the shutdown possibility comes in - and it could be the cause of the "lost" volume at boot.
I ran chkdsk against the drives with problems and found that chkdsk did not want to play with 1 of my drives, reporting something very like "Cannot open for write access. Performing readonly check". What??? That sounds serious but, usually, it is not: with JFS-1.09.06 installed it simply means the drive needs a different JFS package - one that works.
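Where a repair was needed and chkdsk would cooperate, the runs were just the usual forced check, along these lines (again, the drive letter is only an example):
chkdsk i: /f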
I had installed the JFS-1.09.06 package to my eCS2.1+ system; on the same system I have a bootable JFS eCS2.2beta2 installation and another eCS2.2beta installation on an HPFS formatted volume; the HPFS installation still has the earlier JFS-1.09.05 package installed and has this line in the config.sys file:-
IFS=M:\OS2\JFS.IFS /LW:5,20,4 /AUTOCHECK:+*
Yes, the HPFS install exists mainly to perform a full forced chkdsk during boot when JFS-1.09.06 stuffs up.
(A side note here: for those who do not know, the above config.sys line does not work with JFS-1.09.06 as support for the "+" option has been dropped. My suggestion: do *not* use JFS-1.09.06 on a "headless" server.)
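If I understand the change correctly, the closest 1.09.06 equivalent is the plain form below, which only checks volumes already flagged dirty rather than forcing a full check of everything - that is my reading of it, not something I have confirmed against the driver documentation:
IFS=M:\OS2\JFS.IFS /LW:5,20,4 /AUTOCHECK:*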
The volume that JFS-1.09.06 could not chkdsk was processed by chkdsk/JFS-1.09.05 without problems, and applications installed on that volume worked fine.
After a reboot back to the eCS2.1+ install chkdsk again reported several volumes as "dirty but is marked clean" - yet again, possibly a shutdown problem...

This time chkdsk was able to process all volumes without any problems.
When I next performed a chkdsk - probably around 3 hours later just before shutdown for the night - guess what I discovered? Yes, several volumes "dirty but is marked clean".
The system had not been shut down between the earlier chkdsk that "fixed" (?) whatever problems existed and this chkdsk. That seems to rule out shutdown as a problem and leave JFS-1.09.06 as the culprit - unless anyone can come up with any other possibilities?
I guess any of the applications I used or the data I saved may have caused the problem - but, the problem does not seem to exist with JFS-1.09.05...
Any thoughts?
Pete