Help: Recovering a volume

Started by RobertM, 2011.10.21, 20:56:33


RobertM

Hello all,

So, I've got an interesting scenario where a RAID failure has caused two volumes to be inaccessible. Both partitions, one for each volume, are visible (their information, at least) in LVM and dfsee. The drive letters are only semi-visible in dfsee (i.e. listed as "-d" and "-e" instead of "D:" and "E:"). Trying to reassign a drive letter via dfsee ends up with similar results. In other words, using the LVM tools in dfsee and changing "D:" (which is how it is listed in the LVM tools) to "I:" results in the LVM information showing as "-i".

One partition is HPFS, while the other is JFS.

Any ideas would be greatly appreciated.

Thanks,
Robert



IBManners

Hi Robert,

I think your best bet would be to replicate one of the disks and work on that. Depending on the RAID level used, if the error is in the file tables then it may have been 'replicated' to both drives. That's the problem with RAID sometimes - it's really meant to protect against hardware failure, not data corruption.

Once you have replicated a drive (assuming that's possible with the RAID level you are using), what happens if you simply mark the drives as clean with DFSEE?
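
For illustration only, here's a minimal Python sketch of the "image the drive first, experiment on the copy" idea; the device path and image-file path are placeholders, not anything from this setup:

    # Copy a raw device to an image file, 1 MiB at a time (source opened read-only).
    # "/dev/sdX" and "disk.img" are hypothetical placeholders.
    SRC = "/dev/sdX"
    DST = "disk.img"
    CHUNK = 1024 * 1024

    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            buf = src.read(CHUNK)
            if not buf:
                break
            dst.write(buf)

Any sector-level copy works just as well; the point is simply to have an untouched copy before any repair tool is allowed to write to the disk.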

Also check out Jrescuer at http://en.ecomstation.ru/projects/jrescuer/ if you haven't already.

Cheers
Ian
I am the computer, it is me.

StefanZ

Just a sidenote:

I had similar results from a different cause: I tried to install a new Ubuntu, which did something daemonic to my extended JFS partition (350GB E:). As a result, the E: volume was no longer accessible, although in DFSee it was visible as "-e" and seemed to be in order. eCS just refused to access it.

One trick - rather drastic, admittedly - helped, proposed by Jan van Wijk (the DFSee developer): manually changing the partition type from the original LVM type 0x35 to the standard IFS Bootable JFS eCS type 0x07. That did it - the partition suddenly became readable again and, in fact, I'm still using it successfully.

The point being that "something" in Ubuntu (a later version of GRUB?) touched the extended JFS partition without my knowledge and changed "something" in there. The effect of the partition type change was that eComStation started using the bootable version of the JFS drivers to init & access the partition in question.
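
For reference, the type StefanZ is describing is the one-byte partition type stored in the partition table entry. A minimal read-only Python sketch of where that byte lives, assuming a raw image file called disk.img (for a logical partition inside an extended container the entry actually sits in the corresponding EBR rather than in sector 0):

    # List the four primary partition type bytes from an MBR image (read-only).
    # 0x35 is the OS/2 LVM-managed type, 0x07 is IFS (HPFS/JFS) - the change described above.
    IMG = "disk.img"   # hypothetical image file

    with open(IMG, "rb") as f:
        sector0 = f.read(512)

    for n in range(4):
        entry = sector0[0x1BE + 16 * n : 0x1BE + 16 * (n + 1)]
        print(f"primary entry {n}: type 0x{entry[4]:02X}")

DFSee changes this byte in place; the sketch only shows where it is.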

Nevertheless, the cause in your case seems to be quite different, so the question is whether this is really worth a try ???

St.

Pete

Hi Robert

I gather chkdsk is not an option?

If you have a Registered DFSee I suggest that you ask Jan for advice/help.

Regards

Pete

ivan

Hi Robert,

What level of RAID is it?

If it's RAID 1, have you tried pulling one drive and replacing it with a blank drive and letting the RAID be rebuilt?

I had a similar problem with one of our NAS units - one of the drives developed a fault. Fortunately the NAS indicated the problem drive, and when I replaced it we were able to rebuild the RAID array, which cleared up access to the volumes.

ivan

RobertM

Hey guys, here's where things get tricky...

The OS was OS/2 Warp Server for e-Business v4.52 CP2 PF with all available updates installed and running HPFS386 with local security *off*.

The RAID array is RAID 1E (striped and mirrored, for those not familiar with IBM's RAID 1E setup) using 4 disks. One disk was already defunct; that seems to be standard with the setup I have (I can never keep the 3rd disk online). I've had all sorts of answers as to why, the most consistent one being that they are not IBM drives (or the special Seagate, Hitachi, etc. drives) designed for the ServeRAID 4 card in the system. That aside, when another drive falls out of the RAID (i.e. 2 drives total), I simply boot from the ServeRAID disk, put it back online, and all is well.
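
As a rough illustration of how 1E lays data out (a simplified model, not necessarily the ServeRAID firmware's exact geometry): each strip goes to one drive and its mirror to the next drive around, so half the raw capacity is usable, and anything the filesystem writes - corruption included - lands on two drives.

    # Simplified RAID 1E placement sketch; 4 x 147 GB drives assumed, as in this setup.
    N_DRIVES = 4
    DRIVE_GB = 147

    def raid1e_placement(strip, n=N_DRIVES):
        # each strip goes to one drive, its mirror copy to the next drive around
        primary = strip % n
        mirror = (primary + 1) % n
        return primary, mirror

    print("usable capacity:", N_DRIVES * DRIVE_GB // 2, "GB")   # -> 294

    for strip in range(6):
        p, m = raid1e_placement(strip)
        print(f"strip {strip}: drive {p}, copy on drive {m}")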

This time, something crashed hard - this happened once before, but I didn't really care - I had a full backup and simply reinstalled the OS and everything else instead of trying to solve it.

So, this time, when I put the drive back online, Drive C was visible, Drives D and E were "visible" (dirty, but showing up), and Drive C had a bunch of errors that chkdsk didn't properly resolve. I reformatted Drive C and reinstalled OS/2. Something happened during the installation and the other drives are no longer accessible - I suspect, especially from what dfsee is showing, that the LVM information has been changed for those drives.

I'll try the partition type change and see how that goes... it can't hurt at this point. I've got backups, but they are not recent (other than a memory module failing, the machine has been running flawlessly for a while and I've been neglectful - I didn't even replace the memory module (they are NOT cheap for this machine) since the machine simply flags it and turns off the whole bank).

I'll letcha know if the partition type change works for the JFS partition - but that still leaves the HPFS partition offline...



A little more information...

4 x 147GB Maxtor 10K U320 SCSI drives set up as one RAID 1E container (i.e. all volumes/partitions are written to what appears to be ONE 294GB hard drive).
3 partitions as follows:
- Boot C: approx 8GB HPFS(386)
- Apps D: approx 50GB HPFS(386)
- Data E: approx 240GB JFS
All on the same channel of an IBM ServeRAID 4M card

The setup has been running for four years with no issues but the 3rd drive falling out of RAID (defuncting itself) after short use.



RobertM

Quote from: Pete on 2011.10.22, 02:00:46
Hi Robert

I gather chkdsk is not an option?

If you have a Registered DFSee I suggest that you ask Jan for advice/help.

Regards

Pete

Re: Chkdsk:
Sadly, the drives never get "mounted" as the drive letter isn't assigned in LVM - it's marked on the partitions, but not in the new LVM tables created during the reinstall to Drive C. Oddly, Drive C *was* and still is in the LVM table.

Re: dfsee license:
I do... dunno where. Though I think with how helpful his tool has been, I'm going to simply register another copy as a show of support. I've also got JFSRescue (registered) someplace... might try that on the JFS drive. IF that works (which is why I haven't tried yet) I've got to figure out how to get the files off... no other JFS drives on that machine, and not enough space to simply copy the files off anyway. And, there's no more space in the front hot-swap bays for additional drives. It's a massive beast to move to hook up more SCSI drives to (or to even get to the rear port to run a cable from), but I might have to take that route or get enough of the current OS/2 install configured to write the files to the other server, a chunk at a time (I think I'm about 100GB short on the space I need - hence the laziness in doing another full backup... my needs have outgrown the space available, and my budget for SCSI drives).



ivan

Quote
The setup has been running for four years with no issues but the 3rd drive falling out of RAID (defuncting itself) after short use.
That third drive problem tends to point to the RAID BIOS chip having a problem - usually caused by temperature. A defective chip could also have caused the problem you are seeing.

Have you tried using DFSee to rewrite the LVM information of the partitions that have the problem?

Out of curiosity, how were you running RAID 1E on only 3 drives? Our NAS protests until fixed if we try that. OK, I know you are running it in a server, but surely there should be some warning every so often.

ivan

RobertM

Quote from: ivan on 2011.10.22, 15:20:06
Quote
The setup has been running for four years with no issues but the 3rd drive falling out of RAID (defuncting itself) after short use.
That third drive problem tends to point to the RAID BIOS chip having a problem - usually caused by temperature. A defective chip could also have caused the problem you are seeing.

Have you tried using DFSee to rewrite the LVM information of the partitions that have the problem?

Out of curiosity, how were you running RAID 1E on only 3 drives? Our NAS protests until fixed if we try that. OK, I know you are running it in a server, but surely there should be some warning every so often.

ivan

Four original drives in the RAID1E configuration.

Anyway, I suspect the SCSI backplane if anything. Though neither heat nor humidity should have been the cause: 65 degrees (or cooler) in here, with 74 degree air coming out of the cabinet at the ceiling, and under 22% RH at any given moment.

