This is an old OS2World backup forum for reference only. IT IS READ ONLY!!!

If you need help with OS/2 - eComStation visit http://www.os2world.com/forum

JFS eating CPU resources?

Started by RobertM, 2010.04.28, 23:36:59


RobertM

Hello all,

I have now noticed this on two of my machines. It seems JFS really really does not like doing more than one thing at a time. I have a 64MB cache configured on one, and a 32MB cache configured on the other. The first is definitely the latest version of JFS I could find (and latest SMP kernel). One is U160 SCSI, the other is a combination of SATA and IDE.

The machines:
- Netfinity 7000 M10 4GB RAM, U160 SCSI RAID configured to RAID 1E (striped and mirrored) running WSeB CP2 PF on FOUR CPUs.
- Intel 845 Chipset board with one IDE and one SATA drive and relatively recent DANI drivers running eCS v1.2MR on single 2.8GHz P4

The problem:
When transferring decent-sized files (FTP, disk-to-disk copy, etc.), it seems to eat a lot more CPU than it should. HPFS386 does not seem to have this problem (especially at the TWO concurrent FTP transfers or copies I limit these actions to).

I've attached a screenshot of the CPU status bar, etc. This baffles me. The CPUs should be at between 0.1% and 5% - instead they are hitting up to 25%.

Upload or download doesn't seem to matter too much (upload to the server seems a little worse).

Best,
Robert


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


Saijin_Naib

My only thought is that the kernel and/or JFS.IFS are the debug versions? Maybe the CPU drain is from that? Can you force debugging off or revert to a non-debug/test JFS.IFS?

I've not seen any issues here in a VM while using JFS, but I'm not using a testcase, just the vanilla one that ships with RC7.
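
If it helps, one way to check exactly which JFS.IFS build is installed is the BLDLEVEL utility that ships with OS/2 - just a quick sketch, and the output format varies by build:

    bldlevel C:\OS2\JFS.IFS

It prints the vendor, revision and description string embedded in the driver; a debug or test build will usually identify itself in that description.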

RobertM

Quote from: Saijin_Naib on 2010.04.29, 00:17:35
My only thought is that the kernel and/or JFS.IFS are the debug versions? Maybe the CPU drain is from that? Can you force debugging off or revert to a non-debug/test JFS.IFS?

I've not seen any issues here in a VM while using JFS, but I'm not using a testcase, just the vanilla one that ships with RC7.

Non debug. :(

Wondering if it has anything to do with how JFS handles things on nearly full disks. It was at about 11GB free out of 250GB, with a LOT of files.


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


Saijin_Naib

I've got nothing, man. I've only used eCS on an 8GB/2MB and a 40GB/8MB IDE drive, and in both cases the drive was pretty much empty. I used JFS on both because it was faster than HPFS (not 386, don't have a license for it) and I didn't notice any slowdown.

Could it be an issue with the drive not having NCQ on and getting the command queue backlogged while seeking the disk?

mobybrick

JFS is subject to fragmentation far more than HPFS - this could cause a lot more work for the disk. Remember, JFS tries to get its performance by maintaining the disk head near where the next read/write will be, and if there is heavy fragmentation then this performance objective is lost.

JFS is a ring 3 FSD (rather than a ring 0 for HPFS386) but this shouldn't cause the problem that you are seeing as the JFS code is much more tuned for SMP.

I'd look at the disk driver... I remember that some of the Adaptec drivers prevented the drive's own cache from working...

Moby

RobertM

Quote from: mobybrick on 2010.04.29, 01:01:23
JFS is subject to fragmentation far more than HPFS - this could cause a lot more work for the disk. Remember, JFS tries to get its performance by maintaining the disk head near where the next read/write will be and if there is heavy fragmentation then this performance objective is lost.

...

Moby

Ooh... didn't even remember about the fragmentation issue! The machine transfers GIGS a day via FTP, with files being written, deleted, and re-written all the time... it's gotta be massively fragmented by now.

Any recommendations? I'm guessing it's not as easy as HPFS, where a simple copy operation will defrag the FS?

Thanks Moby!

(The driver running, btw, is an IBM ServeRAID 4 series driver, most recent)


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


abwillis



Quote from: RobertM
Ooh... didn't even remember about the fragmentation issue! The machine transfers GIGS a day via FTP, with files being written, deleted, and re-written all the time... it's gotta be massively fragmented by now.

Any recommendations? I'm guessing it's not as easy as HPFS, where a simple copy operation will defrag the FS?

Thanks Moby!

(The driver running, btw, is an IBM ServeRAID 4 series driver, most recent)

Well, I think it will defrag with a copy operation, but with the drive that full the best result it could get would probably still be bad. From what I could determine, fragmentation seems to set in at around 40% utilization. There is a defrag utility that comes with JFS, but I don't think it was ever really completed; I understand IBM left it to third parties to develop better defrag utilities. In fact, I've trapped while running the defrag before.
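For what it's worth, the bundled utility is defragfs and, as I remember it, you just point it at the drive letter - but the exact switches are from memory, so check its own usage output first (and given that it can trap, make sure you have a backup):

    defragfs -q d:    (query/report the fragmentation state, if memory serves)
    defragfs d:       (attempt the actual defragmentation)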
I had hoped the Graham Utilities would add the capability, but they haven't so far. Speaking of the Graham Utilities: in the Apache/MySQL thread (which I can't view when logged in - grey screen - but can view when not logged in, and then can't reply to) you mention compacting memory. There is a tool in the Graham Utilities to compact memory:
http://www.warpspeed.com.au/Products/OS2/GU/Manual/TaskMgr.htm
and there is also a tool that moves memory to the swapper to clear memory (which the above may be doing too):
http://os2site.com/sw/util/memory/allocmem.zip
Andy

The Blue Warper

I use the defragfs util to defrag my JFS drives, but I didn't know it never got finished.
It seems that a JFS defrag utility was never released for Linux.  I found this project on sourceforge:
http://jfs.sourceforge.net/
but, as far as I understand, it doesn't include any JFS defrag.

I also found this thread containing some OS/2-related news (which I'm not able to confirm or verify, though):
http://www.mail-archive.com/jfs-discussion@lists.sourceforge.net/msg00571.html

Finally, there's a warning in the JRescuer FAQ against using the defragfs util on JFS drives:
http://en.ecomstation.ru/projects/jrescuer/?action=faq-jfs
(See Q17)

Andi

It seems to me your cache size is very small. I usually have hundreds of MB for the JFS cache, especially with >1GB RAM. Though I cannot go beyond 500MB even on my machine with 4GB RAM.

There was a presentation from Sjoerd Visser at Warpstock Europe 2009 about JFS. You can find it by searching for APP02-JFS-Cache.pdf. There are a lot of things you can tune with JFS, but I've forgotten the details. I do seem to remember, though, that the default maximum of 64MB set by the IBM installer is quite small on modern systems.
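
For reference, the cache size is set with the /CACHE switch (value in KB) on the JFS line in CONFIG.SYS - something like the line below, with the numbers purely as an illustration and the other switches shown only as they typically appear on an eCS install:

    IFS=C:\OS2\JFS.IFS /CACHE:131072 /LW:5,20,4 /AUTOCHECK:*

(131072 KB = 128MB of cache; adjust to your own RAM and setup.)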

HTH

RobertM

Quote from: Andi on 2010.04.29, 13:35:09
It seems to me your cache size is very small. I usually have hundreds of MB for the JFS cache, especially with >1GB RAM.

...

Hi Andi,

I have the default so low because on one machine, Apache/MySQL/PHP eat up the low memory arena as it is. I wasn't sure whether JFS uses that arena or not.

On the other machine, it is set so low because I also have a pretty large HPFS386 cache and didn't want to eat up the low memory arena.

Will look for the doc you indicate and see if I can learn something from it.

Thanks,
Rob


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


cytan

Quote from: RobertM on 2010.04.29, 19:47:19
...

On the other machine, it is set so low because I also have a pretty large HPFS386 cache and didn't want to eat up the low memory arena.

Will look for the doc you indicate and see if I can learn something from it.

Thanks,
Rob

Man, I really, really hope that version 3 of eComStation will actually fix the low memory arena problem. So many modern apps are using this space that we need a solution.

cytan

RobertM

Quote from: cytan on 2010.04.29, 20:28:02
Quote from: RobertM on 2010.04.29, 19:47:19
...

Man, I really, really hope that version 3 of eComStation will actually fix the low memory arena problem. So many modern apps are using this space that we need a solution.

cytan


Hmmm... well it seems:


  • JFS doesn't use the low memory arena... BUT it is therefore limited to resources defined by VIRTUALADDRESSLIMIT (also limiting other apps that use that memory space)

  • Using larger than 64MB JFS caches on non-SMP machines may not be smart
    (I've had JFS traps trying that which resulted in a nightmare recovering the files - I haven't tried again after upgrading JFS for fear of creating the same situation)

  • Tweaking a few otherwise undocumented settings can help (/MINBUFFER and /MAXBUFFER) - see the sketch just below
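
A rough sketch of where those knobs live in CONFIG.SYS, for anyone following along - the values are purely illustrative, and since /MINBUFFER and /MAXBUFFER are undocumented, their exact syntax and placement on the IFS line should be treated as unverified:

    VIRTUALADDRESSLIMIT=2048
    IFS=C:\OS2\JFS.IFS /CACHE:65536 /MINBUFFER:1000 /MAXBUFFER:4000 /AUTOCHECK:*

(VIRTUALADDRESSLIMIT can go up to 3072; the system default is 512 if the line is absent.)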

Ah... this should be fun... one machine IS SMP but is the main production server for Star Trek Phase 2 (FTP) as well as their (and 30 other clients') web server... the other is all my (and my customers') blog sites, forums, etc and is NOT SMP.

Which do I wanna risk blowing up first? LoL!  ;D  :-[  :'(


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


Andi

Quote from: RobertM on 2010.04.29, 20:44:40
....

  • JFS doesn't use the low memory arena... BUT it is therefore limited to resources defined by VIRTUALADDRESSLIMIT (also limiting other apps that use that memory space)
Hm, I think HPFS386 eats the shared arena; that's why I usually keep it rather small.

Quote
  • Using larger than 64MB JFS caches on non-SMP machines may not be smart
    (I've had JFS traps trying that which resulted in a nightmare recovering the files - I haven't tried again after upgrading JFS for fear of creating the same situation)
....
Is this still true for the latest JFS releases?

I remember JFS is said to be even more SMP-optimized than HPFS386. But I have had several hundred MB of JFS cache on my UNI system without problems. Of course a backup is always recommended, and with today's disk sizes and LVM/JFS there's no excuse for not having one.

Andi

Back on topic - in the document I mentioned there's a possible explanation for this. Syncing the cache may be your problem. I still think it's worth playing with the JFS parameters. In your case I would even suggest emailing Sjoerd.
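
If I recall correctly, eCS also ships a CACHEJFS.EXE which, run with no arguments, displays the current JFS cache size and lazy-write/sync settings - and I believe it can change them at runtime as well, though I'd check its usage output for the exact switches rather than trust my memory:

    cachejfs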

RobertM

Quote from: Andi on 2010.04.30, 10:40:02
Quote from: RobertM on 2010.04.29, 20:44:40
....

  • JFS doesn't use the low memory arena... BUT it is therefore limited to resources defined by VIRTUALADDRESSLIMIT (also limiting other apps that use that memory space)
Hm, I think HPFS386 eats the shared arena; that's why I usually keep it rather small.

Correct. I set mine to either 64MB (the LDGW box) or 32MB (the Apache box)

Quote from: Andi on 2010.04.30, 10:40:02
Quote
  • Using larger than 64MB JFS caches on non-SMP machines may not be smart
    (I've had JFS traps trying that which resulted in a nightmare recovering the files - I haven't tried again after upgrading JFS for fear of creating the same situation)
....
Is this still true for the latest JFS releases?

Not sure. My crash was with the original WSeB CP2 PF and eCS 1.2MR versions. I have since upgraded, but cannot really afford to test to find out.

Quote from: Andi on 2010.04.30, 10:40:02
I remember JFS is said to be even more SMP-optimized than HPFS386. But I have had several hundred MB of JFS cache on my UNI system without problems. Of course a backup is always recommended, and with today's disk sizes and LVM/JFS there's no excuse for not having one.

Not sure how they stack up... HPFS386 seems to use less CPU. And JFS seems to eat more and more CPU as the disk gets fragmented. So... at this point, JFS is performing slower than HPFS386 on the same SCSI RAID drive array using the exact same disks. But then again I've got hundreds of thousands if not millions of files that are surely fragmented on it. Sadly, even with those quantities, HPFS fragmentation is minimal in comparison.


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages