Author Topic: Firefox problem, was Re: IRC speakup with BWW this Saturday (3 of March)!  (Read 29507 times)

Neil Waldhauer

  • Hero Member
  • *****
  • Posts: 1020
  • Karma: +24/-0
    • View Profile
    • Blonde Guy
The transcript of BWW speakup is now up on the VoiCE website. I can see that the topic never came up.

I'm not looking for a "resolution" to problems with Firefox -- I have all the latest libraries on both eCS and ArcaOS. I know there is a lot of work going on by looking at checkins on github, but I don't know the goal of those checkins.

I will say that marking code to load high does make some difference to stability, and I'll comment again on this in a few weeks when I have more information. I think the highmem tool could be a little more user-friendly.

I don't think I can enter a bug report for "what are you guys trying to do with Firefox?"

Silvan said in the speakup that a browser is essential to the life of the operating system. How essential is a web browser to the success of the operating system? I can browse on a smartphone or tablet. There is no platform with a bug-free browser. Still, I'm inclined to think he's right. I'm using Firefox on ArcaOS to type this message.
Expert consulting for ArcaOS, OS/2 and eComStation
http://www.blondeguy.com

David Graser

  • Hero Member
  • *****
  • Posts: 869
  • Karma: +84/-0
    • View Profile
Firefox does what I call CPU trashing.  Based on the CPU monitor, its CPU usage is extremely high.  Seamonkey does the same thing, but is nowhere near as high as Firefox.  Right now I will use Seamonkey, since it does a better job of utilizing the CPU.

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4775
  • Karma: +99/-1
    • View Profile
Hi, currently Bitwise is trying to get all of the Mozilla testsuite to run and pass, not a small job, as it hasn't even worked since about FF v4, when Walter did a lot of work on it. The testsuite is what Mozilla expects to pass before releases and even after new commits.
So far, he has fixed a couple of memory-related problems, including fixes to libc and/or libcx, and moved from mmap to native memory allocation in places. There's no way to say whether running the testsuite will show our problems, but it is a good start.
Currently dmik is trying to implement some type of tracing to see where the code is spending its time. Hopefully that'll help, but Mozilla uses lots of JS, XUL and such to draw the browser, so who knows.

As for SM being currently faster than FF, it is probably a newer build, depending on which FF is used. I was also experimenting with different compiler options, eg optimizing for size rather than speed, which may be why the DLLs were different sizes. Unfortunately, enough time has gone by that I forget exactly what I did. about:buildconfig will show the compiler flags I (and dmik) used, with -Os being for size and -O3 for speed. Probably best is -O3 for the JavaScript engine and -Os for the rest, since smaller binaries take less memory.

Here I find that SM performance is great if started in safe mode, and as extensions are enabled it gets worse, especially with some extensions. TB is worse still, and just mousing over it can peak both cores for seconds.

A lot of these problems are possibly intrinsic to the design of Mozilla, and that is one reason they're moving to the Quantum rendering engine. Other things are getting worse with all browsers as they split into multiple processes for various reasons, mostly security.

David Graser

  • Hero Member
  • *****
  • Posts: 869
  • Karma: +84/-0
    • View Profile
I had to move back to Firefox.  Seamonkey would run for a while and then suddenly terminate.

What is weird is that on my latest bootup, Firefox CPU trashing is way down.  I can't explain it.  Before, I could barely use Firefox.  Right now it is pleasant to use.

Doug Bissett

  • Hero Member
  • *****
  • Posts: 1593
  • Karma: +4/-2
    • View Profile
There are two things that I find will help Firefox (and probably the other Mozilla variants). One is to use a small cache; I set it to 50 MB. Of course, individual setups may work better with other sizes.

The other is to use ANPM (RPM/YUM) to install FFMPEG. Be careful there, because FFMPEG will want to install the sdl (lower case) package, while other things need the SDL (upper case) package. Install SDL (upper case) first. If you already have sdl (lower case) installed, uninstall it (and watch what other packages get uninstalled with it). Then install SDL and the other packages that got uninstalled, then install FFMPEG.
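For anyone doing this from a command prompt rather than the ANPM GUI, the same sequence with yum looks roughly like the sketch below. The package names sdl and SDL are taken from the description above, the RPM name for FFMPEG is assumed to be ffmpeg, and the list of packages that come out along with sdl will differ per system, so treat this as a sketch, not an exact recipe.

rem Check which of the two packages is currently installed.
yum list installed sdl SDL
rem If the lower-case one is there, remove it and note what gets removed with it.
yum remove sdl
rem Reinstall in this order: SDL first, then whatever came out above, then ffmpeg.
yum install SDL
yum install ffmpeg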

Those are not going to make FF work perfectly, but I find that they help.

Quote
I can't explain it.  Before I could barely use Firefox.  Right now it is pleasant to use.

It is possible that you got a bad web page into the cache, and going back to that web page would try to load the bad copy. Eventually, that bad page would work its way out of the cache, fixing the problem. I try to remember to clear the cache after a crash.

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4775
  • Karma: +99/-1
    • View Profile
Usually a crash or hang will invalidate the whole cache.

Doug Bissett

  • Hero Member
  • *****
  • Posts: 1593
  • Karma: +4/-2
    • View Profile
Quote
Usually a crash or hang will invalidate the whole cache.

Perhaps that is what it is supposed to do, but in my experience whatever caused the crash is cached, and FF tries to go back to it, causing another crash. What I try to do is force it to the default home page as soon as the program starts, then go and clear the cache before going anywhere else. Sometimes a crash might invalidate the cache, but that is rare, from what I see. A hang never invalidates anything.

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4775
  • Karma: +99/-1
    • View Profile
I have lots of experience with depending on the cache, and after a crash every page needs to be reloaded. On dial-up I got into the habit of backing up the cache very frequently and restoring it after a crash, to avoid having to reload everything; at least that is how it behaves here with my usual profile and SM.
I guess it is possible that FF acts differently, but I doubt it; it could be a profile thing.
Next time you have a crash, look before restarting the browser: the cache should have been renamed to something like cache.trash1234567890, with random numbers. If not, it's a bug. After starting, the browser should delete the trash cache and build a new one.
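A quick way to check for that from an OS/2 command prompt, before restarting the browser; the profile path below is only an example, so substitute wherever your Firefox or SeaMonkey profile actually lives:

rem List anything named like a trashed cache under the profile tree.
rem The path is an example -- adjust it to your own profile location.
dir f:\home\default\.mozilla\firefox\cache.trash* /s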

David Graser

  • Hero Member
  • *****
  • Posts: 869
  • Karma: +84/-0
    • View Profile
I have noticed that the amount of CPU trashing changes with the sites one goes to.  The OS2World home page is low.  This other site is low:

http://ecsoft2.org/

However, when I go to another web page at the same site, the CPU usage goes up tremendously:

http://ecsoft2.org/popular

Go back to the home page and it drops tremendously.  So it appears that certain web pages cause these browsers to go bonkers.  What is it about pages such as this that causes the browsers to overreact?

I wonder if it is possible to build into Firefox or Seamonkey the ability to use another caching program such as Oops.  On a properties page, select an external cache program and have a text box to point to it, similar to what FM/2 and PMMail do for external editors.

ftp://hobbes.nmsu.edu/pub/os2/apps/internet/www/server/oops-1-5-24.zip
« Last Edit: March 06, 2018, 04:06:49 pm by David Graser »

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4775
  • Karma: +99/-1
    • View Profile
Quote
I have noticed that the amount of CPU trashing changes with the sites one goes to.  The OS2World home page is low.  This other site is low:

http://ecsoft2.org/

However, when I go to another web page at the same site, the CPU usage goes up tremendously:

http://ecsoft2.org/popular

Go back to the home page and it drops tremendously.  So it appears that certain web pages cause these browsers to go bonkers.  What is it about pages such as this that causes the browsers to overreact?

Likely just whatever JavaScript they're running, and some run a lot. Hopefully Bitwise will figure it out.
Quote
I wonder if it is possible to build into Firefox or Seamonkey the ability to use another caching program such as Oops.  On a properties page select external cache program and have a text box to point it to it, similar to what FM/2 and PMMail do for for external editors.

ftp://hobbes.nmsu.edu/pub/os2/apps/internet/www/server/oops-1-5-24.zip

Sure, oops or squid are easy to use with Firefox or SeaMonkey as they're just proxies.
For SM, for example, you'd install oops, including creating its cache and adding it to the Startup folder or such so it is always running, and then go to Edit-->Preferences-->Advanced-->Proxies-->Manual proxy configuration: and put in localhost (or 127.0.0.1) and the correct port number, such as 3128.
Even on dial-up, using oops or squid didn't seem to make much difference, and with more sites becoming HTTPS, they have a harder time caching everything.
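A minimal sketch of the startup side, assuming oops was unpacked to C:\oops and listens on port 3128; both the path and the port are assumptions, so adjust them to your own install and oops.cfg (the proxy may also need its config file passed on the command line):

rem Start the oops proxy detached at boot, e.g. from STARTUP.CMD.
rem C:\oops and port 3128 are assumptions -- check your install and oops.cfg.
detach c:\oops\oops.exe
rem Then point the browser at it: Manual proxy configuration,
rem HTTP proxy 127.0.0.1, port 3128.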

David Graser

  • Hero Member
  • *****
  • Posts: 869
  • Karma: +84/-0
    • View Profile
I was looking at a utility called highmem:

http://www.os2site.com/sw/dev/util/highmem-20140406-os2.zip

Its developer, Yuri Dario (Paperino), states the following:

"Loading code high should almost always be ok. That's because loading code high is mostly transparent to applications: it's the kernel's job to map high addresses to the physical memory where the code resides. Unless some code tries to map a linear code address (aka: a function pointer) to a segmented code address, see below but that does not happen very often.

For loading data high this is much more critical: of course, the same applies as for code but remember "thunking"? A lot of APIs make the silent assumption that a linear data address can be easily mapped to a segmented address and vice versa by a simple well known "thunking" algorithm. However that algorithm only works for "low memory". And OS/2 has enough 16-bit code in its bowels that can only access data via a segmented data address. If you have a big application like Firefox you never know if there is some low level OS component that makes that simple assumption, thunks a data address and subsequently gets it wrong if "high memory" data addresses are used."

This looks like what Firefox is doing. 
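For anyone who wants to experiment, highmem marks individual EXEs and DLLs from the command line, and per the explanation above it is safest to start with code objects only. A rough sketch follows; the -c switch for code objects is quoted from memory of the tool's readme, so run highmem with no arguments to see its real option list, and the Firefox path is only an example.

rem Mark code objects (only) of the browser DLLs to load into high memory.
rem The -c switch is from memory of highmem's readme -- verify it against
rem the tool's own help output. The path below is just an example.
f:
cd \programs\firefox
highmem -c *.dll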

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4775
  • Karma: +99/-1
    • View Profile
We've been pretty careful to make sure any 16-bit API is called from low memory. That doesn't mean it doesn't happen, but it sure happens a lot less than when this move to high memory started.
Unfortunately, there are also problems in the kernel with high-loaded code and data. Most of the recent kernels have been attempts to fix this, but without the source code and the right to rebuild and distribute fixed kernels...

David Graser

  • Hero Member
  • *****
  • Posts: 869
  • Karma: +84/-0
    • View Profile
I made a STARTUP.CMD file and put it into my ArcaOS 5.02 root directory.

@echo off
x:
md temp
exit

which creates a temp directory on drive X which is my RAM drive.

I then changed my config.sys to point to where my new temporary directory is located.

SET TMP=x:\temp
SET TEMP=x:\temp
SET TMPDIR=x:\temp

Now the problem.

My Firefox 45.9.0 download directory in Options points to

f:\home\temp

When I download a file, it always fails.  When I click retry, it shows the download finishing.

However, the downloaded file is not found in the f:\home\temp directory.

I now find it in my X:\temp directory.

Before, Firefox would have no problem downloading and the downloads would always be found in the home\temp directory.

Have I done something wrong or is this a problem with Firefox?

ivan

  • Hero Member
  • *****
  • Posts: 1556
  • Karma: +17/-0
    • View Profile
Out of interest, make an 'F:\home\download' directory and point the Firefox downloads to that.  If it works, everything is good.

I think the problem could be in the way the temp setting is handled.

David Graser

  • Hero Member
  • *****
  • Posts: 869
  • Karma: +84/-0
    • View Profile
I made a mistake.  Firefox points to the download directory, which is in the home directory. The download directory does exist.