
Benchmarks on real vs virtualised hardware

Started by Paul Smedley, 2011.04.13, 01:30:24


Paul Smedley

Hi there!

Quote from: Fahrvenugen on 2011.04.13, 18:34:35
Quote from: Paul Smedley on 2011.04.13, 04:05:54

So far, I've done no tuning.  It's basically a stock VirtualBox setup, except I changed the NIC to an Intel gigabit adapter and set the RAM for the virtual OS to about a gig.

I'd be interested in any tips to further improve performance!

Hi,

I've got a few theories on this, but it would require additional testing.

First, from what I understand, modern 64-bit CPUs operate more efficiently when running a 64-bit OS than when run in 32-bit mode.  Given that you're running the 64-bit build of Ubuntu and then eCS (which we know is 32-bit) virtualised, I'm wondering if some of the extra efficiency you get by having the CPU in native 64-bit mode is making a difference.  To test this, it would be interesting to run the same test but using the 32-bit version of Ubuntu.

The second thing I'm wondering - is VirtualBox set up to emulate a single CPU, or SMP?  Also, is the OS/2 build of GCC set up to take advantage of SMP, or does it only use one core when compiling?  I seem to recall way back when (in the OS/2 2.1 SMP days) that some apps coded to use only a single core actually saw minor performance drops (usually less than 5%) when run on the SMP kernel with multiple processors.  Of course this wouldn't account for the over 50% difference that you're seeing.  Just thinking of theories...

The OS/2 build of GCC uses threads, and multiple cores are definitely used (according to the CPU meter in XCenter). GCC itself isn't highly threaded, so running multiple make jobs can help improve performance on SMP systems.  For interest, I may re-run some of the benchmarks with make -j3 to see if that helps native eCS catch up to virtualised eCS :)
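
For reference, the parallel re-run is nothing more exotic than this (the job count of 3 is just a guess at what suits a quad-core box):

   rem Re-run the same build with three parallel compile jobs instead of one
   rem (GNU make's -j option controls how many jobs run at once).
   make -j3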

I'm not in a hurry to install a 32-bit Ubuntu - a lot of work :)

Paul Smedley

Hi Andreas,

Quote from: Andreas Kohl on 2011.04.13, 15:26:34
Virtual hardware behaves similarly to real hardware, so to improve processing and I/O you could use SCSI instead of IDE/ATAPI for the emulated host bus adapter. VirtualBox supports emulated LSI Logic and BusLogic SCSI adapters. By using VMDK you could even connect to physical disks or partitions.

I just read through http://www.virtualbox.org/manual/ch03.html#id397624 - there are a bunch of things that may further help performance here - including SMP support.
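
As a rough sketch of the kind of tuning the manual describes (run on the Ubuntu host with the VM powered off; the VM name "eCS", the disk image name and the values are placeholders rather than a tested configuration):

   # Give the guest two virtual CPUs; SMP guests need the I/O APIC enabled.
   VBoxManage modifyvm "eCS" --cpus 2 --ioapic on
   # Add an emulated LSI Logic SCSI controller, as Andreas suggested, and
   # re-attach the existing disk image to it instead of the default IDE controller.
   VBoxManage storagectl "eCS" --name "SCSI" --add scsi --controller LSILogic
   VBoxManage storageattach "eCS" --storagectl "SCSI" --port 0 --device 0 --type hdd --medium eCS.vdi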

Will do some more benchmarks as time permits :)

herwigb

Paul,

where do the various temporary path statements point to during compiling, i.e. to what kind of drive?
Kind regards,
HerwigB.

Paul Smedley

Hiya Herwig,
Quote from: herwigb on 2011.04.14, 07:38:52
where do the various temporary path statements point to during compiling, i.e. to what kind of drive?

On the tests with real or virtual hardware?  In both cases they are on a local drive, i.e. not a RAM drive or anything.  For the native tests, the drive is a 3.5" SATA drive.

Cheers,

Paul

RobertM

Quote from: Paul Smedley on 2011.04.14, 00:38:12
Hi Andreas,

Quote from: Andreas Kohl on 2011.04.13, 15:26:34
Virtual hardware behaves similarly to real hardware, so to improve processing and I/O you could use SCSI instead of IDE/ATAPI for the emulated host bus adapter. VirtualBox supports emulated LSI Logic and BusLogic SCSI adapters. By using VMDK you could even connect to physical disks or partitions.

Sounds interesting, will have to do some trials with this when I get some free time.

One thing I'm wondering is whether it's possible or not...

I'd like to somehow image my existing build drive and make it available to VirtualBox.

I've got all the compiler tools in VirtualBox already, but I'd like to move all my source code over, and xcopy ain't going to cut it :)
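
As for imaging the existing build drive: one possible route (just a sketch, and an assumption on my part rather than something I've tested) is VirtualBox's raw-disk VMDK support, which wraps a physical disk so the guest can use it directly. The device node /dev/sdb and the file name below are placeholders, and raw access needs appropriate host permissions plus some care, since the guest writes straight to the disk:

   # Create a small VMDK descriptor that points at the whole physical drive
   # (assumed here to show up on the Ubuntu host as /dev/sdb).
   VBoxManage internalcommands createrawvmdk -filename builddrive.vmdk -rawdisk /dev/sdb
   # builddrive.vmdk can then be attached to the VM like any other disk image.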

There's a little more to it than that, which will also explain your results.

Virtualization software such as VirtualBox, VirtualPC, etc. will virtualize a device as a certain device. This brings certain limitations as well as certain advantages.
- The limitations are based on the capabilities of the virtual hardware - for instance, if the virtual video card/driver only supports certain color depths.

The advantages can be many. IF the host OS has full support for the features of the hardware, then assuming the guest OS has proper/full support for the virtualized hardware, it can and will utilize them.

For instance, let's say eCS or Warp does not support your video card in anything but VESA mode, or does not support your SATA hard drive in anything but IDE mode; then you will run into big performance penalties running eCS on the bare metal. BUT, if the host OS DOES support them fully, eCS/Warp virtualized will take advantage of them as well (again, assuming eCS/Warp has full support for the virtualized hardware it's presented with).

This scenario creates a situation where eCS/Warp is faster in a virtual machine than on bare metal. I suspect you are running into the same scenario. In theory, installing drivers (assuming they existed) that fully support the hardware on a bare-metal eCS/Warp install would reverse that advantage.

This is something people running Windows 7 x64 on numerous Asus AMD-based machines (and other AMD-based machines) are finding, as they realize that support for things like AHCI is broken or horrendous (forcing them to run their disk subsystem in IDE mode). There is a BIG performance increase running W7 x64 in a virtual machine on an OS that actually properly supports the mobo/chipsets (for disk-intensive things).

On bare hardware, eCS/Warp (when non-generic video card drivers are not available) pays a dramatically higher penalty for anything that's VIO intensive. VIO writes are painfully slow and will hold up everything. The same is true for GUI writes, depending on the app (I've got quite a few that, even though it's well supported, hate the S3 video card in one of our servers - not because of the support, but because the card is simply ancient and painfully slow - then there's Lotus Domino GoWebserver, which "bulk writes" log output to its GUI window at amazingly fast speeds on even the slowest of hardware - but many apps are not designed in that fashion, and do line-by-line writes for such tasks).

The 64-bit part of the equation doesn't seem to play too much of a role - except for running 64-bit intensive apps.


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


Paul Smedley

Hiya Robert,

Quote from: RobertM link=topic=3131.msg19273#msg19273 date=1302762130
There's a little more to it than that, which will also explain your results.

Virtualization software such as VirtualBox, VirtualPC, etc. will virtualize a device as a certain device. This brings certain limitations as well as certain advantages.
- The limitations are based on the capabilities of the virtual hardware - for instance, if the virtual video card/driver only supports certain color depths.

The advantages can be many. IF the host OS has full support for the features of the hardware, then assuming the guest OS has proper/full support for the virtualized hardware, it can and will utilize them.

For instance, let's say eCS or Warp does not support your video card in anything but VESA mode, or does not support your SATA hard drive in anything but IDE mode; then you will run into big performance penalties running eCS on the bare metal. BUT, if the host OS DOES support them fully, eCS/Warp virtualized will take advantage of them as well (again, assuming eCS/Warp has full support for the virtualized hardware it's presented with).

This scenario creates a situation where eCS/Warp is faster in a virtual machine than on bare metal. I suspect you are running into the same scenario. In theory, installing drivers (assuming they existed) that fully support the hardware on a bare-metal eCS/Warp install would reverse that advantage.

This is something people running Windows 7 x64 on numerous Asus AMD-based machines (and other AMD-based machines) are finding, as they realize that support for things like AHCI is broken or horrendous (forcing them to run their disk subsystem in IDE mode). There is a BIG performance increase running W7 x64 in a virtual machine on an OS that actually properly supports the mobo/chipsets (for disk-intensive things).

On bare hardware, eCS/Warp (when non-generic video card drivers are not available) pays a dramatically higher penalty for anything that's VIO intensive. VIO writes are painfully slow and will hold up everything. The same is true for GUI writes, depending on the app (I've got quite a few that, even though it's well supported, hate the S3 video card in one of our servers - not because of the support, but because the card is simply ancient and painfully slow - then there's Lotus Domino GoWebserver, which "bulk writes" log output to its GUI window at amazingly fast speeds on even the slowest of hardware - but many apps are not designed in that fashion, and do line-by-line writes for such tasks).

The 64-bit part of the equation doesn't seem to play too much of a role - except for running 64-bit intensive apps.

I'm with you on supported SATA/video controllers impacting performance... to a point.

My current system, which I included in the benchmark results, does have fully supported SATA and video controllers.

To minimise the impact of the unsupported video on the new system running native eCS, I minimised the compiler window, to try and eliminate the effect of the slow video speed.  I _only_ did this for the native eCS tests on the i7-2600, not for the other two configurations.

Cheers,

Paul

RobertM

Very weird...

Can you email me your build environment and some test compiles?

-R


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


herwigb

Hi Paul,
Quote from: Paul Smedley
Quote from: herwigb
where do the various temporary path statements point to during compiling, i.e. to what kind of drive?

On the tests with real or virtual hardware?  In both cases they are on a local drive, i.e. not a RAM drive or anything.  For the native tests, the drive is a 3.5" SATA drive.

Given that compile speed improves a lot when the temporary paths are put on a RAMFS drive (I do that for Samba), my guess would be that the compiling eCS VBox guest gets its speed from an effective caching mechanism in the host's filesystem.

As I don't have a complete picture, my guess might be wrong, and I am fully aware that my RAMFS solution can only be used for smaller projects, for obvious reasons.
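
A minimal sketch of that setup, assuming the RAMFS volume shows up as drive R: and that R:\tmp exists (drive letter and path are only placeholders):

   rem Point the build's temporary files at the RAMFS drive before running make.
   set TMPDIR=R:\tmp
   set TMP=R:\tmp
   set TEMP=R:\tmp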
Kind regards,
HerwigB.

abwillis

Quote from: herwigb on 2011.04.14, 12:01:50
I am fully aware that my RAMFS solution can only be used for smaller projects, for obvious reasons.

Hmm, this causes me to ponder... on 64-bit machines with more than 4 GB of memory (e.g. 8 GB), would it be possible to write a device driver that could turn the memory above 4 GB into a RAMFS?
Andy

Paul Smedley

Well FWIW - I'm now running Ubuntu on my new hardware.

eCS on the new hardware is currently unusable pending:
- either a fixed danis506 to recognise the SATA controller or an AHCI driver for eCS
- a way to set system MTRRs
- a working Panorama for 1920x1080

The above are the three killers, but a stable Firefox would also help.  For whatever reason, it's very crash-prone here.

The switch to the new hardware was 'expedited' when we returned from a weekend away: my wife was editing photos on the 'old' hardware when, after a few beeps from danis506 (drive errors), the system hung and couldn't be rebooted, as the drive refused to be recognised.

Fortunately I was able to mount the drive in a USB enclosure and get all the important data off - including (most importantly) the /dev directory, which contains all my source code.  Including object files and built executables, this was over 2 million files and 26 GB.  A safe copy is now on a second drive and is in the process of being copied to my NAS, where it can be seen via Samba from virtualised eCS.

Old source is always useful to create diffs when porting new versions :)

Cheers,

Paul

Paul Smedley

updated benchmarks with a test version of the eCS AHCI driver

To help with formatting, the system descriptions are:
(A)  Intel Core2Quad Q9400 running eCS 2.0 GA Natively
(B)  Intel Core i7-2600 running eCS 2.0GA natively in SATA generic mode, BIOS set to SATA legacy mode
(C)  Intel Core i7-2600 running eCS 2.0GA natively with test eCS AHCI driver, BIOS set to AHCI mode
(D)  Intel Core i7-2600 running Ubuntu 10.10, with eCS 2.0GA running under VirtualBox 4.0.4

                         (A)       (B)       (C)       (D)
Bind 9.8.0              3:17      4:04      2:54      1:47
Quassel 0.7.2          15:51     15:00     12:36      8:45
Ghostscript 9.02        5:42      5:18      4:04      3:13
MySQL 5.1.56           28:55     22:22     19:31     12:16

All times are in minutes:seconds (mm:ss).
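
To put the columns in perspective, take the MySQL row: native with legacy SATA (B) is 22:22 = 1342 seconds, while the VirtualBox run (D) is 12:16 = 736 seconds, so the virtualised build finishes in roughly 736/1342 ≈ 55% of the native wall-clock time.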

djcaetano

Quote from: Paul Smedley on 2011.05.07, 03:03:41
updated benchmarks with a test version of the eCS AHCI driver

  Hi Paul,

  My board is already up and running Windows 7 (not Linux yet, because I do not want it to trash my partitions like some said the latest Ubuntu releases will do), and I think my PCH (chipset) temperature is somewhat high (about 57°C compared to the CPU's 37°C). Can you please check your PCH temperature? Is it usually higher than the CPU temperature, too?

   Regards!

PS: I believe mine will be somewhat higher than yours because I am using H67-i7 integrated video for now... but I am worried about the absolute temperature value.

aschn

Quote from: Paul Smedley on 2011.05.07, 03:03:41
updated benchmarks with a test version of the eCS AHCI driver

(C)  Intel Core i7-2600 running eCS 2.0GA natively with test eCS AHCI driver, BIOS set to AHCI mode

Have you also tried that with

   BASEDEV=OS2AHCI.ADD /n

to activate Native Command Queuing?

Andreas

Paul Smedley

Hi Andreas,

Quote from: aschn on 2011.05.07, 19:24:13
Quote from: Paul Smedley on 2011.05.07, 03:03:41
updated benchmarks with a test version of the eCS AHCI driver

(C)  Intel Core i7-2600 running eCS 2.0GA natively with test eCS AHCI driver, BIOS set to AHCI mode

Have you also tried that with

   BASEDEV=OS2AHCI.ADD /n

to activate Native Command Queuing?

No - this is on the to-do list, but things have been busy here recently...