OS2World OLD-STATIC-BACKUP Forum

OS/2 - Technical => Hardware => Topic started by: kim on 2008.02.04, 14:50:07

Title: Genmac Wrapper driver performance
Post by: kim on 2008.02.04, 14:50:07
Question if anyone has an idea regarding the performance of the genmac wrapper driver compared to native OS/2 drivers. Asking since for Intel and Broadcom based NICs there exist both native OS/2 drivers and the genmac wrapper. But what would be the preferred solution?
Title: Re: Genmac Wrapper driver performance
Post by: Raiko on 2008.02.04, 17:04:30
Lots of newer Intel NICs are not supported by the native drivers. On one system I have 2 Intel NICs: for the built-in one I use Genmac, and for the other I use a native driver. Both are Pro/1000, and so far I have not noticed any difference in performance between the two.
Title: Re: Genmac Wrapper driver performance
Post by: kim on 2008.02.04, 17:20:51
Aware of the issue with newer NICs, but since the system I'm trying eCS RC4 on has a Broadcom NIC with native OS/2 drivers, I was wondering if anyone has made tests of moving data back and forth to see how big the performance difference might be, and also how it might affect the system itself in terms of CPU usage.
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.04, 19:03:59
Not to take away from your thread, but I have noticed a HUGE disparity between OS/2 and XP (same hardware) in terms of network performance. I use the Speakeasy speed test (the only one eCS with old Flash can use) and I find that it's half as fast as XP (both in Firefox, with as close a test time as possible between reboot/OS change). Are the default settings for the NIC in MPTS conservative, or is the poor performance due to the driver itself (the VIA Rhine II driver)?

On the same machine, Vista more than doubles XP in terms of network performance at Speakeasy... soo, yeah. What's up with that?

http://www.speakeasy.net/speedtest/
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.04, 21:18:25
Quote from: Saijin_Naib on 2008.02.04, 19:03:59
Not to take away from your thread, but I have noticed a HUGE disparity between OS/2 and XP (same hardware) in terms of network performance. I use the Speakeasy speed test (the only one eCS with old Flash can use) and I find that it's half as fast as XP (both in Firefox, with as close a test time as possible between reboot/OS change). Are the default settings for the NIC in MPTS conservative, or is the poor performance due to the driver itself (the VIA Rhine II driver)?

On the same machine, Vista more than doubles XP in terms of network performance at Speakeasy... soo, yeah. What's up with that?

http://www.speakeasy.net/speedtest/

Speakeasy is a poor test of speed, for reasons I'll get into below.

Now, oddly, I have noticed the following. My WSeB box, using Intel Pro/1000MT cards, gets far faster download speeds than the XP box in our office, but the XP box (also gigabit Ethernet, connected to the same router and switch) reports absurd upload speeds (i.e. far faster than the connection we have can support).

So, I'm not sure exactly what Speakeasy is actually measuring, but according to it, the XP box is pushing twice as much bandwidth upstream as our connection is capable of - while the WSeB machine is pulling slightly less than our maximum downstream bandwidth (and the XP box is pulling 2/3 of that and can't come close to our connection's actual downstream speed).

So, their results on different platforms make little sense to me.

To test the actual performance of the NIC and driver, one would need an isolated local-area setup with a test machine of known bandwidth capabilities, and use that to test the other machines from. Besides any weird variances created by Speakeasy (Java, browser, etc.), the Internet in and of itself will skew things further (besides, I doubt you or anyone here has a gigabit pipe to the Internet - making it hard to truly gauge what the card or driver can support).

And of course the card itself, even with the best of drivers - as well as whatever else is on the same bus eating bandwidth - can often be a bottleneck. That's why some server-class network cards are horrendously expensive (as much as a whole PC in some cases).

-Rob
Title: Re: Genmac Wrapper driver performance
Post by: djcaetano on 2008.02.06, 18:32:39
Quote from: RobertM on 2008.02.04, 21:18:25
To test the actual performance of the NIC and driver, one would need an isolated local-area setup with a test machine of known bandwidth capabilities, and use that to test the other machines from. Besides any weird variances created by Speakeasy (Java, browser, etc.), the Internet in and of itself will skew things further (besides, I doubt you or anyone here has a gigabit pipe to the Internet - making it hard to truly gauge what the card or driver can support).
And of course the card itself, even with the best of drivers - as well as whatever else is on the same bus eating bandwidth - can often be a bottleneck. That's why some server-class network cards are horrendously expensive (as much as a whole PC in some cases).

  This program is really good, though AFAIR it only works for NetBIOS/NetBIOS over TCP/IP:

http://hobbes.nmsu.edu/cgi-bin/h-search?sh=1&button=Search&key=netio&stype=all&sort=type&dir=%2F
 
  And it works on Windows too.
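
  A minimal sketch of how to drive it, from memory (the flags may be off - check the usage text netio prints when run with no arguments):

netio -s -t                <- server side, TCP mode, on one machine
netio -t <server-address>  <- client side, run from the other machine
netio -t 127.0.0.1         <- loopback, to measure just the TCP/IP stack

  It sweeps through several packet sizes on its own (1KB, 2KB, 4KB...) and prints the speed for each.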
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.07, 00:59:20
My results as a client, my roommate as host. The images explain the OS in each instance. In all cases XP was marginally better than OS/2, but like OS to like OS gave the best performance between the two machines. We will hopefully test my roommate as Vista host/client later tonight, after he installs it. It's pretty clear his NIC is far better than mine, because his speeds seem much higher.
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.07, 01:13:34
Here is loopback performance for me. These numbers are NUTS. What is going on here?
Title: Re: Genmac Wrapper driver performance
Post by: djcaetano on 2008.02.07, 18:24:28
Quote from: Saijin_Naib on 2008.02.07, 00:59:20
My results as a client, my roommate as host. The images explain the OS in each
...
his NIC is far better than mine, because his speeds seem much higher.

  The test shows there is only a little difference between OS/2 and Windows. If you are using GenMac on eCS, then I believe the performance of GenMac is awesome. If you are not using GenMac, then you already have a baseline for a GenMac comparison.
  What made me think was that Windows reception seems indifferent to data size... even with very small packets (like 1KB), which is somewhat weird to me. Anyway, a measured result is what it is. :)
Title: Re: Genmac Wrapper driver performance
Post by: djcaetano on 2008.02.07, 18:33:47
Quote from: Saijin_Naib on 2008.02.07, 01:13:34
Here is loopback performance for me. These numbers are NUTS. What is going on here?

  There is nothing nuts about them. They show the TCP/IP stack performance, without interference from the network driver and/or network lags. It seems OS/2's TCP/IP stack is quite a bit faster than the Windows stack. :)
  Also, the speed increase in the OS/2 TCP/IP stack behaves as expected (it's bigger for bigger packets). The Windows TCP/IP stack has a somewhat inconsistent speed increase (sending data in 4KB packets being slower than in 2KB packets, and receiving data in 8KB packets being slower than in 4KB packets, are weird results).

   If you are asking yourself "what the hell is the meaning of those numbers?", they represent the maximum transfer speed of your TCP/IP stack with the hardware you are using.

   As a result, if your transfer is way below the loopback speed, it means the bottleneck may be the drivers (Windows and/or OS/2), your NICs, or the connection between your machines.
   In that case, simply swapping the network driver may not give a good comparison between GenMac and a "native" driver, because it's not that easy to know where the bottleneck is. Since your connection seems to be a 100Mbps one (based on your tests, which showed a result of 89Mbps), I believe the bottleneck is the network cable or NIC.

   (I am no expert in this area, I have just done some experiments in the past... if I am talking nonsense, please, someone correct me! ;) )
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.07, 23:24:34
My roommate got numbers in the 800Mb range using PCLinuxOS on loopback. Is that feasible? I mean, Linux is quick on the Internet, but it doesn't feel 200x as fast as WinXP, and certainly not 4x as fast as OS/2.
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 00:37:10
Quote from: Saijin_Naib on 2008.02.07, 23:24:34
My roommate got numbers in the 800Mb range using PCLinuxOS on loopback. Is that feasible? I mean, Linux is quick on the Internet, but it doesn't feel 200x as fast as WinXP, and certainly not 4x as fast as OS/2.

Hi,

Yes, it is quite feasible. Here are SOME reasons why (probably not nearly all).


For one thing, I don't think the stacks are equally configurable - the Windows one in particular is not. The OS/2 stack is highly configurable - but very hard to figure out how. Many aspects of configuration are nowhere to be found in the manuals that accompany OS/2 - and many of the aspects that ARE documented as configurable come with very little explanation of what they do. Oddly, some aspects (including default, max and min values) are more easily found using the commandline tools than the help docs - which only tell you "this is the value you can change, this is the default" and nothing more.

The other problem with such tests (like NetIO) in real-world usage scenarios is that, though they are a good and very valid starting point, they do not show scaling performance (probably a poor term, but what I mean is: "how does the stack handle multiple simultaneous connections?"). The Linux and OS/2 stacks are better in this respect (handling more connection traffic at the same time) than the Windows stack. The problem is, I don't know if this is due to capability limitations in the MS stack, or due to the MS-imposed restrictions in it - or both.


In addition, the SMP stack in OS/2 is (supposedly) an entirely different ballgame when it comes to handling multiple simultaneous connections, due to its threading model and use of multiple CPUs. Never tested that, dunno how it actually impacts performance in SMP and non-SMP environments... but it definitely is a different stack.


As for the OS/2 stack, I've found (FAR) more "undocumented" settings to tweak than documented ones - some of which I've found references for online, others nothing. The same goes for various network cards. I cannot find any documented settings for the Intel Pro/1000MT cards, nor any parameters settable through OS/2 - but I know they exist... so in the meantime I am stuck with the defaults as set by the driver, while other cards (as mundane in comparison as the Realtek 813# series) offer at least a few settings.



As a side note, one thing OS/2 and eCS are sorely lacking is a unified configuration tool for such stack parameters (with related documentation and min/max/default values)... for instance, someone tell me what the "mem" and "gdt" parameters are for, how they affect network performance, and where they go? And what are the tradeoffs of using them (versus going with the defaults)? Nothing in the help guides covers the usage I am talking about, btw... which has nothing to do with NetBIOS or anything other than TCP/IP.


Another example of such (non-)documentation issues can easily be found by running:
inetcfg -g all

at the command line... it will create an "ini file" (and tell you where it is - you then need to open it in a text editor) with the various settable values and their current, min, max and default settings... while the docs go into no detail and never explain what they are for.
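
For anyone who wants to poke at the same values, the general pattern is (the set syntax here is from memory - treat it as an assumption and check the usage text inetcfg prints on its own):

inetcfg -g all              <- dump all values (current/min/max/default)
inetcfg -s <param> <value>  <- set a single value on the live stack

Some of what -s changes seems to take effect immediately, without a reboot - which is part of what I want to pin down.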


-Robert
Title: Re: Genmac Wrapper driver performance
Post by: David McKenna on 2008.02.08, 00:47:41
Robert,

  Isn't the OS/2 TCP/IP stack taken from AIX? Maybe all these 'undocumented' settings are documented in AIX literature? Just thinking out loud. A quick web search brought up this: www.redbooks.ibm.com/redpapers/pdfs/redp0103.pdf 'How To: Easily configure TCP/IP on your AIX system'. Might be worth a read....
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 00:56:05
Quote from: David McKenna on 2008.02.08, 00:47:41
Robert,

  Isn't the OS/2 TCP/IP stack taken from AIX? Maybe all these 'undocumented' settings are documented in AIX literature? Just thinking out loud. A quick web search brought up this: www.redbooks.ibm.com/redpapers/pdfs/redp0103.pdf 'How To: Easily configure TCP/IP on your AIX system'. Might be worth a read....

Hi David,

Good question... I don't know. I know it was referenced someplace that it's a BSD-compliant stack, and I know that the firewall was supposedly ported from AIX...

I'll check out the docs and see if the info in them matches the "extra" configuration parameters I have found - and if so, maybe write something to configure and/or explain them as they apply to OS/2.

-Robert
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 00:57:40
As an interesting side note, my RealTek 8139 10/100 card drastically outperforms my Intel Pro/1000 card (which is on a dedicated PCI bus) according to NetIO...

Actually, the RealTek seems to perform as well as Saijin's gigabit card if NetIO is to be believed...


-Robert
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.08, 01:10:20
Neither my roomie nor I have gigabit cards in our computers. He has a Realtek 8139 NIC in his laptop, and I have the VIA Rhine II VT8235.
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 01:17:42
Quote from: Saijin_Naib on 2008.02.08, 01:10:20
Neither my roomie nor I have gigabit cards in our computers. He has a Realtek 8139 NIC in his laptop, and I have the VIA Rhine II VT8235.

Oops... sorry. I was getting the Velocity series mixed up with yours (and thought that ASUS board had gigabit)... still leaves me wondering why the Intel Giga is performing far worse than the RealTek - both with native (and latest) drivers...
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.08, 01:21:12
I dunno, I always assumed the Realtek NICs were garbage :\
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 01:31:14
Quote from: Saijin_Naib on 2008.02.08, 01:21:12
I dunno, I always assumed the Realtek NICs were garbage :\

At first, so did I... and though I don't know about later incarnations, I have had some great success with the 8139 series (and with the 3Com Parallel Tasking series)... better than what I've seen with other NICs. Haven't tried later Realteks in my servers (or much of anything else, for that matter), so maybe many are junk...
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 20:33:04
Quote from: David McKenna on 2008.02.08, 00:47:41
Robert,

  Isn't the OS/2 TCP/IP stack taken from AIX? Maybe all these 'undocumented' settings are documented in AIX literature? Just thinking out loud. A quick web search brought up this: www.redbooks.ibm.com/redpapers/pdfs/redp0103.pdf 'How To: Easily configure TCP/IP on your AIX system'. Might be worth a read....

Hi David,

Not particularly... :( The docs seem to refer to setting various parameters using AIX-specific scripts, which I can't seem to find in OS/2... Also, according to Wikipedia, "The TCP/IP stack is based on the open source BSD stack.[citation needed]" - which may be where the AIX stack was derived from as well, but that still doesn't leave me much in the line of documentation...

For instance, the gdt and mem values are command-line options on the sockets driver line in config.sys... I'm not even sure if they apply to both the SMP and UNI socket drivers (but I know they at least apply to the SMP one... and *think* they should apply to both). I am not sure where I found the information on them, as a search of the help files on my system turns up nothing.

I'll be digging up all my INF files to see what I can turn up - then searching the web for the rest - and will try to compile a list of all the settings and how they need to be (or can be) set/changed sometime in the near future... I do seem to recall there were more than just those two parameters available. The two parameters I noted are for memory buffers... though I have also read that the operating system (or stack) dynamically configures them anyway.

Playing with the figures has helped prevent me from running out of buffer space during high load stress tests - but usually results in a crash eventually.

I'm also trying to find the correlation between those settings and the following (which all seem to be interlinked due to the way OS/2 uses physical memory): maxthreads, HPFS386/JFS cache sizes, and the various TCP/IP settings that impact the memory pool.

I have found that increasing the HPFS386 cache too far results in an inability to create/use more than a certain number of threads (guessing that is due to a lack of memory in the shared memory pool), or results in the HPFS386 driver telling me it cannot allocate the memory requested and will default to 12% (but it instead allocates 3% of the 4GB... so I don't think that value is based on total memory)... with luck (and some reading on the way OS/2 uses memory) I'll figure out some sort of "equation" to determine actual suitable values...

The one neat thing I have found is that OS/2 does indeed seem to see the whole 4GB in my server... while Windows (32 bit) will only see 3GB (and Vista 32 bit only sees 3GB, but after a recent "fix" will report 4GB even though it can only use 3GB of it).

I also need to figure out how the "VIRTUALADDRESSLIMIT" statement impacts that (I know the concept, but not how it translates into real-world results when there are various components, like the ones I have mentioned - and others - that impact actual memory usage).
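
For reference, the statement itself is just a one-line CONFIG.SYS entry taking a value in MB (the 2048 below is only an example, not a recommendation):

VIRTUALADDRESSLIMIT=2048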


I've got my work cut out for me... much of this was prompted by my own curiosity (and stress testing); some was prompted by a potential (web hosting) client who asked what settings (in the stack and NICs) were configurable... the list (of settings I have found so far) ended up being very, very long... but with little or no explanation of what each setting does - or, for many, even what the default, minimum or maximum values are.

Also, sadly, many of the settings have to be configured in different places (or I have simply yet to figure out how to configure them from the same place)... for instance, the part that handles socket buffers is a switch on the config.sys file's socket_.sys driver line, while over a dozen others get set with inetcfg, and yet others get set in (various) INI files... I am presuming many can be set in the INI files... and it seems some can be set or changed using inetcfg without rebooting (while others definitely require a reboot using the methods I have found for changing them - which means I may not yet have found a way to set them without a reboot, or there isn't one)... so I guess part of my project is to figure out which require a reboot to take effect and which don't.

It seems the stack is ridiculously configurable... but it also seems there is no single (or even multiple) authoritative reference(s) on how to do so.

The other odd thing, as Saijin helped point out, is that different NIC/driver combinations show more - or less - or no - configuration options for the NIC card, regardless of what is - or isn't - actually configurable. While his NIC has a lot of settings, my Intel Pro/1000 has "Media Speed & Duplex" (which is blank and auto-configured correctly), "Slot/Device Identifier" (which, since I have two, gets manually configured - though it seems the driver will do that anyway, even though the docs say it won't), and "Locally Administered Address" (which is blank). The TCP/IP card properties in MPTS show just "*Network Interface Type" (which works blank), while the other (same model) card running NetBIOS has all or most of the expected options (for NetBIOS... same 3 selections for the card itself). Meanwhile my RT8139 has an entirely different set of parameters available for the card... dunno what its TCP/IP parameters are, though, because I have the InnoTek Virtual Switch Driver installed...

Ugh,
Robert

Title: Re: Genmac Wrapper driver performance
Post by: David McKenna on 2008.02.08, 20:52:14
 Robert,

Yep... it didn't seem worth it after I read it too. I'm going to keep looking around though...

Looking forward to what you can find out too...
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.08, 21:40:50
Quote from: David McKenna on 2008.02.08, 20:52:14
Robert,

Yep... it didn't seem worth it after I read it too. I'm going to keep looking around though...

Looking forward to what you can find out too...

Hi David,

What it DOES (sadly) tell me, though - since I expect the stacks to have similar designs to some extent - is that I have yet to find even half of the settings, parameters and capabilities of the OS/2 stack... a year ago I found the DHCP guide (all 500 or so pages of it) and it goes into extreme detail about numerous capabilities in the TCP/IP subsystem that I once thought required add-on programs - capabilities which go far beyond DHCP.

I guess whatever all of us here can come up with will still be far more than what is currently out there - regardless of how incomplete it may end up being.

Thanks everyone for anything you can turn up!
-Robert
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.09, 00:02:32
The following is interesting....

http://www.redbooks.ibm.com/redbooks/pdfs/sg245393.pdf

and

http://www.redbooks.ibm.com/redbooks/pdfs/sg245280.pdf

-Rob
Title: Re: Genmac Wrapper driver performance
Post by: Saijin_Naib on 2008.02.09, 03:07:45
Sweet, some "light" reading. I wonder if I can get anything from this at all :P Now, off to break MPTS muahaha :)
Title: Re: Genmac Wrapper driver performance
Post by: David McKenna on 2008.02.10, 04:23:20
 Rob,

  Thanks for those links! I just finished reading 'Inside OS/2 Warp Server for e-business' - whew! It was interesting as a review, but no real deep secrets revealed. The only thing of possible interest was this one tidbit I didn't know about:

    7.15 Performance improvements

    Many of the enhancements introduced into OS/2 Warp Server for e-business have been for performance and reliability. TCP/IP and the related applications have also been enhanced to more quickly and reliably serve in this e-business transformation. In certain cases, applications taking advantage of the new enhancements will show up to a 40 percent improvement in performance over previous versions of TCP/IP.

    The sockets drivers now have two parameters to improve performance. For example:

        DEVICE=x:\mptn\protocol\sockets.sys /mem:# /gdt:#

    Where

        /mem:# is the number of 4KB clusters allocated at initialization time. The default is 75 and the range is 30 to 32766.

        /gdt:# is the maximum number of 64KB blocks that the stack can allocate. The default is 80.

  I would think the defaults are conservative, this being IBM. Might be interesting to tweak these a bit...
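
  For instance, something like this (numbers pulled out of the air just to show the shape - roughly doubling both defaults, not tested values):

DEVICE=C:\MPTN\PROTOCOL\SOCKETS.SYS /mem:150 /gdt:160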

Dave McKenna
Title: Re: Genmac Wrapper driver performance
Post by: mobybrick on 2008.02.10, 13:21:56
I changed these some time ago on some web application servers that I have been testing. Changing these settings seems to help performance a bit. However, they help greatly if netstat -m shows 'failed to find space' or buffer exhaustion.

Moby.
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.11, 03:02:59
Quote from: mobybrick on 2008.02.10, 13:21:56
I changed these some time ago on some web application servers that I have been testing. Changing these settings seems to help performance a bit. However, they help greatly if netstat -m shows 'failed to find space' or buffer exhaustion.

Moby.

I've found they're quite useful under heavy load, when the stack does not dynamically create space fast enough (at which times I've found I need to kill things using the already-open connections before I can complete or initiate any new ones).

My problem with them has been balancing disk cache (I think specifically the HPFS386 cache - though possibly the JFS cache as well) against maxthreads; otherwise something runs out of memory and crashes the computer...

Of course, the problem is I am probably using far more disk cache space (over 500MB total) than is needed - especially since it would be wiser to pre-cache frequently used "web objects" in the web server instead of relying on the disk caches, since the caches are most likely flushed of the needed object before the next call unless I had a horrendous amount of traffic. Something I plan on resolving and testing further...

QUESTION:
Anyone have any idea how (or IF) the VIRTUALADDRESSLIMIT statement impacts available "disk cache" memory space and "thread overhead" memory space? And, either way, whether there is some equation that can be used to determine: "if using ###MB of disk cache and allocating #### maxthreads, how much memory is available to things like the TCP/IP stack (buffers, etc.) and to other objects that use that memory arena?"

I hope to document everything (everyone's numerous insights, tips and links to resources here, as well as everything I have found), to finally come up with both a definitive guide to the TCP/IP subsystem and its settings & capabilities, and possibly a unified configuration tool to handle the various settings all in one place (including the ones that live in config.sys, the ones that can be configured using inetcfg, and the ones that can be configured in the various INI files).


QUESTION:
Anyone with knowledge of how all of this impacts other network services (such as NetBIOS and NetBEUI), please chime in as well, or PM me what you can...

NOTE:
Once complete (both the documentation and the config app), I'd also like to attribute all of the tips and help to everyone, so please feel free to PM or email me with your real name, or however you wish to be attributed in the docs and app - assuming you wish to be.

I've got a simple start already (many notes from here, various IBM Redbooks, the eCS ConfigTool, and info gleaned from notes in various IBM software such as the Domino Go Webserver docs), and will be building something GUI-oriented soon using VX-REXX or DrDialog or GpfRexx... I figure it doesn't need to be any more complex than that - and while the ConfigTool in eCS helps a lot, it doesn't provide a unified method to do everything (though it will do the sockets.sys settings).
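
As a trivial starting point, even a REXX fragment like this could feed whatever front-end I end up building (untested sketch; the path below is an assumption - inetcfg tells you the real location when it runs):

/* echo back the file that "inetcfg -g all" generates */
file = 'C:\MPTN\ETC\INETCFG.INI'   /* assumption - use the path inetcfg reports */
do while lines(file) > 0
   say linein(file)   /* real version: parse parameter/current/min/max/default */
end
call lineout file     /* close the file when done */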

Thanks in advance,
Robert
Title: Re: Genmac Wrapper driver performance
Post by: mobybrick on 2008.02.11, 12:28:56
Hi,

I doubt VIRTUALADDRESSLIMIT makes any difference to HPFS386 - it's so old, the problem will be that it was never designed to run with so much memory. My goodness, there were fixes for it back in the good old OS/2 2.11 + LS4 days just so it could run on PCs with 64MB of RAM :)

If you need to run with large HPFS386 cache sizes, then you might want to look at the maxheap setting in the HPFS386.INI file. Sometimes I found that some servers were not stable unless this was set high (e.g. 32786); others were not at all stable unless it was set low (e.g. maxheap=8192). I would try low first.
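
i.e. something along these lines in HPFS386.INI (the section name is from memory - put it wherever your existing cache settings already live):

[filesystem]
maxheap = 8192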

I've also used the IP stack settings to increase the ability to service incoming connections (over 900 connections/min). The settings for the Intel 1000MT card are described in the Intel driver suite 10, in the LANMAN directory AFAIR. However, I've found that using some of the Intel settings 'breaks' the network card & corrupts its EEPROM, so that it never works under OS/2 again (i.e. no traffic out!)

Regards,
Moby.
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.12, 20:39:10
Hi Moby,

Thanks for the reply... I've found that the maxheap setting isn't even used by default - which may be part of the problem when I am using larger cache sizes.

As for the VIRTUALADDRESSLIMIT statement, though it may not allocate more memory for things like the HPFS386 cache, if it is set to a higher number, would that decrease the memory available for the cache (and similar things that use that memory area)?

I'll take a look at the settings for the Pro/1000 card and see what I can find... though I'm now kinda leery of playing with any of them (though I do have enough brand-new spare cards that I guess I can afford to toast one or two).

The thing I still find odd is that, according to NetIO, when testing TCP/IP only via loopback, my Realtek 8139 card outperforms the Intel Pro/1000 card by a factor of 10...

...yet the Realtek also has NetBIOS running (the Pro/1000 is dedicated to TCP/IP), the Realtek is only a 100Mb card (while the Pro/1000 is gigabit and properly set up, as it is in full duplex mode), and the Realtek is sharing a bus with various other PCI cards (while the Pro/1000 has its own dedicated PCI bus)...

Ugh...
Title: Re: Genmac Wrapper driver performance
Post by: mobybrick on 2008.02.12, 21:10:45
Hi,

I'd be surprised if VIRTUALADDRESSLIMIT upset HPFS386 - IFS initialisation is done after BASEDEVs but before most other drivers. The only time I suspect that this *would* upset HPFS386 would be if you were using the IBM LAN Server software *and* were pushing its load. HPFS386 would then be asked to dynamically create 'big buffers' (as opposed to request buffers) - although, again, more of these can be pre-created at IFS initialisation. But you really have to be pushing the LAN Server code for more big buffers to be needed. Web stuff is, of course, not reliant on the LAN Server code.

Yes, the v4 Intel device driver is poop  ::) It supports a few newer chipsets tho.

To get any kind of reasonable performance with the Intel cards, you will need to use the v3.6x driver. I can send it to you, if you'd like.

The settings for E1000.OS2 are as follows (Windows NT is mentioned in the docs sometimes, but these are the settings for OS/2). However, I've never found that changing anything interesting (such as TXLOOPCOUNT) does anything great for performance:

Advanced Settings for PROTOCOL.INI
DRIVERNAME
This is the only parameter required for all configurations. This parameter is essentially an "instance ID". Each instance of the driver must create a unique instance name, both to satisfy DOS and OS/2 driver requirements, and to make it possible to find the parameters for the instance in the PROTOCOL.INI file.

When the driver initializes, it tries to find previously loaded instances of itself. If none is found, the driver calls itself "E1000$", and looks for that name in the PROTOCOL.INI file to find its parameters. If one or more instances are found, the driver calls itself "E100x$", where 'x' is one more than the value used by the most recently loaded instance. So, in this scenario, the second driver calls itself "E1002$", the third calls itself "E1003$", and so on; there is no driver called "E1001$". Up to 10 drivers can be loaded in a single system in this way.

Syntax: DRIVERNAME = [E1000$ | E1002$ | etc.] 
Example: DRIVERNAME = E1000$
Default: None, this is a required parameter.
Normal Behavior:  The driver finds its section in PROTOCOL.INI by matching its instance ID to the value for this parameter. 
Possible Errors: The device driver uses a DOS and OS/2 function to display the name of the driver it is expecting. This function cannot display a '$' character. For this reason, the user may see a message referring to this value without the '$'; the user must remember to enter the '$' character as part of the parameter's value. 

SPEEDDUPLEX
The parameter disables Auto-Speed-Detect and causes the adapter to function at the speed indicated.

Syntax:  SPEEDDUPLEX = [0 | 1 | 2 | 3] 
Example: SPEEDDUPLEX = 2 
Default: Auto-Speed-Detect 
Normal Behavior:  0 = 10Mbps half duplex
1 = 10Mbps full duplex
2 = 100Mbps half duplex
3 = 100Mbps full duplex 
Possible Errors: If the SPEEDDUPLEX parameter is set to an invalid value:
The parameter is ignored and the default (Auto-Speed-Detect) is used
A message indicates a "Parameter value out of range" error


SLOT
This parameter makes it possible for the driver to uniquely identify which of the adapters is to be controlled by the driver. The parameter can be entered in hexadecimal or decimal.

Syntax: SLOT = [0x0..0x1FFF]
SLOT = [0..8191]

Examples: SLOT = 0x1C
SLOT = 28

Default: The driver will Auto-Configure if possible.
Normal Behavior: The driver uses the value of the parameter to decide which adapter to control.
Possible Errors:  If only one adapter is installed, and the value does not correctly indicate the adapter slot:
A message indicates that the value does not match the actual configuration
The driver finds the adapter and uses it
If more than one adapter is installed, and the value does not correctly indicate an adapter slot:

A message indicates possible slots to use
The driver loads on the next available slot


NODE
This parameter sets the Individual Address of the adapter, overriding the value read from the EEPROM.

Syntax: NODE = "12 hexadecimal digits"
The value must be exactly 12 hexadecimal digits, enclosed in double quotes.

The value can not be all zeros.

The value can not have the Multicast bit set (LSB of 2nd digit = 1).

Example: NODE = "00AA00123456"
Default: Value from EEPROM installed on adapter
Normal Behavior: The Current Station Address in the NDIS MAC Service-Specific Characteristics (MSSC) table is assigned the value of this parameter. The adapter hardware is programmed to receive frames with the destination address equal to the Current Station Address in the MSSC table. The Permanent Station Address in the MSSC table will be set to reflect the node address read from the adapter's EEPROM.
Possible Errors: If any of the rules described above are violated, the driver treats this as a fatal error and an error message occurs, indicating the correct rules for forming a proper address.

CACHEFLUSH
Windows NT bypasses the normal driver "hooks" into the reboot sequence during a "push install" so the driver is unaware of a system boot occurring. Hence the driver may copy incoming frames to host memory during system initialization. This will cause unpredictable behavior (most likely the system will halt). Setting this parameter to any non-zero value enables a disk-cache flush monitor; which is an alternate method of watching for a reboot call. This parameter should not be used under normal circumstances.

NOTE: This situation is not being corrected.
 
Syntax: CACHEFLUSH = [0 | 1]
Example: CACHEFLUSH = 1
Default: 0
Normal Behavior: Use this parameter during a remote installation or push-install of Windows NT. 
Possible Errors: Any nonzero value sets this parameter to 1. The driver does not give any outward indication of the value of this parameter.

ADVERTISE
This parameter can be used to restrict the speeds and duplexes advertised to a link partner during auto-negotiation. If AutoNeg = 1, this value is used to determine what speed and duplex combinations are advertised to the link partner. This field is treated as a bit mask.

Syntax: ADVERTISE = [ 1 | 2 | 4 | 8 | 0x20]:
0x01 = 10H (10Mbps half duplex)
0x02 = 10F (10Mbps full duplex)
0x04 = 100H (100Mbps half duplex)
0x08 = 100F (100Mbps full duplex)
0x20 = 1000F (1000Mbps full duplex)

Example: ADVERTISE = 1
Default: 0x2F (all rates are supported)
Normal Behavior: By default all speed/duplex combinations are advertised.

Possible Errors: An error message is displayed if the value given is out of range.
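
(Side note from me, not from the Intel docs: the value is just the sum/OR of the bits above, so e.g. advertising only 100F and 1000F would be ADVERTISE = 0x28, i.e. 0x08 + 0x20 - and 0x2F, the default, is all five bits set.)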

FLOWCONTROL
This parameter, which refers to IEEE 802.3x flow control, helps prevent packets from being dropped and can improve overall network performance. Specifically, the parameter determines what flow control capabilities the adapter advertises to its link partner when auto negotiation occurs. This setting does NOT force flow control to be used. It only affects the advertised capabilities.

NOTES:
Due to errata in the 82542 silicon, the chip is not able to receive PAUSE frames if the ReportTxEarly parameter is set to 1. Thus, if ReportTxEarly =1 and the driver is running on an adapter using this silicon (such as the PWLA8490), the driver will modify the FlowControl parameter to disable the ability to receive PAUSE frames.

If half-duplex is forced or auto-negotiated, the driver will completely disable flow control.

Syntax: FLOWCONTROL = [ 0 | 1 | 2 | 3 |0xFF]
Example: FLOWCONTROL = 1
Default: 3
Normal Behavior: 0 = Disabled (No flow control capability)

1 = Receive Pause Frames (can receive and respond to PAUSE frames)

2 = Transmit Pause Frames (can send PAUSE frames)

3 = Both Enabled (can send and receive PAUSE frames)

0xFF = Hardware Default.

Possible Errors: An error message is displayed if the value given is out of range. 

SMARTPOWERDOWN
This parameter enables the Smart Power Down feature under OS/2. This feature is disabled by default on all devices except where the Device ID = 0x101E. On these devices, this feature is enabled by default.

Enabling this feature causes the software to put the device into a low power (D3) state when the link is disconnected. When the link is reconnected, the device is brought back to the D0 state.

Syntax: SmartPowerDown = [ 0 | any other value ]
Example: SmartPowerDown = 1
Default: 0 except when device ID = 0x101E
Normal Behavior: 0 = Disabled, any other value = Enabled
Possible Errors: None

APMPOWERDOWN
This parameter enables the driver to put the device into the low power (D3) state on a "suspend" under OS/2. That feature is enabled by default on all NICs; this parameter provides a way to disable it.

Syntax: APMPowerDown = [ 0 | any other value ]
Example: APMPowerDown = 0
Default: 1
Normal Behavior: 0 = Disabled, any other value = Enabled

Possible Errors: None

USELASTSLOT
This parameter causes the driver to load on the device in the last slot found in the slot scan. The default behavior of the driver is to load on the first adapter found in the slot scan. This parameter forces the driver to load on the last one found instead.

Syntax: UseLastSlot = [ 0 | any other value ]
Example: USELASTSLOT = 1
Default: 0
Normal Behavior: 0 = Disabled, any other value = Enabled

Possible Errors: None

TXLOOPCOUNT
This parameter controls the number of times the transmit routine loops while waiting for a free transmit buffer. This parameter can affect Transmit performance.

Syntax: TXLOOPCOUNT = <32-bit value>
Example: TXLOOPCOUNT = 10000
Default: 1000
Normal Behavior: Default
Possible Errors: None


--------------------------------------------------------------------------------

Example PROTOCOL.INI
DRIVERNAME = E1000$  (or DRIVERNAME = E100b$)

NODE = "02AA00123456" ; override the burned in MAC address
SPEEDDUPLEX = 0 ; 10Mbps half duplex
          = 1 ; 10Mbps full duplex
          = 2 ; 100Mbps half duplex
          = 3 ; 100Mbps full duplex
SLOT = 7 ; set this for each NIC if using more than one

CACHEFLUSH = 1 ; set this if doing an unattended installation of Windows NT 4.0 using this driver to make the initial connection

Regards,
Moby.
Title: Re: Genmac Wrapper driver performance
Post by: RobertM on 2008.02.12, 21:27:15
Quote from: mobybrick on 2008.02.12, 21:10:45
Hi,

I'd be surprised if VIRTUALADDRESSLIMIT upset HPFS386 - IFS initialisation is done after BASEDEVs but before most other drivers. The only time I suspect that this *would* upset HPFS386 would be if you were using the IBM LAN Server software *and* were pushing its load. HPFS386 would then be asked to dynamically create 'big buffers' (as opposed to request buffers) - although, again, more of these can be pre-created at IFS initialisation. But you really have to be pushing the LAN Server code for more big buffers to be needed. Web stuff is, of course, not reliant on the LAN Server code.

Yes, the v4 Intel device driver is poop  ::) It supports a few newer chipsets tho.

To get any kind of reasonable performance with the Intel cards, you will need to use the v3.6x driver. I can send it to you, if you'd like.

Yes! Please do!!! Robert (dot) Mauro (at) gmail... or I can set up FTP access...


Quote from: mobybrick on 2008.02.12, 21:10:45
The settings for E1000.OS2 are as follows (Windows NT is mentioned in the docs sometimes, but these are the settings for OS/2). However, I've never found that changing anything interesting (such as TXLOOPCOUNT) does anything great for performance:

SPEEDDUPLEX
The parameter disables Auto-Speed-Detect and causes the adapter to function at the speed indicated.

Syntax:  SPEEDDUPLEX = [0 | 1 | 2 | 3] 
Example: SPEEDDUPLEX = 2 
Default: Auto-Speed-Detect 
Normal Behavior:  0 = 10Mbps half duplex
1 = 10Mbps full duplex
2 = 100Mbps half duplex
3 = 100Mbps full duplex 
Possible Errors: If the SPEEDDUPLEX parameter is set to an invalid value:
The parameter is ignored and the default (Auto-Speed-Detect) is used
A message indicates a "Parameter value out of range" error

Do you think this is a typo in the original docs? (no 1000Mbps?)

Quote from: mobybrick on 2008.02.12, 21:10:45
...Even more parameters...

Regards,
Moby.

Thanks Moby,

I'll add it all to what I have...

-Rob
Title: Re: Genmac Wrapper driver performance
Post by: mobybrick on 2008.02.13, 22:42:00
Hi,

File e-mailed, sorry for the delay.

I don't think that 1000 is supported unless auto-negotiation is set. You will have to try modifying PROTOCOL.INI manually to see if it can be forced. However, I know that on some Intel switches, 1000 ports only go to gigabit if left set to 'auto' - it might be the same for the adapters - but that would make them at odds with all of the other gigabit adapters.
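
One thing that might be worth trying, while staying inside the documented parameters: leave auto-negotiation on but restrict what gets advertised, e.g. in the card's section of PROTOCOL.INI (the section name below is just illustrative - use whatever MPTS generated for your card):

[E1000_NIF]
DRIVERNAME = E1000$
ADVERTISE = 0x20 ; advertise 1000 full duplex only

No promises that it forces gigabit, but it's the closest documented knob.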

Regards,
Moby.