
OS2 & eCS kernel

Started by miturbide, 2007.05.16, 22:38:54


demetrioussharpe

No worries. After almost 14yrs in the Army, my skin's a bit tougher than most. It's also the reason I tend to jump into things & try to just get them done. I'm hoping to help spark a new trend of OS/2 innovation.
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

demetrioussharpe

Quote from: RobertM on 2008.04.29, 22:01:44
SAB,

This bounty post, "OS2 & eCS kernel", describes the general goal of the bounty.

This thread is to discuss specifically the less general goals that should be part of the successful completion of the bounty (ie: not drivers, not PM, not WPS, etc - except as related to how they interact with the kernel).

Thus, based off the general goals in the bounty, here is where the discussion should take place as to what needs to be satisfied to complete/satisfy the bounty.

So, for instance (to start off this thread):


  • Do we want a platform independent setup, similar in that aspect to the Mach kernel? (or is our goal currently Intel only?)
  • How do we wish to handle 32bit compatibility, and should we be handling that in the kernel? Should we have a virtualized 32 bit API? Or should we use something that remaps the APIs in a similar fashion that Odin does?

Robert


Hello all,

I'm back again & I'm wondering if these were the only goals that were outlined for a new kernel or if there were more goals listed. What all is necessary to claim this bounty? Does it have to be able to work with the software from the OS/2 or eCS install CDs or can it be a complete rewrite? Can it be another OS that's been repurposed into an OS/2 clone? What exactly is expected here?
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

Blonde Guy

As one of the contributors, I'd like to see this happen. Let me throw out some ideas...

1. Fairly OS/2 compatible. It should support most of the old device drivers. You can toss out support for IBM PS/2, 386/486 processors, maybe more, but it should be able to run most current OS/2 device drivers. It should either work with or replace DOSCALL1.DLL, which is the OS/2 Control Program API.

2. Parsing and using Config.Sys is not a requirement, but many programs expect it.

3. Open source would be a big plus. And using someone else's kernel is no problem as long as it can run OS/2 device drivers and programs. The idea is to replace the current closed-source IBM kernel with something that can be fixed or enhanced.

4. It should go beyond the current hardware limitations of OS/2, like the 2 TB limit on devices, the 4 GB limit on RAM, and 16-bit limits on system queues, semaphores, pipes and so forth. It should do this in a way that allows a well-written OS/2 program to take full advantage of new hardware.

5. It should support a robust trap recovery and debugging interface. It should make it possible to have a hard kill.
Expert Consulting for OS/2 and eComStation

RobertM

Well, I am no longer sure which of these were goals, future plans, or mere wants, so some items on this list may overlap or fit solely into one of those categories. This list is probably not as extensive as it could be, based on the feedback and posts throughout this forum and elsewhere:

* 64bit support
This of course is highly problematic, because the kernel itself has not just 16-bit entry points, but also contains a lot of 16-bit code. In addition to that, most (or all) of the non-KEE drivers are 16-bit. Since, in 64-bit mode, today's CPUs only support 64-bit and 32-bit instructions, the 16-bit code would need to be rewritten - and all 16-bit code that calls it would either need to be rewritten or run through a virtualization module, or a module that could do "on the fly conversion" to 32-bit code (ie: perhaps in a similar fashion to how Odin detects the executable type and "translates" the Win32 calls into their OS/2 counterparts, while dealing with any weird mismatches in the capabilities between the calls/APIs).
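
To make the "on the fly conversion" idea concrete, here is a minimal sketch of an Odin-style remapping table in C: legacy 16-bit entry points are looked up by ordinal and forwarded to 32-bit reimplementations. Every name and ordinal here is invented for illustration, not taken from the real kernel.

Code:
#include <stdint.h>
#include <stddef.h>

typedef int32_t (*Api32Handler)(void *args);

typedef struct {
    uint16_t     legacy_ordinal;  /* ordinal of the old 16-bit entry point */
    Api32Handler handler32;       /* 32-bit replacement implementation */
} ApiRemapEntry;

/* stub replacements; real ones would widen 16:16 pointers and call the new kernel */
static int32_t Dos16Open_shim(void *args) { (void)args; return 0; }
static int32_t Dos16Read_shim(void *args) { (void)args; return 0; }

static const ApiRemapEntry remap_table[] = {
    { 70, Dos16Open_shim },  /* ordinals are placeholders, not the real ones */
    { 71, Dos16Read_shim },
};

/* invoked when legacy code hits a 16-bit entry point that no longer exists */
int32_t dispatch_legacy_call(uint16_t ordinal, void *args)
{
    for (size_t i = 0; i < sizeof remap_table / sizeof remap_table[0]; i++)
        if (remap_table[i].legacy_ordinal == ordinal)
            return remap_table[i].handler32(args);
    return -1;  /* no mapping: fail like ERROR_INVALID_FUNCTION */
}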

* Maintaining a suitable threading model
This was deemed, from both a technical standpoint and an experience (with Linux) standpoint, an aspect that makes simply dropping in a Linux kernel and a personality module an unwieldy solution, since Linux does not support the type of multithreading that OS/2 does.

* Dealing with the few things that DO directly access the kernel
I don't think there are many - but the most notable one is one that is still used by a lot of big businesses, namely HPFS386 which (if memory serves) is (a) a Ring 0 driver, (b) one that has direct access to the hardware, and (c) one that has direct access to the kernel. IIRC, its features were somewhat extensively used by certain Lotus Domino, Lotus Domino Go Webserver and DB2 releases to allow direct device-to-device data transfer and direct hardware access/transfer.

* Full support for the 16 bit and 32 bit APIs in the kernel
Which brings us back to virtualizing and/or replacing them. It also brings us to what to do with the thunking layer.

* A method of dealing with the 16 bit device drivers (again, something we're stuck with as a problem for a 64 bit kernel)
There aren't enough 32 bit drivers out there - and from what I understand, there are still various 16 bit callbacks that even they use, simply due to OS/2's driver structure and kernel structure.

* Possible migration of pseudo-64 bit code to true 64 bit code
Such as the 64 bit data structures used by JFS and a few other replacement subsystems written for MCP/ACP. This of course is not really a priority, since they work as they are.

* Mach style kernel
At least at some point, that way as additional CPU design changes are made, the kernel can be easily swapped with one that supports the new architecture (if only the architecture specific kernels were as easy to make as dropping one in).

* Increasing the memory management and thread management capabilities
Regardless of a 64 bit kernel, increasing the thread management capabilities to something more akin to today's newer hardware is something several of us need - for instance, to allow high availability servers without exhausting the thread pool. The same applies to processes. Sadly, these figures are hardcoded, as are their data structures and the underlying mechanisms that support them. So, while newer hardware has sufficient memory to allocate the management data structures for a lot more threads and processes, and CPUs are more than fast enough to handle the expanded thread set/data structures, this would require a lot of rewriting of the thread and process schedulers and their data structures.

This would possibly also require tweaks or changes to Aurora's/Merlin's new memory management scheme to ensure that memory pool exhaustion did not occur when dealing with the larger data structure set for threads and processes. Currently though, on a system using memory from that arena for other purposes (disk cache, etc), roughly 2,000 threads (give or take) is the reliable limit some of us have been running into - even though the kernel is designed to handle 4095 threads.

There were also suggestions about changing the memory management architecture to handle PAE mode - but that seems buggy and a kludge under any implementation.

There was also discussion about OS/2's already existing ability to "access" more than 4GB by paging and virtualization - I think there's an article someplace on EDM/2 that mentions it. It's a feature not used, and apparently barely understood by those who mention it.

* Dealing with the "kernel helpers"
Which in reality, act as extensions to the kernel itself (such as DOSCALL1, which you already mentioned, and the various other system DLLs - oh, and OS2LDR, which is quite a bit more than just a boot loader, and continues to run in conjunction with the kernel to provide kernel services). Of course, for a 64 bit kernel, this means once again dealing with the 16 bit code.

* Expanding the kernel for other OS's (namely Linux)
Via either pluggable APIs, personality modules, abstraction layers, etc. This probably doesn't require any additional work on the kernel though - just as Odin doesn't require much in that area.

* Replacing/rewriting OS2LDR
...to be able to work in conjunction with the new kernel as well as with larger hard drives, without the need for patching or kludges.

* Some method of support for all 16 bit calls
...since they cannot be run natively on a CPU in 64 bit mode. A virtualization layer? Something that remaps the calls à la Odin to 32 bit calls? Whatever it is, it's needed for the variety of businesses that run apps and services that are 16 bit or 32/16 bit hybrid.

There's probably a bunch more... and, as I said, a few of these were probably not intended as part of the original bounty.

One big thing I've realized, though it's not mentioned much, is that consideration must be given for the large companies with big OS/2 and/or growing eCS installations - many of which have been running the same custom software since time immemorial. That's where the issues with HPFS386 and 16 bit (or hybrid 32/16 bit) apps come in - and the importance of dealing with them - not to mention who knows how many other apps there may be out there that may be hybrids.


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


RobertM

Ah... the Aurora kernel can handle 64TB of virtualized address space, with 4GB allocated as protected mode memory (sans what's mapped to the various system buses, etc). Supposedly.

This was supposed to allow up to xGB of memory per process, with up to xGB being the active memory space in use at any given time. xGB = the virtual address limit setting in the Aurora kernel (ie: up to 3GB).

http://www.os2voice.org/VNL/past_issues/VNL0708H/feature_3.html
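
For reference, the knob being described is the VIRTUALADDRESSLIMIT line in CONFIG.SYS on the Aurora (Warp Server for e-business) and eCS kernels, which sets the per-process virtual address space in megabytes:

Code:
REM CONFIG.SYS - raise the per-process virtual address space to its 3GB maximum
VIRTUALADDRESSLIMIT=3072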


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


RobertM

Oops, one more important one (at least for the 32-bit kernel): cleaning up memory (ie: dealing with fragmentation that causes memory allocation to fail when no large enough contiguous blocks are available).


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


RobertM

All of these (and many of the other items on my earlier list) apply only to a 64-bit kernel, as I guess you've probably already noted.

* Re-implementing a suitable swapping mechanism
With the advent of a 64-bit kernel - and thus more accessible memory - numerous Linux ports or native apps could become available that take advantage of the expanded memory space.

While I have very little experience with how Linux does this, Windows is absolutely pathetic at swapping - often swapping in-use code to disk while plenty of memory is available - while OS/2 is very decent at swapping the right code to disk, and only swaps active code/data when physical memory is exhausted.

(As one example) While memory is usually cheap, even on a 64-bit system with, say, 16GB of RAM (beyond that, the bigger modules start getting expensive), it would be very easy to exhaust physical memory with something like a Blender rendering port.

* Creating compatibility arenas as needed for certain apps
Either as expanded or separate arenas, or arenas within the high memory arena, or as virtual arenas that can be allocated/loaded/activated as needed when a context switch is done to "activate" the code using that arena (in a similar fashion to how the kernel currently does it when switching between processes - but taking into account the larger amount of memory, instead of limiting such things to the current arena sizes).

* Reworking the shared arena
To take into account 64 bit code, larger memory availability and addressing capabilities, and so on - thus removing the limitations currently imposed on the shared arena. This is really part and parcel of the point above this.

* Revising/rewriting address allocation for bus/bus devices/APIC mappings
This one is pretty self explanatory - needs to work in a 64 bit memory implementation, thus, will probably need to be moved, as it should no longer be "mapped downwards" from the top of the 4GB 32 bit mapping space (otherwise, there will be a nice memory hole in the 64 bit memory space and some nice kludges needed to address that).

It also needs to take into account an increased mapping range -  it needs to be expanded to something more reasonable, or needs to be open ended (due to the far larger available addressing space). If I understand this situation properly, currently, eCS and Warp users are already running into problems with this, though often they don't understand why... problems such as 512MB video cards registering only 128MB in OS/2 due to frame buffer mappings in the system arena - as well as the hole in the actual addressable space created by various motherboards to allocate the APIC mappings in the top 512MB.


Kirk's 5 Year Mission Continues at:
Star Trek New Voyages


demetrioussharpe

Let's be clear: is this bounty for a 32-bit kernel or a 64-bit kernel? I'm asking because there's going to be a huge difference in development. However, either kernel would imply a new device driver architecture, which isn't necessarily a bad thing.
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

demetrioussharpe

Also, keep in mind that we'll lose OpenWatcom as a development environment for building a 64-bit kernel. The OW team hasn't produced the code for building 64-bit targets yet.
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

miturbide

Hi Demetrious.

The description of the bounty is "Description: For future use of OS2 and eCS there is a need for a new 64-bit kernel and what ever the future technology might offer. "

But we are always open for discussion.

If a developer offers to do something that conforms to the bounty rules (open source, no copyright issues), we can always go back to the sponsors and discuss adjusting the requirements.
Martín Itúrbide
OS2World.com NewsMaster
Open Source Advocate

Skype - martiniturbide
Google Talk - martiniturbide@gmail.com

demetrioussharpe

Long post, but this is important, so please be patient with me.


First & foremost, I want it to be understood that I intend to work on this project regardless of whether you guys award me the bounty or not. I know that this is a huge project. I also understand that I'm just one man, so this is something that'll be a long-term process; not something that could be coded within 3-6 months.

Secondly, I have a preference for the OpenWatcom development platform. Though they aren't ready for compiling 64-bit targets, I'm sure that they'll get there sooner or later. Every other major platform has its own staple development platform & I believe that the futures of OS/2 & OW are intertwined.

Thirdly, be aware that a new kernel must bring new things with it. Some of these things we're ready for; others, we're not so prepared for. Undoubtedly, a new kernel would imply a chance for a fresh look at device drivers & filesystem drivers. To be honest, there's not much to be gained from reusing the current OS/2 drivers & filesystems. The device driver interface is crippled & hopelessly outdated for today's purposes & the filesystem API isn't much better. There are plenty of opportunities for things to be improved with new APIs for both of these subsystems; and to be honest, it really doesn't matter if we roll our own or if we 'borrow' the interfaces from other systems, just as long as we ditch the interfaces that we're currently using. Besides, a move to a 64-bit kernel would demand such a move anyway.

Also, there's the issue of the host binary format. Mainly, there aren't many people who understand how to load LX binaries. And even after we jump that hurdle, there's still the matter of creating a new format for 64-bit OS/2 binaries. Since this wasn't done before IBM killed the commercial OS/2 product, we're on our own here.

Now, with that being said, I'll move on & attempt to address each of the points that have been touched on throughout the life of this forum topic. If I miss something, please don't think that it was on purpose. For many of these points, there may be a bit of overlap, so my replies to them may be similar or even the same; please bear with me! ;)

Here goes:


Quote:
Do we want a platform independent setup, similar in that aspect to the Mach kernel? (or is our goal currently Intel only?)

To be honest, we have a better chance of survival if the kernel is structured to sit on top of an architecture-dependent module. It doesn't matter if it's called a HAL or an ARCH layer, just as long as all of the device-specific code is contained in this layer & all of the platform-independent code sits on top of it as a separate layer.
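
As a rough illustration of that boundary (all names here are hypothetical, not a real API), the platform-independent kernel could reach hardware only through a table of function pointers supplied by the ARCH layer:

Code:
#include <stdint.h>

/* Hypothetical HAL/ARCH boundary: the portable kernel only ever touches
   hardware through this table, so porting means supplying a new table,
   not patching the kernel. */
typedef struct hal_ops {
    void     (*init_interrupts)(void);
    void     (*mask_irq)(unsigned irq);
    void     (*unmask_irq)(unsigned irq);
    void     (*send_eoi)(unsigned irq);
    uint64_t (*read_timer)(void);
    void     (*context_switch)(void *from_tcb, void *to_tcb);
} hal_ops;

/* each architecture module exports exactly one of these... */
extern const hal_ops hal_x86_64;   /* hypothetical Intel/AMD implementation */

/* ...and the portable kernel binds to it once at boot */
static const hal_ops *hal;

void kernel_bind_hal(const hal_ops *ops) { hal = ops; }

void kernel_timer_tick(void)
{
    if (hal)
        hal->send_eoi(0);  /* portable code never touches the PIC itself */
}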

Quote:
How do we wish to handle 32bit compatibility, and should we be handling that in the kernel? Should we have a virtualized 32 bit API? Or should we use something that remaps the APIs in a similar fashion that Odin does?

The way I see it, the majority of 32-bit (& also 16-bit) support code can be contained within userspace emulation modules (the way DOSBox does it). If we're talking about a 64-bit kernel, then we could use the approach that DOSBox uses for a 32-bit layer & inside that 32-bit environment we could have a 16-bit emulation layer. After all, there really won't be much of a reason for 16-bit & 64-bit code to talk to each other. I'm really doubtful of the need for 32-bit & 16-bit code to talk to each other either, other than the fact that much of OS/2's kernel & driver systems were also 16-bit. If OS/2 had been fully 32-bit, there really wouldn't have been a real need for any 16/32-bit resource sharing or any of that thunking nonsense.


CONTINUED IN NEXT POST
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

demetrioussharpe

Quote:
Fairly OS/2 compatible. It should support most of the old device drivers. You can toss out support for IBM PS/2, 386/486 processors, maybe more, but it should be able to run most current OS/2 device drivers. It should either work with or replace DOSCALL1.DLL, which is the OS/2 Control Program API.

Again, this is a non-issue. New kernel, new drivers that require less jumping through hoops to write/port. With a 64-bit kernel, DOSCALL.DLL & DOSCALL1.DLL would end up being part of the 32-bit emulation layer, but nothing would really change for the current apps.

Quote:
Parsing and using Config.Sys is not a requirement, but many programs expect it.

This needs to be emulated by the use of a configuration pseudo-driver. Using the *nix philosophy that everything is a file, this pseudo-driver could be written to & read from like a regular file, but should be backed by a registry system. This would not be a perfect solution for current OS/2 apps, but I'd expect newer apps to use the registry API for access to configuration data. However, this is also the opportunity to bake multiuser support into the system from the ground up & thoroughly enforce security & protection policies from the start. This means that we don't repeat the Windows mistake of defaulting every account to Administrator rights & privileges.
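
A minimal sketch of the read side of such a pseudo-driver, assuming a hypothetical registry API (reg_enumerate, the hive path, and the key names are all invented for illustration):

Code:
#include <stdio.h>
#include <string.h>

typedef void (*reg_visit_fn)(const char *key, const char *value, void *ctx);

/* stand-in for a real registry walk: two hardcoded entries for the demo */
static void reg_enumerate(const char *hive, reg_visit_fn visit, void *ctx)
{
    (void)hive;
    visit("PROTECTONLY", "YES", ctx);
    visit("LIBPATH", ".;C:\\OS2\\DLL", ctx);
}

typedef struct { char *buf; size_t len, cap; } sink;

/* append one "KEY=VALUE" line in classic CONFIG.SYS syntax */
static void emit_line(const char *key, const char *value, void *ctx)
{
    sink *s = (sink *)ctx;
    int n = snprintf(s->buf + s->len, s->cap - s->len, "%s=%s\r\n", key, value);
    if (n > 0 && (size_t)n < s->cap - s->len)
        s->len += (size_t)n;
}

/* what the pseudo-driver returns when a legacy app reads CONFIG.SYS */
void synthesize_config_sys(char *buf, size_t cap)
{
    sink s = { buf, 0, cap };
    buf[0] = '\0';
    reg_enumerate("\\Config\\BootParams", emit_line, &s);
}

int main(void)
{
    char text[1024];
    synthesize_config_sys(text, sizeof text);
    fputs(text, stdout);
    return 0;
}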

Quote:
Open source would be a big plus. And using someone else's kernel is no problem as long as it can run OS/2 device drivers and programs. The idea is to replace the current closed-source IBM kernel with something that can be fixed or enhanced.

As stated before, new kernel, new drivers. However, if my work becomes used for this bounty, then there's no reason not to open source it. Still, what use is an open-sourced OS if there are precious few developers left to work on it? Even the handful of developers left haven't stepped up to work on this task, other than the OSFree guys. I hope they're progressing nicely, but who really knows.

If my work is not acceptable for this bounty, then perhaps I'll end up making a product out of it. Regardless of the outcome, I see it as a win/win situation for everyone involved.

Quote:
It should go beyond the current hardware limitations of OS/2, like the 2 TB limit on devices, the 4 GB limit on RAM, and 16-bit limits on system queues, semaphores, pipes and so forth. It should do this in a way that allows a well-written OS/2 program to take full advantage of new hardware.

Undoubtedly, this should be standard in a replacement kernel, regardless of whether it's 32- or 64-bits.

Quote:
It should support a robust trap recovery and debugging interface. It should make it possible to have a hard kill.

I agree. This is something that really should be built in from the very beginning to allow effective debugging while attempting to bring the kernel up.

Quote:
* 64bit support
This of course is highly problematic, because the kernel itself has not just 16-bit entry points, but also contains a lot of 16-bit code. In addition to that, most (or all) of the non-KEE drivers are 16-bit. Since, in 64-bit mode, today's CPUs only support 64-bit and 32-bit instructions, the 16-bit code would need to be rewritten - and all 16-bit code that calls it would either need to be rewritten or run through a virtualization module, or a module that could do "on the fly conversion" to 32-bit code (ie: perhaps in a similar fashion to how Odin detects the executable type and "translates" the Win32 calls into their OS/2 counterparts, while dealing with any weird mismatches in the capabilities between the calls/APIs).

This should not be much of an issue, considering that proper DLL replacements appropriately implement the expected APIs. To drive home the point: it really doesn't matter who built the engine that is under the hood of your car, as long as it functions the way you expect it to. The DLLs should take care of making sure that all requests are transformed into the right size for the underlying kernel.
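
As a sketch of that transformation (k64_open is a hypothetical 64-bit kernel entry point, and the real DosOpen takes more parameters than shown here), a 32-bit DOSCALL1 replacement could simply widen each argument on the way down:

Code:
#include <stdint.h>

typedef uint32_t APIRET;
typedef uint32_t HFILE;

/* stub standing in for the hypothetical 64-bit kernel entry point */
static int64_t k64_open(const char *path, uint64_t flags, uint64_t mode,
                        uint64_t *handle_out)
{
    (void)path; (void)flags; (void)mode;
    *handle_out = 3;  /* pretend the kernel handed back handle 3 */
    return 0;
}

/* same general shape as the classic 32-bit DosOpen, minus a few parameters */
APIRET DosOpen32(const char *pszFileName, HFILE *phf,
                 uint32_t fsOpenFlags, uint32_t fsOpenMode)
{
    uint64_t h64 = 0;
    int64_t rc = k64_open(pszFileName,            /* flat pointer passes through */
                          (uint64_t)fsOpenFlags,  /* zero-extend the 32-bit args */
                          (uint64_t)fsOpenMode,
                          &h64);
    if (rc != 0)
        return (APIRET)rc;
    *phf = (HFILE)h64;  /* 64-bit handle must fit the legacy 32-bit HFILE */
    return 0;
}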



CONTINUED IN NEXT POST
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

demetrioussharpe

Quote:
* Maintaining a suitable threading model
This was deemed, from both a technical standpoint and an experience (with Linux) standpoint, an aspect that makes simply dropping in a Linux kernel and a personality module an unwieldy solution, since Linux does not support the type of multithreading that OS/2 does.

To be honest, I've only seen one OS that had a wonderful threading model & that was the BeOS. With that being said, there are 2 ways forward from my point of view.

1). Use all available documentation about OS/2's threading model to recreate IBM's implementation as closely as possible.
2). Try to create something that's suitable & work to improve it.

Quote:
* Dealing with the few things that DO directly access the kernel
I don't think there are many - but the most notable one is one that is still used by a lot of big businesses, namely HPFS386 which (if memory serves) is (a) a Ring 0 driver, (b) one that has direct access to the hardware, and (c) one that has direct access to the kernel. IIRC, its features were somewhat extensively used by certain Lotus Domino, Lotus Domino Go Webserver and DB2 releases to allow direct device-to-device data transfer and direct hardware access/transfer.

I'm sure that this hurdle could be toppled with a robust VFS layer & an updated reimplementation of HPFS386 built from the specs of the original.
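
A minimal sketch of what such a VFS layer's contract could look like, loosely modeled on Unix-style file operation tables (all names here are hypothetical):

Code:
#include <stdint.h>
#include <stddef.h>

struct vnode;  /* opaque per-open-file object owned by the filesystem */

/* each installable filesystem (a rebuilt HPFS386, a ported JFS, etc.)
   fills in one of these tables and the kernel routes requests through it */
typedef struct vfs_ops {
    int (*mount)(const char *device, void **fs_private);
    int (*unmount)(void *fs_private);
    int (*open)(void *fs_private, const char *path, struct vnode **out);
    int (*read)(struct vnode *v, uint64_t off, void *buf, size_t len);
    int (*write)(struct vnode *v, uint64_t off, const void *buf, size_t len);
    int (*close)(struct vnode *v);
} vfs_ops;

/* tiny registration table: filesystems plug themselves in at load time */
static struct { const char *name; const vfs_ops *ops; } fs_table[8];
static int fs_count;

int vfs_register(const char *fsname, const vfs_ops *ops)
{
    if (fs_count >= 8)
        return -1;  /* table full */
    fs_table[fs_count].name = fsname;
    fs_table[fs_count].ops  = ops;
    fs_count++;
    return 0;
}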

Quote:
* Full support for the 16 bit and 32 bit APIs in the kernel
Which brings us back to virtualizing and/or replacing them. It also brings us to what to do with the thunking layer.

This really doesn't belong in the kernel. This belongs in userspace. There's no need for 16-bit APIs in the kernel when there's no 16-bit code in the kernel. If anything, the DLL that services 16-bit code should take care of converting it to the bit size of the underlying kernel.
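
For what it's worth, on OS/2's tiled memory layout the 16:16-to-flat conversion such a DLL performs is pure arithmetic, since the LDT is laid out so that each selector maps one 64KB tile:

Code:
#include <stdint.h>

/* 16:16 far pointer -> 0:32 flat pointer */
static inline uint32_t sel_off_to_flat(uint16_t sel, uint16_t off)
{
    return ((uint32_t)(sel >> 3) << 16) | off;
}

/* 0:32 flat pointer -> 16:16 far pointer (address must lie in the tiled region) */
static inline void flat_to_sel_off(uint32_t flat, uint16_t *sel, uint16_t *off)
{
    *sel = (uint16_t)(((flat >> 16) << 3) | 7);  /* ring 3 LDT selector */
    *off = (uint16_t)(flat & 0xFFFF);
}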

Quote:
* A method of dealing with the 16 bit device drivers (again, something we're stuck with as a problem for a 64 bit kernel)
There aren't enough 32 bit drivers out there - and from what I understand, there are still various 16 bit callbacks that even they use, simply due to OS/2's driver structure and kernel structure.

New kernel, new drivers. No 16-bit code inside.

Quote:
* Possible migration of pseudo-64 bit code to true 64 bit code
Such as the 64 bit data structures used by JFS and a few other replacement subsystems written for MCP/ACP. This of course is not really a priority, since they work as they are.

JFS would need to be ported to the new kernel anyway, so there really wouldn't be much migration other than the basic porting work. A VFS layer should make this a bit easier.

Quote:
* Mach style kernel
At least at some point, that way as additional CPU design changes are made, the kernel can be easily swapped with one that supports the new architecture (if only the architecture specific kernels were as easy to make as dropping one in).

See my reply above about the HAL.



CONTINUED IN NEXT POST
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

demetrioussharpe

Quote:
* Increasing the memory management and thread management capabilities
Regardless of a 64 bit kernel, increasing the thread management capabilities to something more akin to today's newer hardware is something several of us need - for instance, to allow high availability servers without exhausting the thread pool. The same applies to processes. Sadly, these figures are hardcoded, as are their data structures and the underlying mechanisms that support them. So, while newer hardware has sufficient memory to allocate the management data structures for a lot more threads and processes, and CPUs are more than fast enough to handle the expanded thread set/data structures, this would require a lot of rewriting of the thread and process schedulers and their data structures.

I agree. These are the kinds of things that really should be dynamic. If OS/2 were still being developed today, I'm sure that these things would have been addressed already. However, this is a great time to address these issues.
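
As a sketch of what "dynamic" could mean here (illustrative only - these are not the real kernel structures), the thread control block table could grow on demand instead of being compiled in at 4095 entries:

Code:
#include <stdlib.h>
#include <string.h>

typedef struct tcb { int id; /* ...scheduler state... */ } tcb;

static tcb   **tcb_table = NULL;  /* grows on demand, no compile-time cap */
static size_t  tcb_cap   = 0;

/* double the table whenever a new thread would not fit */
static int tcb_reserve(size_t want)
{
    if (want <= tcb_cap)
        return 0;
    size_t ncap = tcb_cap ? tcb_cap * 2 : 1024;
    while (ncap < want)
        ncap *= 2;
    tcb **nt = realloc(tcb_table, ncap * sizeof *nt);
    if (!nt)
        return -1;  /* out of kernel heap: caller fails the thread create */
    memset(nt + tcb_cap, 0, (ncap - tcb_cap) * sizeof *nt);
    tcb_table = nt;
    tcb_cap   = ncap;
    return 0;
}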

Quote:
This would possibly also require tweaks or changes to Aurora's/Merlin's new memory management scheme to ensure that memory pool exhaustion did not occur when dealing with the larger data structure set for threads and processes. Currently though, on a system using memory from that arena for other purposes (disk cache, etc), roughly 2,000 threads (give or take) is the reliable limit some of us have been running into - even though the kernel is designed to handle 4095 threads.

Again, I agree.

Quote:
There were also suggestions about changing the memory management architecture to handle PAE mode - but that seems buggy and a kludge under any implementation.

This is the kind of thing that should be inside of the HAL. It should also be configurable, so that the PAE mode can be enabled or disabled based on the host architecture.

Quote:
There was also discussion about OS/2's already existing ability to "access" more than 4GB by paging and virtualization - I think there's an article someplace on EDM/2 that mentions it. It's a feature not used, and apparently barely understood by those who mention it.

This should be seamless & invisible, without needing to resort to tricks to get it to work correctly.

Quote:
* Dealing with the "kernel helpers"
Which in reality, act as extensions to the kernel itself (such as DOSCALL1, which you already mentioned, and the various other system DLLs - oh, and OS2LDR, which is quite a bit more than just a boot loader, and continues to run in conjunction with the kernel to provide kernel services). Of course, for a 64 bit kernel, this means once again dealing with the 16 bit code.

I agree with the kernel extensions; however, I don't agree with the way they're used in OS/2. I'm not really a huge fan of the kernel helpers, because they feel like portions of the code that were implemented on a tight schedule & rushed, rather than given an elegant solution. Yet & still, it's something that needs to be taken into account.

Quote:
* Expanding the kernel for other OS's (namely Linux)
Via either pluggable APIs, personality modules, abstraction layers, etc. This probably doesn't require any additional work on the kernel though - just as Odin doesn't require much in that area.

Solvable with subsystem DLLs.

Quote:
* Replacing/rewriting OS2LDR
...to be able to work in conjunction with the new kernel as well as with larger hard drives, without the need for patching or kludges.

I think that the current bootloader to kernel interface should be scrapped & replaced anyway. This makes room for something that's more maintainable & flexible for future advances.
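
One possible shape for a replacement handoff, in the spirit of Multiboot (the layout below is invented for illustration): the loader fills a single versioned structure, and the kernel depends on nothing else from the boot environment:

Code:
#include <stdint.h>

#define BOOTINFO_VERSION 1

typedef struct bootinfo {
    uint32_t version;        /* lets loader and kernel evolve independently */
    uint64_t mem_map_addr;   /* physical address of an E820-style memory map */
    uint32_t mem_map_count;  /* number of entries in that map */
    uint64_t initfs_addr;    /* preloaded boot filesystem image, replacing */
    uint64_t initfs_size;    /*   OS2LDR's mini-filesystem duties */
    char     cmdline[256];   /* kernel parameters, taking over parts of CONFIG.SYS */
} bootinfo;

/* the kernel entry point receives this structure and nothing else */
void kernel_main(const bootinfo *bi);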



CONTINUED IN NEXT POST
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!

demetrioussharpe

Quote:
* Some method of support for all 16 bit calls
...since they cannot be run natively on a CPU in 64 bit mode. A virtualization layer? Something that remaps the calls à la Odin to 32 bit calls? Whatever it is, it's needed for the variety of businesses that run apps and services that are 16 bit or 32/16 bit hybrid.

Subsystem DLLs.

Quote:
One big thing I've realized, though it's not mentioned much, is that consideration must be given for the large companies with big OS/2 and/or growing eCS installations - many of which have been running the same custom software since time immemorial. That's where the issues with HPFS386 and 16 bit (or hybrid 32/16 bit) apps come in - and the importance of dealing with them - not to mention who knows how many other apps there may be out there that may be hybrids.

True; however, I'm sure that these large companies have also run into the same problems that are listed on this thread, so I'd imagine that they'd be in the market for something more modern (& still compatible with their current OS/2 investments).

Quote:
Cleaning up memory (ie: dealing with fragmentation that causes memory allocation to fail when no large enough contiguous blocks are available).

This shouldn't be as much of an issue on a more modern kernel.

Quote:
* Re-implementing a suitable swapping mechanism
With the advent of a 64-bit kernel - and thus more accessible memory - numerous Linux ports or native apps could become available that take advantage of the expanded memory space.

Swapping's something that's not always easy to get right, depending on the role being filled by the OS. Desktop OSs usually have different requirements than server OSs. This implies that all policies affecting this mechanism need to be dynamically changeable.

Quote:
While I have very little experience with how Linux does this, Windows is absolutely pathetic at swapping - often swapping in-use code to disk while plenty of memory is available - while OS/2 is very decent at swapping the right code to disk, and only swaps active code/data when physical memory is exhausted.

Sounds like there needs to be a better aging policy implementation.
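
A minimal sketch of such an aging policy - the classic "clock" (second chance) algorithm: pages touched since the last sweep are spared, so in-use code is not swapped out while idle pages exist:

Code:
#include <stdbool.h>
#include <stddef.h>

typedef struct page {
    bool present;      /* page currently holds data */
    bool referenced;   /* mirrors the hardware Accessed bit, cleared each sweep */
} page;

/* pick a victim to swap out, advancing the clock hand; every recently used
   page gets a second chance before it can be evicted */
size_t clock_pick_victim(page *pages, size_t npages, size_t *hand)
{
    for (size_t scanned = 0; scanned < 2 * npages; scanned++) {
        size_t idx = *hand;
        *hand = (*hand + 1) % npages;
        if (!pages[idx].present)
            continue;
        if (pages[idx].referenced) {
            pages[idx].referenced = false;  /* spare it this revolution */
            continue;
        }
        return idx;  /* not touched since the last sweep: safe to evict */
    }
    return *hand;  /* everything was recently used: fall back to FIFO order */
}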

Quote:
(As one example) While memory is usually cheap, even on a 64-bit system with, say, 16GB of RAM (beyond that, the bigger modules start getting expensive), it would be very easy to exhaust physical memory with something like a Blender rendering port.

I don't think this will be much of an issue for the replacement kernel.



CONTINUED IN NEXT POST
The difference between what COULD be achieved & what IS achieved
is directly relational to what you COULD be doing & what you ARE doing!