Not quite RAM disk related, but for the folks experiencing that "...system seems to freeze for 4-8 seconds and then resume..." behaviour: if you are using JFS, this may be a symptom of your JFS settings letting the cache run out of free buffers, which forces JFS to purge the cache(s) to free some up.
This is well documented by Sjoerd Visser in his "Dynamically Tuning the JFS Cache for Your Job" presentation deck from way back in 2009.
Bottom line: this can be brought on by several 'system use' activities, but it basically forces the JFS code to write out dirty buffers to disk. The key to dealing with this on my rather large JFS cache (1G) was to watch the typical system use (log the cstats output; a quick logging sketch follows below) and adjust the MIN & MAX free buffer settings in tandem with the overall LazyWrite settings.
This is a trial & error thing, as the right values will be heavily driven by your usage patterns.
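To make the "watch it" part easier, here is a minimal REXX sketch for the logging. The file name, log path, and 10-minute interval are my own choices (and it assumes cstats is on the PATH), so adjust to taste:

LOGCSTAT.CMD:
/* LOGCSTAT.CMD - append timestamped cstats snapshots to a log */
call RxFuncAdd 'SysSleep', 'RexxUtil', 'SysSleep'
logfile = 'G:\CSTATS.LOG'
do forever
   call lineout logfile, '=== ' date('S') time()
   call stream logfile, 'C', 'CLOSE'  /* release our handle before redirecting */
   'cstats >>' logfile                /* append one snapshot */
   call SysSleep 600                  /* wait 10 minutes */
end

Let it run for a day or two of normal use and you can see how low nfreecbufs actually gets with your workload.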
For what it's worth, here is what I have:
CONFIG.SYS:
IFS=G:\OS2\JFS.IFS /CACHE:1048576 /LW:32,128,8 /AUTOCHECK:*
CALL=G:\OS2\CMD.EXE /Q /C G:\OS2\CACHEJFS.EXE /LW:32,128,8 /MINBUFFER:8000 /MAXBUFFER:24000 >NUL
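A quick sanity check on the units, as I read them: /CACHE:1048576 is in KB, i.e. a 1 GB cache, and the "cachesize 262144" in the cstats output below lines up with that if each cache buffer is a 4 KB page (262144 x 4 KB = 1048576 KB). By the same math the MINBUFFER/MAXBUFFER values look like buffer counts, so MIN=8000 keeps roughly 31 MB free at minimum and MAX=24000 caps the free pool at roughly 94 MB.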
Five days into the current uptime (the last run was 27 days; normal desktop stuff, nothing fancy), cstats shows:
[G:\]cstats
cachesize     262144   cbufs_protected      35795
hashsize      131072   cbufs_probationary   22467
nfreecbufs    101902   cbufs_inuse              0
minfree         8000   cbufs_io                 0
maxfree        24000   jbufs_protected     101075
numiolru           0   jbufs_probationary     894
slrun         136870   jbufs_inuse              0
slruN         174762   jbufs_io                 0
Other             11   jbufs_nohomeok           0
...with nfreecbufs never dropping to zero (0) even as slrun climbs toward the slruN limit.
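(For what it's worth, the numbers suggest slrun is simply cbufs_protected + jbufs_protected: 35795 + 101075 = 136870. And slruN looks like a ceiling of two thirds of cachesize: 262144 x 2/3 ≈ 174762. So "slrun approaching slruN" just means the protected segments are nearing their allowed share of the cache.)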
I started with MIN=8000 and MAX=16000, and that gave a pretty solid system, although sometimes I would still get that tell-tale "hang" feeling. So I moved to MIN=4000, thinking that would free up buffers for caching duties... welllll... no sir... wrong move: that resulted in a solid and repeatable "system hang". So, back to the drawing board: I set MIN back to 8000 and increased MAX to 24000. RESULT => SOLID, the most solid system I have had in years.
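By the way, since CACHEJFS.EXE is an ordinary utility (that CALL= line above just runs it at boot), you should be able to try a new MIN/MAX pair from a command prompt and immediately check the effect with cstats, without rebooting between experiments. Something like:

[G:\]G:\OS2\CACHEJFS.EXE /MINBUFFER:8000 /MAXBUFFER:24000
[G:\]cstats

Once you find a pair that behaves, commit it to the CONFIG.SYS line.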
The thing that 'ruins' my JFS cache (spoils it, actually) is the nightly disk copy (rcopy) run; presumably the big sequential copy floods the probationary segment and pushes previously protected content out. If it weren't for that activity my cbufs_protected would stay very large, which means a good amount of content is being successfully cached. The jbufs_protected count is equally important, as that lets JFS quickly figure out where to "go" to retrieve the content instead of having to read it from the disk itself. Again, which one should be your focus depends entirely on the disk access patterns you see.
Anyways...balance, somewhere out there are the right settings for your machine.
Last but not least, here are my diskio results for the JFS-formatted RAM disk:
Drive cache/bus transfer rate: 629714 k/sec
Data transfer rate on cylinder 0 : 680303 k/sec
Data transfer rate on cylinder 634 : 680021 k/sec
meanwhile the SSD results are:
Drive cache/bus transfer rate: 125467 k/sec
Data transfer rate on cylinder 0 : 286615 k/sec
Data transfer rate on cylinder 30399: 253408 k/sec
So on sustained data transfer the RAM disk comes out roughly 2.4-2.7x faster than the SSD here (and about 5x on the cache/bus test).
EDIT
====
One more thing to add, albeit this one is easily identifiable and most likely NOT what everyone else is seeing: I tried running the AHCI driver here, and on my hardware that resulted in what seemed like a complete HARD lock for about 4-8 secs at a time. Once that "event" passed, the system was available for use once again. I tried a boat-load of different setups and configs, but none of it helped, so I subsequently went away from AHCI.