Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Dariusz Piatkowski

Pages: [1] 2 3 ... 5
1
Hardware / SNAP and dual-head display setup recommendations...
« on: September 17, 2019, 03:28:05 pm »
Hi Everyone,

So I got my dual-monitor setup up and running: ATI X850 Pro with two Samsung 245T panels. With their native resolution of 1920x1200 that gives me an OS/2 Desktop of 3840x1200...yey!!!

...but as I'm starting to discover there are some things that aren't quite right...none of them a big deal (no hard errors or app crashes, more of an annoyance), but I'm looking for suggestions and/or helpful hints that others with such dual-head setups have found beneficial.

Currently using the option to display pop-up windows on Head_0.

Thanks!

2
Networking / NcFTP - RPM package application not quite right...
« on: September 09, 2019, 02:19:19 am »
OK, so continuing on with my "mission" ::) to deploy RPM replacements for some of my current apps, I decided to tackle NcFTP. An easy app, I thought, and being CLI it was well suited to being tossed into the giant \usr\bin bucket...

The install went fine. I moved my bookmarks, etc. to the \home\.ncftp directory and that was recognized. However, strangely enough, I ran into a problem when actually attempting to VIEW the bookmarks in the app. Case in point, when issuing the 'bookmarks' command inside the NcFTP window I get the following response:

Code: [Select]
ncftp> bookmarks
sh.exe: 1: more: not found

To use a bookmark, use the "open" command with the name of the bookmark.

...but if I do something like "open hobbes" that certainly works...so the bookmarks are there and are being recognized. The separate bookmark manager app works fine; it does actually show the bookmarks and allows you to manipulate them as needed.

Alright, so that error message appears to have something to do with the 'sh' shell and perhaps the 'more' command? I confirmed that 'sh' is installed here and working fine; 'more', on the other hand, is not. But the 'more' part is just a guess on my part; it looks like a pager invocation, something along the lines of 'dir | more', right?

What is a bit weird is that the 'more' command is something that should be available in OS/2 to start with. You can certainly pull up HELP on that command, and yet trying 'more' by itself at the CLI only shows:

Code: [Select]
[I:\]more
SYS1041: The name more is not recognized as an
internal or external command, operable program or batch file.

So am I inherently missing something here in my base OS/2 install?
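
One thing I plan to try, and this is purely a guess on my part since I have not confirmed which pager NcFTP actually consults, is pointing it at a pager that does exist here (e.g. 'less' from the RPM repos), either via the environment or via NcFTP's own prefs, assuming it has a 'pager' preference at all:

Code: [Select]
[G:\]set PAGER=less
[G:\]ncftp
ncftp> set pager less
ncftp> bookmarks

If that makes the "more: not found" message go away I'll report back; it still wouldn't explain why 'more' itself is missing from my base install.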

3
Networking / Samba - cached server IP address?
« on: September 05, 2019, 04:08:35 am »
I run static IP addresses on my LAN, all of it served out by the router, where the MAC-address-to-IP-address assignment is made. Most devices (where possible) are set to DYNAMIC IP, but of course the router serves the pre-defined static IP out to the device. Where that is not possible, the device has its IP address set to STATIC, configured directly on the device itself.

OK, all good.

The other day we picked up a Google Home Hub Display device, my wife wanted to have something in the kitchen that would serve up the cooking recipes and give her easy access to on-line stuff w/o having to rely on a laptop or another display. Easy enough right?

Well, I pre-allocate my IP blocks per groups of devices, meaning: media devices are lumped into an IP address range, as are the PCs, the mobile devices, and so on.

I needed to make room for the Google Hub and that meant I had to bump my NAS box from its IP address to something new. No problem there: router updated, NAS re-booted, and it picked up the new IP.

I then updated my HOSTS file to reflect the new addresses and re-booted (just to be safe), only to discover that my NetDrive mapping would no longer find the NAS box. So off I went investigating and found that if I replace the alias I was using in the NetDrive resource mapping with an actual IP address, connectivity comes back up (see the debug listing below).

That leads me to believe that the Samba client is maintaining a mapping to the actual IP address the alias previously pointed to and re-using it, instead of querying it each time.

The question therefore is: does anyone know where that alias to IP address mapping is cached in the Samba configuration?

While the static IP address setup works, I really do not want to use that in the NetDrive resource setups and would prefer to use the alias instead and rely on the HOSTS file driven mapping.

BTW: doing 'ping nas' from the CLI successfully shows the new IP address, and I can also point the browser at the NAS box and reach it successfully.

Code: [Select]
Samba client 3.6.0 build 20190324 based on 4.10.6
This build is maintained by netlabs
2019/09/04 21:31:20.24: 9 1: Working with 64 bit fileio NDFS
2019/09/04 21:31:20.38: 9 1: NdpIOCTL init thread
2019/09/04 21:32:00.74: 9 2: NdpCreateConnection in
2019/09/04 21:32:00.74: 9 2: NdpCreateConnection send CONNECT
2019/09/04 21:32:00.74: 1 2: Connecting to \\admin:*********@home:nas\vol1. Master HOME:1
2019/09/04 21:32:05.79: 4 2: Connection to nas failed (Error NT_STATUS_IO_TIMEOUT)
2019/09/04 21:32:05.79: 9 2: NdpCreateConnection [0] 87
2019/09/04 21:32:05.79: 9 2: NdpCreateConnection in
2019/09/04 21:34:37.75: 9 2: checkMountResource in tid#2
2019/09/04 21:34:37.75: 1 2: Connecting to \\admin:*********@home:nas\vol1. Master HOME:1
2019/09/04 21:34:42.77: 4 2: Connection to nas failed (Error NT_STATUS_IO_TIMEOUT)
2019/09/04 21:34:42.77: 9 2: NdpMountResource rc=58
2019/09/04 21:43:58.58: 9 2: NdpMountResource in
2019/09/04 21:43:58.58: 9 2: dircache_create: 10 seconds, 32 entries
2019/09/04 21:43:58.58: 9 2: dircache_create: 0x6d3f20, rc = 0
2019/09/04 21:43:58.58: 9 2: checkMountResource in tid#2
2019/09/04 21:43:58.58: 1 2: Connecting to \\admin:*********@HOME:192.168.1.4\vol1. Master HOME:1
2019/09/04 21:43:58.59: 4 2: Server supports NT1 protocol
2019/09/04 21:43:58.62: 4 2:  session setup ok. Sending tconx <vol1> <********>
2019/09/04 21:43:58.63: 4 2:  tconx ok.
2019/09/04 21:43:58.63: 9 2: NdpMountResource rc=0
2019/09/04 21:43:58.63: 9 2: NdpRsrcQueryInfo in
2019/09/04 21:43:58.63: 9 2: NdpRsrcQueryInfo 0
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo in
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo 0
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo in
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo 0
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo in
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo 0
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo in
2019/09/04 21:43:58.81: 9 1: NdpRsrcQueryInfo 0
2019/09/04 21:44:05.21: 9 2: NdpCreateConnection in
2019/09/04 21:44:05.21: 9 2: NdpCreateConnection send CONNECT
2019/09/04 21:44:05.21: 1 2: Connecting to \\admin:*********@HOME:192.168.1.4\vol1. Master HOME:1
2019/09/04 21:44:05.22: 4 2: Server supports NT1 protocol
2019/09/04 21:44:05.23: 4 2:  session setup ok. Sending tconx <vol1> <********>
2019/09/04 21:44:05.24: 4 2:  tconx ok.
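
For reference, the HOSTS entry behind the 'nas' alias is just the standard hostname mapping, now pointing at the new address (same one that works in the log above):

Code: [Select]
# alias used in the NetDrive resource definition
192.168.1.4    nas

So the HOSTS side resolves correctly; it's only the NetDrive/Samba resource that appears to hang on to the old address.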

4
Applications / ZOC startup error...
« on: June 20, 2019, 03:47:14 am »
It's been a while since I actually had my ZOC 4.15 up and running...to be honest, probably some 4 yrs ago (or longer) when I replaced the battery in my APC UPS and needed to re-set the battery constant.

Well, as it happens here I am again...NEW UPS battery and my APC box is stuck thinking it's still got a bad battery. This is a known issue (SmartUPS 1500) and the fix is to connect through a terminal and issue some commands to re-set the internal UPS microprocessor to a default/NEW battery status.

The alternative is to drain the UPS by discharging with a series of 100W bulbs, which I'm not really crazy about.

OK, so back to ZOC...LOL, starting the darn thing up produces the following in my POPUPLOG.OS2:

Code: [Select]
06-19-2019  21:00:00  SYS2070  PID 0066  TID 0001  Slot 00af
G:\APPS\COMM\ZOC\ZOC.EXE
XFRCEPT->OSYSOS2.os_MuxSemMgr
127

Does anyone have any ideas what this is caused by?

Given how many changes have occurred on my system over the years, I have no idea what may be impacting ZOC today.

Thanks!

5
Hi everyone,

I'm looking to upgrade my ePDF printing to support higher-dpi colour output. I have been using my Brother HL-5470DW printer's driver as the device driver for my ePDF printer object. This, for those of you who might not know, allows you to do an easy PDF print: from any application which does not support native PDF creation you simply send the print job to your ePDF printer, which in turn passes it through the PSMON service and on to the ePDF utility.

This has been working very well here, but recently I needed to preserve the colour on a couple of image-to-PDF conversions, and since my Brother printer is a black & white laser that driver simply won't do. I installed the recommended "Apple Color Laserwriter 12/600PS" driver (from the PostScript package), as per the ePDF readme; however, that maxes out at 600 dpi, and consequently the PDF output quality suffers a bit.

Therefore, I'm curious: which printer devices in the standard IBM PostScript driver pack are capable of 1200 dpi in colour?

I know the ePDF folks released a test ePDF driver, available as an updated IBM driver pack. My only problem with installing it is that it would overwrite my Brother HL-5470DW driver, which I had manually adjusted to take advantage of printer features that were not supported in the out-of-the-box IBM drivers. Since that is my primary printer I am hesitant to make any changes here.

Any suggestions?

Thanks!
-Dariusz

6
So I've been toying with the idea of moving all of my local (OS/2) docs to the LAN NAS. The NAS has about 8TB of space and gets backed up every night, while the OS/2 box has a live nightly backup, but only to an internal HD. Yes, I know I could probably use an rsync job to push stuff from the OS/2 box to the NAS/backup, but given that I want these docs to be accessible from all the other home devices, the NAS just seems like the ideal storage place.

Anyways, my "Documents" folder is about 2 Gig of data, some 12k files. I executed a copy to the NAS, WPS copy failed, but Larsen Commander came through - I suspect the EAs were a problem for WPS since my NAS does not support them, just a guess, I haven't debugged it further.

Once on the NAS I proceeded to test access, basically the standard "find the file, open through WPS associations", etc., etc.

All good until I hit my PDFs. I've got three PDF readers installed here: Lucide, GSView 5.0, and QPDFView. Both Lucide and QPDFView exhibit this weird behaviour: all PDF loading is extremely slow. I can actually see chunks of 64K being received and sent on my TCP/IP monitor; this continues until the 1st page is rendered and repeats when succeeding pages are loaded. GSView, on the other hand, just chomps away orders of magnitude faster, so any big PDF just shows up as a big spike on the monitor. Over the LAN it is very quick.

Sometimes the Lucide/QPDFView "load chunks" get larger; it seems to depend on how big a PDF I'm actually opening. A bigger file, 38M in size, caused the chunks to increase to about 700K on the receive side and a matching 200K on the send side.

So here is the thing: Lucide is my default viewer. Given that both it and QPDFView exhibit the same behaviour, I'm thinking it is something with poppler?

Therefore, I'm curious if anyone else has experienced something similar?

What could I do to try to debug this further?

Right now it is just about the only thing preventing me from moving to the NAS.

7
Internet / HOSTS file and spam-ad-fish avoidance
« on: January 23, 2019, 05:17:11 am »
Team!

I have been using the modified HOSTS file for a while now. Previously I was using Privoxy as well, but eventually dropped this in favour of just HOSTS and the FF add-on 'NoScript'.

The other day I came across a Unified HOSTS distribution, which is actually a tad over twice as big as the one I have been using, so about 1.1M vs 450K. Having deployed this on my OS/2 box, I wanted to report that despite its size (is there a real limit to this in our aging TCPIP stack?) it appears to work fine.

Now what is the benefit of using this 'Unified HOSTS'? Well, it is a GitHub project which assembles numerous individual HOSTS distros, scans them over to remove duplicates and produces a single file with some 40K entries.

I deployed this on my Win boxes as well, and the difference is tremendous, so I give that unified HOSTS approach a pretty good quality rating.
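
For anyone who hasn't poked at one of these files, the entries are just plain hostname mappings that point the ad/tracking domains at a null address; the unified list uses 0.0.0.0 (the hostnames below are made-up examples of the format only):

Code: [Select]
# block list entries are nothing more than hostname-to-null-address mappings
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net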

Anyways, if you want to try it out here are the links to the two releases:

1) HOSTS - http://winhelp2002.mvps.org/hosts.htm
2) UNIFIED HOSTS - https://github.com/StevenBlack/hosts/blob/master/hosts

8
Utilities / man-db package setup?
« on: December 08, 2018, 06:19:11 am »
I have been using Alex Taylor's man.cmd util (which uses CAWF) up until now, but it (or CAWF) occasionally screws up the rendering of the man page, so I thought I would try something else.

To that end I installed man-db, which required the following packages:
- groff-base
- libiconv
- libiconv-util
- libpipeline

All of these installed successfully.

So I attempted to execute a very simple test command 'man ls', which produces the following:

Code: [Select]
[G:\]man ls
man: fork failed: No such file or directory

LIBC PANIC!!
LIBC fork: Child aborting fork()! rc=0xfffffffe
pid=0x00ec ppid=0x00ea tid=0x0001 slot=0x00cd pri=0x0200 mc=0x0000 ps=0x0010
G:\USR\BIN\MAN.EXE
Process dumping was disabled, use DUMPPROC / PROCDUMP to enable it.

So judging by the above, there appears to be something missing in my setup, maybe a variable pointing to the man pages?

Strange that the app itself has such an ungraceful exit (LIBC PANIC!), but it is what it is.

Here are the specific environment variable values I currently have in CONFIG.SYS, which are configured to support Alex's man.cmd:

Code: [Select]
REM ******************************
REM ***   MAN PAGES - START    ***
REM ******************************
SET CAWFLIB=g:\usr\local\cawf
SET TERMCAP=g:\util\misc\termcap.dat
rem set TERM, valid values are: os2, ansi
SET TERM=os2
rem Set the pager to OS/2 MORE command, otherwise uses LESS
rem SET PAGER=MORE
SET MANPATH=G:\usr\share\man
REM ******************************
REM ***   MAN PAGES - END      ***
REM ******************************

...and running the manpath util produces the following output:

Code: [Select]
[G:\]manpath
manpath: warning: $MANPATH set, ignoring /@unixroot/etc/man_db.conf
G:\usr\share\man

Can anyone point out where I'm going wrong with my configuration?
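
One thing I plan to try, based on the manpath warning above (just a guess on my part that the leftover MANPATH from the man.cmd setup is getting in the way, and it may well have nothing to do with the fork failure), is clearing the variable in the current session and re-running the test:

Code: [Select]
[G:\]set MANPATH=
[G:\]man ls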

9
Networking / Fastest tcp/ip throughput in OS/2?
« on: November 12, 2018, 05:12:52 am »
Yeah...as simple as it sounds, what are the fastest TCP/IP rates you are seeing in Firefox (or Seamonkey I suppose) in OS/2? (u/l and d/l)

The reason I ask: I recently upgraded to a gigabit fibre-to-the-home setup. Previously I was using fibre to the box and coax to the home, and that was getting me 125 Mbps, which was steadily supplied, with no complaints on my OS/2 machine (other than the initial trouble I had, but that was a cable issue).

Well, now with the full gigabit (confirmed with a speed test on the modem itself) I am just barely hitting 140 Mbps in OS/2 (testing on https://www.speedtest.net/) while the Win7 box is hitting 340 Mbps (yeah, it sucks, my crappy Netgear X6 R8000 router is a POS, advertised as gigabit, but hell no...nowhere near). This is on the hard-wired side, not even talking WiFi.

If you are hitting higher speeds, what are your setups? I'm using the MMRE 1.1.3 driver here, which drives the motherboard NIC (RealTek 8139C+/8169/8169S/8110S/8168/8111/8101E family).

Thanks!

10
Setup & Installation / SYSLOGD and OS/2 config options
« on: August 07, 2018, 10:10:39 pm »
Who is running the syslogd process and how do you have it configured?

syslogd has been in my configuration for a long time now. The syslogd.conf file is not terribly complex; I simply attempted to functionally separate the various processes out into separate (matching) log files.

Today I decided to deploy the smartmontools (smartmontools-6.6-os2-20171116.zip) functionality, primarily since I installed a new SSD in my machine and significantly re-configured my HDD usage. The HDDs are fairly old, so I figured I should start watching the SMART reporting a bit more closely.

I got smartd working fine; it reports back in debug mode, so that makes me believe my core setup is fine. I also enabled the syslog server on my NAS box so that I have a centralized LAN location for all LOG files, the goal being to make it easy to obtain information from any machine. So far all of that seems to work fine: I am getting the smartd messages logged to the local \tmp\log directory as well as to the NAS box itself.

What I am somewhat unclear about is the syslog.conf file contents. Specifically, there is an option to forward all LOGs to an IP address. It can be done either on the command line, via the '-t ip_add' switch, or through the use of the '%tcpipforward=192.168.1.5' directive in the syslog.conf file itself. OK, no problem there, I am using the syslog.conf option, but it seems like the '%tcpipforward=192.168.1.5' directive must be present first, followed by either a specific re-direction of a particular log (smartd sends to local0 here) or a re-direction of all logs.
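
In other words, if I'm reading the docs right, the two forwarding forms boil down to this (192.168.1.5 is my NAS/central log box):

Code: [Select]
# either start the daemon with the forwarding switch on the command line:
#    syslogd -t 192.168.1.5
# ...or put the equivalent directive into the conf file, which is what I'm doing:
%tcpipforward=192.168.1.5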

So here is what the full file looks like in my setup:

Code: [Select]
# *******************************************************************
# Configuration file for Syslogd/2
...
# Valid facility names:
#  auth, authpriv, cron, daemon, ftp, kern, lpr, mail, mark,
#  news, security, syslog, user, uucp, local0, local1, local2,
#  local3, local4, local5, local6, local7
...
# lines beginning with:
#  # - comments
#  % - alternate to starting with command line options
#
# actions beginning with:
#  -  send to console
#  =  file to log messages
#  @  IP address to forward messages
...
# *******************************************************************

%tcpiplisten
%tcpipforward=192.168.1.5

*.*             -CON
#*.*             @192.168.1.5
auth.*          =g:\tmp\log\identd.log
authpriv.*      =g:\tmp\log\authpriv.log
cron.*          =g:\tmp\log\cron.log
daemon.*        =g:\tmp\log\daemon.log
ftp.*           =g:\tmp\log\ftp.log
snmp.*          =g:\tmp\log\snmp.log
snmptrap.*      =g:\tmp\log\snmptrap.log
kern.*          =g:\tmp\log\kernel.log
mail.*          =g:\tmp\log\mail.log
lpr.*           =g:\tmp\log\lpr.log
mail.*          =g:\tmp\log\news.log
security.*      =g:\tmp\log\security.log
syslog.*        =g:\tmp\log\syslog.log
user.*          =g:\tmp\log\user.log
uucp.*          =g:\tmp\log\uucp.log
local0.* =g:\tmp\log\smartd.log
local0.* @192.168.1.5
local1,local2,local3,local4,local5,local6,local7.*       =g:\tmp\log\local.log
...

Am I understanding this correctly? The syslog.conf file must have '%tcpipforward=192.168.1.5' redirection listed first, followed by an optional specific re-direct of a particular log, such as 'local0.*   @192.168.1.5' in my case?

This appears to work very well, but since I haven't done any extensive syslogd configuration before I didn't want to just 'fire & forget', only to discover some time later that I messed up the setups...LOL!

BTW: Is there any info out there about this? The stuff that comes with OS/2 (Warp 4.5 CP2) has nothing...at least not in any of the INF or PDF files I've looked through, while on-line searches bring up a pile of Linux and AIX (in IBM's case) material.

11
Utilities / WCD broken when running against JFS???
« on: July 26, 2018, 06:59:06 pm »
Ugghhh????!!!  :o

Yeah...strange right?

So I am literally coming out of a move from my multi-HPFS386-partition setup to a much simpler and (hopefully) more efficient JFS partition setup.

Since all HPFS partitions are limited to 64Gig, it was just getting too painful watching the system boot, let alone the occasional CHKDSK run here and there. Along the way I also deployed an SSD (Samsung Evo850 - 250Gig) to replace my aging WD Raptor (BTW: if you have not thought about it...I am telling you, there is no computing like "SSD computing" LOL...it's that sweet!!!).

Anyways, so I managed to get the partitions migrated (xcopy, sysinstx, etc), all came up fine. I replaced multiple HPFS386 partitions with a single JFS boot partition. So I went from multiple 64Gig to a single 233Gig drive.

Now here is where it gets weird. I used to run WCD (RPM 6.0.2, 2018-05-10 drop) very successfully in the past. Since I got rid of a pile of HPFS386 partitions, I figured I would re-do the WCD scan and get it down to just my single JFS partition...lo and behold, the 'WCD -S g:\' command causes one of my CPU cores to immediately spike to 100% use and RAM to be consumed (pretty fast too), eventually producing a trap:

Code: [Select]
[G:\usr\bin]wcdos2 -S g:\
Wcd: Please wait. Scanning disk. Building treedata-file g:/home/wcd/treedata.wcd
 from g:/

Killed by SIGSEGV
pid=0x0068 ppid=0x0065 tid=0x0001 slot=0x00e2 pri=0x0200 mc=0x0001 ps=0x0010
G:\USR\BIN\WCDOS2.EXE
LIBC066 0:0004cecb
cs:eip=005b:1d6dcecb      ss:esp=0053:00139d94      ebp=0013a058
 ds=0053      es=0053      fs=150b      gs=0000     efl=00010212
eax=00000000 ebx=ab030000 ecx=ab030000 edx=2003013c edi=00000009 esi=00139ecd
Process dumping was disabled, use DUMPPROC / PROCDUMP to enable it.

Before I log a ticket (because this is really weird, I have never seen anything like this before), has anyone experienced this?

12
Applications / REXX script to set default folder 'Open As'?
« on: June 06, 2018, 05:56:08 pm »
I am migrating my OS/2-based media (photo & video albums for starters) to our NAS box (ZyXel 325v2). The NAS box configuration does not support EAs. The vendor has been non-responsive to my inquiries to make the change, and since this is not a widely used NAS product, community-supported changes/enhancements are scarce, i.e. I haven't found a way to implement the required smb.conf changes on the NAS Samba server.

OK, so this means that by using NetDrive and the Samba plug-in I have a SHADOW of the NAS Photo folder on my Desktop. However, since the shadow does not support EAs, I am unable to default the standard 'Open As' to XWP's "Xview" mode. Or at least, when I do set it, that change is not persistent and reverts back to the standard 'Icon view' following a re-boot.

Therefore, my thinking is to write a REXX script to be executed as part of the WPS start-up routine, to set the 'Open As' to 'Xview' mode. That way any future attempts to open the folder will produce the structure I am looking for.

Sooo...any ideas how to go about this? I have some REXX experience, mostly reading scripts and writing some small stuff, but this type of thing is well beyond my comfort zone. Just so you can see the direction I was heading, there's a rough skeleton below.
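
This is built around RexxUtil's SysSetObjectData; the '<NAS_PHOTOS>' object ID and the DEFAULTVIEW value are placeholders/guesses on my part, I have not confirmed the exact setup string XWP expects for Xview:

Code: [Select]
/* setview.cmd - rough sketch, to be run from the WPS startup folder.        */
/* Assumes the shadow/folder carries an object ID of <NAS_PHOTOS> and that   */
/* XWP's Xview can be selected via a DEFAULTVIEW setup string; both of       */
/* those are unverified guesses on my part.                                  */
call RxFuncAdd 'SysLoadFuncs', 'RexxUtil', 'SysLoadFuncs'
call SysLoadFuncs

rc = SysSetObjectData('<NAS_PHOTOS>', 'DEFAULTVIEW=XVIEW;')
if rc = 1 then
   say 'Folder default view set.'
else
   say 'SysSetObjectData failed, rc=' rc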

Thanks everyone!

13
Networking / NetDrive traps - since latest LIBCX drop...
« on: April 29, 2018, 05:57:24 pm »
Since the latest libcx/libc releases (0.6.2-1.oc00 / 0.6.6-36.oc00) I have started to see consistent NetDrive crashes:

Code: [Select]
------------------------------------------------------------
04-28-2018  18:44:32  SYS3175  PID 0021  TID 0003  Slot 0061
G:\UTIL\NDFS\NDCTL.EXE
c0000005
1e754d45
P1=00000001  P2=2105002c  P3=XXXXXXXX  P4=XXXXXXXX 
EAX=0289f92c  EBX=006f5380  ECX=00000001  EDX=00000004
ESI=2105002c  EDI=0289f92c 
DS=0053  DSACC=f0f3  DSLIM=ffffffff 
ES=0053  ESACC=f0f3  ESLIM=ffffffff 
FS=150b  FSACC=00f3  FSLIM=00000030
GS=0000  GSACC=****  GSLIM=********
CS:EIP=005b:1e754d45  CSACC=f0df  CSLIM=ffffffff
SS:ESP=0053:0289f8ac  SSACC=f0f3  SSLIM=ffffffff
EBP=0289f8d0  FLG=00010212

LIBC066.DLL 0001:00054d45
------------------------------------------------------------

Has anyone else noticed anything similar to this?

I wonder whether the newly introduced 'SET LIBCX_HIGHMEM' option is causing some problems?

Currently the whole OS install runs with the default, which I believe is to do high-memory allocation whenever possible. I have not tried 'SET LIBCX_HIGHMEM=2' instead, although that is my next test, to see if NetDrive is impacted in any way.
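
For anyone wanting to follow along, that test is simply this CONFIG.SYS addition followed by a re-boot; my understanding is that the value of 2 changes where LIBC memory gets allocated, but don't quote me on the exact semantics, I still need to check the libcx docs:

Code: [Select]
REM next test: see whether changing the libcx high-memory behaviour helps NetDrive
SET LIBCX_HIGHMEM=2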

Thanks!

14
Internet / Firefox - 45.9.0 for OS/2 GA1.1...anyone...anyone???
« on: April 20, 2018, 04:19:23 pm »
OK, what is happening here??? LOL  :o

FF 45.9.0 (rel-3 I believe), which brought us up to GA 1.1, was published 2 days ago, yet the forum is DEAD QUIET...you people are starting to freak me out...!!!

C'mon, get out there and install it. Is the result better than previous releases? https://github.com/bitwiseworks/mozilla-os2/releases

Truth be told, other than the email notification I got on the testers' list, I did not see any other announcements...so I figured I'd post here. I did the d/l through the above link as a ZIP file and un-packed it to a non-RPM location. I do not see the official RPM package out yet, but then again, I am only looking at the public netlabs-rel repo.

Anyways, it's out there...check it out. I figured I'd give it a few days' worth of runtime before sharing my assessment with the wider forum audience.

15
Setup & Installation / RAM disk and temp folders...what else?
« on: April 13, 2018, 05:58:25 pm »
So for a number of years I used to run a 4Gig RAM machine build. Then a couple of months ago I read a thread on the forum regarding the configuration of a RAM disk and I thought: "heck, why not, let's give it a try"...LOL. I had two spare DDR3 sticks laying around, installed them, for a total of 8Gig, configured the RAM disk to use just 1Gig and it's been pretty good so far.

I use the RAM disk (ramfs.ifs) for the following:

1) ZIP file processing
2) PMMail
3) GCC

...but I think there is more opportunity there.

So I'm curious, if you have a RAM disk deployed, what are you putting on there?

For example, I have the following 'temp' directories currently defined in my CONFIG.SYS:

Code: [Select]
SET TMPDIR=G:\var\tmp
SET TEMP=G:\TMP
SET TMP=G:\TMP

I think I could readily point TMP and TEMP to my ramdisk...not so sure about TMPDIR. That location in the Unix world is meant to be semi-persistent temporary storage, meaning it should survive a re-boot but may occasionally be purged. YUM/RPM installs use it, so I'd rather not mess around with it. However, TMPDIR is also used by GCC, so for now I have it set in my GCC environment config file as well.
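
To make that concrete, this is the kind of change I have in mind; the R: drive letter is just an example (use whatever ramfs.ifs mounts as on your box), and since the RAM disk starts out empty at every boot the directory has to be re-created somewhere, e.g. from STARTUP.CMD, though there may be a cleaner way to do that:

Code: [Select]
REM CONFIG.SYS - point the scratch areas at the RAM disk (drive letter is an example)
SET TEMP=R:\TMP
SET TMP=R:\TMP

REM STARTUP.CMD - re-create the directory, the RAM disk is empty after every boot
md R:\TMP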

TMP and TEMP, on the other hand, store a bunch of runtime stuff; for example, Firefox always creates etilqs_xxx files there. I noticed that if FF crashes they remain behind. It appears to be safe to delete them, at least I occasionally do, since they are supposed to be just temporary SQLite database files.

What else though? Any suggestions?
