Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Dariusz Piatkowski

Pages: 1 ... 3 4 [5] 6 7 ... 12
61
Setup & Installation / Pathmappers, HOME, ETC and others...my oh my!!!
« on: January 20, 2021, 05:03:07 pm »
Dave's response to a question Rick posted in this thread (https://www.os2world.com/forum/index.php/topic,2714.msg0.html#new) has reminded me of something.

A while back - as I was transitioning to RPM/YUM - I embarked on a house-cleaning project, meaning: try to reconcile the various application configs with the new RPM/YUM/*nix porting approaches.

That has meant figuring out what to put where, and what really are the preferred/correct uses and the matching locations for things such as:

1) \MPTN\ETC
2) \ETC
3) \HOME

The 'kLIBC - Pathrewriters' utility allows you to configure some of this stuff. Truth be told though, I am no longer certain whether what I have in there right now is part of the "default" install for stuff like RPM/YUM/ANPM, or whether it was perhaps tweaked by me at some point in time (maybe part of that transitioning effort I mentioned above?).

So here are the contents of my 'kLIBC - Pathrewriters' setup:
1) /var/log => %LOGFILES%
2) /tmp => %TMP%
3) /etc => %YUM_RPM_ETC%

where these environment variables are in turn mapped to:

1) %LOGFILES% => \tmp\log
2) %TMP% => \tmp
3) %YUM_RPM_ETC% => \etc

The other, somewhat standard (for the most part I think) environment variables are:
1) %HOME% => \home
2) %ETC% => \mptn\etc
3) %TMPDIR% => ramdisk:\GCC (really just for DEV purposes)
4) %TMP% => ramdisk:\tmp
5) %TEMP% => ramdisk:\tmp
6) %LOGFILES% => \tmp\log

Furthermore, even the RPM applications (the ported stuff) usually store stuff in either %ETC% (which is \mptn\etc here), %YUM_RPM_ETC% (which is \etc here), or %HOME% (which is \home here).
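
If anyone wants to compare notes, here is a minimal REXX sketch (nothing beyond the standard VALUE() built-in against the OS2ENVIRONMENT pool) that dumps what these variables resolve to on a given box:
Code: [Select]
/* dump the path-related environment variables discussed above */
vars = 'HOME ETC YUM_RPM_ETC TMP TEMP TMPDIR LOGFILES'
Do i = 1 To Words(vars)
   name = Word(vars, i)
   Say Left(name, 12) '=>' Value(name,, 'OS2ENVIRONMENT')
End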

Soooo....yeah, to this guy here it still feels like stuff is "all over the place"  ???

Therefore, I am curious what strategies (if any) have others developed to reconcile this stuff and create a better, perhaps more consistent configuration?

62
Applications / Lotus 123 - macro to re-order worksheet columns...?
« on: January 15, 2021, 09:55:53 pm »
Alright...this is not an "OS/2 and application" type of question, rather a Lotus 123-specific inquiry, and I'm posting it here in hopes that someone knows Lotus 123 well enough to answer it for me (on-line searches have not produced an answer so far). Most of what I found was on the topic of actual macro functions, but unless you know what to look for it's hard to figure out where to even start.

OK, so how about this "new tech meets old tech" scenario: I have some vehicle data logs from my 2020 Subaru STI in CSV format. I need to pull them into 123 (which works just fine). However, the ordering of the columns is not to my liking, which means I manually shift stuff around; we are talking buckets and buckets of data here, so yeah, it's a little sllooooowwww! lol

I am therefore wondering if it's possible in Lotus 123 to write a macro which will basically re-order the existing worksheet columns to suit my needs? The goal here is to avoid the manual shifting of the columns of data.

Thanks!

63
Applications / PMView - converting ICO to OS/2 BMP...background options?
« on: January 06, 2021, 03:48:13 am »
I need to take a couple of ICO (icon) files and convert them to an OS/2 BMP format. Seems easy enough, right?

Well, my problem is that doing the conversion does not preserve the ICON's background transparency...why not?

Basically, the converted images all have a WHITE background, which I need to avoid. I do know the actual colour of the background I need, but I would rather have a transparent background instead.

Any ideas? Is this caused by the OS/2 BMP format not supporting the alpha channel? (https://www.axialis.com/tutorials/tutorial-misc002.html)

So if transparent BMP is off the table, what's the easiest way for me to apply a particular colour as background to an ICO file so it's still preserved in the BMP image itself?

64
Programming / REXX: RxStartSession, what is wrong with this call?
« on: January 03, 2021, 11:37:25 pm »
In my NAS box's periodic 'connectivity check' script I'm trying to do away with running a separate REXX cmd script to set the DEFAULT folder VIEW. I decided to just bring the calls from that one-off script directly into my main REXX script.

Here is what I have:

1) the stand-alone cmd script has a call to oo.exe to default the folder VIEW
Code: [Select]
...
/* Setup the default VIEW for the NAS Samba shares - FOLDERS */
'@g:\util\misc\oo.exe /a v:\photo "DEFAULTVIEW=XVIEW;"'
'@g:\util\misc\oo.exe /a v:\video "DEFAULTVIEW=XVIEW;"'
'@g:\util\misc\oo.exe /a v:\music "DEFAULTVIEW=XVIEW;"'
'@g:\util\misc\oo.exe /a v:\public\documents "DEFAULTVIEW=XVIEW;"'
'@g:\util\misc\oo.exe /a v:\public\syslog "DEFAULTVIEW=XVIEW;"'
...

2) the main connectivity check script is using this call
Code: [Select]
...
/* OK, now run the WPS settings utility */
api_rc=0
api_rc=RxStartSession('g:\util\misc\oo.exe','/a v:\photo "DEFAULTVIEW=XVIEW;"','C','F',,'O','A')
...

Now this thing flat out fails all the time. I haven't been able to figure out why; all I can tell is that the started session's VIO window pops up with the following in it:

Code: [Select]
invalid or malformed path

Now, I cannot for the life of me tell what oo.exe is complaining about. However, I've tried converting this in multiple ways, even did a quick convert of REXX to EXE (REXX2EXE utility), and strangely enough, whenever I do actually get oo.exe to execute, the very same error message is displayed.

So I'm thinking there is some kind of a session issue here...???
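
In the meantime, one way to sidestep the session question entirely is to just issue the same shell command the stand-alone script uses, built into a variable first so it can be echoed and I can see exactly what gets passed. Just a sketch (same oo.exe path and arguments as above):
Code: [Select]
/* build the command string first so it can be echoed for debugging,
   then hand it to the shell the same way the stand-alone script does */
oo_cmd = '@g:\util\misc\oo.exe /a v:\photo "DEFAULTVIEW=XVIEW;"'
Say 'Running:' oo_cmd
Address CMD oo_cmd
Say 'oo.exe returned RC =' rc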

EDIT
====
I grabbed the oo.exe sources (which Rich Walsh included in his release) and found where oo is tossing that error. Will debug from there...

65
Programming / INCLUDE directory ordering: VACPP vs OS2_Toolkit vs GCC
« on: December 27, 2020, 11:42:33 pm »
I'm wondering if someone can provide references, pointers and/or an explanation as to the correct setup when it comes to compiling a standard OS/2 program using the VACPP 3.65 release (3.08 should be fairly similar I think?) and the OS2_Toolkit.

Beyond this core concept, how should the GCC stuff tie into all of the above?

I think of the GCC stuff as replacing the VACPP compiler functionality, and therefore as such would still want to use the OS2_Toolkit includes I believe?

In my current configuration I have all three installed and haven't really done much other than basic compiles with each of them. They all seem to work fine. But once I start building the more complex stuff, I run into difficulties figuring out which INCLUDE directories to use and, specifically, what order they should all come in.
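
Just to make the ordering question concrete: for the VACPP side the knob (as far as I know) is the INCLUDE environment variable that icc.exe searches, so what I'm really asking is how to order something like the following, and where the GCC/kLIBC header directories should (or should not) appear in it. Drive letters omitted; the paths come from the TIME.H locations in my installs shown below:
Code: [Select]
REM sketch only - the ordering here is exactly the part I am unsure about
SET INCLUDE=\code\tools\toolkit\h;\code\tools\toolkit\h\libc;\code\tools\Ibmcpp\include;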

Here is a very specific use-case as it applies to the DISKIO work I started:

1) TIME.H header file

a) VACPP install (\code\tools\Ibmcpp\include\TIME.H) shows the following

"11-08-00  1:16a         5,715      0 a---  time.h"

Code: [Select]
/********************************************************************/
/*  <time.h> Header File                                            */
/*  IBM C and C++ Compilers for OS/2, AIX and for Windows NT,       */
/*  Version 3.6                                                     */
/*  Licensed Material - Property of IBM                             */
/*  (C) Copyright IBM Corp. 1991, 1997. All rights reserved         */
/*                                                                  */
/********************************************************************/

b) OS2_Toolkit install (\code\tools\toolkit\h\libc\TIME.H) shows the following:

"1-27-16 11:42a         4,181      0 a---  time.h"

Code: [Select]
/********************************************************************/
/*  <time.h> header file                                            */
/*                                                                  */
/*  (C) Copyright IBM Corp. 1991, 1995.                             */
/*  - Licensed Material - Program-Property of IBM                   */
/*  - All rights reserved                                           */
/*                                                                  */
/********************************************************************/

The OS2_Toolkit stuff is the Netlabs release (Warp 4_52 Toolkit - os2tk45-4_5_2-6_oc00.zip - Sep_2017); I believe os2tk45-4_5_2-9_oc00.zip is the most recent one and the one I should install.

My approach to upgrading has been to overlay the original OS2_Toolkit install with the Netlabs updates (with the directory structure mapped onto the original OS2_Toolkit layout, as opposed to dumping the Netlabs stuff into /usr/include/os2tk45...).

I've attached both versions of TIME.H so you can see the differences.

66
Programming / CFLOW - latest version?
« on: December 25, 2020, 10:14:08 am »
I grabbed CFLOW from Hobbes (v1.1); meanwhile the official release is now at v1.6.

A quick check on the possibility of an RPM package turned up nothing, nor was I able to find an RPM package that perhaps contained CFLOW (but that could have been just my screwup).

Does anyone know if we have anything more recent?

CFLOW btw is a neat little utility which parses your C source code and creates a handy map: what functions/procs are called, the hierarchy, etc....all in all a pretty quick and handy way to visualize what the code is doing.

This came up as I tried to make more sense of the DISKIO structure.

Thanks!

67
Storage / DISKIO - updated version in need of TEST
« on: December 20, 2020, 01:46:36 am »
Hi Everyone,

I figured I'd take a stab at resolving the annoying issue with DISKIO not reporting correct throughput rates for modern hardware, i.e. stuff like fast SSD devices, etc.

Alright, so to that end, here is a DEBUG run for my SSD (Samsung 850 Evo - OK, I said "modern", yes, it's a few years old  ;), but it's what pushed DISKIO over the edge on my box here):

Code: [Select]
Dhrystone 2.1 C benchmark routines (C) 1988 Reinhold P. Weicker
Dhrystone benchmark for this CPU: 2283670 runs/sec

Hard disk 2: 255 sides, 30401 cylinders, 63 sectors per track = 238472 MB
Drive cache/bus transfer rate:
 DEBUG => nData is: 1189762560
 DEBUG => nTime is: 10260
115961 k/sec
Data transfer rate on cylinder 0   :
 DEBUG => nData is: 3087028224
 DEBUG => nTime is: 10264
300762 k/sec
Data transfer rate on cylinder 30399:
 DEBUG => nData is: 2904620544
 DEBUG => nTime is: 10244
283543 k/sec
CPU usage by full speed disk transfers: 23%
Average latency time:
 DEBUG => nTime is: 10036.599360
 DEBUG => nCnt is: 85352.196980
0.1 ms
Average data access time: Disk read error.
Multithreaded disk I/O (4 threads): 121669 k/sec, 19% CPU usage

The stuff marked as "DEBUG => " is the raw data; if you're going to run this on your machine, please report back the full output.

And here is the RETAIL output:

Code: [Select]
Dhrystone 2.1 C benchmark routines (C) 1988 Reinhold P. Weicker
Dhrystone benchmark for this CPU: 2088687 runs/sec

Hard disk 2: 255 sides, 30401 cylinders, 63 sectors per track = 238472 MB
Drive cache/bus transfer rate: 118449 k/sec
Data transfer rate on cylinder 0   : 332803 k/sec
Data transfer rate on cylinder 30399: 302953 k/sec
CPU usage by full speed disk transfers: 23%
Average latency time: 0.1 ms
Average data access time: Disk read error.
Multithreaded disk I/O (4 threads): 122717 k/sec, 23% CPU usage

As you can tell, the numbers are much better (no more overflow) and for the most part they jibe with what I see in SysBench.
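
For anyone curious what "over the edge" actually means here: the DEBUG counters above tell the story. The reported rate is simply nData divided by nTime (e.g. 3087028224 / 10264 = 300762 k/sec), and a value like 3,087,028,224 no longer fits in a signed 32-bit variable, so the arithmetic has to be carried out in 64 bits. A trivial C sketch of the idea; the names and units are my own for illustration, not the actual DISKIO sources:
Code: [Select]
#include <stdio.h>

int main(void)
{
    /* Not the actual DISKIO code - just the shape of the fix: keep the data
       counter in 64 bits so ~3 GB transferred does not wrap a signed long,
       then divide by the elapsed time value from the DEBUG dump.           */
    unsigned long long nData = 3087028224ULL;  /* cylinder 0 DEBUG value */
    unsigned long      nTime = 10264;          /* cylinder 0 DEBUG value */

    printf("%lu k/sec\n", (unsigned long)(nData / nTime));  /* prints 300762 */
    return 0;
}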

There are a few outstanding issues that I'm aware of:

1) this build is optimized for Pentium2; that's old stuff, so I need to either remove that optimization or target Pentium4 instead, I just haven't looked up all the ICC flags & options yet. This may be why the Dhrystone value reported by this build is about 50% of what the official build shows; not sure yet

2) the "Average data access time: Disk read error" is still here, this only fails on my SSD drive, I need to dig into the DosDevIOCtl API

3) "Drive cache/bus transfer rate:" shows a smaller rate than SysBench, not sure why that is, need to review further

4) RAMDISK numbers are still bad, or just flat out error out

ZIP attachment has two builds:

1) DEBUG - has the extra raw data dumps
2) RETAIL - if you just want an updated (hopefully correct) result

Feedback is welcome and appreciated!

-Dariusz


68
Storage / FAT32 - Netlabs or AN?
« on: November 27, 2020, 08:35:47 pm »
The subject says it all...what have you picked/installed, and why?

I ask b/c I have had the Netlabs 0.9.13 release on my machine for quite some time now. It has worked quite well for me, since the extent of my using FAT-anything was simply to pull the pics off of my digital camera. That's a FAT32 card: easy to mount, easy to copy off of, done!

However, as storage capacities have grown and it's now quite easy to end up with a multi-gig FAT device, I figured I should perhaps update my install.

Now, here is the decision point: run with the updated Netlabs release, or move over to the AN one? If I understood the AN release notes correctly, their goal is to have the basic FAT32 capability and no other fancy stuff. Seems simple enough....but that also appears to narrow things down to just pure FAT32, as opposed to having the capability to mount exFAT, etc.

So I'm curious how others have gone about this? What have you deployed, why and how is that working for you?

Thanks!

69
Programming / MAKEFILES suck!!!
« on: November 25, 2020, 04:51:23 am »
...alright, maybe not that badly; I have used them for much smaller projects in the past and they were great. But I'm attempting to compile the PUMonitor utility here and the provided makefile doesn't work with my VAC 3.6.5 install.

So help me out please, my eyeballs are red from reading about NMAKE32 (as opposed to NMAKE), but clearly I'm still confused...LOL!  :-\

Here is what I have:

1) structure of the project source files
Code: [Select]
Directory of G:\code\source\os2\pumonitor\src

11-24-20  9:25a         <DIR>      0 ----  .
 8-25-18  5:42p         <DIR>    369 ----  ..
 2-19-02  1:13p           302      0 a---  build.opt
 8-25-18  5:42p         <DIR>      0 ----  include
 2-19-02  1:13p           105      0 a---  library.rsp
11-23-20 11:50p         2,151     35 a---  makefile
 8-25-18  5:42p         <DIR>      0 ----  obj
11-24-20 10:24p         2,330     35 a---  pumonitor.mak
 8-25-18  5:42p         <DIR>      0 ----  source

Basically, the *.c and *.cpp files are all in source, the *.h and *.hpp files are in include, and the remaining *.ico, *.rc and *.lib files are also in source.

2) 'makefile' is the original makefile distributed with the project source, but despite the fact that it reads like it's meant to work with VAC, it doesn't work with my 3.6.5 setup here

3) 'pumonitor.mak' is my attempt at converting the build process, as I understand it to be structured in the original 'makefile', into a VAC 3.6.5 version

Sooo...having said that, when attempting to process with NMAKE32 I get the following error:

Code: [Select]
MAK3035: Do not know how to make target 'cell.cpp'.

Now this part is really confusing for me, because if I force the full path for the source files in the dependency line, all works fine. Here is the section we are talking about:

Code: [Select]
cell.obj    : cell.cpp .\include\cell.h .\include\cvars.h .\include\util.h

...substituting with

Code: [Select]
cell.obj    : .\source\cell.cpp .\include\cell.h .\include\cvars.h .\include\util.h

...now allows me to get past that error, but it still fails with the following:

Code: [Select]
MAK3035: Do not know how to make target 'cell.obj'.

...which of course can be addressed if I provide an explicit command, such as:
Code: [Select]
$(CC) /c /Fo$@  $<

...it then compiles, although it still produces a pile of errors:

Code: [Select]
...
icc.exe /c /Focell.obj .\source\cell.cpp
IBM* C and C++ Compilers for OS/2*, AIX* and for Windows NT**, Version 3.6
(C) Copyright IBM Corp. 1991, 1997   All Rights Reserved.
* registered trademarks of IBM Corp., ** registered trademark of Microsoft Corp.

.\source\cell.cpp(19:10) : error EDC3008: Source file <cell.h> cannot be opened.
.\source\cell.cpp(20:10) : error EDC3008: Source file <cvars.h> cannot be opened.
...

All of this begs the question: why isn't my Inference Rule working? It clearly spells out where to find all the *.h and *.hpp files....so what gives?

Here is the Inference Rule section in my makefile:

Code: [Select]
# The Make utility looks in the directory specified by frompath for files with the fromext extension.
# It executes the commands to build files with the toext extension in the directory specified by topath.
# {frompath}.fromext{topath}.toext
# commands
# :

{.\source}.c{.\obj}.obj:
    $(CC) /c /Fo$@  $<
   
{.\source}.cpp{.\obj}.obj:
    $(CC) /c /Fo$@  $<
   
{.\source}.lib{.\obj}.obj:
    $(CC) /c /Fo$@  $<
   
{.\source}.rc{.\obj}.res:
    $(RC) /r $< $@
   
{.\include}.h{.\obj}.obj:
    $(CC) /c /Fo$@  $<

{.\include}.hpp{.\obj}.obj:
    $(CC) /c /Fo$@  $<

This is far beyond anything I've tried before...so obviously I'm lost!!! lol
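
Thinking about this some more, my current suspicion is twofold: the dependency lines have to use the same directory prefixes as the inference rules (so the target is spelled .\obj\cell.obj and the dependent .\source\cell.cpp), and the compile command needs the include directory passed explicitly, since the rule by itself does nothing about those EDC3008 errors. Roughly this shape (untested, and assuming NMAKE32 follows the MS NMAKE path-matching behaviour):
Code: [Select]
# sketch only, untested - assumes NMAKE32 matches rule paths the way MS NMAKE does
CC     = icc.exe
CFLAGS = /c /I.\include

{.\source}.cpp{.\obj}.obj:
    $(CC) $(CFLAGS) /Fo$@ $<

# the dependency line then has to spell the target and the dependent with the
# same directory prefixes the rule uses, otherwise the rule never fires:
.\obj\cell.obj : .\source\cell.cpp .\include\cell.h .\include\cvars.h .\include\util.h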

70
Hardware / Multi core CPUs - how many are running > 4?
« on: November 20, 2020, 08:28:38 pm »
Alright...how many of you are running CPUs with more than 4 or 5 cores?

I ask b/c with the newer motherboard I can finally boot up with the full 6 cores enabled on my AMD Phenom II X6 1100T CPU. This is with the AOS ACPI 3.23.15 drivers.

The old motherboard would TRAP each time I attempted to do so.

However, my CPU monitors show on-going CPU spikes and I cannot for the life of me figure out where this is coming from. Basically, the ACPI Power Manager is disabled, which means the cores are running "all-out", no throttling back, etc....which is fine, as this is my desktop box and it runs 24x7.

Therefore, if the ACPI PM is not active, then the only reason for the CPUs to be spiking is either the ACPI driver itself or something in the kernel???

So I'm curious if anyone else is seeing this?


71
Programming / REXX - RXLVM library - issue?
« on: November 16, 2020, 10:29:32 pm »
Hi Everyone,

I'm encountering an issue that suggests re-loading the RXLVM library is not successful, while several other REXX libraries going through exactly the same process are NOT showing this issue.

Please take a look at the samples of my code below and tell me where things may be going wrong.

1) load the REXX library
Code: [Select]
...
/* check if the RxLvm - RXLVM.DLL - is already loaded */
   rxlvm_api = RxFuncQuery('RxLvmLoadFuncs')
   msg_text = "RXLVM: API LOAD RC="||rxlvm_api
   CALL debug debug_flag calling_code msg_level msg_text
   If rxlvm_api <> 0 Then do
      Call RxFuncAdd 'RxLvmLoadFuncs', 'RXLVM', 'RxLvmLoadFuncs'
      Call RxLvmLoadFuncs
      end
   msg_text = "RXLVM: Using version "||RxLvmVersion()
   CALL debug debug_flag calling_code msg_level msg_text
...

2) unload the REXX library
Code: [Select]
...
   /* check the RXLVM library */
   If rxlvm_api <> 0 Then
      Call RxLvmDropFuncs
...

I use the rxlvm_api variable to tell me whether I need to un-load the library when my code is done. The purpose is to avoid doing so if the library was already loaded prior to my code starting; presumably something else out there loaded it first and perhaps requires it to remain loaded.

Now, the 1st time through running my script I get rxlvm_api=1, which means I did not have the RXLVM library loaded already and therefore need to load it. OK, good stuff: it works and the version info call (RxLvmVersion) works fine. The program then ends and RXLVM is completely un-loaded.

However, re-running this script in the very same CLI session a 2nd time produces the following error:

Code: [Select]
[G:\code\source\rexx\os2utils]nas_check v:
DEBUG : MAIN => Initializing...
DEBUG :    ==> library_load => Initializing...
DEBUG :    ==> library_load => RXU: Using version v1.a
DEBUG :    ==> library_load => REXXUTIL: Using version 2.00
DEBUG :    ==> library_load => RXUTILEX: Using version 0.1.6
DEBUG :    ==> library_load => RXEXTRAS: API LOAD RC=0
DEBUG :    ==> library_load => RXEXTRAS: Using version 1.G
DEBUG :    ==> library_load => RXLVM: API LOAD RC=0
   242 +++     msg_text = 'RXLVM: Using version ' || RxLvmVersion();
REX0043: Error 43 running G:\code\source\rexx\os2utils\nas_check.cmd, line
242: Routine not found
    67 +++   Call library_load;

What's weird about this is that rxlvm_api is clearly set to 0, which implies that the library is already loaded and that perhaps the un-load call actually failed?

Either way, if it was already loaded, why does the call to RxLvmVersion fail?

BTW: the same process is repeated for several other libraries, they work fine.
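
One thing I'm tempted to try next is to stop relying on RxFuncQuery altogether and just (re)register defensively every time the script starts; from what I understand the add is harmless when the name is already registered, and calling RxLvmLoadFuncs again should re-register the rest of the RXLVM functions. Just a sketch, untested:
Code: [Select]
/* defensive (re)load - sketch only, untested */
Call RxFuncAdd 'RxLvmLoadFuncs', 'RXLVM', 'RxLvmLoadFuncs'
Call RxLvmLoadFuncs
Say 'RXLVM: Using version' RxLvmVersion()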

72
Applications / PMView - colour vibrancy gone after save?
« on: November 14, 2020, 06:53:09 pm »
This one is hard to illustrate, since saving the as-is image will clearly strip the "vibrancy" off of it...LOL, but for the sake of comparison I've attached JPEG and PNG images of the same content (CPU monitor).

But here is what I'm seeing: capturing any image on my machine with PMView 3.82 (which I believe is the latest version) shows colour intensity perfectly matching the original image. However, saving it and re-opening it now shows a much more toned-down image. The vibrancy is gone.

This only happens with the JPEG format (even when using 100% quality); saving in GIF or PNG does not show this behaviour.

Has anyone seen this?

73
Programming / REXX - how to kill a process?
« on: November 04, 2020, 05:15:20 pm »
So I need to kill a process given a particular flag being set.

I figured this would be easy in REXX, and most likely would rely on a suitable API call or a built-in function. However, I have found no such thing (REXX newbie here, so take it easy... ;))

But I did start looking at the RXU library, and indeed, I can see that I could use:

dosrc = RxQProcStatus(stemname [,flags])

to get the equivalent of PSTAT result.

From there I could fish out the matching record for a particular module (an EXE in my case) and once I have the PID I could call:

killrc = RxKillProcess(pid [, action])

But all this still seems a little convoluted; I mean, is there no simpler way to kill a process than having to point to it by a PID? Yes, I understand the reason for this, but if I have a single instantiation of a particular EXE, I know that killing it by module name is just fine.
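
Just to make the RXU route concrete, here is the skeleton I had in mind: query the process list into a stem, scan for the module name, then kill by the PID that comes back. A sketch only, and the stem member names are my guess; the RXU docs spell out the real layout:
Code: [Select]
/* sketch: kill a single instance of an EXE by name via RXU
   NOTE: the stem layout (proc.0, proc.i.name, proc.i.pid) is a guess -
   check the RXU documentation for the actual member names            */
target = 'SOMEPROG.EXE'      /* hypothetical module name */
dosrc = RxQProcStatus('proc.')
Do i = 1 To proc.0
   If Translate(Filespec('NAME', proc.i.name)) = target Then Do
      killrc = RxKillProcess(proc.i.pid)
      Say 'Killed' target 'pid' proc.i.pid 'rc=' killrc
      Leave
   End
End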

Any suggestions where to look next?

Oh, sure, I could call one of the utilities that do this, but I was hoping to push my REXX "boundaries" a tad and see how this could be implemented. Any suggestions?

Thanks!

74
Mail-News / Thunderbird and CHAT feature...
« on: October 30, 2020, 04:07:07 am »
Alright, so I figured since the TB email client is here to stay I might as well give the CHAT feature a whirl.

Setup went OK; the IDs and accounts are recognized. All appears to work (test msgs are fine, the "Typing..." indicators are fine, etc.). However, none of the message history appears to be retained.

Basically, once I shut TB down and re-start it, any previous messages are lost. The history still shows the day of the week that prior messages occurred on, and I can click on these, at which point I would expect the actual msg window to show the chat history. That does not happen.

Has anyone seen this?

75
Mail-News / Thunderbird and Contact import...what works?
« on: October 25, 2020, 05:05:15 pm »
Just recently I deployed Dave's release of the Thunderbird email client. So far so good: it connected to my Gmail IMAP server fine and pulled a whole whack of emails down (I didn't have to do that, but heck, storage is cheap these days, so at least the local copy allows me to do some extensive searching if need be, and it's synchronized automatically anyways).

OK, so that's all good stuff.

But to ease the use of TB I figured I should import my Google Contacts next. I did a bit of research (https://support.mozilla.org/en-US/kb/thunderbird-and-gmail) and discovered a fairly well-recommended add-on, that being gContactSync (https://addons.thunderbird.net/en-US/thunderbird/addon/gcontactsync/).

Since the latest official release is way past our 45.8 TB release, I went fishing to find a matching older release, which happens to be 2.1.13:

Code: [Select]
Version 2.1.13 Released May 31, 2019 330.7 KiB Works with SeaMonkey 2.14 - 2.56, Thunderbird 17.0 - 60.*

Installed it; that went fine. My problem is that attempting to import anything from Google just flat out appears to go nowhere, as in: nothing seems to happen. The strange thing is that even turning on the debug log (for the add-on) does not produce anything.

Therefore, I'm curious if anyone has encountered this before?

Most likely I'll keep on down-levelling until I find a release that works. I want this to be a one-time sync, and well, if that fails I know there are more tedious approaches, such as a manual export/import through CSV files, etc.

Thanks everyone,
-Dariusz
