Author Topic: Classic, Regina, Object and Open Object Rexx  (Read 11529 times)

Jan-Erik Lärka

  • Global Moderator
  • Sr. Member
  • Posts: 332
  • Karma: +7/-0
Re: Classic, Regina, Object and Open Object Rexx
« Reply #15 on: February 25, 2025, 03:31:52 pm »
The benefit of tokenized code is that the Rexx engine can run faster, since the code has already been prepared and structured in a form closer to what the machine actually handles.

In my mental model, to make it understandable, I look at it something like this:
Code: [Select]
[b]"Not tokenized"[/b]
The source text has to be read and interpreted over and over again as the program runs loops and processes instructions.
Comparisons are performed letter by letter, instruction after instruction, to work out what each statement means, which leads to
[b]"Tokenization"[/b]
The source text read into memory is prepared before the engine runs it.
Each time a variable is mentioned it is replaced by a placeholder that points to memory, specific to the computer and architecture.
All references to that placeholder/memory can then be read/written/used directly instead of searching and comparing through text.
[b]"Tokenized"[/b]
The tokenized, non-text code in memory can execute the instructions the way the computer works.

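To make that mental model a bit more concrete, here is a toy sketch in plain Rexx. It has nothing to do with how the real engine is implemented internally; the pretend ADD/SUB "program", the stems used as placeholders and the timing calls are all just mine, for illustration. The first loop re-parses the text on every pass, the second splits it into placeholders once and then only uses those.
Code: [Select]
/* Toy sketch of the mental model above: re-read the text every time,      */
/* versus prepare ("tokenize") it once and then just run the placeholders. */
program = 'ADD 5;ADD 2;SUB 3'            /* pretend source code            */
iterations = 10000

/* "Not tokenized": scan and compare the text on every single pass         */
call time 'R'
total = 0
do iterations
    rest = program
    do while rest \= ''
        parse var rest instruction ';' rest
        parse var instruction op num
        if op = 'ADD' then total = total + num
        else total = total - num
    end
end
say 'Re-reading the text:' time('E') 'seconds, total =' total

/* "Tokenization": split the text into placeholders (stems) once           */
call time 'R'
count = 0
rest = program
do while rest \= ''
    parse var rest instruction ';' rest
    count = count + 1
    parse var instruction op.count num.count
end

/* "Tokenized": the loops only touch the prepared placeholders             */
total = 0
do iterations
    do i = 1 to count
        if op.i = 'ADD' then total = total + num.i
        else total = total - num.i
    end
end
say 'Running the tokens :' time('E') 'seconds, total =' total

The second half avoids repeating the letter-by-letter PARSE work on every pass, which is the same kind of saving the engine gets from tokenizing, only on a much larger scale.
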
On OS/2 it saves the tokenized code to the EA (Extended Attributes) of the .cmd file, so if no one has altered the code between runs, the interpreter can skip the "compilation" part and just run it.

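If you are curious whether a .cmd file already carries such a cached image, a sketch like the one below can peek at it with RexxUtil's SysGetEA. Note that the EA name 'REXX.TOKENSIMAGE' is from my memory of what the classic interpreter uses; check it with an EA browser on your own system before relying on it, and the file name is only an example.
Code: [Select]
/* Sketch: look for the interpreter's cached tokenized image in the EAs.   */
/* Assumes OS/2 with RexxUtil; the EA name below should be verified first. */
call RxFuncAdd 'SysLoadFuncs', 'RexxUtil', 'SysLoadFuncs'
call SysLoadFuncs

parse arg file .
if file = '' then file = 'myscript.cmd'  /* hypothetical example file name */

image = ''
rc = SysGetEA(file, 'REXX.TOKENSIMAGE', 'image')
if rc = 0 & length(image) > 0 then
    say file 'carries a cached tokenized image of' length(image) 'bytes.'
else
    say file 'has no cached image, so it will be tokenized on the next run.'
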
cRexx checks less of the code in advance for obvious mistakes and problems.