Author Topic: Finding an ideal filesystem for Network storage  (Read 10060 times)

Rich442

  • Guest
Finding an ideal filesystem for Network storage
« on: December 13, 2017, 12:40:00 am »
In terms of managing an OS/2 network of storage devices, what would be the most effective filesystem for speed and efficiency? I really like the way ZFS works on UNIX and UNIX-like systems (combining the volume manager with the actual filesystem). Most of my storage devices are on the small side (i.e. 6 hard drives of about 120-1000GB per disk drive) and I have eComStation 2.2 on my main device. Since HPFS and JFS both use B+ trees, what would anyone recommend for managing a network of six storage devices of 120GB, 60GB, 300GB and 1TB sizes?

My goal is efficient data retrieval  where data is preserved (in case one of the disks fails). How does HPFS handle the problem of disk failure and data corruption? My SATA disks are relatively old. I'm really interested in the benefits of HPFS over some of the NT filesystems made by MS. Thanks for any responses.  :)

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4787
  • Karma: +99/-1
    • View Profile
Re: Finding an ideal filesystem for Network storage
« Reply #1 on: December 13, 2017, 02:38:44 am »
For OS/2, it is basically JFS for modern large hard drives. HPFS is limited to 64GB partitions and takes forever to do a chkdsk on a large partition due to no journaling.

Rich442

  • Guest
Re: Finding an ideal filesystem for Network storage
« Reply #2 on: December 13, 2017, 11:37:39 pm »
Thank you for responding. I do use JFS on big partitions, but my "network" consists of a lot of different oddly-sized stuff from various sources. A few are 20 years old and the rest are at least ten years old. One of my most reliable drives has only 60GB on it, so I really wondered. It was sold by Apple and I usually use HFS+ on my Mac stuff.

My real struggle is whether to use JFS rather than FAT32 (Windows) or HFS+ for this conglomeration of old and older disks. I referenced ZFS (from Oracle) because I thought it would be an example of how best to do what I want to do: speed, managing devices, and preventing data loss/corruption. I was hoping that JFS had something comparable that would make my network faster and safer (ZFS being an example). I like a B-tree structure but also like the way FAT32 works.

Would it be better to simply run regular backups with something like a cron script?


 

RickCHodgin

  • Guest
Re: Finding an ideal filesystem for Network storage
« Reply #3 on: December 14, 2017, 12:08:40 am »
Quote
In terms of managing an OS/2 network of storage devices, what would be the most effective filesystem for speed and efficiency? I really like the way ZFS works on UNIX and UNIX-like systems (combining the volume manager with the actual filesystem). Most of my storage devices are on the small side (i.e. 6 hard drives of about 120-1000GB per disk drive) and I have eComStation 2.2 on my main device. Since HPFS and JFS both use B+ trees, what would anyone recommend for managing a network of six storage devices of 120GB, 60GB, 300GB and 1TB sizes?

There are two aspects. First, there should be an aggregating server which coordinates which volumes are online at any given time. That is the manager; it is queried by any server wishing to populate data onto the network drive and by any client machine retrieving information about the online devices. And a single OS/2 instance could be both a server and a client.

That aggregating server maintains a virtual map of current online storage, which is then transmitted to each client machine requesting network drive access, with the aggregating server sending out push notifications whenever resources change.

In this way, each server registers its own public data with the aggregating server, which collects everything into a single central directory. Each resource in that directory is marked with the physical server it lives on, and each client then communicates directly with that server for operations on those files.

This is for data I/O. The file system in use could then be any kind OS/2 supports, possibly with emulation to allow OS/2 attributes on a non-OS/2 file system. But as the file system of choice, I would suggest JFS moving forward.

I think any modern networking file system has to be both distributed and aggregated as indicated above. The traffic to the aggregating server would be minimal, and it would constantly communicate with each online resource and signal to all connected client machines when things are reliable, unreliable, offline, online, etc. It could also perform mirroring and management from a single source, directing files to be moved or copied from one machine to another, all from a single console.

In this way, a single "network volume" is made visible with essentially unlimited storage, with the physical requests to each of the network resources going out to the specific machines to be filled.
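
To sketch that register/query cycle concretely, here is a minimal example in Python. It is purely hypothetical (no such OS/2 component exists, as noted in the update below); the class and method names are only illustrative.

Code:
# Hypothetical sketch of the aggregating server's volume registry.
# Not an existing OS/2 component -- just an illustration of the idea above.

class AggregatingServer:
    def __init__(self):
        # volume name -> {"server": address, "status": "online"/"offline"}
        self.volumes = {}

    def register_volume(self, server, volume, status="online"):
        """A storage server announces one of its public volumes."""
        self.volumes[volume] = {"server": server, "status": status}

    def set_status(self, volume, status):
        """Mark a volume online/offline; a real server would push this change to clients."""
        if volume in self.volumes:
            self.volumes[volume]["status"] = status

    def directory(self):
        """The single central directory handed to clients on request."""
        return {name: info for name, info in self.volumes.items()
                if info["status"] == "online"}

agg = AggregatingServer()
agg.register_volume("server-a", "projects")
agg.register_volume("server-b", "archive")
agg.set_status("archive", "offline")
print(agg.directory())   # a client sees only online volumes and which machine holds them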

Quote
My goal is efficient data retrieval where data is preserved (in case one of the disks fails). How does HPFS handle the problem of disk failure and data corruption? My SATA disks are relatively old. I'm really interested in the benefits of HPFS over some of the NT filesystems made by MS. Thanks for any responses.  :)

HPFS was originally created by Microsoft.  As I understand it, NTFS was basically a full-on fork of HPFS at a given time, examined, revised, and re-written for Microsoft's purposes thereafter.

UPDATE:  I have not seen this kind of network file system in operation before. I was proposing what I think is the best way to handle a new design, built from the ground up.
« Last Edit: December 14, 2017, 03:31:19 pm by Rick C. Hodgin »

Dave Yeo

  • Hero Member
  • *****
  • Posts: 4787
  • Karma: +99/-1
    • View Profile
Re: Finding an ideal filesystem for Network storage
« Reply #4 on: December 14, 2017, 01:15:00 am »
Quote
Thank you for responding. I do use JFS on big partitions, but my "network" consists of a lot of different oddly-sized stuff from various sources. A few are 20 years old and the rest are at least ten years old. One of my most reliable drives has only 60GB on it, so I really wondered. It was sold by Apple and I usually use HFS+ on my Mac stuff.

Quote
My real struggle is whether to use JFS rather than FAT32 (Windows) or HFS+ for this conglomeration of old and older disks. I referenced ZFS (from Oracle) because I thought it would be an example of how best to do what I want to do: speed, managing devices, and preventing data loss/corruption. I was hoping that JFS had something comparable that would make my network faster and safer (ZFS being an example). I like a B-tree structure but also like the way FAT32 works.

You really don't want to use FAT32 if you can avoid it, though for sharing files between OSes it is needed. I don't know much about HFS+. JFS is considered an excellent all-around file system, at least according to Wikipedia, and some years back I benchmarked JFS against HPFS; in most cases JFS was faster. HPFS is limited to 2MB of cache (HPFS386 can use much more) and, as I said, has long chkdsk times.
Another consideration is that for OS/2 you really need EA (xattrs) support. That might work on HFS+; FAT32 has a kludge on OS/2 to support EAs, but some report it leads to instability.

Quote
Would it be better to simply run regular backups with something like a cron script?

Likely. Lots of people are using rsync for backups with good results, and of course there is zip.
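
Something along these lines in a crontab would do it (the paths and host name are only placeholders):

Code:
# Hypothetical nightly backup, run at 02:30 every night.
# -a preserves timestamps/permissions, -v is verbose, --delete mirrors removals.
30 2 * * *  rsync -av --delete /data/shared/  backupbox:/backups/shared/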

ivan

  • Hero Member
  • *****
  • Posts: 1557
  • Karma: +17/-0
    • View Profile
Re: Finding an ideal filesystem for Network storage
« Reply #5 on: December 14, 2017, 11:50:15 am »
Hi Richard,

I'm not quite sure what you are trying to do, so sorry for the questions.
1) Are all the disks you mention in one box, or distributed around several computers?
2) What type of network do you have, wired or wireless, and what speed? Also, is it via a router or switch?
3) If the disks are in different machines, are they all on at the same time?
4) Are you trying to maintain mirror images, or just data backup/storage?

Now a few observations.
It does not matter what file system the disks are formatted with, provided the system they are in can read it - there may be a speed problem, but that depends on the system and the age of the disks. Transfer speed, read and write, is very often dependent on the network speed and setup.

An example of a multi disk multi device network.
Here at home I have:
One computer (OS/2 CP2) on 24/7 with four 500GB disks, the boot partition HPFS and all the others JFS; it is the main e-mail and tracking unit.
Three two-disk NAS boxes on 24/7, each set up as a RAID 1 array (disk pairs of 1TB, 1.5TB and 2TB), with Ext3 and Ext4 file systems.
Two computers that get switched on during the day: one running ArcaOS (my test bed for that OS) with a 2TB disk and HPFS (boot) and JFS partitions, the other my work computer with two 2TB disks (OS/2 CP2) with HPFS (boot) and JFS.
There are other computers on the network which get switched on and used as necessary, mainly Linux boxes, but sometimes Windows units when friends bring one in for 'fixing'.
The network is mainly wired (a gigabit managed switch) with a couple of gigabit wireless APs.

I have access to all the disks and all the data on the network from any computer using SMB/CIFS and/or FTP. When my work computer and the AOS box come online, they get updated from the e-mail and tracking unit (I am running those two in a mirror setup for testing). Work data is updated to one NAS box every two hours using rsync (a holdover from my working days). The second NAS box gets updated from the first every six hours, and the third NAS is a dump unit for storing disk images, a master boot image and images of anything in for repair.
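
The staged schedule boils down to a couple of cron entries like these (the host names and paths are placeholders, not my actual setup):

Code:
# Hypothetical crontab sketch of the staged replication described above.
0 */2 * * *  rsync -a workbox:/work/      /volume1/work/        # first NAS: every two hours
0 */6 * * *  rsync -a nas1:/volume1/work/ /volume1/work-copy/   # second NAS: every six hours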

This is a cut-down version of what we had at my old company, except there we also had various industrial systems with very strange disk formats, all readable as and when necessary.

     

Levasai

  • Guest
Re: Finding an ideal filesystem for Network storage
« Reply #6 on: December 24, 2017, 08:29:54 pm »
Quote
In terms of managing an OS/2 network of storage devices, what would be the most effective filesystem for speed and efficiency?
* If I read this question correctly, you have OS/2 systems as the user interface (i.e. clients, personal computers), all networked together, and you are looking for a solution that provides common network storage for these systems.
The easy way is an OS/2 file and printer sharing network, just like in Windows environments: you can use the built-in OS/2 tools to access the shares. The hard way is FTP (not very user friendly to integrate into the WPS); the most efficient way is NFS, but then you have to buy something like the NetDrive software. That is all on the client side. You talk SMB, FTP, NFS (or some other protocol) over the network to a networked storage system (NAS).
(I know: nothing new here)
Storage systems today all talk these protocols by default (or at least they can be selected). What kind of a) file system, b) OS, c) hard disk controller the NAS uses is unimportant for speed and efficiency from the client's point of view.
What the clients experience as speed is determined by the network link speed, the throughput of the hard disks in the NAS and possible bottlenecks in the controlling part (e.g. the NAS is fitted with only a cheap ARM CPU, a simple SATA controller and too little RAM, yet is supposed to encrypt the entire communication as SFTP and run the stack of 6(!) disks as software RAID in RAID 6 mode.  :o Okay, this might be the worst case :D)
The clients don't know (and don't have to) what real file system, OS or hardware sits on "the other end of the network cable"; they are simply provided with directories and files. So you can buy a commercial NAS system, from a simple Ethernet-aware hard disk standing next to your pen holder cup up to systems like NetApp's FAS series, worth more than a middle-class car.
They all work the same (in general). The hard disks can be simple SATA magnetic drives, hardware-RAID-connected SAS solid state disks or bus-connected FusionIO SSD cards; it is just a matter of speed and money. You can even build a "NAS" of your own, based on PC architecture. You can use Windows, Linux, FreeBSD or even OS/2 as the OS for such a system (okay, OS/2 is handicapped in many ways in this use case).
You can manage your entire storage, from monitoring single hard disks up to gluing together network drive space in several modes (from "just a bunch of disks" to a "RAID 1 mirror of several RAID 6 hard disk groups", resulting in something like "if 3 disks out of 8 fail simultaneously, you are still a go"). Managing this is usually done through a web browser.
You mention several disk sizes, which suggests older or more recent single hard disk drives. You can take them as your NAS drives and integrate them one by one (I further assume they are formatted with HPFS or JFS; neither can be used in most NAS systems, so you would have to reformat them in the NAS box).
But I personally suggest replacing every hard disk you have at least every 5 years, maybe sooner, copying the data over. If they already run in a RAID 4, 5 or 6 environment, you can wait until a drive fails before replacing it, but you should have the spare drive handy.

* If you want to set up your storage inside your computer rather than network attached, you are in most cases faced with the commonly known SATA drives. You can use SATA-connected SSDs to speed up the storage part of your PC, but if you take OS/2 as your OS, you are limited to certain file systems. None of them are tolerant of disk drive failure, and even our LVM does not help you in this case. We don't have software RAID technologies at hand like Linux or Windows do, and usable hardware RAID controllers are limited to very few old SCSI ones, if any.
You can do a cron-based copy from one drive to another, but that is just a cheap solution, and it even has to treat files like os2.ini as special cases.
 
You mention ZFS from Oracle. I assume you work with it on another (non-OS/2) network. In the OS/2 world, there is nothing like it. LVM was designed to do such tasks, but our LVM in OS/2 is far behind the capabilities of the AIX LVM, and even further away from ZFS.

* But in an OS/2 system, the speed of a built-in SSD cannot be reached with any of the NAS technologies out there, simply because we are stuck with 1Gb network cards that we cannot bridge together. There are 10Gb cards out there, but without OS/2 drivers (the worldwide demand for those can probably be counted on the fingers of one hand). So my suggestion: set up your clients on simple SSDs, make a backup to a SOHO NAS every time you boot, buy NetDrive with the NFS plugin, and put all shareable files on the NAS.
* And most important: back up your old SATA drives before it is too late, even if that means using a grml live CD with the ddrescue command, without ever mounting your file systems in Linux.
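
A typical GNU ddrescue call for imaging a failing drive looks something like this (the device name and output paths are placeholders - double-check the device before running anything):

Code:
# Image the whole disk /dev/sdX (placeholder!) to a file, keeping a map file
# so an interrupted rescue can be resumed later.
# -d = direct disc access for the input, -r3 = retry bad sectors three times.
ddrescue -d -r3 /dev/sdX /mnt/backup/olddisk.img /mnt/backup/olddisk.map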

just my 2 cents

Dariusz Piatkowski

  • Hero Member
  • *****
  • Posts: 1317
  • Karma: +26/-0
    • View Profile
Re: Finding an ideal filesystem for Network storage
« Reply #7 on: December 28, 2017, 06:25:45 am »
...along the lines of the dedicated NAS box advice, I would only add the following: take a close look at what filesystem the box supports. If you use EAs on OS/2 (say, a photo album with image thumbnails that are stored as EAs), you want to make sure that the target NAS will support this.

That is currently my problem: the ZyXel NSA325 V2 does not support this. Consequently, my drive towards stripping the main OS/2 machine of its numerous HPFS386 partitions is being challenged...LOL!

Levasai

  • Guest
Re: Finding an ideal filesystem for Network storage
« Reply #8 on: December 28, 2017, 01:20:31 pm »

NAS boxes rely on the Samba software to provide disk space for OS/2. If you have root access to the box, you can try to change the settings in /etc/smb.conf. Add a parameter "use EAs=yes". I'm not sure if this parameter is valid on every box, but it works for me.
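
A share section with EA support switched on could look like this sketch (the share name and path are placeholders; note that current Samba documents the parameter as "ea support = yes", so whether the "use EAs" spelling works will depend on the Samba version on the box - test it on your own hardware):

Code:
# Hypothetical /etc/smb.conf fragment; share name and path are placeholders.
[os2share]
    path = /volume1/os2share
    read only = no
    # "ea support = yes" is the parameter documented by current Samba releases
    ea support = yes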

ak120

  • Guest
Re: Finding an ideal filesystem for Network storage
« Reply #9 on: December 28, 2017, 04:33:26 pm »
Quote
NAS boxes rely on the Samba software to provide disk space for OS/2. If you have root access to the box, you can try to change the settings in /etc/smb.conf. Add a parameter "use EAs=yes". I'm not sure if this parameter is valid on every box, but it works for me.
Not every NAS box relies on Samba. And changing the parameter is futile when the underlying file system has limitations for EA sizes. Only better devices that allow virtualisation or iSCSI can play nice in environments with older network clients.