However, the / (root) partition can be placed on RAID. I've done it and I'll show you how, but please don't do it.
Here's where it gets bad.
When you update the system and the kernel, all of that can affect the RAID that / sits on. The RAID configuration is handled by the kernel and stored in the boot partition, so you are always updating it and always keeping the disks busy, even at the network packet level. Everything that happens, happens on /.
What you want to do is look into purchasing a small, reliable drive,
like a 64 GB or 128 GB NVMe SSD, and using that as your /.
Or, if that is too fancy or expensive, get a USB thumb drive, pair it with a Samsung SD card, and install the root partition on that.
Then create a RAID 10 array on the four other drives. Even if the root drive fails it won't matter: pop in another one and it will auto-detect the RAID. With Cockpit it is very easy to detect the RAID, activate it, and mount it.
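If you go the mdadm route, creating that four-drive RAID 10 looks roughly like this. This is a sketch, not a drop-in script: the device names (/dev/sdb through /dev/sde), the array name (/dev/md0), and the mount point are assumptions for illustration, so adjust them for your hardware.

```shell
# Sketch only: run as root on real hardware. Device and array names
# are assumptions; the commands are wrapped in a function so nothing
# destructive runs just by sourcing this.
create_raid10_array() {
    # Build a 4-disk RAID 10 array out of the data drives
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Persist the array definition so it assembles automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf

    # Put a filesystem on it and mount it as a data volume
    mkfs.xfs /dev/md0
    mkdir -p /data
    mount /dev/md0 /data
}
```

On CentOS the config file is /etc/mdadm.conf; on Debian-family systems it lives at /etc/mdadm/mdadm.conf instead.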
(Screenshots: Cockpit detecting, activating, and mounting the RAID array on a CentOS VM.)
My money is on fake RAID you got there. So use md.
Also because you already use MD. :-)
You know what, I can even move the two disks to my new workstation and not have to transfer any data.
I think the answer may have been found... :D
Actually even better, I can break the RAID 1 on my current box and convert it to non-RAID, then move one of the disks to my new workstation and convert it back to RAID 1. If done right, I think I can do it all without any data loss too.
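With mdadm, breaking the mirror and re-forming it on the other box is roughly the sequence below. This is a sketch: /dev/md0 and the partition names are assumptions, and you would want verified backups before trying anything like this on real data.

```shell
# Sketch only: run as root. Array and device names are assumptions;
# wrapped in functions so nothing runs just by sourcing this.
split_mirror() {
    # On the current box: drop one disk out of the RAID 1 array
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    # The array keeps running degraded on the remaining disk
}

rebuild_mirror_on_new_box() {
    # On the new workstation: assemble the array from the moved disk,
    # using --run to start it even though it is degraded
    mdadm --assemble --run /dev/md0 /dev/sdb1
    # Later, add a second disk and let md resync it into a full mirror
    mdadm /dev/md0 --add /dev/sdc1
}
```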
I'd prefer to just move the two disks and not have to transfer any data... but that's just me.
Yeah. I'll need to copy my home directory to the SSD in my current box before doing that though. That might be easier than breaking/re-creating the RAID. I'm not quite ready to "switch" yet. Hmm...
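Copying the home directory over first is essentially a one-liner with rsync. A sketch, with the destination mount point (/mnt/ssd) being an assumption:

```shell
# Sketch only: run as root so ownership, ACLs, and xattrs are preserved.
# /mnt/ssd is an assumed mount point for the destination SSD.
copy_home_to_ssd() {
    # -a archive mode (permissions, times, symlinks, owners),
    # -H hard links, -A ACLs, -X extended attributes
    rsync -aHAX --progress /home/ /mnt/ssd/home/
}
```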
One of the confusing pieces here is that Linux actually does things more clearly, but the Windows world is so confusing that if you carry that confusion into the Linux world, it makes things harder. Windows rarely uses or discloses the names of its product components, so "Windows Software RAID" is used to describe part of the Windows OS. But what if you have software RAID on Windows that is not Windows Software RAID? Windows admins typically have no good terminology to discuss this, even though it is common. They just... don't know what's going on and don't document it. In Linux, though, we have the terms on hand all of the time (MD, ZFS, whatever). So the Linux side isn't as bad as it seems, but if you are used to a weird blend of generic names being used as if they are specifics from the Windows world, and assume that the Linux world is just as crazy, then it seems crazy.
Seems like the perfect case to use RAIN, even if it's within a single system enclosure. @StarWind_Software LSFS, I'm looking at you. @KOOLER I am right in thinking this is the sort of thing LSFS could handle, right?
RAIN in a single enclosure rarely does anything that RAID 10 does not. It's effectively all the same at that point (more or less.) If RAID 10 doesn't work, RAIN isn't going to work either (normally.) The issue here is "single enclosure."
Wouldn't a properly configured single RAIN node make it easier to grow when it's time to add more storage?
I've seen this with Exablox and it was a nice feature!
Yes, if you are preparing for scale out. But if you are just doing it within the context of a single node, it doesn't change anything.
As an interesting side note, the Linux md RAID system also implements the Intel Matrix RAID and DDF (Disk Data Format) software RAID formats commonly used by consumer FakeRAID systems. Because of this, Linux md can sometimes convert FakeRAID into enterprise md RAID, if you really know what you are doing :)
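Checking whether md recognizes such an array is straightforward. A sketch, with /dev/sdb standing in for one of your member disks:

```shell
# Sketch only: run as root. /dev/sdb is an assumed member disk;
# wrapped in a function so nothing runs just by sourcing this.
inspect_fakeraid() {
    # Show what RAID metadata (native md, Intel IMSM, or DDF) is on the disk
    mdadm --examine /dev/sdb

    # Scan all disks and print assemble lines for any arrays found
    mdadm --examine --scan

    # Assemble every array md can find, including IMSM/DDF containers
    mdadm --assemble --scan
}
```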
No, it doesn't support this. For RAID 1 you are correct, but for parity RAID 5 or 6 it does not. The OS needs to be up and running to manage the parity RAID, so you can't use it for the system install, only for extra data volumes.
Now, which applies to mdadm RAID? If cold swapping is the only way to swap drives, then I guess that immediately excludes it from any enterprise or even business solution.
MD RAID (mdadm is the management utility for MD RAID) is hot swap, of course, and some vendors like ReadyNAS and Synology add their own extensions on top to provide blind swapping. No one would even discuss it if it wasn't hot swap.
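A typical hot-swap replacement with mdadm looks like the sketch below; the array name (/dev/md0) and disk name (/dev/sdc) are assumptions.

```shell
# Sketch only: run as root. /dev/md0 and /dev/sdc are assumptions;
# wrapped in a function so nothing runs just by sourcing this.
hot_swap_failed_disk() {
    # Mark the dying disk as failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdc
    mdadm /dev/md0 --remove /dev/sdc

    # Physically swap the drive while the system stays up, then add
    # the replacement; md rebuilds onto it automatically
    mdadm /dev/md0 --add /dev/sdc

    # Watch the rebuild progress
    cat /proc/mdstat
}
```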