RE: RAIDFrame mirroring - parity conceptual issues
- To: "'Alec Skelly'" <alec_(_at_)_dtkco_(_dot_)_com>, <misc_(_at_)_openbsd_(_dot_)_org>
- Subject: RE: RAIDFrame mirroring - parity conceptual issues
- From: "Gavin Lloyd Bates" <gavin_(_dot_)_bates_(_at_)_bigpond_(_dot_)_com>
- Date: Fri, 21 Sep 2001 08:51:33 +0930
Ordinary RAID5 does indeed use XOR to generate parity. To get the parity
bit, you XOR the bits from all of the other volumes together - this has the
advantage of allowing an "unlimited" number of volumes to participate in the
set.
When a volume containing data fails, the operating system simply performs an
XOR on the remaining data volumes and includes the parity volume. Since XOR
is its own inverse, you end up with the bit value from the missing volume.
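The recovery trick above can be sketched in a few lines. This is a toy
illustration, not RAIDFrame code; the volume contents and the helper name
`xor_blocks` are made up for the example:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR the corresponding bytes of several equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three toy data "volumes" of four bytes each.
d0 = bytes([0x11, 0x22, 0x33, 0x44])
d1 = bytes([0xAA, 0xBB, 0xCC, 0xDD])
d2 = bytes([0x0F, 0xF0, 0x55, 0xAA])

p = xor_blocks([d0, d1, d2])        # the parity "volume"

# Suppose d1 fails: XOR the survivors together with the parity,
# and the missing data falls out, because XOR is its own inverse.
recovered = xor_blocks([d0, d2, p])
assert recovered == d1
```

XORing all the data volumes plus the parity together gives all zeroes, which
is also how parity verification can be done in principle.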
However, ordinary RAID1 doesn't bother with XORs - that would be just a
waste (as you have postulated) of processing power and time. Instead it just
(usually) uses a direct copy of the one disk on the other. This is wasteful
in terms of disk space (and thus money), but can decrease the risk of data
loss.
For example, if you need 36GB, you could either have a 3 volume 18G RAID5
set, or a 2 x 2 volume 18G RAID1 set. The RAID5 set can handle 1 volume
failure at a time - the RAID1 sets can handle one failure on each set
simultaneously.
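Tallying that example (assuming 18G disks, as above):

```python
disk_gb = 18

# 3-volume RAID5: one disk's worth of space goes to parity.
raid5_disks = 3
raid5_usable = (raid5_disks - 1) * disk_gb   # 36 GB from 3 disks

# Two 2-volume RAID1 mirrors: half the raw space holds the copies.
raid1_disks = 4
raid1_usable = (raid1_disks // 2) * disk_gb  # 36 GB from 4 disks
```

Same usable capacity either way; the mirrored layout buys an extra disk's
worth of redundancy for an extra disk's worth of money.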
It all comes back to how much your data is worth, versus how paranoid you
are, and how much you like restoring from tapes.
Me, I'm certifiable. That's why I recommend RAID1 sets with hot spares. Now
all I need is the money. (Needless to say, my department head doesn't
agree.)
Remember that in theory, the likelihood of a disk failure in a set increases
roughly linearly with the number of disks in that set.
However, real-world correlations - running in the same environment, being
purchased from the same batch, having the same number of running hours - all
assist Mr. Murphy in making more than one drive fail at _very_ similar
times.
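That "roughly linear" rule of thumb can be checked under the (optimistic)
assumption of independent failures; the per-disk probability below is
invented purely for illustration:

```python
def p_any_failure(n, p):
    """Probability that at least one of n disks fails in some period,
    assuming independent per-disk failure probability p."""
    return 1 - (1 - p) ** n

# With a small p this is close to n * p, i.e. roughly linear in n.
per_disk = 0.03
one_disk = p_any_failure(1, per_disk)    # ~0.03
four_disks = p_any_failure(4, per_disk)  # ~0.1147, a little under 4 * 0.03
```

Correlated failures (same batch, same hours) only make the real numbers
worse than this independent-failure estimate.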
From: owner-misc_(_at_)_openbsd_(_dot_)_org [mailto:owner-misc_(_at_)_openbsd_(_dot_)_org]On Behalf Of
Sent: Friday, 21 September 2001 1:22
Subject: RE: RAIDFrame mirroring - parity conceptual issues
Suppose I have a RAID 1 volume that's had its parity initialized with
'raidctl -i', has been labeled & formatted, and now has data on it. If
a disk fails, I believe the correct way to handle it would be to replace
the disk and then run 'raidctl -R' against the new disk. Does
'raidctl -i' come into play again at some point? If the RAID 1 volume
conceptually has parity stored on both disks, it seems that the parity
would need to be rebuilt after a failure. In fact, it seems that
rebuilding the parity and rebuilding the data would be the same
operation (although 'raidctl -i' does not take a parameter to specify
which disk is the bad one needing to be rebuilt as does 'raidctl -R').
Also, my understanding of RAID parity (and maybe this is the problem) is
that data from two locations is combined in some reversible way (XOR?)
and stored in a third location. With RAID 1, not only is there no third
location to store the parity, but the data in the source locations is
identical anyway, so why bother?
I would really appreciate it if someone could clear this up.