
Re: congrats on OpenBSD SAN... one little question



Nick Holland wrote:
Jason Dixon wrote:

On Oct 20, 2005, at 1:49 PM, Joe Advisor wrote:


Congrats on the cool OpenBSD SAN installation.  I was
wondering how you are dealing with the relatively
large filesystem.  By default, if you lose power to
the server, OpenBSD will do a rather long fsck when
coming back up.  To alleviate this, there are numerous
suggestions running around that involve mounting with
softdep, commenting out the fsck portion of rc and
doing mount -f.  Are you doing any of these things, or
are you just living with the long fsck?  Thanks in
advance for any insight into your installation you are
willing to provide.
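
For reference, the softdep part of that advice is just a mount option; a minimal sketch of the usual fstab line (disk and mount point made up for illustration):

  /dev/wd0d /data ffs rw,softdep,nodev,nosuid 1 2

The rc edit and "mount -f" parts are about skipping the boot-time fsck altogether; as far as I know, softdep by itself speeds up metadata-heavy writes but does not remove the need for an fsck after an unclean shutdown on OpenBSD.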

This is just a subversion repository server for a bunch of developers. There are no dire uptime requirements, so I don't see a lengthy fsck being an issue. Not to mention the hefty UPS keeping it powered. Sorry if this doesn't help you out, but it's not a big problem on my end (thankfully).


If it was, I would have just created many slices and distributed projects equally across them.


I'm working on a couple "big storage" applications myself, and yes, this
is what I'm planning on doing, as well.  In fact, one app I'm going to
be turning on soon will be (probably) using Accusys 7630 boxes with
about 600G of storage each, and I'll probably split that into two 300G pieces
for a number of reasons:
  1) shorter fsck
  2) If a volume gets corrupted, less to restore (they will be backed
up, but the restore will be a pain in the butt)
  3) Smaller chunks to move around if I need to
  4) Testing the "storage rotation" system more often (I really don't
want my app bumping from volume to volume only every six months or so...I'd
rather see more often that the rotation system is Not Broke, with, of course,
enough "slop" in the margins to have time to fix it if something quits
working.)
  5) Cost benefit of modular storage.  Today, I can populate an ACS7630
(three drive, RAID5 module) with 300G drives for probably $900.  I could
populate it with 400G drives for $1200.  That's a lotta extra money for
200G more storage.  Yet, if I buy the 300G drives in a couple storage
modules today, and in about a year when those are nearing full, replace
them with (then much cheaper) 500G (or 800G or ...) drives, I'll come
out way ahead.  Beats the heck out of buying a single 3+TB drive array
now and watching people point and laugh at it in a couple years when it
is still only partly full, and you can buy a bigger single drive at your
local office supply store. :)  With this system, I can easily add on as
we go, and more easily throw the whole thing away when I decide there is
better technology available.
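
For what it's worth, the two-piece layout above is nothing exotic: two disklabel partitions, two filesystems, two fstab entries. A rough sketch, assuming the RAID box shows up as sd0 (device names and mount points are made up):

  # disklabel -E sd0     # carve two ~300G partitions, e.g. 'a' and 'd'
  # newfs /dev/rsd0a
  # newfs /dev/rsd0d
  # then in /etc/fstab:
  /dev/sd0a /archive0 ffs rw,softdep 1 2
  /dev/sd0d /archive1 ffs rw,softdep 1 2

Each filesystem then gets checked on its own, so a crash only costs the fsck of whatever was mounted dirty.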

Would I love to see the 1T limit removed?  Sure.  HOWEVER, I think I
would handle this application the exact same way if it didn't exist
(that might not be true: I might foolishly have plowed ahead with the One Big
Pile philosophy and regretted it later).

Hi Nick

We can argue back and forth on the pros and cons of building >1TB partitions, but the need for these giant allocations is real enough, and from a common/broader (small business) view the demand is also moving closer and closer. At work we have a disk-to-disk backup server for customers with one 1.5TB (SATA RAID5) backup partition. The app works that way, and if every customer started using it at <=20GB each, we would need at least 3.5TB more disk space. Breaking things up into smaller chunks is not always possible/practical.
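
(To put rough numbers on that: at 20GB per customer, the existing 1.5TB partition covers on the order of 75 customers, and 1.5TB + 3.5TB = 5TB works out to roughly 250 -- all of which the app wants to see as one big allocation.)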

I would appreciate an "unlimited" filesystem one day - but not at the cost of potentially losing data!
I would also just love to see "OpenBSD large-scale enterprise SAN/NAS solutions" in the LISA program some day :)


/per
per_(_at_)_xterm_(_dot_)_dk




For this application, the shorter fsck is not really an issue. In fact, as long as the archive gets back up within a week or two, it's ok -- the first-stage system is the one that's time critical...and it is designed to be repairable VERY quickly, and it can temporarily hold a few weeks' worth of data. :)

Nick.


