Dear lazyweb: Large volumes strategy?

Wouter, you seem to enjoy using huge filesystem volumes. So, dear lazyweb… I have a 500GB disk for my users’ home partitions - they mainly use it to back up their desktop data, but I do expect them to make heavier use of it. Anyway - I am currently using around 350GB of it, leaving the rest still unpartitioned. I have switched over to ReiserFS, as online resizing it is way faster than online resizing ext3 - it’s basically instantaneous. It seems I do belong to the old sysadmin school, and I highly value the reliability that comes from not having filesystems too large for their own good - after all, recovering from a failure in 10GB is far more likely to succeed than doing the same in 100 or 1000, isn’t it? But then again, it’s just impractical to create /home/samba/{1,2,3,4,5,6,7,8,9}/ and randomly split users’ directories among them. Most users won’t use more than a couple of MB, but some users (legitimately!) back up tens of GB… It’s hard for me to anticipate their needs. Do any of you have scary stories regarding huge volumes? Or any data supporting the idea that I’m just too old-fashioned and that half a TB is a good partition size? Note that I am keeping full backups :)

[update]: So I’m undoubtedly an old fart? Everybody seems to have partitions larger than mine. OK, I will stop crying…
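For the record, the kind of online grow I’m talking about looks like this (a minimal sketch - it assumes the underlying block device, here a hypothetical LVM volume /dev/vg0/home, has already been enlarged):

```
# Grow a mounted filesystem to fill its (already enlarged) device.
# /dev/vg0/home is a made-up name - substitute your own volume.

# ReiserFS: grows while mounted, and is basically instantaneous
resize_reiserfs /dev/vg0/home

# ext3: online growing also works (kernel >= 2.6.10), but is noticeably slower
resize2fs /dev/vg0/home
```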

Comments

Anonymous 2008-04-17 08:11:01

Check out LVM: you can add disks and partitions to your big filesystem, and remove them again.
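For example, something along these lines (a sketch - the volume group vg0, logical volume home and disk names are made up):

```
# Add a new disk to an existing LVM setup (hypothetical names throughout).
pvcreate /dev/sdc1               # prepare the disk for LVM
vgextend vg0 /dev/sdc1           # add it to the volume group
lvextend -L +100G /dev/vg0/home  # grow the logical volume by 100GB
# ...then grow the filesystem on top (resize_reiserfs, resize2fs, ...)

# Remove a disk: migrate its data elsewhere in the group first, then drop it.
pvmove /dev/sdb1
vgreduce vg0 /dev/sdb1
```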


Anonymous 2008-05-03 00:17:09

Not a bad article. For online data backup I like Mozy and IDrive, which are fully free. A newbie can use some small free tools, such as wwwbackup, to easily back up all important data.


gwolf 2008-04-18 07:20:50

Of course, but on top of that…

I am aware I can do this (and am a frequent user of LVM) - the question is how advisable it is to have a partition this large at all, regardless of whether it sits on top of LVM or not.


gwolf 2008-04-18 07:30:29

Avoid XFS altogether…

It might give great performance, I don’t really know. But I know it has lost me heaps of data. Granted, the worst case was using 2.6.17, which had an awful XFS corruption bug - but nevertheless, being a less mainstream and considerably more complex filesystem, it is less likely to be thoroughly maintained. And it’s even worse on desktop/laptop systems: as it caches aggressively, if you lose power or battery, or your system freezes or whatnot, then even if your files were supposedly written ~30 seconds ago… they are still in RAM, waiting for the right time to land on the disk - even though the metadata has already been written. So the file contents are just gone forever. Avoid XFS unless you have a strong case for it.
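If you do end up on XFS, about the only safeguard I know of is to force the data out of the cache yourself (a sketch - the filenames and paths are made up):

```
# XFS keeps file data in RAM long after the metadata hits the disk; after a
# crash, a "written" file can come back empty. Flush explicitly to be safe.
cp thesis.tex /home/user/backups/
sync                                            # flush all dirty pages
# or flush just that one file as it is written:
dd if=thesis.tex of=/home/user/backups/thesis.tex conv=fsync
```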


Henrik 2008-04-17 13:22:24

10GB is probably much too small by today’s standards - consider the common case of preparing files for burning a DVD, for example. I remember back in the early 90s when I was at university: I had to work with a file that was huge for its time (about 2GB), full of data from CERN, and there was no way I could handle it on my regular school account with its 100MB quota. There was only one computer in the whole lab that could deal with files that big, and everyone needed to get on it, so you had to wait several days for your chance. Things like that are pretty annoying and make one’s computing experience much worse.


Jason Clinton 2008-04-17 08:38:55

Large?

I chuckled at the suggestion that 500GB is large. Where I work, a company that builds supercomputers, we regularly ship systems with 8-24 TB of storage in a RAID 6.

Seriously, ext3’s fsck time is terrible if the volume is full of data. A 16 TB volume can take 16 hours to fsck; to keep an fsck from ever happening, the strategy is to use RAID controller battery backup and a UPS.
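In practice that also means switching off ext3’s scheduled checks, something like this (a sketch - the device name is hypothetical):

```
# Disable routine checks so a reboot never triggers a multi-hour fsck; the
# battery backup / UPS is what prevents the unclean shutdowns that would
# otherwise make one necessary. /dev/sdb1 is a made-up device.
tune2fs -c 0 -i 0 /dev/sdb1                          # no mount-count or interval checks
tune2fs -l /dev/sdb1 | grep -Ei 'mount count|check'  # verify the settings
```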

ext4 developers have said they are addressing fsck time with the next revision.

The other challenge is finding affordable tape robots that will auto-change tapes during a Bacula backup. Since no tape format is large enough, multi-tape requirements are implicit.

I don’t see how splitting anything could help; if you have to fsck one of the slices, chances are you’d have to do it to all of them and you don’t get any benefit from the slices being smaller.


Mike 2008-04-17 08:33:15

Make sure, if you happen to use XFS, that you have 1GB of RAM for every TB of partition, and an extra GB of RAM for every million files… I learned that one the hard way on a 4TB partition.
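If the place this bites is repair time (which is where such memory rules usually matter), a quick sanity check might look like this (a sketch - the device name and file count are illustrative):

```
# Rule of thumb above, for a 4TB partition holding, say, 6 million files:
#   4 x 1GB + 6 x 1GB = ~10GB of RAM needed
free -g                    # check available memory first
xfs_repair -n /dev/sdb1    # -n: dry run, reports problems without writing
```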
