Latest RAID Experiment – VirtualBox and FreeNAS (8 TB RAID 5)

When we last left off, I found Christmas joy in a big honking bag of hard drives and an external SATA enclosure.  But my joy was not complete without a measure of redundancy and recoverability.

The controller that shipped with the enclosure did not support RAID 5 in firmware and the embedded controller on my motherboard could not be used with a SATA port multiplier.

Oh, the agony of being so close… and yet so far.

Then I discovered FreeNAS.  Some kind soul put together a FreeBSD distribution that would handle RAID 5 in software.  But am I going to blow away my beloved Windows 7 box and dedicate it to NAS?  Hell no.  I am, however, going to download VirtualBox and run FreeNAS as a virtual machine.

More agony:  I find myself willingly running an Oracle product.  I’ll get over it.  Somehow.

So here’s what I did:

On each of the five 2 TB hard drives, I created two 900 GB “virtual hard drives” using the VBoxManage utility.  I figured this way I’d only lose 900 GB to parity instead of a full 2 TB.  I then created a virtual machine in VirtualBox.
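The carving itself is just repeated VBoxManage calls.  Here’s a dry-run sketch of the idea (the D:/vdi paths and filenames are made up for illustration, and createhd is the VirtualBox 4.x syntax, so check your version’s documentation):

```shell
# build_cmd prints the VBoxManage call for one 900 GB fixed-size VDI.
# 900 GB = 921600 MB (VBoxManage takes --size in megabytes).
build_cmd() {
  echo "VBoxManage createhd --filename $1 --size 921600 --format VDI --variant Fixed"
}

# Two VDIs per physical disk, five disks -> ten RAID members.
# (Commands are echoed, not executed; run them by hand once they look right.)
for disk in 1 2 3 4 5; do
  for half in a b; do
    build_cmd "D:/vdi/disk${disk}${half}.vdi"
  done
done
```

The --variant Fixed flag preallocates the full 900 GB up front, which is consistent with each virtual drive taking hours to create; --variant Standard returns immediately but grows the file on demand.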

I then downloaded the FreeNAS VMware images (VMDK files).

I created a FreeNAS virtual machine and attached the FreeNAS VMware images along with the 10 virtual hard drives.  I then configured FreeNAS to use the 10 drives, set up a RAID 5 stripe across all 10, and made the stripe available as a CIFS share accessible to the host operating system.

This sounds like a simple, quick process… but it wasn’t.  Creating each of the virtual hard drives took hours and hours.  Configuring the RAID 5 array was simple and quick… but waiting for the array to be ready took about 24 hours.

Needless to say, there was a lot of starting processes either before work or after I got home, then waiting breathlessly for the end of the day or the morning to see if it completed.

The good news:  the processes eventually finished without error.

The bad news:  there is a memory leak in VirtualBox.  I configured the VM to use 1024 MB of RAM, but Windows Task Manager reports that the VirtualBox.exe process is using about 4 GB.  Good thing I have 8 GB of RAM in the box, or this thing would be useless.

In all honesty, this is one compromise after another.  The original idea was to build a small Linux box and install the hard drives into it and present them as a CIFS share or iSCSI target.  That’s probably what I’m going to wind up doing.  Somewhere down the road I’ll get a new case/power supply/mobo and RAM and make sure the mobo’s embedded controller can handle RAID 5 in silicon.

This time, I’ll just install FreeNAS directly on the hardware without any virtualization (and lose the need to run Oracle software).  The VirtualBox memory leak becomes a non-issue.

The FreeNAS web portal is very easy to use… so I probably won’t even bother with a monitor or keyboard for the box, just something big enough to hold the hard drives.  Yeah, I’ll have an extra SATA enclosure lying around, but it’ll just join the rest of my experiments in the Closet of Dead Technology.  I’m sure I’ll eventually find a use for it.

  1. #1 by George Kalaouzis (@georgekalaouzis) on October 26, 2011 - 3:08 am

    great article!
    Maybe I have got something wrong, but if you have created 2 virtual disks on each physical disk and one of your physical disks stops working, your RAID 5 array suddenly loses 2 disks at once, so you cannot recover your data…
    Maybe it’s wiser to lose a whole 2 TB to parity and create a single virtual disk on each physical disk. Or choose another RAID method instead (RAID 6, RAID-Z).

  2. #2 by Marc Jellinek on November 5, 2011 - 11:33 am

    @George Kalaouzis

    You are 100% right… having two virtual disks on the same physical disk gives no protection at all.

    I haven’t kept up on this thread, so here’s what happened:

    Eventually, I wound up running Ubuntu within a VMware Player virtual machine. I configured the VM to address the five 2 TB disks directly, then used Ubuntu’s mdadm utility to create a RAID 5 array from them. I set up the RAID array as an iSCSI target.
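    For anyone retracing this, the Ubuntu side boils down to mdadm plus an iSCSI target daemon. The sketch below echoes the commands rather than running them; the device names, the IQN, and the choice of the tgt toolchain are my assumptions for illustration, not a record of the exact setup:

```shell
# raid_cmds prints the mdadm / tgtadm steps for a 5-disk RAID 5
# exported as a single-LUN iSCSI target. Device names and IQN are examples.
raid_cmds() {
  # assemble five whole disks into one RAID 5 md device
  echo "mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"
  # create a tgt iSCSI target and back it with the md device
  echo "tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2011-11.local.nas:raid5"
  echo "tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/md0"
  echo "tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL"
  # watch the initial parity build (this is the long wait)
  echo "cat /proc/mdstat"
}
raid_cmds
```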

    From the Windows host, I was able to attach to the RAID 5 array using the Windows iSCSI initiator.
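    The Windows side can be scripted with the built-in iscsicli utility (the portal address and IQN below are placeholders to match the sketch above; the iSCSI Initiator control panel does the same job interactively):

```shell
# win_cmds prints the iscsicli calls to discover and log in to the target.
# The IP address and IQN are placeholders, not a real configuration.
win_cmds() {
  echo "iscsicli QAddTargetPortal 192.168.1.50"
  echo "iscsicli ListTargets"
  echo "iscsicli QLoginTarget iqn.2011-11.local.nas:raid5"
}
win_cmds
```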

    This worked pretty well for a while. Then things started getting wonky (missing files, and different files visible from within Ubuntu than from within Windows).

    I was never able to get it all figured out.

    What I’ve discovered: It would probably be easier to use a hardware-based RAID solution. I considered using the ICH10R controller on my motherboard, but it would only handle 4 drives at a time. The motherboard I’m using also requires that the boot drive be one of those 4 (all 4 are on the northbridge, the remaining two are on the southbridge). So I’d really only be able to have 3 drives in the array, and lose one of those to parity.

    Not really worth it.

    I’m having a local shop build me out a new computer with a RocketRAID hardware controller… it will be their problem to get it all working.

    Unfortunately, the flooding in Thailand has driven hard drive prices through the roof, so I’m going to wait until prices drop (probably around Christmas) before pulling the trigger.

  3. #3 by Paul Rinear on January 30, 2012 - 10:31 am

    Besides no redundancy as George pointed out, this whole setup sounds like it would perform poorly. RocketRAID is a good choice – solid and reasonably priced.

    • #4 by Marc Jellinek on January 31, 2012 - 11:20 pm

      The setup actually performed pretty well, considering the virtualization layers and hardware involved. I eventually scrapped the whole thing. RocketRAID hardware was considered.

      The RocketRAID cards are “hardware assisted”, not hardware RAID… call me gun-shy, but I’ve tried a couple of Silicon Image SIL3132-based RAID controllers and was not happy with the results. I’m also pretty short on PCIX slots (running two video cards that run two monitors each), so if I can get an external enclosure that will do hardware RAID, I think that’s the direction I’m going to go.

      My new obsession is the Synology DiskStation enclosures. Yes, they are expensive (so are the Drobo and NetGear boxes), but they’ve reviewed well. The ability to support gigabit iSCSI has me hooked. Just waiting for hard drive prices to drop a bit more, and I think I’m going to load one up with 3 TB drives and consider this case closed.

  4. #5 by w1n78 on June 25, 2012 - 10:02 am

    i did a couple of different things then i just bit the bullet and saved up for what i have now… i started with a terastation 4x1TB in a raid5, then got a synology ds212j with 2x2TB in a raid1. both were getting filled up and i needed to consolidate my data. i decided to just build a server box using whs2011 and adaptec 6805 raid card. it’s a bit pricey but the results are great. i’m running a raid6 with 8x2TB. i now have 11TB of storage that should keep me going for a while. i just have to find a way to mirror the data for backup. i may get the highpoint 2680 raid card and run freenas on a spare pc. grab a bunch of old drives and stripe them for the backup server. if a drive goes, it’s okay, it’s only the back up. it’s been a fun and sometimes frustrating experience.

    • #6 by Marc Jellinek on June 26, 2012 - 8:10 am

      What was your experience with the Synology box? I’m looking at a complete hardware refresh (i7-based PC, boot from SSD) and will likely start with a storage refresh.

      I’m looking at the Synology DS1512+ (5x 3TB to start). If I run out of storage, I can expand using the Synology DX510 for an additional 5 bays… and can install two DX510’s. The system maxes out at 15 bays.

      (I know early versions of this device had some issues, but from what I’m reading on the ‘net, they’ve been resolved and Synology support is handling the issue like professionals… no blame shifting, no BS)

      I’ll connect the Synology using iSCSI on a dedicated Gigabit Ethernet network… so it should move over nicely to the new PC.

      I enjoy rolling-my-own, but I’m short on time (SQL 2012 shipped and I’m still getting up to speed on enhancements to SSAS, SSIS, etc; consulting customers taking up time, generally having a life). At this point, I’m feeling like trading some money for time (and having a supported product).

      I’ll use my existing setup as a backup location once I have the new PC up and running.

      The real benefit here (outside of RAID 5, centralized storage, etc.) is that the new PC can be small and use a reasonably small power supply… the stack of disks will be installed into the Synology box, not a tower. No more crawling around under my desk to pop in a CD/DVD/Blu-ray disc.

      • #7 by w1n78 on June 29, 2012 - 12:24 pm

        many of the bad reviews i read were related to the DSM v3 software and media files for UPnP. they have DSM v4 now. i access the media files via CIFS shares and no problems whatsoever. since building my whs 2011 box, i turned the ds212+ into a raid0 and have it mirror my data off the server. it’s only good until i hit the 3.8TB size, then i have to find larger drives or create another raid array on the server. other than that my synology is great. the web gui is easy to use. it runs a linux OS and power consumption is low. i turned off the hdd hibernation. by default it’s set to 20 min. of inactivity, then it spins down, but a minute later it spins back up even though no device is accessing it. i figured it may increase the wear and tear if i kept it on, so i disabled it. i can’t comment on support; i haven’t needed them since i’ve had no issues since owning the enclosure. good luck.
