Reconfigure Dell PERC H310 RAID-0 to RAID-1

I recently wrote about converting a hardware RAID array to Btrfs. In that article I noted that my operating system was running on a single RAID 0 disk handled by the Dell PERC H310 disk controller. This article outlines how I used the Dell PERC H310 to reconfigure the RAID 0 disk into a RAID 1 setup with the addition of a new disk.

If Btrfs Is Good For Storage, Why Not The OS?

I left hardware RAID for my storage array. The storage array is fundamentally different from the disk running my OS. The storage array has 6 disks for massive capacity, has changed over time, will likely change in the future, and needs flexibility. The OS doesn’t need much space; however, when you install Linux there are a few things to consider on the disk that aren’t relevant for a simple storage array:

  • boot partition – this contains data needed to boot the OS. It’s not advisable (yet) to use Btrfs on this partition.
  • swap partition – this contains overflow space for when your physical memory isn’t enough. Again, Btrfs isn’t the best choice.
  • root partition – this is everything else (sometimes this is split into several partitions, e.g., /home for user data). For this, Btrfs is fine.

If I want to survive a disk failure, all of these partitions have to survive. I could use Btrfs on the root partition, but currently the boot and swap partitions need non-Btrfs file system types. This means that if the disk with the boot and swap partitions fails, at best I’d have a degraded OS, and a full system outage if it were rebooted. Could I recover? Yes, but it’d be painful and the server would have significant downtime. I’d rather have a no-outage recovery. This is where hardware RAID is great: I can let the controller manage the entire disk and stay file-system agnostic.
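For reference, this kind of RAID-level change can also be inspected and driven from the command line when the H310 (an LSI-based controller) is visible to the MegaCli tool (sometimes installed as MegaCli64). The following is only a rough sketch under assumptions I’m making here: the new disk sits in enclosure 32, slot 1, the existing RAID 0 virtual disk is L0 on adapter 0, and the controller firmware supports online reconstruction. The PERC BIOS utility or Dell OpenManage are equally valid routes.

$ sudo MegaCli -PDList -aALL          # list physical disks; note the new drive's [enclosure:slot]
$ sudo MegaCli -LDInfo -L0 -a0        # confirm the existing RAID 0 virtual disk
$ sudo MegaCli -LDRecon -Start -r1 -Add -PhysDrv[32:1] -L0 -a0   # reconstruct L0 as RAID 1, adding the new disk (placeholder [32:1])
$ sudo MegaCli -LDRecon -ShowProg -L0 -a0   # watch reconstruction progress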

Continue reading

Switch to Btrfs from Hardware RAID

I recently completed a migration of ~4 TB of data from one multi-disk hardware RAID array to a new software array. This article summarizes how I did it with the help of Btrfs. Storage growth was my primary goal. In short, I went from a 4x 3 TB disk Dell PERC H310 hardware RAID 10 array with ~6 TB of storage capacity to a 6x 3 TB disk Btrfs v4.1 software RAID 6 array with ~12 TB of storage capacity. Both arrays have file systems sitting on top of LUKS for encryption.
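For a sense of what the destination array looks like on the software side, here is a minimal sketch of building a Btrfs RAID 6 file system on top of LUKS. The device names, mapper names, and mount point are placeholders, and the real setup repeats the cryptsetup steps for all six disks (with a key file or crypttab entries so they unlock at boot):

$ sudo cryptsetup luksFormat /dev/sdb       # encrypt a member disk (repeat for the other five)
$ sudo cryptsetup open /dev/sdb crypt_sdb   # unlock it as /dev/mapper/crypt_sdb
$ sudo mkfs.btrfs -d raid6 -m raid6 \
    /dev/mapper/crypt_sd{b,c,d,e,f,g}       # one file system, data and metadata in RAID 6 across all six unlocked devices
$ sudo mount /dev/mapper/crypt_sdb /mnt/storage   # mounting any member device mounts the whole array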

Why Change Anything? Isn’t Hardware RAID The Best?

In 2013 I bought a Dell PowerEdge R520 as my “do everything” home server. I love it. It’s great for many reasons. One relevant to this article is its 8 hot-swappable drive bays. When I bought the server, I put in 4 Western Digital 3 TB RE drives and created a RAID 10 array, i.e., a 6 TB array. I wanted some ability to recover from a disk failure, and the PERC H310 controller doesn’t handle parity-based RAID very well (e.g., RAID 5; RAID 6 isn’t even an option). I created the RAID 10 array and filled it to nearly 50% capacity with existing videos, RAW images, and all sorts of digital stuff accumulated over the years. Two years later I found myself approaching 70% capacity. As you can’t just grow a hardware array, I knew I had some work ahead of me and decided it was time to rethink the array.

Why I Chose Btrfs Over Hardware RAID

Hardware RAID is great in some situations, but my situation doesn’t really need it. Moreover, my hardware isn’t capable of doing what I want. I have 4 needs:

  1. RAID 6. RAID 10 is cool, but I don’t need that much performance at the cost of 50% of my disks. And as it’s a rather small array (i.e., 6 disks max), RAID 6 gives me more than enough performance and resiliency. It’s important to note that I have both on-site (a 2nd server running FreeNAS) and off-site backup with CrashPlan. (If you’re new to RAID, be sure to search for articles explaining why RAID isn’t backup.) To upgrade to a Dell-supported PERC H710 card capable of RAID 6 would cost $500. As I’m not an enterprise, I don’t need the extra performance (and from what I’ve read, these days software RAID isn’t much different from hardware unless you have big performance needs), so the investment didn’t make sense for me.
  2. Flexibility. Hardware RAID is nice, but once you build an array, it’s fixed and you have to destroy the array to grow it (though some simple transformations are possible). You can swap drives in the event of a failure, but that’s about it. So if I later decide to change or expand my RAID configuration again, I want to do that on the fly in a non-destructive manner (see the sketch after this list). If I have “home server” constraints (e.g., mismatched disk sizes that I’d like to use), I’d like to be able to deal with them.
  3. Ease of Maintenance. Have you ever used an LSI command line interface? Even if you manage to find a decent LSI MegaRAID reference, it’s still notably painful to use. I want the thing implementing RAID to be intuitive to use and well documented.
  4. Encryption. Full disk encryption is a must, preferably a mainstream approach like LUKS.
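Here is the sketch referenced in need #2: growing and reshaping a mounted Btrfs array without taking it offline. The device and mount point names are placeholders:

$ sudo btrfs device add /dev/mapper/crypt_sdh /mnt/storage                 # add another (unlocked) disk to the live file system
$ sudo btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/storage    # redistribute data/metadata across all devices (conversion optional)
$ sudo btrfs filesystem show /mnt/storage                                  # verify devices and space allocation

The file system stays mounted and usable throughout, which is exactly the non-destructive flexibility the hardware array can’t offer.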

Btrfs offers all of this and much more. Seriously, check out their wiki and the rate of development in the last two years. It’s impressive. ZFS was also a consideration, but in short, it’s a pain to implement in Linux due to license incompatibilities (and I have no interest in leaving Linux), and from what I’ve read, I fundamentally like how Btrfs is set up compared to ZFS.

Continue reading

Ubiquiti, I Think I’m in Love

For as long as I can remember, I’ve had a home network problem. While I truly enjoy the simplicity of Apple’s Airport Extreme wifi router, I often need some features that it doesn’t offer, e.g., a command line interface, a web interface, dynamic DNS support, easy static DHCP mapping for dozens of clients, simple local “DNS,” etc. On the flip side, there are many high-end routers that will give you this and much more, but you need a Ph.D. in Cisco hardware to make sense of them. My typical workaround involved offloading network tasks to Linux servers. It worked, but in most cases it felt heavy for a home network. About a year ago I bought a Netgear Nighthawk R7000 and flashed dd-wrt onto it. This was a nice step forward, but the whole dd-wrt community is a bit of a hack. With every firmware update something gets better, but something else breaks. It’s cool, but not reliable.
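To give a flavor of the kind of control I was after, here is a hypothetical static DHCP mapping on the Ubiquiti EdgeRouter discussed below, using its Vyatta-style EdgeOS CLI; the network name, subnet, hostname, and MAC address are all made-up placeholders:

configure
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping nas ip-address 192.168.1.10
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping nas mac-address 00:11:22:33:44:55
commit
save
exit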

Ubiquiti 5-Port EdgeRouter PoE

And then about a month ago I found Ubiquiti while reading an article on SmallNetBuilder. It sounded too good to be true: “enterprise hardware at consumer prices.” For the price, I couldn’t resist ordering one to see for myself. I decided on the EdgeRouter PoE. Small and silent (no fans), with a built-in switch, it seemed to have the essentials. Continue reading

Severe Hand RSI Pain and Recovery

It’s been over a year since I wrote something on my site. It’s not because I got lazy or disinterested. It’s because my hands hurt. They hurt like I never knew they could. After months of trying to determine the issue and subsequent rehab, they’re manageable, but still not great.

I’m writing this post with the intent of informing two types of people: 1) in-pain people: if someone out there is feeling the sort of pain I describe in this article, hopefully it will help expedite successful diagnosis and recovery; 2) pre-pain people: if you’re twenty-something and feeling invincible, I’m here to tell you you’re not.
Continue reading

Update MacPorts via rsync Behind a Firewall Over SSH

Update: I found an even better approach to update over https. What’s not so clear in that link is to use sudo port sync instead of sudo port selfupdate for subsequent syncing. (29 Oct 2014)

If you’re stuck behind a firewall that blocks rsync so you can’t update MacPorts, but SSH is available and you have a server on the Internet, here is a quick fix (assuming you already have MacPorts installed).

Step 1: Update the MacPorts config

While quite simple to change, this config option took me a while to find. Credit to Nikolas Mayr.

$ vim /opt/local/etc/macports/macports.conf

Find the line with “rsync_server” and change it to this (or whatever port number you prefer):

rsync_server		localhost:12345

Step 2: Tunnel to your server

Nothing fancy here. Just open an SSH tunnel to forward rsync traffic through your server.

$ ssh -L 12345:rsync.macports.org:873 -l your_user your.server.example.com
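With the tunnel up, MacPorts’ rsync requests to localhost:12345 are forwarded through your server to rsync.macports.org on port 873. In another terminal, update as you normally would, for example:

$ sudo port -v selfupdate

Keep the SSH session open until the update finishes, then close it.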

That’s it.

Systems: Because You Can’t Count That Fast


Me trying to explain a global SAP ERP network to kids.

On Thursday, it was Bring Your Daughters and Sons to Work Day at Genentech. My little Marlowe is only 6 weeks old, so I didn’t bring her, but my VP shot me a note Wednesday saying she was bringing her two boys, knew of some other kids who would be there, and wondered if I could give a 30-minute talk on systems. I responded, “sure.”

But what the heck do you tell kids ranging from 6 to 12 years old about your corporate ERP system? While fascinating to me, I doubt they’d care about the usual things I work on, and I wouldn’t dare show them PowerPoint slides. So what to do?

Continue reading

Hello Marlowe!

Allow me to introduce to the world, Marlowe Maxine McBride!

Marlowe was born in San Francisco, California at the University of California San Francisco (UCSF) hospital at 12:23pm on Sunday, 10 March 2013 (10 + 3 = 13 … yay! A math trivia birthdate!). She weighed in at 7 pounds 7 ounces, and measured 21 inches in length.

Both Alicia and Marlowe are doing great and getting a lot of needed rest. Marlowe’s middle name is borrowed from my paternal grandmother. Her first name was her mom’s top pick and after witnessing the process of giving birth, mom had final decision rights on the name. Luckily, when we both gave our final suggestion on what her name should be around noon on the 11th (24 hours after her birth), we both agreed Marlowe matched her personality.

For those of you looking for more pics, I will upload highlights by day to the link below for the first week or two, so check back each day for more:

Click for Marlowe Pics!

Cleanup Unused Linux Kernels in Ubuntu

I update Ubuntu with a very simple script I call apt-update that looks like this:

$ cat ./apt-update 
sudo apt-get update; sudo apt-get dist-upgrade; sudo apt-get autoremove

Nothing too crazy there. It updates the apt-get cache, performs the upgrade, and then removes all the residual junk that’s lying around. Well, almost all. If you do this enough, eventually you’ll see the following (assuming you’ve got the default Ubuntu motd script running and you’re logging in from a terminal):

=> /boot is using 86.3% of 227MB

This is because the script I mentioned doesn’t consider old kernel images to be junk. However, unless you’ve got an abnormally large /boot partition, it doesn’t take too many old images to fill it up.
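Before purging anything, it’s worth a quick check of how full /boot is, which kernel you’re running (never remove that one), and which images are installed; for example:

$ df -h /boot                          # how full the /boot partition is
$ uname -r                             # the kernel currently running
$ dpkg -l 'linux-image-*' | grep ^ii   # kernel image packages currently installed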

A quick Google search found Ubuntu Cleanup: How to Remove All Unused Linux Kernel Headers, Images and Modules. The solution on the page had exactly what I was looking for; however, I couldn’t take it at face value. While the article offers an adequate solution, it doesn’t offer much explanation. The remainder of this article explains the details of this one-liner noted in the article above:

$ dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge

Note: Only run this if you’ve rebooted after installing a new kernel.

Ick. Let’s dig into what’s going on here. The pipe characters are chaining a bunch of commands together. Each command’s output becomes the input for the next. Given that, let’s walk through what’s going on in 3 steps.
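As a preview of that walkthrough: stage 1 (dpkg -l 'linux-*') lists every linux-* package dpkg knows about, stage 2 (the sed expression) filters that down to installed kernel packages other than the one currently running, and stage 3 (the xargs) purges what’s left. Dropping the xargs lets you preview the candidate list without removing anything:

$ dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'

If your running kernel (per uname -r) isn’t in that list, the full one-liner purges exactly those packages.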
Continue reading

Remove Chromatic Aberration With Lightroom

I was chatting with Karen Yang last week about photography and she mentioned that she uses Photoshop to open each of her photos individually to edit them. As I’m a huge fan of all of the powerful one-or-two-click tweaks in Lightroom, I thought I’d note one here that I fixed over the weekend and hopefully begin to help Karen see the Light … room. :)

The issue I ran into with my photo is called chromatic aberration. It’s a term that simply means not all colors of light were focused to the same point when captured. Chromatic aberration usually shows up as colors bleeding out along the edges of subjects in your photos. Check out this shot of me as an example:

Continue reading

Managing A Large Photo Library With Lightroom, Dropbox, and Crashplan

Late last year I made the switch to Adobe’s Lightroom 4 from Aperture and Picasa for a variety of reasons, but a big one was photo/file management. As my photo collection got larger I found that I’d need more than my laptop’s hard drive to store them all. Despite all the things I liked about Aperture and Picasa, both had clunky file management and the burden was enough to make me consider alternatives. I’m now very happy with the workflow described below using Lightroom, Dropbox, Crashplan, and a Linux file server.

What I Want to Accomplish

I want my setup to enable the following:

  1. Mobile Photo Management: To be clear, when I say “mobile,” I don’t mean smartphone. I take my Canon 5D Mark III everywhere and like to transfer photos to my Apple MacBook Pro immediately. It’s important that I be able to do whatever I need to do no matter where I am, i.e., I don’t want to be tied to my office.
  2. Large Photo Archive: Photos (especially RAW photos) consume a lot of disk space. I want a file server to store any photos that I’m not actively working on. It has to be huge and scalable.
  3. Passive Transport: I want my files to get to my file server as quickly as possible, but I don’t want it to be an active part of my workflow, e.g., I don’t want to get stuck waiting on an FTP program to finish its batch before I can disconnect from a wifi spot or put my computer to sleep. I’d rather it just happen when the opportunity arises.
  4. One-Time, Offsite Backup: Once my files are on the file server, I want them to back up from the server only (i.e., I don’t want my laptop doing a second backup). Also, as I’ve noted before, I like offsite backup of my photos. A local copy isn’t good enough.
  5. Accessibility: When home, I want to easily access all of my photos, whether they’re on my laptop or in the archive.

Sound like a lot to accomplish? It turns out it’s pretty easy to do. Continue reading