
ZEVO ZFS over Thunderbolt on a Mac

:: Saturday, October 12th, 2013 @ 1:39:46 am


After years of threatening to, I finally set up a ZFS pool in a Mac OS X (10.8.5) environment. Today, I downloaded ZEVO Community Edition, which is self-described as “a momentous, much needed and long-overdue improvement over Apple’s status quo file system (HFS+) that was designed in the mid 1980s — before the Internet existed!” I totally agree. HFS+ is a turd, allowing “bit rot” to silently corrupt your data over time. I am a working photographer with many terabytes of data that I need to store securely. Although I keep many copies of the data (and versioned copies of important stuff), I have, on occasion, gone back to old pictures only to find them corrupted. This scares me. Luckily, ZFS is a file system designed not to allow silent corruption. If you want to know more about ZFS, read its Wikipedia entry.

I’ve been warned that ZEVO may not be supported in the future, and that their version of ZFS for the Mac has the following limitations:

  • No GUI
  • No Deduplication
  • Limited storage capacity (16 TB)
  • Other natural limitations (“resource diet,” they say)

Still, people seem to be successfully using ZEVO on a daily basis, and I’m told it’s very stable, so here I am.

I’m not really a Unix person, and I hate configuring storage via the command line. But ZEVO and ZFS are really brain-dead simple. Anyone can get a ZFS volume up and running by following instructions carefully.

Here’s what I did:

  • I bought an Areca ARC-8050 Thunderbolt RAID 8-Bay. I am pleasantly surprised by how quiet this box is. When you first power it on, it sounds like a jet, but it’s apparently just doing a fan test. The fan is adaptive and typically runs quietly. I can hear the box, but it’s not annoyingly loud.

  • I filled the box with 8 x 4TB Western Digital Red SATA NAS hard drives.

  • I revived an old-ish Mac Mini and upgraded it to 16GB of RAM and an inexpensive SSD. This is the Mac I am going to use with the Areca box because my Mac Pro doesn’t have Thunderbolt. When the new Mac Pro comes out, I’ll move the connection over, assuming that OS X Mavericks doesn’t do something stupid like disallow such things.

  • I registered at the ZEVO website, downloaded ZEVO Community Edition, and installed it.

  • I downloaded the latest firmware for the ARC-8050, unzipped it, and applied the firmware updates using the Areca’s web interface. The firmware update comes with three .bin files for the ARC1882 (which is correct), and you have to apply all of them. There is no feedback from the web GUI after you hit “Submit” until the update completes (or fails!). Scared yet? I was, when I got here. The documentation is poor.

  • I went to Physical Drives->Create Pass-Through Disk in the Areca configuration interface and created a pass-through disk for each of the 8 drives. Creating a pass-through disk allows Mac OS X to see the drive, but doesn’t allow you to use the disk in a RAID set. This is fine because we are going to use ZFS to manage the RAID instead of the Areca’s RAID controller.

  • I initialized and partitioned each disk as GUID and formatted as Mac Extended (Journaled). I don’t think it matters what you format as, but ZEVO definitely wants you to initialize and partition the drives.
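The initialize-and-partition step can also be done from the command line. A sketch, assuming one of the pass-through drives shows up as `disk1` (check `diskutil list` first — this erases the disk; the “Scratch” volume name is throwaway, since ZFS will overwrite it anyway):

```shell
# Repartition the example drive as GPT with a single Journaled HFS+
# volume. The format doesn't matter much here; the point is that the
# disk ends up initialized with a GUID partition table, which is what
# ZEVO wants before pool creation. DESTRUCTIVE: wipes disk1.
diskutil partitionDisk disk1 GPT JHFS+ Scratch 100%
```

Repeat for each of the eight disks (disk1 through disk8 in my case).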

  • At this point, running “zpool showdisks” returned:

DISK DEVICE   SIZE      CONNECTION   DESCRIPTION
/dev/disk1    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk2    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk3    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk4    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk5    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk6    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk7    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media
/dev/disk8    3.64TiB   SAS          WDC WD40EFRX-68WT0N0 Media

  • I created a RAID-Z2 (2-drive fault tolerance) by running:

sudo zpool create -f -o ashift=12 -O casesensitivity=insensitive copepodzfs raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5 /dev/disk6 /dev/disk7 /dev/disk8

I used “-o ashift=12” because pretty much every consumer drive these days is an “Advanced Format” (AF) drive, which means it has large, 4K physical sectors but fools computers into thinking it uses the old 512-byte logical sector size. ZFS can be told to align with a 4K sector size by giving it an ashift of 12. This results in better performance.
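The ashift value is just the base-2 logarithm of the sector size, so ashift=12 means 4096-byte sectors and the legacy 512-byte sector size would be ashift=9. A quick sanity check of the arithmetic:

```shell
# ashift is log2(sector size): 2^12 is the 4K "Advanced Format"
# sector size, and 2^9 is the legacy 512-byte size.
echo $((1 << 12))   # 4096
echo $((1 << 9))    # 512
```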

I used “-O casesensitivity=insensitive” after being given advice by Graham Perrin. Some applications in Mac OS X do not do well with case sensitivity, which is the default setting in ZEVO. You cannot change this after the fact, so you have to decide at pool-creation time.

You can verify that your drive is telling the OS that it uses 512-byte block sizes by running “diskutil info /dev/disk1” (assuming one of your drives is “/dev/disk1”) and looking for “Device Block Size.” Mine says, “Device Block Size: 512 Bytes”.

Creating the RAID-Z2 was instantaneous. ZFS is amazing.

  • I checked my ZFS pool status by running “zpool status copepodzfs” (my pool is called “copepodzfs”):

pool: copepodzfs
state: ONLINE
scan: none requested
config:

 NAME                                           STATE     READ WRITE CKSUM  
 copepodzfs                                     ONLINE       0     0     0  
   raidz2-0                                     ONLINE       0     0     0  
     GPTE_BB07001A-8B58-4C54-AF77-D71CEE3BE391  ONLINE       0     0     0  at disk1s2  
     GPTE_FF882147-9E69-4CD2-AD64-EE216275F239  ONLINE       0     0     0  at disk2s2  
     GPTE_BE799326-E888-4EDE-9CFD-4D604FB728C5  ONLINE       0     0     0  at disk3s2  
     GPTE_22475434-3E60-491A-BD9D-8BE9EDF3239D  ONLINE       0     0     0  at disk4s2  
     GPTE_957351BC-43EC-4F2F-9120-1791090539EF  ONLINE       0     0     0  at disk5s2  
     GPTE_03AB5A7A-BD0A-4EF1-8613-FAB64EFBBFE4  ONLINE       0     0     0  at disk6s2  
     GPTE_EAD32B39-2FEA-4B62-BD7C-E0FA115706C5  ONLINE       0     0     0  at disk7s2  
     GPTE_E66C7105-DF1B-4B4A-9C72-CB74E722C1B9  ONLINE       0     0     0  at disk8s2  

errors: No known data errors

  • I claimed ownership of the new volume using “sudo chown echeng:staff /Volumes/copepodzfs”.
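One follow-up worth knowing about: ZFS only verifies checksums on blocks it actually reads, so data that just sits there still needs a periodic “scrub,” which reads and checks every block in the pool (and lets the raidz2 redundancy repair anything that fails). A sketch, using my pool name from above — run it by hand now and then, or schedule it with cron/launchd:

```shell
# Read every block in the pool and verify its checksum; corrupted
# blocks are rebuilt from raidz2 parity where possible.
sudo zpool scrub copepodzfs

# Check scrub progress and results afterwards:
zpool status copepodzfs
```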

Here’s the volume (below). One thing that is strange (but consistent with what others have seen) is that it is reporting 22.4TB even though ZEVO Community Edition has a 16TB cap.
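The reported capacity is in the right ballpark for raidz2, where usable space is roughly (number of drives − parity drives) × per-disk capacity; the remaining gap comes down to TiB-vs-TB unit conversion and ZFS overhead. A back-of-the-envelope check using the 3.64 TiB per-disk figure from “zpool showdisks” above:

```shell
# raidz2 usable-capacity estimate: (8 drives - 2 parity) x 3.64 TiB each.
awk 'BEGIN { printf "%.1f TiB usable\n", (8 - 2) * 3.64 }'   # 21.8 TiB usable
```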

[Screenshot: the mounted copepodzfs volume]

Speed Tests

Local speed test on the Mac Mini, using Blackmagic Disk Speed Test. This thing is FAST! It’s a little freaky that writes are faster than reads.

[Screenshot: Blackmagic Disk Speed Test, local on the Mac Mini]

Speed when mounted over SMB / gigabit Ethernet:

[Screenshot: Blackmagic Disk Speed Test, over SMB]

Note that this is 100% Mac Mini-limited, since running the same speed test to the Mac Mini’s internal SSD yields similar results:

[Screenshot: Blackmagic Disk Speed Test, over the network to the Mac Mini’s internal SSD]

I get 100MB/s over my wired network when talking to a Synology DS1812+ NAS box, so the network is capable of running at full speed. Hopefully, accessing the device over the network is temporary. If the new Mac Pro and OS X 10.9 work with ZEVO, I’ll be connected directly. Fast, redundant, corruption-resistant and rebuild-friendly? If this works, I’ll be super happy!

UPDATE: Mac OS X cannot share out non-HFS volumes over AFP. I’ve been sharing over SMB. But SMB on the Mac is barely supported, and my SMB shares have been disconnecting every few hours, even if I’m actively accessing files over the share. I’ve installed SMBUp to replace Mountain Lion’s crappy SMB support, and will update here once I’ve used it for a few days.

| Los Altos, CA | Oct 12, 2013 01:39:46
  • FrankLee

    Good stuff, thanks for sharing!

  • Pingback: ZEVO ZFS over Thunderbolt on a Mac : alexking.org

  • Pingback: Don't Use WD Unlocker on a Mac with Mountain Lion

  • LATBauerdick

    Regarding AFP sharing of non-HFS disks: Mac OS X can! See https://github.com/joshado/liberate-applefileserver

    Works very well and reliably for me.

  • Eric Cheng

    Does it work in Mac OS X 10.8.x? I read that someone was having problems with it, and it scared me off…

  • Huck

    Mavericks supports SMB2, which it appears that Apple is now using by default when two Mavericks computers talk to each other.

    I’d love to hear more about how your new ZFS volume is working for you. I’m interested in potentially getting a few two-drive JBOD enclosures (Lacie 2big, WD Duo, or whatever else is out there) and daisy-chaining them for a similar effect.

  • JohnW

    I’ve been using ZEVO with OS X 10.8 heavily for nearly a year now, and can report that it’s been working beautifully. Not a single panic or data problem.

  • Steve

    Eric, do you know if you can use this volume for time-machine backups?

  • Dag Hansen

    Nice to hear that you’ve taken the step over to ZFS. I have been using ZFS for many months now with no problems, until now. ZEVO does not support 10.9 (Mavericks), but I found out that ZFSMac does work with Mavericks. But I can’t mount the ZEVO pool. Trying to fix this…

  • Dag Hansen

    It should be MacZFS: MacZFS.org

  • Eric Cheng

    I believe you can, but I haven’t tried it.

  • Eric Cheng

    Yep. Waiting for something to become stable, and will migrate over…

  • sam.q

    This is interesting, but… why waste the money and performance? An Areca TB (or any other TB RAID) can easily read and write 700–800+ MB/s, and the built-in “Scheduled Volume Check” function will take care of so-called “bit rot” / “dirty bits.”

    I don’t see any advantage to your setup and don’t get what you’re trying to accomplish?!

  • Marcus Bointon

    Scheduled volume checks do not do what ZFS does. With ZFS EVERY read and write is checked, not just when you run a scan once a day/week, and a single bit out of place will be flagged immediately, and probably fixed dynamically.

    What I’d guess everyone would prefer is to have ZFS running natively in the controller hardware so it would be independent of host software and would get higher throughput like you say.

  • Abc

    I believe Mavericks is not supported by ZEVO

  • Eric Cheng

    Correct. I have migrated to OpenZFS, which does work with Mavericks.

