Thursday, July 16, 2009

RAID 10 Performance.

As I noted earlier, I would love to dig into RAID 10 setups, and to learn what sort of performance such a setup can deliver, as in the results quoted below.

Taken for what it is, here's some recent experience I'm seeing (not the precise explanation you're asking for, which I'd like to know as well).
Layout: near=2, far=1
Chunk size: 512K
gtmp01,16G,,,125798,22,86157,17,,,337603,34,765.3,2,16,240,1 ,+++++,+++,237,1,241,1,+++++,+++,239,1
gtmp01,16G,,,129137,21,87074,17,,,336256,34,751.7,1,16,239,1 ,+++++,+++,238,1,240,1,+++++,+++,238,1
gtmp01,16G,,,125458,22,86293,17,,,338146,34,755.8,1,16,240,1 ,+++++,+++,237,1,240,1,+++++,+++,237,1

Layout: near=1, offset=2
Chunk size: 512K
gtmp02,16G,,,141278,25,98789,20,,,297263,29,767.5,2,16,240,1 ,+++++,+++,238,1,240,1,+++++,+++,238,1
gtmp02,16G,,,143068,25,98469,20,,,316138,31,793.6,1,16,239,1 ,+++++,+++,237,1,239,1,+++++,+++,238,0
gtmp02,16G,,,143236,24,99234,20,,,313824,32,782.1,1,16,240,1 ,+++++,+++,237,1,240,1,+++++,+++,238,1
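The raw rows above follow the bonnie++ 1.03 CSV layout: sequential block write is field 5, sequential block read is field 11, and random seeks per second is field 13, with throughput in KB/s. A quick awk sketch (mine, not the poster's) pulls out the headline numbers:

```shell
# Summarize one bonnie++ CSV row (1.03 field layout; KB/s -> MB/s).
echo 'gtmp01,16G,,,125798,22,86157,17,,,337603,34,765.3,2' |
awk -F, '{ printf "%s: write %d MB/s, read %d MB/s, %s seeks/s\n",
           $1, $5/1024, $11/1024, $13 }'
```

So the near=2 array is doing roughly 122 MB/s sequential writes and 329 MB/s sequential reads.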

This is bonnie++ testing a 14-drive RAID10 over dual-multipath FC, with 10K RPM 146GB drives.
RAID5 nets roughly the same read performance (sometimes higher), but its single-threaded writes are limited to about 100MB/sec, and its concurrent-thread read/write access is in the pits (as expected for RAID5).
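The post doesn't show the exact bonnie++ invocation, but a run like the following would produce CSV rows matching the ones above (-s 16G matches the file size column, -n 16 the file-creation count; -q emits the machine-readable line). The mount point is an assumption; the command is echoed here rather than executed, since a real run needs the mounted array:

```shell
# Hypothetical bonnie++ run matching the quoted rows (sketch only).
echo "bonnie++ -q -d /mnt/gtmp01 -s 16G -n 16"
```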

mdadm 2.5.3
Linux 2.6.18
xfs (mkfs.xfs -d su=512k,sw=3 -l logdev=/dev/sda1 -f /dev/md0)
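For reference, arrays like these could be created with mdadm as sketched below. This is not the poster's actual command: the member device names /dev/sd[b-o] are assumptions, --layout n2/o2 is mdadm's spelling of near=2 / offset=2, and --chunk is in KiB. The commands are echoed rather than executed, since running them would destroy data on real disks:

```shell
# Sketch: 14-drive RAID10, 512K chunk, near=2 vs offset=2 layouts.
echo "mdadm --create /dev/md0 --level=10 --raid-devices=14 --layout=n2 --chunk=512 /dev/sd[b-o]"
echo "mdadm --create /dev/md0 --level=10 --raid-devices=14 --layout=o2 --chunk=512 /dev/sd[b-o]"
```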
