Wednesday, July 22, 2009

JTA in Core Java Systems.

Found some interesting material on JTA (the Java Transaction API). Using it in core-Java-based high-availability systems seems to be quite a different experience.

Thursday, July 16, 2009

RAID 10 Performance.

As I had noted earlier, I would love to research RAID 10 setups, and to know what sort of performance such a setup delivers. The numbers below were quoted in a discussion I came across:

Taken for what it is, here's some recent experience I'm seeing (not a precise explanation as you're asking for, which I'd like to know also).
Layout : near=2, far=1
Chunk Size : 512K
gtmp01,16G,,,125798,22,86157,17,,,337603,34,765.3,2,16,240,1 ,+++++,+++,237,1,241,1,+++++,+++,239,1
gtmp01,16G,,,129137,21,87074,17,,,336256,34,751.7,1,16,239,1 ,+++++,+++,238,1,240,1,+++++,+++,238,1
gtmp01,16G,,,125458,22,86293,17,,,338146,34,755.8,1,16,240,1 ,+++++,+++,237,1,240,1,+++++,+++,237,1

Layout : near=1, offset=2
Chunk Size : 512K
gtmp02,16G,,,141278,25,98789,20,,,297263,29,767.5,2,16,240,1 ,+++++,+++,238,1,240,1,+++++,+++,238,1
gtmp02,16G,,,143068,25,98469,20,,,316138,31,793.6,1,16,239,1 ,+++++,+++,237,1,239,1,+++++,+++,238,0
gtmp02,16G,,,143236,24,99234,20,,,313824,32,782.1,1,16,240,1 ,+++++,+++,237,1,240,1,+++++,+++,238,1

Here, testing with bonnie++, 14-drive RAID10 dual-multipath FC, 10K 146G drives.
RAID5 nets the same approximate read performance (sometimes higher), with single-thread writes limited to 100MB/sec, and concurrent-thread R/W access in the pits (obvious for RAID5).

mdadm 2.5.3
linux 2.6.18
xfs (mkfs.xfs -d su=512k,sw=3 -l logdev=/dev/sda1 -f /dev/md0)
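The result lines above look like bonnie++'s CSV output. A run along these lines would produce similar figures; the mount point and user here are my guesses, not the original invocation:

```shell
# Hypothetical bonnie++ run against the mounted md0 array.
# -d: directory to test in, -s: file size (ideally 2x RAM to defeat
# caching), -u: user to run as. The final line of output is the CSV
# record in the same format as the results quoted above.
bonnie++ -d /mnt/md0/tmp -s 16G -u nobody
```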

RAID 10 Setup {"far","offset","near"}

Would love to research the RAID 10 layouts "far", "near", and "offset".

This is a summary of the RAID 10 layouts "far", "offset", and "near", compared against a classic RAID 1+0 setup, as discussed in the thread quoted below.

As quoted by Neil Brown:
On Thursday October 5, madduck [at] wrote:
> If A,B,C are data blocks, a,b their parts, and 1,2 denote their
> copies, the following would be a classic RAID1+0 where 1,2 and 3,4
> are RAID0 pairs combined into a RAID1:
> hdd1 Aa1 Ba1 Ca1
> hdd2 Ab1 Bb1 Cb1
> hdd3 Aa2 Ba2 Ca2
> hdd4 Ab2 Bb2 Cb2
> How would this look with the three different layouts? I think "near"
> is pretty much the same as above, but I can't figure out "far" and
> "offset" from the md(4) manpage.

near=2 would be

hdd1 Aa1 Ba1 Ca1
hdd2 Aa2 Ba2 Ca2
hdd3 Ab1 Bb1 Cb1
hdd4 Ab2 Bb2 Cb2

offset=2 would be
hdd1 Aa1 Bb2 Ca1 Db2
hdd2 Ab1 Aa2 Cb1 Ca2
hdd3 Ba1 Ab2 Da1 Cb2
hdd4 Bb1 Ba2 Db1 Da2

far=2 would be
hdd1 Aa1 Ca1 .... Bb2 Db2
hdd2 Ab1 Cb1 .... Aa2 Ca2
hdd3 Ba1 Da1 .... Ab2 Cb2
hdd4 Bb1 Db1 .... Ba2 Da2

Where the second set starts half-way through the drives.

The advantage of far= is that you can easily spread a long sequential read across the drives. The cost is more seeking for writes. offset= can possibly get similar benefits with a large enough chunk size, though I haven't tried to understand all the implications of that layout. I added it simply because it is a supported layout in DDF and I am working towards DDF support.
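To experiment with the three layouts, mdadm's --layout option takes the layout letter plus the number of copies. A sketch of the commands (device names and array size are placeholders, not from the thread):

```shell
# Create a 4-disk RAID 10 in the "near" layout with 2 copies and a
# 512K chunk, matching the diagrams above. n2 = near, f2 = far,
# o2 = offset; the digit is the number of data copies.
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 \
      --raid-devices=4 /dev/sd[bcde]1

# The same array in the other two layouts would be created with:
#   mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 \
#         --raid-devices=4 /dev/sd[bcde]1
#   mdadm --create /dev/md0 --level=10 --layout=o2 --chunk=512 \
#         --raid-devices=4 /dev/sd[bcde]1
```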

Linux RAID 4 and Linux RAID 5

Recently I was in a discussion on RAID 4 and RAID 5. Both levels store parity information on the disks.

RAID 4 has a dedicated disk to store the parity information.
RAID 5 distributes the parity information across all the disks in the array.
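In both cases the parity is a plain XOR of the data blocks, so a lost block can be rebuilt from the parity and the surviving blocks. A quick sketch with two made-up data bytes:

```shell
#!/bin/sh
# Two hypothetical data bytes from one stripe, and their XOR parity.
d1=$((0x5A)); d2=$((0x3C))
p=$(( d1 ^ d2 ))                 # what the parity disk would store
echo "parity=$p"                 # prints parity=102
# Pretend the disk holding d1 died: XOR the parity with the survivor.
echo "recovered=$(( p ^ d2 ))"   # prints recovered=90 (== 0x5A)
```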

Realtek Gigabit Driver for FreeBSD 5.x+

I was interested in the code of the Realtek driver for FreeBSD. The driver has poll routines for hardware setup and shutdown, and it can also configure the device for 1000/100/10 Mb/s in full- or half-duplex mode.

From the README, the configuration for static as well as DHCP setups is given below:
# ifconfig rl0
# /sbin/dhclient rl0
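For the static case, the usual FreeBSD form is the following; the address and netmask here are made-up examples, not values from the README:

```shell
# Assign a hypothetical static address to the Realtek interface.
ifconfig rl0 inet 192.168.1.10 netmask 255.255.255.0
# To make it persistent across reboots, add to /etc/rc.conf:
#   ifconfig_rl0="inet 192.168.1.10 netmask 255.255.255.0"
```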

The user can use the following commands to change the link speed and duplex mode (the interface name needs a unit number, e.g. rl0):
1. For auto negotiation,
# ifconfig rl0 media autoselect
2. For 1000Mbps full-duplex,
# ifconfig rl0 media 1000baseTX mediaopt full-duplex
3. For 100Mbps full-duplex,
# ifconfig rl0 media 100baseTX mediaopt full-duplex
4. For 100Mbps half-duplex,
# ifconfig rl0 media 100baseTX -mediaopt full-duplex
5. For 10Mbps full-duplex,
# ifconfig rl0 media 10baseT/UTP mediaopt full-duplex
6. For 10Mbps half-duplex,
# ifconfig rl0 media 10baseT/UTP -mediaopt full-duplex

The site for the driver download / source code is provided below.

Quartz Servlet

One of the ways the Quartz scheduler can be incorporated into a project is by using the Quartz servlet:
public class QuartzInitializerServlet extends HttpServlet
As from the javadoc

Quartz Scheduler

I was reading some interesting material about Quartz: the Scheduler APIs, Jobs and Triggers, the RAM store and JDBC store, and the related HA support. Very interesting. org.quartz.impl.QuartzServer has a main method. From the Javadoc:
The main() method of this class currently accepts 0 or 1 arguments; if there is an argument and its value is "console", then the program will print a short message on the console (std-out) and wait for the user to type "exit" - at which time the scheduler will be shutdown.
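Assuming the Quartz jar and its dependencies are on the classpath (the jar names below are placeholders for whatever your Quartz distribution ships), the standalone server can be launched from the shell:

```shell
# Start the standalone Quartz server; the "console" argument makes it
# wait for the user to type "exit" before shutting the scheduler down.
java -cp quartz.jar:commons-logging.jar org.quartz.impl.QuartzServer console
```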

Solaris Build 01/06+ on x86

I was having some problems installing Solaris 01/06 and above on x86 (I could not verify this against earlier versions). The problem seems to occur when an x86 SCSI/SATA/ATA drive is marked as a secondary drive: the Solaris installer is not able to detect the drive.
Internally it seems to be a BIOS problem: the BIOS does not provide a way to set up the cylinder/head/sector geometry.