Found some interesting stuff on JTA. It seems quite different from using core Java based high-availability approaches.
http://www.datadirect.com/developer/jdbc/topics/jta/index.ssp
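A minimal sketch of what JTA usage looks like in code, assuming a container-managed UserTransaction and an XA DataSource bound in JNDI (the JNDI names, table and values below are illustrative, not from the article above):

import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class JtaSketch {
    public static void transfer() throws Exception {
        InitialContext ctx = new InitialContext();
        // Both lookups assume resources configured in the application server's JNDI tree.
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDS");

        utx.begin();
        Connection con = ds.getConnection();
        try {
            PreparedStatement ps = con.prepareStatement(
                    "UPDATE accounts SET balance = balance - ? WHERE id = ?");
            ps.setInt(1, 100);
            ps.setInt(2, 42);
            ps.executeUpdate();
            ps.close();
            utx.commit();   // all XA resources enlisted in the transaction commit together
        } catch (Exception e) {
            utx.rollback(); // any failure rolls back the whole unit of work
            throw e;
        } finally {
            con.close();
        }
    }
}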
Wednesday, July 22, 2009
Saturday, July 18, 2009
Storage Hardware: RAID, Fibre Channel, Mac OS X and SGI IRIX 6.5
Have had a look at some of the latest hardware/software/networking/chipset combinations for SAN/RAID/FC.
http://support.apple.com/kb/HT1769
http://techpubs.sgi.com/library/tpl/cgi-bin/browse.cgi?coll=0650&db=bks&cmd=toc&pth=/SGI_Admin/FCRAID_AG
Thursday, July 16, 2009
RAID 10 Performance.
As I had noted earlier, I would love to research RAID 10 setups, and to know what sort of performance such a setup delivers, as quoted at kernel.org:
Taken for what it is, here's some recent experience I'm seeing (not a precise explanation as you're asking for, which I'd like to know also).
Layout : near=2, far=1
Chunk Size : 512K
gtmp01,16G,,,125798,22,86157,17,,,337603,34,765.3,2,16,240,1 ,+++++,+++,237,1,241,1,+++++,+++,239,1
gtmp01,16G,,,129137,21,87074,17,,,336256,34,751.7,1,16,239,1 ,+++++,+++,238,1,240,1,+++++,+++,238,1
gtmp01,16G,,,125458,22,86293,17,,,338146,34,755.8,1,16,240,1 ,+++++,+++,237,1,240,1,+++++,+++,237,1
Layout : near=1, offset=2
Chunk Size : 512K
gtmp02,16G,,,141278,25,98789,20,,,297263,29,767.5,2,16,240,1 ,+++++,+++,238,1,240,1,+++++,+++,238,1
gtmp02,16G,,,143068,25,98469,20,,,316138,31,793.6,1,16,239,1 ,+++++,+++,237,1,239,1,+++++,+++,238,0
gtmp02,16G,,,143236,24,99234,20,,,313824,32,782.1,1,16,240,1 ,+++++,+++,237,1,240,1,+++++,+++,238,1
Here, testing with bonnie++, 14-drive RAID10 dual-multipath FC, 10K 146G drives.
RAID5 nets the same approximate read performance (sometimes higher), with single-thread writes limited to 100MB/sec, and concurrent-thread R/W access in the pits (obvious for RAID5).
mdadm 2.5.3
linux 2.6.18
xfs (mkfs.xfs -d su=512k,sw=3 -l logdev=/dev/sda1 -f /dev/md0)
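For reference, CSV lines in that format typically come from a bonnie++ invocation along these lines; this is only a guess at the flags (test directory, file size in MiB, small-file count in multiples of 1024, machine label, user to run as when started as root), not the original command:
# bonnie++ -d /mnt/md0 -s 16384 -n 16 -m gtmp01 -u root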
RAID 10 Setup {"far","offset","near"}
Would love to research the RAID 10 layouts "far", "near" and "offset". Below is a summary of these layouts compared against a classic RAID 1+0 setup, as per the discussion at kernel.org.
As quoted by Neil Brown at kernel.org
On Thursday October 5, madduck [at] madduck.net wrote:
snip<..>
> If A,B,C are data blocks, a,b their parts, and 1,2 denote their
> copies, the following would be a classic RAID1+0 where 1,2 and 3,4
> are RAID0 pairs combined into a RAID1:
>
> hdd1 Aa1 Ba1 Ca1
> hdd2 Ab1 Bb1 Cb1
> hdd3 Aa2 Ba2 Ca2
> hdd4 Ab2 Bb2 Cb2
>
> How would this look with the three different layouts? I think "near"
> is pretty much the same as above, but I can't figure out "far" and
> "offset" from the md(4) manpage.
near=2 would be
hdd1 Aa1 Ba1 Ca1
hdd2 Aa2 Ba2 Ca2
hdd3 Ab1 Bb1 Cb1
hdd4 Ab2 Bb2 Cb2
offset=2 would be
hdd1 Aa1 Bb2 Ca1 Db2
hdd2 Ab1 Aa2 Cb1 Ca2
hdd3 Ba1 Ab2 Da1 Cb2
hdd4 Bb1 Ba2 Db1 Da2
far=2 would be
hdd1 Aa1 Ca1 .... Bb2 Db2
hdd2 Ab1 Cb1 .... Aa2 Ca2
hdd3 Ba1 Da1 .... Ab2 Cb2
hdd4 Bb1 Db1 .... Ba2 Da2
Where the second set starts half-way through the drives. The advantage of far= is that you can easily spread a long sequential read across the drives. The cost is more seeking for writes. offset= can possibly get similar benefits with large enough chunk size, though I haven't tried to understand all the implications of that layout. I added it simply because it is a supported layout in DDF and I am working towards DDF support.
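To experiment with these layouts on a small md array, the creation commands would look roughly as follows (a sketch: device names are illustrative, the 512K chunk matches the benchmark above, and n2/o2/f2 mean two copies in the near, offset and far arrangements respectively).
1. For the near layout,
# mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
2. For the offset layout,
# mdadm --create /dev/md0 --level=10 --layout=o2 --chunk=512 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
3. For the far layout,
# mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1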
Linux RAID 4 and Linux RAID 5
Recently been in a discussion about RAID 4/5 on Kernel.org. Both RAID 4 and RAID 5 store parity information on the disks.
RAID 4 has a dedicated disk to store the parity information.
RAID 5 distributes the parity information across all the disks in the array.
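The parity itself is simply the XOR of the data blocks in a stripe, which is what allows a single failed disk to be rebuilt. A tiny sketch of the idea (illustrative only, not tied to any particular RAID implementation):

public class ParitySketch {
    // Parity block for a stripe of three data blocks: P = D1 ^ D2 ^ D3.
    static byte[] parity(byte[] d1, byte[] d2, byte[] d3) {
        byte[] p = new byte[d1.length];
        for (int i = 0; i < p.length; i++) {
            p[i] = (byte) (d1[i] ^ d2[i] ^ d3[i]);
        }
        return p;
    }
    // If d2 is lost, each byte can be rebuilt as d2[i] = d1[i] ^ d3[i] ^ p[i].
}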
Labels:
Linux, Kernel.org, RAID 4/5, Parity
RealTek Gigabit Driver for FreeBSD 5.x+
Was interested in the code that the RealTek driver for FreeBSD provides. The driver has poll routines for hardware setup and shutdown, and it also has functionality to configure the device for 1000/100/10 Mb/s in full-duplex or half-duplex mode.
As from the README:
The configuration for a static address as well as for DHCP is given below:
# ifconfig rl0 xxx.xxx.xxx.xxx
else
# /sbin/dhclient rl0
The user can use the following commands to change the link speed and duplex mode (a sketch of the full media syntax is given after the list).
1. For auto negotiation,
#ifconfig rl
2. For 1000Mbps full-duplex,
#ifconfig rl
3. For 100Mbps full-duplex,
#ifconfig rl
4. For 100Mbps half-duplex,
#ifconfig rl
5. For 10Mbps full-duplex,
#ifconfig rl
6. For 10Mbps half-duplex,
#ifconfig rl
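The commands above are truncated; on FreeBSD the media selection syntax generally looks like the following. This is a sketch only: the interface unit number and the exact media strings depend on the driver version and should be checked against ifconfig(8) and the driver's man page.
# ifconfig rl0 media autoselect
# ifconfig rl0 media 1000baseTX mediaopt full-duplex
# ifconfig rl0 media 100baseTX mediaopt full-duplex
# ifconfig rl0 media 100baseTX mediaopt half-duplex
# ifconfig rl0 media 10baseT/UTP mediaopt full-duplex
# ifconfig rl0 media 10baseT/UTP mediaopt half-duplex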
The site for the driver download and source code is given below:
http://www.realtek.com.tw/downloads/downloadsView.aspx?Langid=1&PNid=5&PFid=5&Level=5&Conn=4&DownTypeID=3
Quartz Servlet
One of the ways in which the Quartz scheduler can be incorporated into a project is by using the Quartz servlet.
public class QuartzInitializerServlet extends HttpServlet
As from the javadoc
http://www.opensymphony.com/quartz/api/
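A rough idea of how this servlet is wired into a web application's web.xml (a sketch based on the Quartz documentation; the properties file location is illustrative):

<servlet>
    <servlet-name>QuartzInitializer</servlet-name>
    <servlet-class>org.quartz.ee.servlet.QuartzInitializerServlet</servlet-class>
    <init-param>
        <param-name>config-file</param-name>
        <param-value>/quartz.properties</param-value>
    </init-param>
    <init-param>
        <param-name>shutdown-on-unload</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <param-name>start-scheduler-on-load</param-name>
        <param-value>true</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>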
Labels:
HTTP,
Java Doc,
Quartz Scheduler,
Quartz Servlet,
Servlet
Quartz Scheduler
Was reading some interesting stuff about Quartz: the Scheduler APIs, Jobs and Triggers, the RAM-based and JDBC-based job stores, and the related HA support. Very interesting. org.quartz.impl.QuartzServer has a main method; as from the javadoc:
The main() method of this class currently accepts 0 or 1 arguments; if there is an argument and its value is "console", then the program will print a short message on the console (std-out) and wait for the user to type "exit", at which time the scheduler will be shut down.
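For context, scheduling a job against the 1.x-era Quartz API looks roughly like this (a sketch; the job class, group and timings are illustrative):

import java.util.Date;
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.SimpleTrigger;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzSketch implements Job {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.out.println("job fired at " + new Date());
    }

    public static void main(String[] args) throws Exception {
        // Uses the RAM-based job store by default; quartz.properties can switch it to a JDBC store.
        Scheduler sched = new StdSchedulerFactory().getScheduler();
        JobDetail job = new JobDetail("job1", Scheduler.DEFAULT_GROUP, QuartzSketch.class);
        // Fire now, then every 10 seconds, indefinitely.
        SimpleTrigger trigger = new SimpleTrigger("trigger1", Scheduler.DEFAULT_GROUP,
                new Date(), null, SimpleTrigger.REPEAT_INDEFINITELY, 10000L);
        sched.scheduleJob(job, trigger);
        sched.start();
    }
}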
Labels:
High Availability,
JDBC,
Quartz Scheduler,
RAM
Solaris Build 01/06+ on x86
I was having some problems with the installation of Solaris 10 01/06 and above on x86; I could not verify this against any previous versions. The problem seems to be with an x86 SCSI/SATA/ATA drive marked as a secondary drive: the Solaris installer is not able to detect the drive.
Internally it seems to be a BIOS problem: the BIOS does not appear to provide a way to set up the cylinder/sector/head geometry.
Labels:
BIOS,
Hard Disk,
Intel x86,
Solaris 10 01/06