RAC - storage configuration (11g R2, SLES 11.0)
RAC - storage configuration [message #576632] Wed, 06 February 2013 14:16
kamilp
Messages: 10
Registered: August 2012
Junior Member
Hi,

I was given the following task: I have to reinstall a two-node Oracle 11g Database Standard Edition RAC with shared ASM storage attached via FibreChannel. I am quite new to RAC and ASM :) The previous instance crashed due to a hardware failure of both controllers. I have recovered the database and put it into production on another server. The database size is around 300GB and is expected to grow to 600GB over the next ~5 years. The database is combined OLTP and OLAP (a high volume of small insert/update transactions on current data, while archiving processed data forever and allowing analytics over the whole archive).
I have the following hardware available:
2x db server, 8-core Intel CPU, 12GB RAM, with FC
1x FC storage with 10x 300GB SAS 15k rpm
1x NFS storage, 6x 1TB SATA, RAID 5, connected over 1GBit Ethernet
Previously the NFS was used for exports (no RMAN backups) and the FC storage was a single 9x300GB RAID 5 split into 3 volumes: crs, racdb and fra. Fail of the year.

Now I want to create a new setup as below:

2x 300GB - mirror ~ 300GB space for cluster storage (crs) and database redo logs
2x 300GB - mirror ~ 300GB space for undo and temporary tablespaces
6x 300GB - raid10 ~ 900GB space for database user data

NFS - 6TB - fast recovery area - archive logs and rman backups
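
For clarity, the ASM disk groups I have in mind would be created roughly like this (just a sketch; the device paths are placeholders for whatever LUNs the controller will present, and redundancy is external because the controller does the mirroring/RAID10):

SQL> -- placeholders: replace the /dev/mapper/* paths with the actual LUN devices
SQL> CREATE DISKGROUP CRS_REDO  EXTERNAL REDUNDANCY DISK '/dev/mapper/crs_redo_lun';
SQL> CREATE DISKGROUP UNDO_TEMP EXTERNAL REDUNDANCY DISK '/dev/mapper/undo_temp_lun';
SQL> CREATE DISKGROUP DATA      EXTERNAL REDUNDANCY DISK '/dev/mapper/data_lun';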

Please give me any comments and suggestions for the new setup.
Thanks a lot !



Re: RAC - storage configuration [message #589928 is a reply to message #576632] Fri, 12 July 2013 02:18
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
I've never seen redo logs that need anything like 300GB, let alone sharing a LUN that size. The CRS files (OCR and voting disks) do not need to be huge either; about 10GB in total is plenty.

First, you must plan the IPs for the 2 nodes; in 11gR2 you need at least:

- Public IP
- Private IP
- VIP
- Scan IP

Example:
Public IP addr (rac node1):       192.168.1.60  node-1-public
Virtual IP addr (rac node1):      192.168.1.61  node-1-vip
Private Interconnect (rac node1): 172.16.1.60   node-1-private

Public IP addr (rac node2):       192.168.1.70  node-2-public
Virtual IP addr (rac node2):      192.168.1.71  node-2-vip
Private Interconnect (rac node2): 172.16.1.70   node-2-private

# 3 SCAN IP addresses associated with the cluster, registered with the DNS server
192.168.1.80 cluster-scan
192.168.1.81 cluster-scan
192.168.1.82 cluster-scan
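
Once the SCAN name is in DNS, a quick sanity check (cluster-scan is just the example name above) is that a lookup returns all three addresses:

allnodes# nslookup cluster-scan

It should return 192.168.1.80, 192.168.1.81 and 192.168.1.82, served round-robin by the DNS server.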


Well, you might suggest that the network administrator divide the switch into 2 VLANs: one for the public, virtual and SCAN IPs, the other for the private interconnect IPs. As you see above, the private IPs are 172.x, different from 192.x.

Why? The answer is: if you place the public, VIP and private addresses in the same network range, for example 192.168.10.x/24, and you set the private address lower than the others (for example, public is 192.168.10.60 and private is 192.168.10.50), then whenever the cluster restarts, it can fail to start.

Now, for the storage: if it is a SAN (Storage Area Network), you (or a storage administrator) must carve out the LUNs and present each LUN to the nodes. But if it is simple shared storage, you might consider:

- NFS
- ZFS

Note: with both NFS and ZFS, you should disable multipathing.
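
If you end up putting database files (or the FRA) on NFS, the mount options matter. This is only a sketch of the commonly cited options for Oracle over NFS; the server name and paths are placeholders, and you should check My Oracle Support for the exact options for your platform:

allnodes# mount -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 nfs-server:/export/fra /u02/fra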

I will take ZFS (accessed over iSCSI, on Solaris) as an example:

Enable RPC on all nodes
Execute as root on node-1 & node-2.
allnodes# svccfg -s svc:/network/rpc/bind setprop config/local_only=false
allnodes# svcadm refresh svc:/network/rpc/bind
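
To be safe, you can confirm the property change took effect (optional check):

allnodes# svcprop -p config/local_only svc:/network/rpc/bind
false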

Disable iSCSI multipathing on node-1 & node-2:
As the root user, edit the file /kernel/drv/iscsi.conf and set:
mpxio-disable="yes"

Enable iSCSI on node-1 & node-2
Enable the iSCSI initiator:
allnodes# svcadm enable svc:/network/iscsi/initiator:default
allnodes# svcs -a|grep -i iscsi


Check the service status:
# svcs -a|grep -i iscsi
disabled 10:20:33 svc:/system/iscsitgt:default
online 10:25:40 svc:/network/iscsi/initiator:default


Discover iSCSI targets and verify
Execute the commands below as the root user on both node-1 & node-2:
allnodes# iscsiadm modify discovery --sendtargets enable
allnodes# iscsiadm add discovery-address 192.168.1.96
allnodes# devfsadm -i iscsi
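
To verify that the LUNs are actually visible after discovery (a quick sanity check; the target names will differ on your system):

allnodes# iscsiadm list target
allnodes# echo | format

The new iSCSI LUNs should appear in the disk list printed by format.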


Now you can carve out the LUNs and set up the raw devices as you want.
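
For example, if the slices will go to ASM, a typical final step is to hand them over to the Grid Infrastructure owner (the grid:asmadmin owner and the device name here are placeholders for your own setup):

allnodes# chown grid:asmadmin /dev/rdsk/c2t1d0s6
allnodes# chmod 660 /dev/rdsk/c2t1d0s6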