
Sunday, November 13, 2011

Oracle RAC installation in Solaris 11 Container

Here are the steps to create a container on Solaris 11 to install Oracle RAC:


  • If the servers do not have enough physical NICs
    • Virtualize the NICs
  • Create VNICs for the public and private interfaces
    • dladm create-vnic -l igb0 vnic5
    • dladm create-vnic -l igb2 priv5
  • Create an exclusive-IP container with these interfaces
    • set ip-type=exclusive
    • add net; set physical=vnic5; end
    • add net; set physical=priv5; end
  • Set up the private and public interfaces
    • ipadm show-if
    • ipadm create-ip priv5
    • ipadm create-addr -T static -a 199.6.6.6/24 priv5/v4addr
    • ipadm show-addr
  • There is no need to update any /etc files with the hostname if the ipadm command is used
  • Configure shared memory
    • set max-shm-memory=50G
  • Add the ASM device(s) to the zone (a complete snippet follows this list)
    • add device
      • set match=<device name>
      • set allow-partition=true
      • set allow-raw-io=true
    • end
  • Give the container the right privileges
    • set limitpriv=default,proc_priocntl
  • Install RAC as you would in the global zone
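Putting the device-related settings together, a minimal zonecfg snippet for exposing one raw ASM disk to the zone might look like the following (the device path is a placeholder; substitute your actual ASM disk):

add device
set match=/dev/rdsk/c3t5d0s6
set allow-partition=true
set allow-raw-io=true
end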



Suppose you need to create a container that uses igb0 as the public interface and igb2 as the private interface for an Oracle RAC installation. It uses a pool with one CPU in it and mounts two file systems. This post summarizes the steps to create such a container.
  • Create the VNICs for the public and private interfaces
From the global zone, create VNICs over the public interface igb0 and the private interface igb2:
  • # dladm create-vnic -l igb0 vnic5
  • # dladm create-vnic -l igb2 priv5
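You can confirm that the VNICs exist from the global zone:
  • # dladm show-vnic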

  • Create a pool with one CPU in it:
poolcfg -c 'create pset rac5_set (uint pset.min=1; uint pset.max=1)'
poolcfg -c 'create pool rac5Pool'
poolcfg -c 'associate pool rac5Pool (pset rac5_set)'
poolcfg -c 'transfer to pset rac5_set (cpu 4)'
pooladm -c
pooladm

Check out http://ritukamboj.blogspot.com/search/label/ResourcePool for more info.
  • Create a zone:
Create the racZone5 directory under /zonepools/zones and issue the following command:
  • zonecfg -z racZone5 -f zonetemplate.cfg
where zonetemplate.cfg contains:

create
set zonepath=/zonepools/zones/racZone5
set autoboot=false
set limitpriv=default,proc_priocntl
set ip-type=exclusive
add net
set physical=vnic5
end
add net
set physical=priv5
end
set max-shm-memory=50G
add fs
set dir=/u05
set special=/u05
set type=lofs
end
add fs
set dir=/installer
set special=/installer
set type=lofs
end
set pool=rac5Pool
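Before installing, you can optionally review and sanity-check the resulting configuration from the global zone:
  • zonecfg -z racZone5 info
  • zonecfg -z racZone5 verify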
Then run:
  • zoneadm -z racZone5 install
  • zoneadm -z racZone5 boot
  • zlogin -C racZone5 (for initial configuration)
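To check the zone's state after each of these steps, run the following from the global zone:
  • zoneadm list -cv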


  • Verify that you can see the VNICs in the container
root@etchst8-zone22:~# dladm show-link
LINK                CLASS     MTU    STATE    OVER
vnic5               vnic      1500   up       ?
priv5               vnic      1500   up       ?
  • Verify that the public interface is plumbed (this is done during initial configuration)
root@etchst8-zone23:~# ipadm show-if
IFNAME     CLASS    STATE    ACTIVE OVER
lo0        loopback ok       yes    --
vnic6      ip       ok       yes    --
  • Plumb the private interface
  • ipadm show-if
  • ipadm create-ip priv6
  • ipadm create-addr -T static -a 199.6.6.6/24 priv6/v4addr
root@etchst8-zone23:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
vnic6/v4          static   ok           10.6.138.55/24
priv6/v4addr      static   duplicate    199.6.6.6/24
lo0/v6            static   ok           ::1/128
vnic6/v6          addrconf ok           fe80::8:20ff:fe31:49c4/10
  • Create the Oracle user and the required directories for the Oracle installation; for example:
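A typical sequence inside the zone (the UIDs, GIDs, and paths shown mirror the values used in the RAC Preinstallation Check II post below; adjust them to your environment):

# groupadd -g 1000 oinstall
# groupadd -g 1031 dba
# useradd -u 1101 -g oinstall -G dba oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01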

Wednesday, June 8, 2011

Cluster Verification Utility

  • To view which prerequisites are failing
    • ./runcluvfy.sh comp sys -n t5120-241-06,t5220-241-03 -p crs
  • To generate a fixup script in the /tmp directory that fixes the failing prerequisites
    • ./runcluvfy.sh stage -pre crsinst -fixup -fixupdir /tmp/ritu.sh -n t5120-241-06,t5220-241-03
  • Additional commands:
    • ./runcluvfy.sh -help
    • ./runcluvfy.sh stage -list or stage -help
    • ./runcluvfy.sh comp -list or comp -help
    • System requirement verification
      • ./runcluvfy.sh comp sys -n {node list} -p {crs|database} -verbose
    • Storage verification
      • ./runcluvfy.sh comp ssa -n {node list} -s {storageid_list} -verbose
  • Detailed Documentation:

Wednesday, November 3, 2010

RAC Preinstallation Check II

  • Refer to the previous post to set up SSH connectivity between the nodes
  • Create the required user, groups, and directories
    • # groupadd -g 1000 oinstall
    • # groupadd -g 1031 dba
    • # useradd -u 1101 -g oinstall -G dba oracle
    • # mkdir -p /u01/app/11.2.0/grid
    • # mkdir -p /u01/app/oracle
    • # chown -R oracle:oinstall /u01
    • # chmod -R 775 /u01/
  • Before running the installer, create the fixup scripts to fix the failing prerequisites:
    • ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
  • Set the following values as root (ndd settings do not persist across reboots; see the sketch at the end of this post)
    • ndd -set /dev/tcp tcp_smallest_anon_port 9000
    • ndd -set /dev/tcp tcp_largest_anon_port 65500
    • ndd -set /dev/udp udp_smallest_anon_port 9000
    • ndd -set /dev/udp udp_largest_anon_port 65500
  • Regarding the NTP (Network Time Protocol) service
    • NTP enables time synchronization across the network. On Solaris it is implemented by the xntpd daemon, which sets and keeps the system time-of-day in sync with Internet standard time servers. Details: http://www.sun.com/blueprints/0701/NTP.pdf
    • Verify that the service is up
      • svcs ntp
    • Enable the service
      • svcadm enable ntp
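Because the ndd settings above are lost on reboot, a common approach is a small legacy rc script that reapplies them at boot. A minimal sketch, assuming a legacy rc-script mechanism (the script name and run level are illustrative, not mandated by Oracle):

# cat > /etc/init.d/oracle-ndd <<'EOF'
#!/sbin/sh
# Reapply the ephemeral port ranges recommended for Oracle RAC
ndd -set /dev/tcp tcp_smallest_anon_port 9000
ndd -set /dev/tcp tcp_largest_anon_port 65500
ndd -set /dev/udp udp_smallest_anon_port 9000
ndd -set /dev/udp udp_largest_anon_port 65500
EOF
# chmod 744 /etc/init.d/oracle-ndd
# ln -s /etc/init.d/oracle-ndd /etc/rc2.d/S99oracle-ndd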

Monday, October 11, 2010

RAC Preinstallation Check

  • Memory requirement
    • Every node should have a minimum of 1 GB of RAM
    • prtconf | grep Memory
  • Swap requirement
    • Swap space should be twice the amount of RAM for systems with 2 GB of RAM or less, equal to RAM for systems with 2 GB to 8 GB of RAM, and 0.75 times the size of RAM for systems with more than 8 GB. For example, a node with 32 GB of RAM needs 24 GB of swap.
    • Verify the configured swap space
    • swap -s
  • /tmp space
    • At least 400 MB of free disk space is required in /tmp
    • df -h /tmp
  • Maximum open file descriptors
    • To check: ulimit -n
    • To set: ulimit -n <new value>
  • Network requirements
  • You should have a minimum of two network interfaces per node
    • dladm show-link
  • You should have three network addresses for each node
    • Public IP address
      • ping <public-node-name>
    • Virtual IP address : Used by applications for failover in case of node failure
      • Do not plumb the virtual IP address before installation; pinging the virtual address should fail
      • The virtual IP address is on the same subnet as your public interface
    • Private IP address: Used by Oracle clusterware for internode communication
      • It should be on the same subnet reserved for private networks such as 10.0.0.0 or 192.168.0.0
      • It should use dedicated switches or a physically separate private network, reachable only by the cluster member nodes, preferably over high-speed NICs
      • It cannot be registered on the same subnet that is registered to a public IP address
      • ping <private-node-name>
  • The /etc/hosts file should have the following entries for each node
    • The public node name and public-node-name.domainname
    • The private node name and private-node-name.domainname
    • The VIP node name and vip-node-name.domainname
  • About interfaces on all nodes
    • The public interface names associated with the network adapters must be the same on all nodes, and the private interface names must likewise be the same on all nodes. For example, with a two-node cluster you cannot configure eth0 as the public interface on node1 but eth1 as the public interface on node2; eth0 must be the public interface on both nodes. Configure the private interfaces on the same network adapters as well: if eth1 is the private interface for node1, then eth1 should be the private interface for node2.
  • SSH connectivity
    • Passwordless SSH connectivity must be established between all cluster nodes. OUI can configure passwordless SSH automatically; for that to work, make sure there are no stty commands in the oracle user's profile. By default OUI looks for public keys in the /usr/local/etc directory and for the ssh-keygen binary in /usr/local/bin, whereas on Solaris the public keys are found under /etc/ssh and ssh-keygen is under /usr/bin, so the following symlinks need to be created before starting OUI
      • ln -s /etc/ssh /usr/local/etc
      • ln -s /usr/bin /usr/local/bin
    • Create the links as mentioned above and invoke the sshsetup.sh script in the staging area. Verify that you can ssh without a password
      • ssh <node1> date
      • ssh <node2> date
  • More info
  • Additional notes
  • Verifying the existence of public IPs and VIPs
    • Use ypwhich, ypcat hosts    
  • Network setup : Refer to IP services guide for details
    • Issue the dladm show-link command to find the installed interfaces
      • Issue the ifconfig -a command to determine which interfaces are plumbed
        • To configure and plumb an interface named e1000g1
          • ifconfig e1000g1 plumb up
          • ifconfig e1000g1 <address> netmask +
          • Verify that the interface is up: ifconfig -a
        • To make the e1000g1 interface configuration persistent across reboots (see the consolidated example at the end of this post)
          • Create the file /etc/hostname.e1000g1
          • Add the address of the interface to this file
          • vi /etc/hostname.e1000g1
          • Add entries for the new interface to /etc/inet/ipnodes
        • Perform a reconfiguration reboot
          • reboot -- -r       
        • Verify that the interface is up: ifconfig -a
      • Solaris supports two types of interfaces
        • Legacy interfaces
          • These are DLPI and GLDv2 interfaces. Some legacy interface types are eri, qfe, and ce
        • Non-VLAN interfaces
          • These interfaces are GLDv3 interfaces.
          • bge, xge, and e1000g are non-VLAN interfaces
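As a consolidated example of the persistent interface configuration described above, suppose e1000g1 should come up at boot with a private address (the address and hostname below are illustrative):

# echo 192.168.10.11 > /etc/hostname.e1000g1
# echo "192.168.10.11  rac-node1-priv" >> /etc/inet/ipnodes
# reboot -- -r
# ifconfig -a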