
Wednesday, July 10, 2013

Pluggable Database: An Introduction


What is a PDB:


Pluggable Database, also referred to as Oracle Multitenant, is a new option of Oracle Database 12c that enables dense database consolidation, rapid provisioning and cloning, and rapid patching and upgrades. It enables administrators to manage many databases as one and allows resource management between databases.

In 12c, you can create two types of database: container databases and non-container databases. A container database, referred to as a CDB, is a database that can contain several PDBs. A non-container database, referred to as a non-CDB, is the regular database that existed prior to 12c.

Multitenancy is just a deployment choice. When you create a database, you choose whether you want a CDB or a non-CDB. Once you make that choice and create the database, nothing else changes: the way you connect to the database and the way you interact with it remain the same for both a CDB and a non-CDB.
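
You can check which flavor you are connected to from any 12c SQL*Plus session; a minimal sketch:

select name, cdb from v$database;    -- the CDB column shows YES for a CDB, NO for a non-CDB
show con_name                        -- shows the current container, e.g. CDB$ROOT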


Business challenges that PDBs address:


Prior to 12c, database consolidation happened in one of the following two ways:

1) Server or OS consolidation: You would start several database instances on a single server. Each instance would run in a server-virtualized environment such as LDoms or an OS-virtualized environment such as Solaris Containers. The advantage of this approach is that it provides complete isolation and maximum security: no two distinct database users can ever meddle with each other, as there are usually as many instances as there are databases. However, this form of consolidation cannot be very dense because no database resources are shared: there are as many copies of the Oracle background processes and the SGA as there are databases, so the system soon exhausts its CPU and memory.

2) Consolidation in the database: You would use schema-based consolidation, where each database is represented as a schema in a single monolithic database instance. In this approach, you can consolidate many databases (schemas) into one database instance, attaining dense consolidation. However, the disadvantages are weak (or no) namespace isolation and a very weak security model: a database user can view or change data in another database schema.

Pluggable databases provide consolidation within the database. In a way, it is a supported version of schema-based consolidation with the disadvantages removed. It also provides additional features such as pluggability and rapid cloning.

Introduction to the Multitenant architecture:

 

At a high level, an Oracle Database deployment consists of an Oracle database instance and an Oracle database. The instance consists of a group of Oracle background processes and shared memory. The database consists of a group of data files, some of which contain user data and some of which contain Oracle system data. So when you start an Oracle database, you essentially have a set of background processes, a chunk of shared memory, and data files for storing the data. All of this gets replicated for each database that is created: each database needs additional shared memory and its own set of Oracle background processes, which quickly consumes the available server resources.

With the multitenant architecture, you can create an empty container database and then plug in databases as PDBs. Now you are using just one SGA and one set of Oracle background processes, thereby saving system resources and enabling dense database consolidation.

By consolidating existing databases as PDBs, administrators can manage many databases as one. For example, tasks such as backup and disaster recovery are performed at the level of the CDB. The multitenant architecture also provides the flexibility to take backups and perform point-in-time recovery at the PDB level, if required. 

At a physical level, an Oracle database consists of a bunch of files: data files as well as bootstrap files like control files, the spfile, and so on. The physical data files implement the logical entity tablespace, which in turn implements tables. Tables hold only three kinds of data:

1) Metadata that describes the Oracle system
2) Metadata that describes the user schemas
3) Actual user data

Prior to 12c, the metadata that describes the Oracle system and the metadata that describes user schemas were stored collectively in a set of tables referred to as the data dictionary. These data dictionary tables were stored in dedicated tablespaces, namely SYSTEM and SYSAUX.


Initially, when you create the database, the data dictionary tables hold only Oracle system metadata. Later, when users create their own data, the data dictionary tables contain both user and system metadata, and the user tablespaces contain only user data.

Now, if you want to transport a user tablespace from one database to another, you can simply copy the data files containing the tablespace from one database location to another. However, you then have to go through the data dictionary views to find the user metadata and transport it separately. If the user metadata were stored separately from the Oracle system metadata, you could simply copy the user's tablespace metadata from one location to another to transport a tablespace.

Hence, in 12c the data dictionary tables are horizontally partitioned: the partition containing Oracle system metadata is stored in the CDB$ROOT container, and the partition containing user metadata is stored in the PDB. Now, transporting a tablespace from one database to another means copying the data files as well as the files containing the metadata of the user schemas. The user metadata is stored in the SYSTEM and SYSAUX tablespaces of the PDB, and the Oracle metadata is stored in the SYSTEM and SYSAUX tablespaces of CDB$ROOT.

In other words, each data dictionary table now occurs twice, once for Oracle System and once for the user schema. Every query against a data dictionary table is now a union between the two occurrences of that table.

The primary physical difference between a CDB and a non-CDB is the system data files. A non-CDB has only one set of system data files. In contrast, the root and each PDB in a CDB have their own SYSTEM and SYSAUX tablespaces, each with a complete set of dictionary tables describing the objects it contains.
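
From the root, this split dictionary is exposed through the CDB_ family of views, which aggregate the corresponding DBA_ views across all containers; a small illustrative query (run as a common user in CDB$ROOT):

select con_id, tablespace_name from cdb_tablespaces order by con_id;
select con_id, name, open_mode from v$pdbs;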

There are two ways to provision new PDBs in the multitenant architecture: cloning, and plugging in a previously unplugged PDB. You clone a PDB when you want an extra copy of it, and you plug in a PDB when you want to move it from one CDB to another. When you create a new PDB without explicitly stating a source PDB to clone from, Oracle creates the new PDB by cloning the PDB$SEED container. In other words, creating a PDB is internally a clone operation.


These operations are done using SQL commands, which can be executed from any client machine; hence, access privileges on the server machine are not needed. In addition, the SQL commands can be run in parallel, thereby reducing execution times. They can also be programmed in PL/SQL to automate provisioning and cloning.
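
A minimal sketch of these SQL commands (the PDB names, admin credentials, and file paths are illustrative, not from the original post):

-- create a new PDB; with no source named, it is cloned from PDB$SEED
create pluggable database pdb1 admin user pdbadmin identified by welcome1
  file_name_convert = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                       '/u01/app/oracle/oradata/cdb1/pdb1/');

-- clone an existing PDB
create pluggable database pdb2 from pdb1
  file_name_convert = ('/u01/app/oracle/oradata/cdb1/pdb1/',
                       '/u01/app/oracle/oradata/cdb1/pdb2/');

-- unplug from this CDB ...
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/u01/app/oracle/pdb1.xml';

-- ... and plug into another (run on the target CDB)
create pluggable database pdb1 using '/u01/app/oracle/pdb1.xml' nocopy;
alter pluggable database pdb1 open;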


Monday, August 13, 2012

Installing Oracle Database on Solaris 11: Issue resolution

While installing the Oracle 11gR2 (11.2.0.1) grid infrastructure software on Solaris 11, I got the following errors:

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-11-10_06-23-31PM. Please wait ...oracle@len01:~/download/grid$ You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2011-11-10_06-23-31PM.log
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparcv9/server/libjsig.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparc/client/libjsig.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparcv9/libjdgaSUNWafb.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparcv9/server/libjsig_g.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparc/libjvm.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparc/libjdgaSUNWafb.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/javaws/javaws: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparc/server/libjsig_g.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparc/client/libjsig_g.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/jdk/jre/lib/sparc/server/libjsig.so: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/lib/libocci.so: No such file or directory
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/bin/lbuilder: Invalid argument
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/lib/libagtsh.so: No such file or directory
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/lib/libodm11.so: No such file or directory
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/lib/libodm11.so: No such file or directory
Error in checking existance of the link, errno:11:/u01/app/oracle/product/11.2.0/grid/JRE: No such file or directory


Where the error message says "Invalid argument", the installer copies the file instead of creating a link. Where it says "No such file or directory", the corresponding file does not get copied over at all.

And then the ASM install fails with the error: ORACLE not available.

I got similar messages during the Oracle database install as well.

It turns out 11.2.0.1 is not supported on Solaris 11; 11.2.0.3 and later are. Installing 11.2.0.3 resolved the issue.

Installing Oracle database in Solaris Container: Issue resolution

I was trying to install Oracle Clusterware 11.2.0.2 in a Solaris Container and got the following error:

Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Solaris Container: Solaris 10 update 10'
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Solaris Container: Solaris 10 update 10'
CRS-2672: Attempting to start 'ora.mdnsd' on 'dszone1'
CRS-2676: Start of 'ora.mdnsd' on 'dszone1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'dszone1'
CRS-2676: Start of 'ora.gpnpd' on 'dszone1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dszone1'
CRS-2672: Attempting to start 'ora.gipcd' on 'dszone1'
CRS-2676: Start of 'ora.cssdmonitor' on 'dszone1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'dszone1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dszone1'
CRS-2672: Attempting to start 'ora.diskmon' on 'dszone1'
CRS-2676: Start of 'ora.diskmon' on 'dszone1' succeeded
CRS-2674: Start of 'ora.cssd' on 'dszone1' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'dszone1'
CRS-2681: Clean of 'ora.cssd' on 'dszone1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'dszone1'
CRS-2677: Stop of 'ora.diskmon' on 'dszone1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'dszone1'
CRS-2677: Stop of 'ora.gipcd' on 'dszone1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'dszone1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'dszone1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dszone1'
CRS-2677: Stop of 'ora.gpnpd' on 'dszone1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'dszone1'
CRS-2677: Stop of 'ora.mdnsd' on 'dszone1' succeeded
CRS-5804: Communication error with agent process
CRS-4000: Command Start failed, or completed with errors.
The exlusive mode cluster start failed, see Clusterware alert log for more information
Initial cluster configuration failed.  See /u01/app/11.2.0/grid/cfgtoollogs/crsconfig


The alert log under /u01/app/11.2.0/grid/log/<nodename> has the following error:



[cssd(8886)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00011:) in /u01/app/11.2.0/grid/
log/dszone1/cssd/ocssd.log

And the ocssd.log has following error:

2012-08-07 06:06:57.599: [    CSSD][1]clssscSetPrivEnv: Setting priority to 4
2012-08-07 06:06:57.607: [    CSSD][1]clssscSetPrivEnv: unable to set priority to 4
2012-08-07 06:06:57.607: [    CSSD][1]SLOS: cat=-2, opn=scls_set_priority_realtime, dep=1, loc=setsched
unable to escalate to real time
2012-08-07 06:06:57.607: [    CSSD][1](:CSSSC00011:)clssscExit: A fatal error occurred during initialization

Restarting the container with the following privileges fixed the issue:

set limitpriv="default,proc_priocntl,proc_clock_highres,sys_time"
Refer to metalink note ID 1340694.1 for detailed information.

Oracle Database Listener Service Registration

  • I do not have a listener.ora file. How is service registration working?
Service registration can happen dynamically or statically. By default, the listener uses dynamic service registration before attempting static registration. For dynamic registration, a listener.ora file is not needed.
Dynamic service registration is configured in the database initialization file; it requires no configuration in listener.ora. For it to work, set SERVICE_NAMES and INSTANCE_NAME in init.ora, or let them default (INSTANCE_NAME defaults to ORACLE_SID, and SERVICE_NAMES defaults to the global database name, DB_NAME plus DB_DOMAIN). If your local listener listens on a non-default port, you need to set the LOCAL_LISTENER parameter; a remote listener, typically used in RAC, is configured with the REMOTE_LISTENER parameter. A sketch of these settings follows below.
Static configuration reads information from the listener.ora file to configure the services. An instance with status UNKNOWN is statically registered.
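
A minimal sketch of the dynamic-registration settings (the listener address is illustrative):

-- needed only when the listener is not on the default port 1521
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1522))' scope=both;

-- ask PMON to register with the listener immediately instead of waiting for the next cycle
alter system register;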
  • How do I know which naming method is being used to connect?
The tnsping utility can be used to determine whether the listener for a service on an Oracle Net network can be reached.
If you can ping the server host from the client, use tnsping to test whether the client can reach the Oracle Net service. tnsping also reports the naming method that was used to connect.
Additional info:
http://edstevensdba.wordpress.com/2011/03/19/ora-12514/ ( look for all post under TNS category)

Tuesday, July 24, 2012

Using DBCA to generate database creation scripts

Invoke dbca. Select "Create Database" and then Advanced Options. Select your options to create the DB.

Select the <Custom Database> template on the Database Template page. (If you do not use the custom database template, the other templates produce scripts that perform an RMAN restore from a backup shipped with the product.) Selecting the custom database template results in scripts built around the CREATE DATABASE SQL statement.

On the Creation Options page, deselect Create Database and select Generate Database Creation Scripts before you finish. Note the directory where the scripts will be generated.

Thursday, April 5, 2012

emctl : Can't locate CompEMdbconsole.pm in @INC error.

If you start emctl from the grid home, you will see an error like:

Can't locate CompEMdbconsole.pm in @INC  error. 

emctl start dbconsole
Can't locate CompEMdbconsole.pm in @INC (@INC contains: %s_javapOracleHome%/sysman/admin/scripts %s_javapOracleHome%/bin /u01/app/11.2.0.3/grid/perl/lib/5.10.0/sun4-solaris


 

Start emctl from the Oracle home to resolve this error:

cd $ORACLE_HOME/bin
./emctl start dbconsole



Tuesday, October 18, 2011

Backup and recovery concepts

RMAN environment


The RMAN environment consists of the following components:

Required components:
  • RMAN client: the client application that manages the backup and recovery of target databases
  • target database: a database that RMAN backs up and restores

Optional components:

  • recovery catalog database: a database that contains the recovery catalog schema
  • recovery catalog schema: the schema that stores RMAN repository metadata
  • physical standby database
  • duplicate database: a copy of the primary database that you can use for testing purposes
  • fast recovery area: a disk location for storing recovery-related files
  • media manager: third-party software that enables RMAN to back up to tape and other storage
  • media management catalog: the media manager's metadata store


RMAN channels


The RMAN client itself does not perform backup, restore, or recovery operations. When you connect the RMAN client to a target database, RMAN allocates server sessions on the target instance and directs them to perform the operations.

An RMAN channel corresponds to one database server session. It represents one stream of data to a device.

A channel establishes a connection from the RMAN client to a target or auxiliary database instance by starting a server session on the instance.

The RMAN-supported device types are disk and SBT (system backup to tape). An SBT device is controlled by a third-party media manager.


RMAN channels can be allocated either automatically or manually.

You enable automatic channel allocation with the CONFIGURE CHANNEL command. When you run a command that can use automatic channels, RMAN allocates them with the options you specified in CONFIGURE CHANNEL and determines their names itself. RMAN comes preconfigured with a DISK channel that you can use for backups to disk.

You can also allocate channels manually; you give user-defined names to manually allocated channels.

The number of channels available for a device determines whether RMAN reads from or writes to the device in parallel. With parallelism, the backup is performed by more than one channel. Each channel may back up more than one file, but unless a multisection backup is performed, no file is backed up by more than one channel.
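
A sketch of both styles (the format strings and the degree of parallelism are illustrative):

# automatic: persist the channel settings once; later BACKUP commands pick them up
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/disk1/backup_%U';
BACKUP DATABASE;

# manual: user-named channels allocated inside a RUN block
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/disk1/backup_%U';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK FORMAT '/disk2/backup_%U';
  BACKUP DATABASE;
}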

RMAN Backup Options


You can back up your database using the BACKUP command. RMAN determines the backup options based on the RMAN environment and RMAN defaults.

This table summarizes the RMAN options and their default values:

Backup Option   Default Value                                Possible Values                  Example
-------------   -------------------------------------------  -------------------------------  ---------------------------------------------------------------------------------
Device type     Value set in CONFIGURE DEFAULT DEVICE TYPE   disk or SBT                      BACKUP DATABASE DEVICE TYPE DISK (or) DEVICE TYPE sbt
Backup type     By default, RMAN creates backup sets         backup set or image copy         BACKUP AS COPY (or) BACKUP AS BACKUPSET
File format     -                                            file location or ASM disk group  FORMAT '/disk1/backup_%U' (file location) (or) FORMAT '+dgroup' (ASM disk group)

Overview of RMAN backups

When you issue the BACKUP command, the following files are backed up:

  • Datafiles and control files
  • Server parameter files
  • Archived redo logs
  • RMAN backups
Network configuration files, password files, and the contents of the Oracle home are not backed up by RMAN.

The backups created can be either consistent or inconsistent.

The entire database can be backed up with the BACKUP DATABASE command. The database must be mounted or open for the backup to succeed.

Backing up tablespaces and datafiles:


If you want to back up only certain tablespaces or datafiles, issue the BACKUP TABLESPACE or BACKUP DATAFILE command. When you specify tablespaces, RMAN internally translates the tablespace names into a list of datafiles.

RMAN automatically backs up the control file and the server parameter file when datafile 1 is included in the backup. If control file autobackup is enabled, RMAN writes the current control file and server parameter file to a separate autobackup piece.
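
For example (the tablespace names and datafile numbers are illustrative):

BACKUP TABLESPACE users, tools;
BACKUP DATAFILE 1, 3;    # datafile 1 pulls in the control file and spfile as described above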

Backing up control files:

If CONFIGURE CONTROLFILE AUTOBACKUP is set to ON, RMAN automatically backs up the control file and server parameter file after every backup and after database structural changes. The control file autobackup contains metadata about the previous backup, which is crucial for disaster recovery.

You can take a manual backup of the control file by:

Running the BACKUP CURRENT CONTROLFILE command
Including a backup of the control file within any backup by using the INCLUDE CURRENT CONTROLFILE option of the BACKUP command
Backing up datafile 1 (as it automatically includes a backup of the control file and spfile)


A manual backup of the control file is not the same as an autobackup. An autobackup contains metadata about the backup that just completed, and RMAN can restore autobackups automatically without the use of a recovery catalog.
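
A short sketch of the commands discussed above:

CONFIGURE CONTROLFILE AUTOBACKUP ON;                   # enable autobackups
BACKUP CURRENT CONTROLFILE;                            # manual control file backup
BACKUP TABLESPACE users INCLUDE CURRENT CONTROLFILE;   # control file inside another backup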

Overview of Incremental backups


An incremental backup copies only datafile blocks that have changed since a specified previous backup.

An incremental backup is of two types:
  • Cumulative incremental backup
    • Backs up all blocks changed after the most recent level 0 incremental backup
  • Differential incremental backup
    • Backs up all blocks changed after the most recent incremental backup at level 1 or level 0
    • By default, incremental backups are differential.

    The contents of a full backup and a level 0 incremental backup are the same. However, a full backup cannot be used as part of an incremental strategy; a level 0 incremental backup is the basis of the strategy.

    A good rule of thumb is to take a new level 0 backup whenever 20% or more of the data has changed.

    In a data guard environment, you can offload incremental backups to a physical standby database. Incremental backups of a standby and primary database are interchangeable.

    To take the level 0 incremental backup that forms the base of the strategy:

    backup
    incremental level 0
    database;

    To make a differential incremental backup (the default at level 1):

    backup
    incremental level 1
    database;

    To make a cumulative incremental backup:

    backup
    incremental level 1 cumulative
    database;

    To make incremental backups in the fast recovery area:
    backup
    incremental level 1
    from scn <look this up>



    Archival backups:


    You can use BACKUP ... KEEP to create a backup that is both all-inclusive and exempt from the backup retention policy. The KEEP option specifies whether the backup is exempt from the retention policy forever or for a specified period of time (for example, KEEP UNTIL TIME '01-JAN-08'). A sketch follows the list below.

    The characteristics of the BACKUP ... KEEP command are as under:
    • It automatically backs up the datafiles, control file (even if control file autobackup is disabled), and server parameter file.
    • It automatically generates an archived redo log backup to ensure that the database backup can be recovered to a consistent state.
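
    For example, an archival backup kept for a year might look like this (the tags and the retention period are illustrative):

    BACKUP DATABASE KEEP UNTIL TIME 'SYSDATE + 365' TAG 'yearly_archival';
    BACKUP DATABASE KEEP FOREVER TAG 'archival';    # KEEP FOREVER requires a recovery catalog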


    An Overview of Data Recovery Advisor

    Data Recovery Advisor (DRA) is a tool that helps reduce MTTR (mean time to recover). The DRA GUI is available in Database Control and Grid Control by selecting Perform Recovery on the Availability tab. DRA has the following commands:
    • LIST FAILURE
    • ADVISE FAILURE
    • REPAIR FAILURE
    • CHANGE FAILURE

    The ADVISE FAILURE command is used to view repair options, which typically include both automated and manual options. The REPAIR FAILURE command repairs the failure automatically and, once the failure is repaired, closes it. If you choose to repair a failure manually, use the CHANGE FAILURE command to change the status of the open failure to closed. CHANGE FAILURE alters the status and priority of a failure: you can change the status to CLOSED, or the priority to HIGH or LOW. You cannot set the status to OPEN, nor can you set the priority to CRITICAL, with the CHANGE FAILURE command.


    To list all failures:
    • rman > list failure
    To list details of a particular failure:
    • rman > list failure <failure id> detail
    To list a subset of failures:
    • rman > list failure low (low is the failure priority; high and critical are the other two priority values)
    • rman > list failure closed
    • rman > list failure exclude failure <failure id>
    To determine repair options for a failure:
    • rman > advise failure <failure id>
    To preview what will be repaired if the repair failure command is invoked:
    • rman > repair failure preview

    Monday, September 12, 2011

    Run Oracle Database in distinct resource pools in global container

    Suppose you have created three resource pools namely:
    • rac1Pool
    • rac2Pool
    • rac3Pool
    You want to run four Oracle database instances: one in each of these three pools and one in the default (global) pset.
    You can do so by creating three additional Oracle users, namely oracle_2, oracle_3, and oracle_4. All these users should have the same group as the oracle user and distinct home directories. Also, ensure that each user's home directory has group write permission. The commands to create the additional users are as under:
    • useradd -g oinstall -G dba -u 202 oracle_2
    •  useradd -g oinstall -G dba -u 203 oracle_3
    • useradd -g oinstall -G dba -u 204 oracle_4
    Next, create three projects with the following pool and user associations:
    rac1Project ---> rac1Pool---->oracle
    rac2Project---->rac2Pool----->oracle_2
    rac3Project---->rac3Pool----->oracle_3
    You can create the above-mentioned associations by issuing the following commands:
    1) Create a project and associate it with the right user and pool:
    • projadd -U oracle -p 400  -K project.pool=rac1Pool rac1Project 
    • projadd -U oracle_2 -p 402  -K project.pool=rac2Pool rac2Project 
    • projadd -U oracle_3 -p 403 -K project.pool=rac3Pool rac3Project
    Additional info: http://docs.oracle.com/cd/E26502_01/html/E29024/rmpool.task-41.html#rmpool.task-46

    2) Associate each user with the right project:


    Edit /etc/user_attr and add the following lines:
    • oracle::::project=rac1Project (note the four colons after the user name)
    • oracle_2::::project=rac2Project
    • oracle_3::::project=rac3Project
    3) Re-login as each user and verify that it is associated with the right project:
    •  id -p

    Thursday, July 21, 2011

    Statspack Events

    enq: US - contention

    When you see enq: US - contention in the Top 5 wait events of an AWR or Statspack report, or in ADDM findings, you can take the following actions:

    1) Check the undo advisor if on 10g and up.
    2) Adjust your undo retention parameter.
    3) Give more space to the undo tablespace.
    4) Move the undo tablespace to another, less busy disk.

    Mostly this wait event concerns automatic extent allocation in undo tablespaces.

    • db file sequential read and read by other session
    Check the buffer pool statistics and buffer waits (especially data block waits).
    A "db file sequential read" occurs when an Oracle process looks for a block in the buffer cache and can't find it. The block is then read from disk, creating I/O.

    If db file sequential read wait time is increasing:
    Check the average read times per file under File IO Stats (see the query below)
    Check the buffer cache hit ratio under "Buffer Pool Advisory"
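
    To look at per-file read times directly from SQL, you can join V$FILESTAT to V$DATAFILE; a small sketch (READTIM is in centiseconds, hence the *10 to get milliseconds):

    select d.name, f.phyrds, f.readtim,
           round(f.readtim * 10 / nullif(f.phyrds, 0), 2) avg_read_ms
      from v$filestat f, v$datafile d
     where f.file# = d.file#
     order by f.readtim desc;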

    Oracle Installation notes

    • Create user and group:
      •  groupadd -g 201 dba
      • groupadd -g 200 oinstall
      •  useradd -g oinstall -G dba -u 200 oracle
      • mkdir /export/home/oracle
      • chown -R oracle:oinstall /export/home/oracle 
      • passwd oracle (setup password for oracle user) 
    • Create ZFS pool to install Oracle (optional)
      •  Find out which pools exist and which disks are used by those pools
        • zpool status
        •  Create a pool named oraclePool
        • zpool create  oraclePool c0t5000C5001D250C7Fd0   c0t5000C5001D207157d0 
          zfs create oraclePool/11gR2
    • Create directories and provide them right permissions:
      • mkdir -p /u01/app/oracle
      • mkdir -p /u01/app/11.2.0/grid
      • mkdir -p /u01/app/oracle/11.2.0/db
      • chown -R oracle:oinstall /u01
      • chmod -R 775 /u01
      • Oracle BASE : /u01/app/oracle
      • GRID Base: /u01/app/11.2.0/grid
      • DB BASE: /u01/app/oracle/11.2.0/db
    • Install the required packages
      •  pkg install  SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWpool SUNWsprot SUNWtoo SUNWuiu8 SUNWfont-xorg-core SUNWfont-xorg-iso8859-1 SUNWmfrun SUNWxorg-client-programs SUNWxorg-clientlibs SUNWxwfsw pkg://solaris/SUNWxwplt

    • Set project attributes
      • projmod -s -K 'process.max-file-descriptor=(basic,10000,deny)' oracleProject

    Things to check
    • 11.2.0.3 and later are supported on Solaris 11
    •  Set up AWT_TOOLKIT variable on Solaris 11
      • if installing on Solaris 11, set the following variable so runInstaller can work
        • export AWT_TOOLKIT=XToolkit
    • To set up rsh/rlogin/ssh between nodes on Solaris 11:
      •  pkg  install service/network/legacy-remote-utilities
      •  inetadm -e  svc:/network/login:rlogin
      • svcadm enable /network/shell:default
    • Clone ASM and DB installation
      • Check out Appendix B of this document:

    http://download.oracle.com/docs/cd/E11882_01/install.112/e24346/toc.htm

      • If you want to do a silent install with a response file, check out Appendix A.
    •  Verify if you have all the required packages:

      • pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWpool SUNWpoolr SUNWsprot SUNWtoo SUNWuiu8 SUNWfont-xorg-core SUNWfont-xorg-iso8859-1 SUNWmfrun SUNWxorg-client-programs SUNWxorg-clientlibs SUNWxwfsw SUNWxwplt 
      • Else install the packages
      • pkg  install <list of space separated pkg names> 
      • If required you can delete existing publisher and add new publisher
        •  pkg set-publisher -G '*' -g  http://ipkg.us.oracle.com/solaris11/dev/ solaris
          # pkg publisher
          PUBLISHER                             TYPE     STATUS   URI
          solaris                               origin   online   http://ipkg.us.oracle.com/solaris11/dev/
           

    Friday, July 1, 2011

    List hidden parameters in Oracle Database

    You can get the hidden parameter values by querying x$ksppcv and x$ksppi as follows:
    • select a.ksppinm name, b.ksppstvl value from sys.x$ksppi a, sys.x$ksppcv b where a.indx = b.indx and a.ksppinm='_smm_max_size';
    • select a.ksppinm name, b.ksppstvl value from sys.x$ksppi a, sys.x$ksppcv b where a.indx = b.indx and a.ksppinm='_pga_max_size';

    Tuesday, June 28, 2011

    When is DISM enabled in Solaris

    • You can find out if ISM or DISM is enabled by executing the following:
      • pmap -xs ora_<SID>_lgwr | grep -i ism
    • The algorithm for determining whether DISM will be enabled is as under:
      • Is memory_target or memory_max_target set?
        • Yes: enable DISM
        • No: Is sga_max_size larger than the combined size of all SGA components?
          • Yes: enable DISM
          • No: enable ISM
    • If you want to disable DISM, ensure that sga_max_size is unset or is no larger than the combined size of all SGA components
      • You can find the combined size of all SGA components as under:
        • select sum(value)/(1024*1024*1024) as totalGB from v$sga;
    • Info on sga_target and sga_max_size:
      • sga_target enables automatic SGA management. You do not need to set individual SGA buffers if you set sga_target. It can be dynamically increased up to sga_max_size (see the example below).
      • sga_max_size is the max size of the SGA. It is a static parameter. By default, Oracle sets sga_max_size to sga_target if sga_max_size is not set. Additional info: http://forums.oracle.com/forums/thread.jspa?threadID=535400
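
      For example (the sizes are illustrative):
      alter system set sga_max_size=8G scope=spfile;   -- static: takes effect at the next restart
      alter system set sga_target=6G scope=both;       -- dynamic: can grow up to sga_max_size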

    Wednesday, June 8, 2011

    Create Statspack

    "snapshot" is  snapshot of database statistics and performance information as of a specific point in time.

    There is no such thing as "comparing two snapshots". To get information about database activity and/or performance, you have to generate the StatsPack report for the duration between two snapshots.
    Thus, for example, a StatsPack report for the duration between Snapshot 1 and Snapshot 2 shows activity and performance information for that time duration.

    Script to take snapshot:

    echo "connect to sql plus and taking a snap"

    sqlplus "/nolog" << !
    connect perfstat/perfstat
    variable snap_before number ;
    begin :snap_before :=statspack.snap; end;
    /
    print snap_before
    !
    sleep 60
    echo "starting second snapshot"

    sqlplus "/nolog" << !
    connect perfstat/perfstat
    variable snap_after number ;
    variable snap_before number ;
    begin
    :snap_after := statspack.snap ;
    end ;
    /
    print snap_after ;
    begin
    select :snap_after -1 into :snap_before from dual ;
    end ;
    /

    REM define cannot expand bind variables, so pass the snap ids via COLUMN ... NEW_VALUE
    column bsnap new_value begin_snap noprint
    column esnap new_value end_snap noprint
    select :snap_before bsnap, :snap_after esnap from dual ;
    define report_name=snapshot
    @$ORACLE_HOME/rdbms/admin/spreport
    !

    Cluster verification Utility

    • To view what prerequisites are failing
      • ./runcluvfy.sh comp sys -n  t5120-241-06,t5220-241-03 -p crs
    • To generated a fixup script in /tmp/ directory to fix the prerequisites that are failing
      • ./runcluvfy.sh stage -pre crsinst -fixup -fixupdir /tmp/ritu.sh  -n t5120-241-06,t5220-241-03
    • Additional commands:
      • ./runcluvfy.sh -help
      • ./runcluvfy.sh stage -list or stage -help
      • ./runcluvfy.sh comp -list or comp -help
      • System requirement verification
        • ./runcluvfy.sh comp sys -n {node list} -p {crs|database} -verbose
      • Storage verification
        • ./runcluvfy.sh comp ssa -n {node list} -s {storageid_list} -verbose
    • Detailed Documentation:

    Monday, May 9, 2011

    WARNING: oradism not setup correctly in alert log

    If you see the following warning in the alert file:

    WARNING: -------------------------------
    WARNING: oradism not set up correctly.
    Dynamic ISM can not be locked. Please
    setup oradism, or unset sga_max_size.
    [diagnostic 0, 16, 64882]
    ----------------------------------------

    the solution is as under:

    1- cd $ORACLE_HOME/bin

    2- chown root:dba oradism
    3- chmod 4550 oradism
    4- chmod g+s oradism
    5- Bounce the database

    Refer to Metalink note 374367.1 for additional info.

    Tuesday, April 5, 2011

    Sort Area in PGA

    If parallel servers are disabled, then the max size of the sort area is determined by the _smm_max_size parameter; its value is in KB. The _pga_max_size parameter is the max size of a single session's PGA; it is expressed in bytes.

    You can view these parameters as under:
    select a.ksppinm name, b.ksppstvl value from sys.x$ksppi a,sys.x$ksppcv b where
    a.indx = b.indx and a.ksppinm='_smm_max_size';
    select a.ksppinm name, b.ksppstvl value from sys.x$ksppi a,sys.x$ksppcv b where
    a.indx = b.indx and a.ksppinm='_pga_max_size';
    show parameters 'pga%';
    For 10gR2:

    When parallel_max_servers is set to 0:

    If pga_aggregate_target (PAT) < 500 MB, _smm_max_size is set to 20% of PAT
    If PAT is between 500 MB and 1 GB, _smm_max_size is set to 100 MB
    If PAT > 1 GB, _smm_max_size is set to 10% of PAT

    When parallel operations are enabled:

    The upper bound is limited to 50% of PAT/DOP:

    When DOP <= 5, _smm_max_size is used
    When DOP > 5, _smm_px_max_size/DOP limits the max memory usage

    _pga_max_size defaults to 2 * _smm_max_size

    Thursday, March 24, 2011

    Out Of Memory Error

    Execute the prctl $$ command on the current shell process to determine its resource limits.


    A threshold value on a resource control constitutes a point at which local actions can be triggered or global actions, such as logging, can occur.
    Each threshold value on a resource control must be associated with a privilege level. The privilege level must be one of the following three types:
    • basic: can be modified by the owner of the calling process.
    • privileged: can be modified by the current process (requiring the sys_resource privilege) or by prctl(1) (requiring the proc_owner privilege).
    • system: fixed for the duration of the operating system instance.
    A resource control is guaranteed to have one system value, which is defined by the system, or resource provider. The system value represents how much of the resource the current implementation of the operating system is capable of providing.
    Any number of privileged values can be defined, and only one basic value is allowed. Operations that are performed without specifying a privilege value are assigned a basic privilege by default.

    If you see "Out of memory" errors, you may want to enable logging of resource control threshold violations.
    By default, Solaris 10 does not log resource limit threshold notifications. You can see whether they are enabled by using the "rctladm" command.

    To turn on the syslog for "project.max-shm-memory", issue the following:
    rctladm -e syslog project.max-shm-memory
    
    
    
    To set process.max-address-space to 20 GB, add the following line to /etc/project:

    user.ben:12345::::process.max-address-space=(privileged,20000000000,deny)
    
    An example of /etc/project (a single logical line, wrapped here at the semicolons for readability):
    user.oracle:100::::process.max-address-space=(privileged,40240000000,deny);
    process.max-data-size=(priv,10737418240,deny);process.max-sem-nsems=(priv,8192,deny);
    process.max-stack-size=(priv,10737418240,deny);project.max-file-descriptor=(priv,65536,deny);
    project.max-sem-ids=(priv,100,deny);project.max-shm-ids=(priv,8192,deny);
    project.max-shm-memory=(privileged,2199023255552,deny)
    
    
    
    
    This setting, however, takes effect only for new processes, not for existing processes.

    i.e., prctl $$ (will continue to show the old value for process.max-address-space)
    $ su oracle -c 'sh -c "prctl $$"' (will show the new value, as you are starting a new shell)
    or: su oracle -c 'sh -c "xterm"'

    Wednesday, February 9, 2011

    Connecting to database from remote client

    • Start listener on server machine
    • Install the administrative client software on the client machine and add the server entry to tnsnames.ora on the client machine
    •  Verify on client machine:
      • tnsping <servicename>
      • sqlplus tpcc/tpcc@bench (where tpcc is the username and bench is the service name)
      • setenv TWO_TASK bench
      • sqlplus tpcc/tpcc (you should get the prompt)
    • What is the use of the TWO_TASK environment variable?
    In a Unix environment, you normally set two environment variables:
    • ORACLE_HOME
    • ORACLE_SID
    which connect you to a local database identified by that home and SID. If you want to connect to a remote database, or to a database on the same machine via SQL*Net, you can also set:
    • TWO_TASK = <tns connect string>
    The setting of TWO_TASK overrides ORACLE_SID when set. With TWO_TASK you are not connecting to a local database but rather using SQL*Net to connect to a remote one.
    So, instead of:
    $ sqlplus scott/tiger@some_db
    I can:
    $ setenv TWO_TASK some_db
    $ sqlplus scott/tiger
    and the @some_db is implied.

    Tuesday, February 8, 2011

    Whitepaper links

    • Oracle Database on Sun SPARC Enterprise T-series Systems: Suitability Analysis
    • Optimizing Single Instance Oracle Databases on Oracle's T-Series Servers:
    • Deploying Oracle Database on the Oracle Solaris Platform -Best Practices:
    • Best Practices for Running Oracle Databases in Oracle Solaris Containers
    • Session in MySQL Conf 2008 : Best Practices for Deploying MySQL on the Solaris Platform
    • Session in Oracle OpenWorld 2010:
    http://www.oracle.com/technetwork/server-storage/solarisstudio/oow-focus-on-solaris-173299.pdf

    • Oracle Optimized Solution
    How to Consolidate, Optimize, and Deploy Oracle Database Environments with Flexible Virtualized Services
    Oracle Optimized Solution for Oracle Database
    http://www.oracle.com/technetwork/server-storage/hardware-solutions/o12-032-howto-db-throughput-avail-1592601.pdf
     
     
    • Oracle Optimized Solution:
     
    How to Accelerate Test and Development Through Rapid Cloning of Production Databases and Operating Environments
     
    http://www.oracle.com/technetwork/server-storage/hardware-solutions/o13-022-rapid-cloning-db-1919816.pdf
    
    
    •  Session in Oracle Open World 2012

    How to Configure HCC for Optimizing Query Database Performance and lowering TCO
    • Deploying MySQL Database in Solaris Cluster Environments

    Monday, February 7, 2011

    Shared Pool Sizing

    This blog entry talks about how the library cache and dictionary cache statistics can be monitored to ensure they are optimally sized.


    • Shared pool: Library cache statistics
    When sizing the shared pool, the goal is to ensure that SQL statements that will be executed multiple times are cached in the library cache, without allocating too much memory.

    The statistic that shows the amount of reloading (that is, reparsing) of previously cached SQL statements that were aged out of the cache is the RELOADS column in the V$LIBRARYCACHE view. In an application that reuses SQL effectively, on a system with an optimally sized shared pool, the RELOADS statistic will have a value near zero.

    The INVALIDATIONS column in the V$LIBRARYCACHE view shows the number of times library cache data was invalidated and had to be reparsed; a nonzero value means SQL statements that could have been shared were invalidated by some operation (for example, a DDL). This statistic should be near zero, especially on OLTP systems during peak loads.

    The following query gives info about reloads and invalidations:

    sql> col  namespace format a30
    SQL> SELECT NAMESPACE, PINS, PINHITS, RELOADS, INVALIDATIONS
      FROM V$LIBRARYCACHE ORDER BY NAMESPACE;

    NAMESPACE                            PINS    PINHITS    RELOADS INVALIDATIONS
    ------------------------------ ---------- ---------- ---------- -------------
    APP CONTEXT                           525        524          0             0
    BODY                                 7114       6993          0             0
    CLUSTER                               529        496          0             0
    DBINSTANCE                              0          0          0             0
    DBLINK                                  0          0          0             0
    EDITION                               482        479          0             0
    INDEX                                  44          0          0             0
    OBJECT ID                               0          0          0             0
    QUEUE                                7316       7300          1             0
    RULESET                                 7          6          0             0
    SCHEMA                                  0          0          0             0
    NAMESPACE                            PINS    PINHITS    RELOADS INVALIDATIONS
    ------------------------------ ---------- ---------- ---------- -------------
    SQL AREA                         55853208   58144246         35           112
    SUBSCRIPTION                           12          7          0             0
    TABLE/PROCEDURE                     32025      28475         91             0
    TRIGGER                               156        145          0             0
    USER AGENT                              1          0          0             0
    16 rows selected.
     
    Another key statistic is the amount of free memory in the shared pool at peak times. The amount of free memory can be queried from V$SGASTAT, looking at the free memory for the shared pool. Optimally, free memory should be as low as possible, without causing any reloads on the system.

    You can find out the free memory for the shared pool with the following query:

    SELECT * FROM V$SGASTAT WHERE NAME = 'free memory' AND POOL = 'shared pool';


    Lastly, a broad indicator of library cache health is the library cache hit ratio. This value should be considered along with the other statistics discussed in this section and other data, such as the rate of hard parsing and whether there is any shared pool or library cache latch contention.

    To calculate the library cache hit ratio, use the following formula:
    Library Cache Hit Ratio = sum(pinhits) / sum(pins)

    The following query displays the library cache hit ratio:

    select sum(pinhits)/ sum(pins) from v$librarycache;

    • Shared pool: Dictionary cache statistics
    Typically, if the shared pool is adequately sized for the library cache, it will also be adequate for the dictionary cache data.

    Misses on the data dictionary cache are to be expected in some cases. On instance startup, the data dictionary cache contains no data. Therefore, any SQL statement issued is likely to result in cache misses. As more data is read into the cache, the likelihood of cache misses decreases. Eventually, the database reaches a steady state, in which the most frequently used dictionary data is in the cache. At this point, very few cache misses occur.

    Each row in the V$ROWCACHE view contains statistics for a single type of data dictionary item. These statistics reflect all data dictionary activity since the most recent instance startup. The columns in the V$ROWCACHE view that reflect the use and effectiveness of the data dictionary cache are listed below.

    PARAMETER
     Identifies a particular data dictionary item. For each row, the value in this column is the item prefixed by dc_. For example, in the row that contains statistics for file descriptions, this column has the value dc_files.

    GETS
     Shows the total number of requests for information about the corresponding item. For example, in the row that contains statistics for file descriptions, this column has the total number of requests for file description data.

    GETMISSES
     Shows the number of data requests which were not satisfied by the cache, requiring an I/O.

    MODIFICATIONS
     Shows the number of times data in the dictionary cache was updated.


    Use the following query to monitor the statistics in the V$ROWCACHE view over a period while your application is running. The derived column PCT_SUCC_GETS can be considered the item-specific hit ratio:

    column parameter format a21
    column pct_succ_gets format 999.9
    column updates format 999,999,999
    SELECT parameter  , sum(gets)  , sum(getmisses)  , 100*sum(gets - getmisses) / sum(gets)  pct_succ_gets
         , sum(modifications)                     updates
      FROM V$ROWCACHE
     WHERE gets 0
     GROUP BY parameter;


    It is also possible to calculate an overall dictionary cache hit ratio using the following formula; however, summing up the data over all the caches will lose the finer granularity of data:
    SELECT (SUM(GETS - GETMISSES - FIXED)) / SUM(GETS) "ROW CACHE" FROM V$ROWCACHE;