Thursday, February 2, 2017

ZFS basics

  • Create a zpool named zonepool:
zpool create zonepool c0t5000C5001D250C7Fd0 c0t5000C5001D207157d0
zpool create -f oraclePool c0t5000C50007F554E3d0    (use -f to force creation if the disks appear to be in use)
  • Create a file system named zones in that pool:
zfs create zonepool/zones


  • Set a quota on the file system:

zfs set quota=10G zonepool/zones

  • Create a ZFS volume (the volume name must not collide with an existing dataset such as zonepool/zones):
zfs create -V 20G zonepool/vol1


  • Destroy a ZFS file system:

zfs destroy zonepool/zones
  • List the disks of all the zpools
zpool status
  • List the file hierarchy
zfs list
root@T5-2-SDB-1:/# zfs list -r odsRoot
NAME                USED  AVAIL  REFER  MOUNTPOINT
odsRoot             103G  30.7G    31K  /odsRoot
odsRoot/zone1Root  51.6G  82.3G    16K  -
odsRoot/zone2Root  51.6G  82.3G    16K  -



  • Destroy the zpool

zpool destroy zonepool



  • Create a zvol from a zpool:
A ZFS volume is a dataset that represents a block device. ZFS volumes are exposed as devices in the /dev/zvol/{dsk,rdsk}/pool directory.
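Since the device node is derived mechanically from the dataset name, the paths can be sketched with a tiny helper (the dataset name oraclePool/zone1 is just the example used in the transcript below; this only builds strings and does not touch ZFS):

```shell
#!/bin/sh
# Print the block and raw device nodes a zvol is exposed under.
# Purely illustrative: it only composes the paths.
zvol_paths() {
  dataset="$1"
  echo "block: /dev/zvol/dsk/$dataset"
  echo "raw:   /dev/zvol/rdsk/$dataset"
}

zvol_paths "oraclePool/zone1"
```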



root@T5-2-SDB-1:/# zpool destroy zoneRoot
root@T5-2-SDB-1:/# zpool create -f odsRoot c0t5000CCA00089FA88d0
root@T5-2-SDB-1:/# zpool list
NAME        SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
odsRoot     136G   126K  136G   0%  1.00x  ONLINE  -
oraclePool  136G    85K  136G   0%  1.00x  ONLINE  -
rpool       556G   340G  216G  61%  1.00x  ONLINE  -
root@T5-2-SDB-1:/# zfs create -v  50G oraclePool/zone1
invalid option 'v'
For more info, run: zfs help create
root@T5-2-SDB-1:/# zfs create -V  50G oraclePool/zone1
root@T5-2-SDB-1:/# zfs create -V 50G oraclePool/zone2
root@T5-2-SDB-1:/# zfs create -V 50G odsRoot/zone1Root
root@T5-2-SDB-1:/# zfs create -V 50G odsRoot/zone2Root
root@T5-2-SDB-1:/# ^C

root@T5-2-SDB-1:/# zfs list odsRoot
NAME     USED  AVAIL  REFER  MOUNTPOINT
odsRoot  103G  30.7G    31K  /odsRoot
root@T5-2-SDB-1:/# zfs list oraclePool
NAME        USED  AVAIL  REFER  MOUNTPOINT
oraclePool  103G  30.7G    31K  /oraclePool




Thursday, October 13, 2016

Oracle Database 12c does not honor a parallel_min_servers value lower than the default


When I changed the value of parallel_min_servers from 512 to 256, the change did not take effect. Although parallel_min_servers was set to 256, Oracle continued to start 512 parallel server processes at startup.


SQL> show parameters parallel_min_servers ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_min_servers                 integer     512



SQL> alter system set parallel_min_servers=256 ;

System altered.

SQL> show parameters parallel_min_servers;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_min_servers                 integer     256
SQL> shutdown
SQL> startup



However, the number of ora_p0* processes remained 512.

Even the system view showed the server count as 512:

SQL> set line 80
SQL> set tab off
SQL> col statistic for a30
SQL>
SQL> select * from v$px_process_sysstat where statistic like '%Server%';

STATISTIC                           VALUE     CON_ID
------------------------------ ---------- ----------
Servers In Use                          0          0
Servers Available                     512          0
Servers Started                       512          0
Servers Shutdown                        0          0
Servers Highwater                       0          0
Servers Cleaned Up                      0          0
Server Sessions                         0          0

7 rows selected.

SQL>


According to metalink note 1678111.1, this is known/expected behaviour: in Oracle Database 12c, parallel_min_servers can no longer be set to a value lower than the default.

Tuesday, October 7, 2014

ZFSSA: HCC, Clone a File System

  • Upgrade to or install 11.2.0.3
  • Install patch 13041324, or the 11.2.0.3 Patch Set Update (PSU) 13343438 which contains 13041324, and verify that the patch is installed.

-bash-4.1$ ./opatch lsinventory

  • Set up the SNMP service for the ZFS appliance.
Click on Configuration->Services->System Settings.
Click on SNMP. Set up the following info:
  • SNMP community name :public
  • Authorized network : 0.0.0.0/0
  • Appliance Contact: ritu.kamboj@oracle.com
  • Trap destination: 127.0.0.1
On the right side, click on SNMP:Help for help on the fields:
SNMP community name: This is the community string that SNMP clients must provide when connecting
Authorized network: The network which is allowed to query the SNMP server, in CIDR notation. To block all clients, use 127.0.0.1/8 (localhost only); to allow all clients, use 0.0.0.0/0
Appliance contact: The string served by MIB-II OID .1.3.6.1.2.1.1.4.0. Setting this to a person or department's name will help SNMP clients determine who is responsible for this appliance
Trap destinations: The hostnames or IP addresses for sending SNMP traps to. Custom SNMP traps can be configured in the Alerts section. Set this to 127.0.0.1
Go back to services and restart the snmp service

On the server side, ensure that SNMP service is enabled:
# svcs | grep snmp
online         Jun_30   svc:/application/management/net-snmp:default
Issue the snmpget command to ensure that SNMP services are operational.
$ /usr/bin/snmpget -v1 -c public 10.129.195.209 1.3.6.1.4.1.42.2.225.1.4.2.0
SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: "Sun ZFS Storage 7420"

Refer to ZFS snmp docs for additional info about installed MIB:
  • Verify dNFS is configured by querying v$dnfs_servers.
Refer to the following doc to configure dNFS:
Step by Step - Configure Direct NFS Client (DNFS) on Linux (11g) (Doc ID 762374.1)

Check the alert log for the following messages to ensure that dNFS is configured successfully:
grep -i nfs alert*
alert_hcc.log:Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
alert_hcc.log:Direct NFS: channel id [0] path [10.129.195.209] to filer [10.129.195.209] via local [] is UP
alert_hcc.log:Direct NFS: channel id [1] path [10.129.195.209] to filer [10.129.195.209] via local [] is UP


Additional info:

  

./opatch[103]: whereis: not found [No such file or directory]
Invoking OPatch 11.2.0.1.7
Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home       : /oraclePool/11g/u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /oraclePool/12c/u01/app/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /oraclePool/11g/u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2012-07-12_18-44-26PM.log
Lsinventory Output file location : /oraclePool/11g/u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2012-07-12_18-44-26PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.3.0
There are 1 products installed in this Oracle Home.
Interim patches (1) :
Patch  13343438     : applied on Thu Jul 05 16:53:14 GMT-08:00 2012
Unique Patch ID:  14273862
Patch Description:  "Database Patch Set Update : 11.2.0.3.1 (13343438)"
   Created on 30 Dec 2011, 03:13:30 hrs PST8PDT
   Bugs fixed:
     12847466, 13499128, 12718090, 12917230, 13035804, 12938841, 13528551
     12861463, 12620823, 12950644, 13466801, 13041324, 12998795, 13343438
     13492735, 13362079, 9873405, 12797765, 12834027, 9703627, 12905058
--------------------------------------------------------------------------------
OPatch succeeded.









  • On the ZFSSA there exists a project named oracle, with a file system named test mounted as /export/test. Follow these steps to create a snapshot of /export/test and clone that snapshot:



  • Create a new project named clone. (Shares->Project->Clone)
  • Snapshot the project that you need to clone. (Shares->Shares->Edit oracle Project->Create a snapshot snap1)
  • Promote the snapshot to a clone (Shares->Shares->Edit Oracle/test->Snapshot->Edit snap1 ->Select Clone snapshot as a new share)
  • Mount the new FS


Creation of CDB

The following blog explains how to create a CDB named cdbtest containing one PDB named testpdb
  • export ORACLE_SID=cdbtest
  • To authenticate with a password file, create a password file
  • Create the initialization parameter file
  • Connect to the instance
  • Create a server parameter file
  • Start the instance with the nomount option
  • Issue the create database statement

The create database command has the following additional clauses to indicate that it is a CDB:

enable pluggable database
seed file_name_convert=('/u01/app/oracle/oradata/testcdb', '/u01/app/oracle/oradata/pdbseed')

The enable pluggable database clause creates a CDB with a root and a seed.

The seed file_name_convert clause generates file names for the seed's files in the /u01/app/oracle/oradata/pdbseed directory using the file names in the /u01/app/oracle/oradata/testcdb directory.
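The seed file_name_convert clause is essentially a path-prefix substitution; the rewrite it performs can be sketched as follows (the paths are the ones from the clause above; the datafile name system01.dbf is illustrative):

```shell
#!/bin/sh
# Sketch of the prefix rewrite that seed file_name_convert performs.
convert_path() {
  # $1 = source datafile path, $2 = old prefix, $3 = new prefix
  printf '%s\n' "$1" | sed "s|^$2|$3|"
}

convert_path "/u01/app/oracle/oradata/testcdb/system01.dbf" \
             "/u01/app/oracle/oradata/testcdb" \
             "/u01/app/oracle/oradata/pdbseed"
```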

After you create a CDB, it consists of the root and the seed.

  • Create additional tablespaces if required
  • Run scripts to build data dictionary views

The following three scripts need to be run:
catalog.sql
catproc.sql
pupbld.sql

  • Create PDBs to store user data.
You can create a PDB from the seed using the create pluggable database statement.
create pluggable database testpdb admin user testpdb identified by test123 file_name_convert=('/test1/bench/pdbseed','/test1/bench/testpdb' );
  • PDB is now in mount state, modify its mode to OPEN
alter pluggable database testpdb open ;

  •  Set the current session to point to PDB
alter session set container=testpdb ;
  • Create a local user which will own the schema
create user test identified by test;
grant dba to test;
  • Clone a PDB from an existing PDB.
A PDB named testpdb exists. We need to clone this PDB and call the cloned PDB clonepdb.
Open the source PDB in read-only mode, then issue:
create pluggable database clonepdb from testpdb file_name_convert=('/testpdb','/clonepdb') ;
Then change the mode of the clone to open:
alter pluggable database clonepdb open ;


  • Is it a CDB or a non-CDB database?
select cdb from v$database ;

  • What do the various con_id values stand for:

con_id   description
0        the data pertains to the entire CDB
1        the data pertains to the root (Oracle metadata)
2        the data pertains to the seed
3        the data pertains to the PDB
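For reference, the con_id mapping above as a tiny lookup helper (purely illustrative; not part of any Oracle tooling):

```shell
#!/bin/sh
# Illustrative lookup: what each CON_ID denotes in 12c container views.
con_desc() {
  case "$1" in
    0) echo "entire CDB" ;;
    1) echo "root (CDB\$ROOT)" ;;
    2) echo "seed (PDB\$SEED)" ;;
    *) echo "user PDB" ;;
  esac
}

con_desc 1
con_desc 3
```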


  • How many tablespaces do I have after I create a CDB containing 1 PDB:

SQL> select name,con_id from v$tablespace;

NAME                               CON_ID
------------------------------ ----------
SYSTEM                                  1
SYSAUX                                  1
UNDOTBS1                                1
USERS                                   1
TEMP                                    1
SYSTEM                                  2
SYSAUX                                  2
TEMP                                    2
SYSTEM                                  3
SYSAUX                                  3
TEMP                                    3

NAME                               CON_ID
------------------------------ ----------
USERS                                   3


The root, the seed, and the PDB each have their own SYSTEM, SYSAUX, and TEMP tablespaces.

Additionally, the root container has an UNDO tablespace. The root and PDB containers also have a USERS tablespace.
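The per-container counts can be tallied mechanically from the v$tablespace rows shown above; a small awk sketch over the same data:

```shell
#!/bin/sh
# Tally tablespaces per container from the v$tablespace listing above.
tally() {
  awk '{count[$2]++} END {for (c in count) print "con_id " c ": " count[c]}' | sort
}

tally <<'EOF'
SYSTEM 1
SYSAUX 1
UNDOTBS1 1
USERS 1
TEMP 1
SYSTEM 2
SYSAUX 2
TEMP 2
SYSTEM 3
SYSAUX 3
TEMP 3
USERS 3
EOF
```

This prints 5 tablespaces for the root (con_id 1), 3 for the seed (con_id 2), and 4 for the PDB (con_id 3), matching the discussion above.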


  • What are my user, con_id and con_name:

SQL> show user;
USER is "SYS"
SQL> show con_id;

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------


  • How many PDBs exist in a given CDB?

Query cdb_pdbs or dba_pdbs.
To see how many containers the CDB has, query v$containers.

Getting started with AWR

  • Do I have license to enable AWR reports:
 The Diagnostics and Tuning packs are available (as separately licensed options) on Enterprise Edition. Check the control_management_pack_access parameter to figure out which server manageability pack is active. The following packs are available:
  • The DIAGNOSTIC pack includes AWR, ADDM, and so on.
  • The TUNING pack includes SQL Tuning Advisor, SQL Access Advisor, and so on.
The default value for this parameter in Enterprise Edition is DIAGNOSTIC+TUNING; for all other editions it is NONE.
SQL> show parameters control_management%;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access       string      DIAGNOSTIC+TUNING



  • Is AWR enabled on my server:
Gathering database statistics using the AWR is enabled by default and is controlled by the STATISTICS_LEVEL parameter, which must be set to TYPICAL or ALL to enable statistics gathering by the AWR.
In the following example, AWR reporting is disabled.
SQL> show parameters statistics_level ;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
statistics_level                     string      BASIC
You can enable it with the following statement :
  • alter system set statistics_level='ALL';




  • How often are AWR reports generated:
By default , snapshots are captured once every hour and retained in the database for 8 days.

SQL> col retention format a20
SQL>  col snap_interval format a20
SQL> select dbid, snap_interval, retention from dba_hist_wr_control;

   DB Id    SNAP_INTERVAL        RETENTION
----------- -------------------- --------------------
 1622729499 +00000 01:00:00.0    +00008 00:00:00.0

 You can adjust the interval and retention of snapshot generation with the following command:
EXECUTE DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
  interval  =>  60,
  retention =>  20160);


Both interval and retention are specified in minutes.
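As a sanity check on those values, the retention of 20160 minutes works out to exactly 14 days; a quick shell computation:

```shell
#!/bin/sh
# Convert MODIFY_SNAPSHOT_SETTINGS values (given in minutes) to human units.
interval_min=60
retention_min=20160
echo "interval:  $((interval_min / 60)) hour(s)"
echo "retention: $((retention_min / 60 / 24)) day(s)"
```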

The following script (runawr.sh) manually generates an AWR report covering a run of RUNTIME seconds:

#!/bin/bash
if [[ $# -lt 1 ]]
then
  echo "usage: runawr.sh <DELAY(RUNTIME)>"
  exit 1
fi
RUNTIME=$1
# Take the starting snapshot
sqlplus / as sysdba << !
execute dbms_workload_repository.create_snapshot();
!
sleep $RUNTIME
# Take the ending snapshot and generate the report
sqlplus / as sysdba << !
var endSnapshot   number ;
var startSnapshot number ;
var myDbid        number ;
var myInst        number ;
execute select dbms_workload_repository.create_snapshot() into :endSnapshot from dual ;
execute select :endSnapshot - 1 into :startSnapshot from dual ;
execute select dbid into :myDbid from v\$database ;
execute select instance_number into :myInst from v\$instance ;
print :endSnapshot
print :startSnapshot
print :myDbid
print :myInst
spool awr_report.txt
select * from table (dbms_workload_repository.awr_report_text(:myDbid,:myInst,:startSnapshot,:endSnapshot));
spool off
exit
!

Wednesday, January 15, 2014

Pluggable Database Concepts

An Oracle Database instance is a combination of the SGA and a group of background processes. An Oracle database is a group of files that contain data. In a single-instance DB, there is a 1-to-1 relationship between an instance and a database. In a RAC environment, there is a many-to-1 relationship between instances and the DB.

To have multiple DBs on the same machine, you need to run multiple DB instances. So, with 3 mounted DBs you need 3 DB instances. The idea behind PDBs is to remove this duplication of resources and share one DB instance among multiple DBs, which become PDBs inside a single container database (CDB).


A container database (CDB) contains pluggable databases (PDBs)

A container database consists of the following three components, each of which is called a container:
  • the root container (CDB$ROOT), which contains metadata for Oracle-supplied objects
  • the seed PDB (PDB$SEED), a system-supplied template from which new PDBs are created
  • PDBs, which are generated by cloning the seed PDB and are assigned a unique ID upon creation


At a physical level, the CDB has a database instance and database files, just as a non-CDB does.

The following files exist only in root container:
  • The redo log files
    • The redo log files are common to the whole CDB, and therefore each redo record is annotated with the identity of the PDB where the change occurred. All PDBs in a CDB share the ARCHIVELOG mode of the CDB
  • The control file
    • The control files are also common to the whole CDB. The control files are updated to reflect any additional tablespaces and datafiles of plugged-in PDBs.
    • The UNDO tablespace is likewise common: in 12c there is one undo tablespace per instance, shared by all containers.
The following files exist either in both the root container and the user containers, or in only one of them (the PDBs and seed PDB together form the user containers):
  • Temporary tablespace (required in root; optional in user containers):
    • A temporary tablespace is required in the root container. This tablespace is optional in a PDB. Each PDB can hold its own temporary tablespace for its own local users.
  • System and SysAux tablespace (required in both containers)
    • Each container has its own data dictionary stored in its own SYSTEM tablespace, containing its own metadata, and a SYSAUX tablespace.
  • User tablespaces (only in user containers)
    • Each PDB can create its own tablespaces according to its application needs
Each datafile is associated with a specific container, identified by its CON_ID.



There are four ways to provision a PDB:
  • Create a new PDB from seed PDB
  • Plug in a non-CDB as a PDB
  • Clone a PDB from another PDB (in same or another CDB)
  • Plug an unplugged PDB into another CDB (a PDB can be plugged into only one CDB at a time)
You can plug a PDB into, or unplug it from, a CDB. A CDB has common users, which can log in to the root and administer the PDBs. You can create a CDB either with the DBCA tool or with the CREATE DATABASE SQL statement.

When you create a CDB, you can specify how many PDBs it will contain. Once the CDB is created, you can use DBCA to plug and unplug PDBs.

When you create a CDB using a CREATE DATABASE sql statement, you must enable PDBs (by specifying enable PLUGGABLE DATABASE clause) and specify the names and locations of the seed's files.



Backup and Recovery:
  • Backup :
RMAN supports backup and recovery of a CDB that contains zero or more PDBs. All RMAN operations are performed while connected to the root of the CDB; an individual PDB is not a valid RMAN target db. New syntax, namely the PLUGGABLE DATABASE clause, is introduced to allow backup of individual PDBs.
  • Recovery:
On a CDB, recovery can be done for the entire CDB, for a PDB, a tablespace, a datafile, or even a block.
If files in the root container are lost (namely redo log files, control files, temp tbsp, ...), recovery is only possible with the CDB in the mounted state.





Wednesday, January 8, 2014

How to create “Cold” RMAN Backup

 

This blog entry describes how to take an RMAN backup of a database in NOARCHIVELOG mode. The only backup option for a NOARCHIVELOG-mode database is a consistent whole-database backup.

The characteristics of a consistent backup are as under:

  • All read/write datafiles and control files are checkpointed with the same system change number (SCN)
  • A consistent backup does not need recovery after it is restored as all files in the backup are guaranteed to contain all changes up to the same SCN.

The steps involved in taking a consistent backup are as under:

  • Make sure the database is shut down properly.
    • If the database is not shut down consistently, for example after an instance failure or a “shutdown abort” statement, then the datafiles are inconsistent and your restore will fail. The script shuts down the database twice to ensure that the datafiles are in a consistent state.
  • Start up the database in mount mode
  • Configure the location where you want to write the backup
    • The script backs up the database to disk in the ‘/backupPool/backup’ directory
  • Configure autobackup of the control file and specify a location where the control file will be backed up
  • Back up the complete database

Here is the script to take a cold RMAN backup:

#!/bin/sh
echo "Taking backup of $ORACLE_SID\n"
rman target / log=backupbench.log << !


shutdown immediate ;
startup  pfile=p_build.ora force dba;
shutdown immediate;
startup pfile=p_build.ora mount ;


configure device type disk parallelism 15 backup type to  backupset ;
configure channel device type disk format '/backupPool/backup/ora_df%t_s%s_s%p' ;
configure controlfile autobackup on ;
configure controlfile autobackup format for device type  disk to  '/backupPool/backup/cf_%F' ;

backup database tag 'JGD_create';
alter database open;
exit
!