10 April 2012
Patch RAC DR server with 11g PSU5
Working notes, without the real directory names:
Pre-checks
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME
cd 11202_Patches/Linux_x86_64
export PATH=$PATH:/.../OPatch
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./13653086
export PATH=$PATH:$GRID_HOME/bin
crsctl stop cluster
crsctl stop crs
As the root user, execute:
$GRID_HOME/crs/install/rootcrs.pl -unlock
to avoid the error "Copy Action: Destination File "$GRID_HOME/bin/crsd.bin" is not writeable."
On each node
sudo su - grid
$GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local .../Linux_x86_64/13653086
$GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local .../Linux_x86_64/13343424
$GRID_HOME/OPatch/opatch lsinventory
sudo su - oracle
cd .../11202_Patches/Linux_x86_64/13653086
../custom/server/13653086/custom/scripts/prepatch.sh -dbhome $ORACLE_HOME
opatch napply -oh $ORACLE_HOME -local .../custom/server/13653086
opatch lsinventory <-- to check
cd ../11202_Patches/Linux_x86_64/13343424
opatch apply -oh $ORACLE_HOME -local
../custom/server/13653086/custom/scripts/postpatch.sh -dbhome $ORACLE_HOME
sudo su - grid
export PATH=$PATH:$GRID_HOME/bin
crsctl start crs
crsctl start cluster
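Once both homes are patched and the stack is back up, a quick verification pass helps. A sketch (patch numbers as above; run the opatch lines as the respective home owners, the crsctl lines with $GRID_HOME/bin on the PATH):

# Confirm both patches are registered in each home
$GRID_HOME/OPatch/opatch lsinventory -oh $GRID_HOME | grep -E '13653086|13343424'
$ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | grep -E '13653086|13343424'

# Confirm the clusterware came back healthy and at the expected version
crsctl check crs
crsctl query crs activeversion

The PSU readme typically also calls for running catbundle.sql against each database afterwards; check the readme for the exact step.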
Labels: 11gR2 PSU5 Oracle RAC Patch DR server
03 April 2012
Basic RAC 11gR2 commands
- Checks
- cemutlo -n; srvctl config scan (list cluster and scan name)
- crsctl check crs (summary check of top components)
- crsctl status resource -t (list detailed resources)
- crsctl query crs activeversion (check active version)
- srvctl status nodeapps/scan
- cluvfy comp crs (verify CRS integrity)
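The checks above can be strung together into a quick health-check pass (a sketch; run as the grid owner with $GRID_HOME/bin on the PATH):

cemutlo -n                      # cluster name
srvctl config scan              # SCAN name and VIPs
crsctl check crs                # CSS/CRS/EVM summary
crsctl query crs activeversion  # active clusterware version
crsctl status resource -t       # tabular resource status
srvctl status nodeapps
srvctl status scan
cluvfy comp crs -n all          # CRS integrity on all nodes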
- OCR
- crsctl query css votedisk
- ocrconfig -replace ocrmirror (add a mirrored copy)
- ocrconfig -add +ASM_DG (add OCR to an ASM disk group)
- ocrconfig -showbackup (the OCR is automatically backed up every 4 hours, at the end of each day, and at the end of each week; check $CRS_HOME/cdata/$cluster_name)
- ocrconfig -manualbackup
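A sketch of taking and locating an OCR backup (run as root; $cluster_name is a placeholder for the actual cluster name):

# Force an on-demand backup, then list automatic and manual backups
ocrconfig -manualbackup
ocrconfig -showbackup

# Automatic backups land under the clusterware home
ls -l $CRS_HOME/cdata/$cluster_name/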
- Various other commands
- crsctl stop cluster -all (without "-all", it stops the clusterware stack on the current node only)
- crsctl disable crs (disables automatic startup of the HA stack at boot)
- crsctl stop crs (stops the cluster on the local node)
- srvctl stop instance -d DBNAME -i "INST1,INST2" -o immediate (also stops services)
- srvctl stop database -d DBNAME
- srvctl disable service ...
- SQL> alter system kill session 'sid,serial#,@inst_id';
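For maintenance on a single node, the stop commands above are typically combined in this order (a sketch; DBNAME and INST1 are placeholders):

# As oracle: stop the local instance (and its services)
srvctl stop instance -d DBNAME -i INST1 -o immediate

# As root: stop clusterware on this node only, and keep it
# from restarting at boot for the duration of the maintenance
crsctl stop crs
crsctl disable crs

Re-enable with "crsctl enable crs" and "crsctl start crs" when done.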
- ASM:
- srvctl status asm
- /etc/init.d/oracleasm listdisks or querydisks
- Add additional disks:
- oracleasm configure -i (to configure ASM library software)
- oracleasm scandisks (to make the disks available on the other nodes)
- /etc/init.d/oracleasm createdisk DISK1 /dev/dasdxxxx (mark a disk as an ASM disk)
- ASM config file in /etc/sysconfig/oracleasm
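Putting the ASMLib steps together, adding a disk to the cluster might look like this (a sketch; the disk label and device name are placeholders, run as root):

# On one node: mark the new device as an ASM disk
/etc/init.d/oracleasm createdisk DISK1 /dev/dasdxxxx

# On every other node: pick up the new disk
/etc/init.d/oracleasm scandisks

# On all nodes: confirm it is visible
/etc/init.d/oracleasm listdisks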
- Monitoring/diagnostics
- $GRID_HOME/bin/diagcollection.pl ... (collect diagnostic information from the clusterware)
- oclumon debug log daemon module (all/osysmond/ologgerd/client) debuglevel 0-3
- oclumon dumpnodeview (dump the node view for all or some nodes over a time interval; lists system usage, top consumers, processes, and devices, among others)
- crsctl set trace "crs=1" (crs/cfs/css/evm, and level can be 0-5)
- export SRVM_TRACE=TRUE
- The column CLUSTER_WAIT_TIME in V$SQLAREA represents the wait time incurred by individual SQL statements for global cache events.
- Check AWR report and RAC related events:
- gc current block 2-way (block requested to be accessed in the current mode (U/D), sent by the master instance via Cache Fusion)
- gc current block busy (block being used by another instance, or holding instance couldn't write it fast enough)
- gc buffer busy acquire/release (block busy is now split into acquire and release)
- gc current/cr block congested (high load, CPU saturation)
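The gc waits and CLUSTER_WAIT_TIME can also be checked outside AWR. A sketch via sqlplus (the top-10 cutoff is an arbitrary example):

sqlplus -s / as sysdba <<'EOF'
-- Top 10 SQL by global cache wait time (11g: rownum on an ordered subquery)
select * from (
  select sql_id, cluster_wait_time, elapsed_time
  from   v$sqlarea
  order  by cluster_wait_time desc
) where rownum <= 10;

-- Instance-wide gc wait events across all nodes
select inst_id, event, total_waits, time_waited
from   gv$system_event
where  event like 'gc%'
order  by time_waited desc;
EOF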
RAC 11gR2 installation on DR Linux server
Notes from RAC installation on DR servers:
Prerequisites check:
- runcluvfy.sh (post hw & sw, pre-installation)
- Check/set IP address resolution
- Set up SSH connectivity
- Create directories in /opt for the software
- Check ASMLib is configured
- Set .profile
- Install Grid Infrastructure and database software
- No shutdown necessary; it will happen automatically (rolling fashion?)
- Patch as root "opatch auto ... -ocmrf"
- Install the 11g OEM agent (chmod 750 directory)
- Patch the agent
- Create the DR database (easy with grid control)
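The runcluvfy.sh checks in the first bullet can be invoked like this (a sketch; node names are placeholders, run from the unpacked grid software directory):

# Post hardware/OS check
./runcluvfy.sh stage -post hwos -n node1,node2 -verbose

# Pre clusterware-installation check
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose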
Labels: Oracle RAC 11gR2 installation Linux DR