02 February 2021

Cloning a database where source Oracle Home has a higher DST patch than a destination Oracle home 

Example: The source database has DST patch version 23, and the database was cloned into an Oracle Home with DST version 22.

Symptoms

rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Tue Feb 2 13:39:27 2021
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
ORACLE error from target database:
ORA-04045: errors during recompilation/revalidation of SYS.DBMS_RCVMAN
ORA-01804: failure to initialize timezone information
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_RCVMAN"
error executing package DBMS_RCVMAN in TARGET database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-06429: TARGET database is not compatible with this version of RMAN

On the new database, V_$TIMEZONE_FILE returns no rows:
select * from V_$TIMEZONE_FILE;

On the NEW database, DATABASE_PROPERTIES still reports the source DST version:

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 10) value
FROM DATABASE_PROPERTIES 
WHERE PROPERTY_NAME = 'DST_PRIMARY_TT_VERSION'; 
PROPERTY_NAME                            VALUE
---------------------------------------- ----------
DST_PRIMARY_TT_VERSION                   23

Solution

update sys.props$ set VALUE$=22 where NAME='DST_PRIMARY_TT_VERSION';
Restart the database.
This solution is not elegant at all, but it worked for us.
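A pre-clone check saves the cleanup. This is a rough sketch, not an official procedure; SOURCE_DST and TARGET_DST are placeholder values that you would really capture with sqlplus queries such as the DATABASE_PROPERTIES one above (for the source) and a query against the destination home's timezone files.

```shell
#!/bin/sh
# Hypothetical values - in practice capture these with sqlplus, e.g.
#   select property_value from database_properties
#   where property_name = 'DST_PRIMARY_TT_VERSION';
SOURCE_DST=23   # DST version of the source database
TARGET_DST=22   # highest DST version installed in the destination home

# A clone only works cleanly if the destination home's DST patch level
# is at least as high as the source database's DST version.
if [ "$SOURCE_DST" -gt "$TARGET_DST" ]; then
  echo "WARNING: source DST version $SOURCE_DST > destination $TARGET_DST"
  echo "Apply the DST $SOURCE_DST patch to the destination home before cloning."
else
  echo "OK: destination home supports DST version $SOURCE_DST"
fi
```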



18 June 2019

OEM repository commands


OEM 13c occasionally leaves orphaned alerts, or targets stuck in suspended mode, that clearstate can't remove. After some investigation, I created the commands below; I've used them for a few years and they work well.

Get targets out of suspended mode

select * from sysman.em_current_availability  where current_status in (4,5,6) and SEVERITY_GUID is not null;
update sysman.em_current_availability set CURRENT_STATUS=1,SEVERITY_GUID=null  where current_status in (4,5,6) and SEVERITY_GUID is not null;

Clear up orphaned alerts

col msg for a50
select TARGET_GUID, EVENT_SEQ_ID, msg, REPORTED_DATE,CLOSED_DATE  from sysman.GC$EVENTS where event_seq_id in (select event_seq_id from sysman.EM_EVENT_SEQUENCES_E where open_status=1 and msg like '%&msg_text%');
update sysman.EM_EVENT_SEQUENCES_E set open_status=0, closed_date=sysdate, severity=0, prev_severity=16 where (TARGET_GUID = '&target_id' or  EVENT_SEQ_ID='&event_seq_id' ) and open_status=1; 
--or
update sysman.EM_EVENT_SEQUENCES_E set open_status=0, closed_date=sysdate, severity=0, prev_severity=16 where open_status=1 and event_seq_id in (select EVENT_SEQ_ID from sysman.GC$EVENTS where event_seq_id in  (select event_seq_id from sysman.EM_EVENT_SEQUENCES_E where open_status=1 and msg like '%&msg_text%'));







18 June 2018

Convert single instance database to a clustered database

Clusterize the database

oh; cd dbs
cp initDBNAME.ora initDBNAME.ora.singleinstance

create pfile='/oracle/DBNAME/121/dbs/initDBNAME.ora' from spfile;

vi initDBNAME.ora
Add:
*.cluster_database = TRUE
*.remote_listener='scanname:xxx'
DBNAME1.instance_number=1
DBNAME2.instance_number=2
DBNAME1.thread=1
DBNAME2.thread=2
DBNAME1.undo_tablespace='PUNDO1'
DBNAME2.undo_tablespace='PUNDO2'
If you see any DBNAME.__ parameters, change them to DBNAME1 and DBNAME2. Example:
DBNAME1.__db_cache_size=1895825408
DBNAME2.__db_cache_size=1895825408
Important: Remove the *.undo_tablespace parameter

create undo tablespace PUNDO1 datafile '+ORADATA3' SIZE 2G AUTOEXTEND ON MAXSIZE 5G;
create undo tablespace PUNDO2 datafile '+ORADATA3' SIZE 2G AUTOEXTEND ON MAXSIZE 5G;

Add log files for thread 2

List the current log files and their sizes from v$log and v$logfile.

#Add as many groups as you have in thread 1, with the same size.
Example:
alter database add logfile thread 2 group 31 size 200M;
alter database add logfile thread 2 group 32 size 200M;
alter database add logfile thread 2 group 33 size 200M;
...
alter database enable public thread 2;
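Rather than typing the statements, you can generate them. A small sketch, assuming (hypothetically) three 200M groups in thread 1 and new group numbers starting at 31; in practice take the group count and sizes from v$log.

```shell
#!/bin/sh
# Generate ALTER statements mirroring thread 1's groups in thread 2.
# FIRST_NEW_GROUP / THREAD1_GROUPS / SIZE are illustrative assumptions;
# really they come from: select group#, bytes from v$log where thread# = 1;
FIRST_NEW_GROUP=31
THREAD1_GROUPS=3
SIZE=200M

i=0
while [ "$i" -lt "$THREAD1_GROUPS" ]; do
  g=$((FIRST_NEW_GROUP + i))
  echo "alter database add logfile thread 2 group $g size $SIZE;"
  i=$((i + 1))
done
echo "alter database enable public thread 2;"
```

Pipe the output into sqlplus, or paste it in after a quick review.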

oh; cd dbs
cp initDBNAME.ora initDBNAME1.ora
cp initDBNAME.ora initDBNAME2.ora
cp orapwDBNAME orapwDBNAME1
cp orapwDBNAME orapwDBNAME2
vi ../network/admin/tnsnames.ora
Change the DBNAME entry to:
DBNAME=
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scanname)(PORT = xxxx))
    (CONNECT_DATA =  (SERVER = DEDICATED)  (SERVICE_NAME = DBNAME)  ) )

Start the new cluster database

shutdown immediate
#Start both instances individually
export ORACLE_SID=DBNAME2
startup nomount pfile='/oracle/DBNAME/121/dbs/initDBNAME.ora'
create spfile='+ORADATA3/DBNAME/spfileDBNAME.ora' from pfile='/oracle/DBNAME/121/dbs/initDBNAME.ora';
shutdown immediate
startup pfile='/oracle/DBNAME/121/dbs/initDBNAME2.ora'

ssh to other node
export ORACLE_SID=DBNAME1
startup pfile='/oracle/DBNAME/121/dbs/initDBNAME1.ora'
!! shutdown immediate on BOTH nodes !!

srvctl add database -d DBNAME -o /oracle/DBNAME/121
srvctl add instance -d DBNAME -i DBNAME1 -n node1
srvctl add instance -d DBNAME -i DBNAME2 -n node2
srvctl modify database -d DBNAME -p '+ORADATA3/DBNAME/spfileDBNAME.ora'
srvctl config database -d DBNAME

srvctl start instance -i DBNAME2 -d DBNAME
srvctl start instance -i DBNAME1 -d DBNAME
srvctl stop database -d DBNAME
srvctl start database -d DBNAME

Voila!

Duplicate database to a different server

Prepare OLD database

cd $ORACLE_HOME/dbs; cp orapwDBNAME orapwDBNAME.beforemigration
orapwd file=/oracle/DBNAME/12102/dbs/orapwDBNAME sysdg=yes sysbackup=yes force=y
Stop all cronjobs on OLD server

Prepare NEW database

Prepare all directories
oh; cd dbs
orapwd file=/oracle/DBNAME/121/dbs/orapwDBNAME sysdg=yes sysbackup=yes force=y

Update oratab

vi /etc/oratab
Add line:
DBNAME:/oracle/DBNAME/121:N
. oraenv

Create a separate listener for duplicate process

We are doing this so we don't have to touch the SCAN or VIP listeners and disturb the existing cluster.
oh; cd network/admin
lsnrctl stop listener_duplicate

vi listener.ora
Add this:
LISTENER_DUPLICATE=
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = servername)(PORT = xxxx)) )

ADR_BASE_LISTENER_DUPLICATE = /oracle/DBNAME/saptrace
CONNECT_TIMEOUT_LISTENER_DUPLICATE = 10
TRACE_LEVEL_LISTENER_DUPLICATE = OFF

SID_LIST_LISTENER_DUPLICATE =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DBNAME)
      (ORACLE_HOME = /oracle/DBNAME/121)
      (SID_NAME = DBNAME)  ) )

lsnrctl start listener_duplicate

Set up init.ora

Ensure these parameters are changed:
*.compatible=12.1.0.2.0
*.control_files
*.db_create_file_dest
*.db_recovery_file_dest
*.db_recovery_file_dest_size
*.db_create_online_log_dest_1
*.db_create_online_log_dest_2
*.dg_broker_start=FALSE
*.db_unique_name=DBNAME
*.log_archive_dest_1
*.log_archive_dest_state_1='ENABLE'

Set up TNS connectivity between servers

Add to tnsnames.ora on NEW server:

DBNAME_OLD=
  (DESCRIPTION =
    (ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = oldhost)(PORT = xxxx) ))
    (CONNECT_DATA = (SID = DBNAME) (GLOBAL_NAME = DBNAME.WORLD)))

DBNAME=
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST=servername)(PORT = xxxx))
    (CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = DBNAME)(UR = A) )
  )

LISTENER_DUPLICATE = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = servername)(PORT = xxxx)))

Check all with:
sqlplus sys@DBNAME_OLD as sysdba
sqlplus sys@DBNAME as sysdba
tnsping listener_duplicate

Startup nomount NEW database

startup nomount pfile='/oracle/DBNAME/121/dbs/initDBNAME.ora';
create spfile from pfile='/oracle/DBNAME/121/dbs/initDBNAME.ora';

Startup mount OLD database

We are doing this so we don't have to do recovery on the NEW db.
STARTUP MOUNT;

Duplicate

#No need to allocate channels as we are going directly over the network
rman AUXILIARY sys@DBNAME target sys@DBNAME_OLD
RUN
{
DUPLICATE TARGET DATABASE TO DBNAME FROM ACTIVE DATABASE;
}

Check the health of the new database before releasing it for use.

Standby Catchup with incremental SCN backup

If standby is considerably lagging behind production, “roll it forward” using the incremental backup.

DR

DGMGRL> edit database 'DBNAMEDR_A' set state='APPLY-OFF';

Find both SCNs from the two queries below and take the lower one. Most blog posts refer only to current_scn, but the file header SCN can be lower than current_scn, so you have to check both. They differ because the datafile header is only checkpointed from time to time, while current_scn advances continuously; in practice, current_scn is usually higher than fhscn.

 In this example, you would take 5716576040, not 5716599640.
select current_scn from v$database;
-----------
 5716599640
select min(fhscn) from x$kcvfh;
-----------
 5716576040
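The take-the-lower-one rule from above, as a minimal sketch using the two example SCNs:

```shell
#!/bin/sh
# Pick the lower of the two SCNs before running the incremental backup.
CURRENT_SCN=5716599640   # from: select current_scn from v$database;
MIN_FHSCN=5716576040     # from: select min(fhscn) from x$kcvfh;

if [ "$MIN_FHSCN" -lt "$CURRENT_SCN" ]; then
  FROM_SCN=$MIN_FHSCN
else
  FROM_SCN=$CURRENT_SCN
fi
# Prints the RMAN command with the lower SCN filled in.
echo "backup incremental from scn $FROM_SCN database format '/backup/DBNAME_inc%U';"
```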

Primary

rman target /
backup incremental from scn 5716576040 database format '/backup/DBNAME_inc%U' tag 'incremental_backup';

DR

rman target /
catalog start with '/backup/DBNAME_inc';
recover database noredo;

DGMGRL> edit database 'DBNAMEDR_A' set state='APPLY-ON';

This is a simple scenario, which assumes no new datafiles have been created on the primary since the given SCN.



29 May 2018

Migration of OEM to another hosts fails with "Setup EM infrastructure failed!"


This was a tricky one. I migrated OEM 13c from Linux 5 to Linux 6 without any issues. When I tried the same migration from Linux 6 to 7, the recovery failed because nodemanager couldn't start on the new node. The first time I saw the error, I had a niggling feeling it was nodemanager's properties file, but I wasn't sure which parameters were the problem.

There are a few Oracle bugs associated with this issue, but they all had different symptoms. It took Oracle a week to connect those bugs to this issue, and it really was the nodemanager.properties file that needed changing.

I'm publishing it hoping someone else will be able to find a quick solution when they hit the same errors. :-)

Symptoms

$ORACLE_HOME/bin/omsca recover -as -ms -nostart -backup_file /data/backup/oradata/oms/opf_ADMIN_20180529_093215.bka
Oracle Enterprise Manager Cloud Control 13c Release 13.2.0.0.0
Copyright (c) 1996, 2016, Oracle. All rights reserved.

OS check passed.
OMS version check passed.
Performing Admin Server Recovery...
Retrieved Admin Server template.
Source Instance Host name where configuration is exported : oldserver
instHost name retrieved from local host name : newserver
Populated install params from backup...
Enter Administration Server user password:
Confirm Password:
Enter Node Manager Password:
Confirm Password:
Enter Repository database user password:
Enter Agent Registration password:
Confirm Password:

Doing pre requisite checks ......
Pre requisite checks completed successfully

Checking Plugin software bits
Proceed to recovery
Setting up domain from template...
Setup EM infrastructure failed!
OMSCA Recover failed

When you check the logs, you find errors like these:
Fatal error in NodeManager server: Identity key store file not found: 

weblogic.nodemanager.NMConnectException: Connection refused. Could not connect to NodeManager.

Cause

Nodemanager is configured with custom certs.

Solution

Make a backup of nodemanager.properties.
Delete the lines starting with the following parameters, then restart OMS:
CustomIdentityAlias
KeyStores
CustomIdentityKeyStoreFileName
CustomIdentityKeyStorePassPhrase
CustomIdentityPrivateKeyPassPhrase
CustomTrustKeyStoreFileName
CustomTrustKeyStorePassPhrase



14 March 2018

OEM target stuck in "Notification Blackout"

This is OMS 13c bug.

As there is no emctl or emcli command that fixes this problem, update the brownout status directly in the EM repository (substitute your own target GUIDs):

update sysman.MGMT_CURRENT_AVAILABILITY
set brownout_status=0
where target_guid in ('107F8CCD2C86105AE0531D5A710A8A7F','4D4FA30DBA388074E6821EDE4DB4B031');


08 November 2017

ORA-15137 cluster in rolling patch

This happened during the apply of PSU 26610308 on GI. The opatch script rolled back two patches on one node of the cluster, but not on the other. The DBA applying the patch didn't notice, as the patch reported "Success" at the end.

The next day, when we tried to add a diskgroup, we got ORA-15137.

Symptoms

ORA-15137 when running Grid configuration commands. For example:

   CREATE DISKGROUP ... etc...
   ORA-15018: diskgroup cannot be created 
   ORA-15137: cluster in rolling patch 

When I checked the Grid patch levels on both nodes, they were different. Because of that, GI was still in rolling patch mode.
   [racnode1:~] crsctl query crs softwarepatch 
   Oracle Clusterware patch level on node racnode1 is [2100979899]. 

   [racnode2:~] crsctl query crs softwarepatch 
   Oracle Clusterware patch level on node racnode2 is [1930497511]. 

While listing patches on both nodes, I noticed two extra patches on the first node.
    $ORACLE_HOME/bin/kfod op=patches 
   List of Patches
   ===============
   19769480
   20299023
   20831110
   21359755
   21436941
   21359761  <== These two patches were not on node2
   21359758  <== These two patches were not on node2
   21948354
   22291127
   23054246
   24006101
   24732082
   25755742
   25869830
   26609783

This meant we needed to roll back those two patches. When I tried to do this with the normal opatch rollback option, GI said the patches were 'inactive' and couldn't roll them back. So we needed a way to force a rollback.
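Comparing the two nodes' kfod output mechanically is less error-prone than eyeballing long lists. A sketch with shortened patch lists embedded for illustration; in practice you would capture `$ORACLE_HOME/bin/kfod op=patches` from each node into the two files:

```shell
#!/bin/sh
# Find patches present on node1 but not node2 - candidates for rollback.
# The lists here are embedded sample data; really they come from kfod.
printf '%s\n' 19769480 20299023 21359755 21359758 21359761 | sort > /tmp/node1.patches
printf '%s\n' 19769480 20299023 21359755                   | sort > /tmp/node2.patches

# comm -23 prints lines unique to the first (sorted) file.
echo "Patches only on node1:"
comm -23 /tmp/node1.patches /tmp/node2.patches
```

With our data this prints 21359758 and 21359761 — exactly the two patches that had to be forced out.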

Solution

I'm publishing this because Google wasn't helping with this error. It took me a while before I stumbled on patchgen and realised we could use it for this. I hope this helps other DBAs who need to force a patch rollback.

Login to the node with "extra" patches (racnode1 for us)
Stop crs  
cd GI_HOME/bin 
./patchgen commit -rb 21359761 
./patchgen commit -rb 21359758 
rootcrs.pl -patch 
Verify with kfod that patch levels are the same.
Restart crs 

Lesson learned

A softwarepatch level check was added to our patching process, so this kind of situation won't happen again.



26 July 2017

OEM: Metric "Redo Apply Rate (KB/second)" shows 0

We received an alert for one of our clustered standby databases, with the OEM metric "Redo Apply Rate (KB/second)" showing 0, indicating no apply was going on. When I checked Data Guard, it showed a consistent stream of redo apply, so apply was working fine.

Cause

I searched the Oracle support site and Google, but there is no known Oracle bug for this. I think it was caused by one instance applying the redo while the other does not: OEM reports 0 because the metric is calculated on the non-applying instance.

Workaround

Change the SQL in the Agent's PL file to look across both clustered instances and take the maximum redo apply rate.

File:
/opt/oracle/middleware/agent12c/plugins/oracle.sysman.db.agent.plugin_12.1.0.6.0/scripts/db/dg/dgutil.pl

This is my workaround, not an official Oracle one, but it fixed our problem and has been running fine since.

From

 {
    $sql = "SELECT s.value*512/1000 KB_bytes from v\$sysstat s where s.name='redo blocks read for recovery'";
  }

To

 {
    $sql = "SELECT max(s.value*512/1000) KB_bytes from gv\$sysstat s where s.name='redo blocks read for recovery'";
  }



19 July 2017

Oracle bug: wrong spfile in init*.ora

This is a pretty simple problem, and there is an Oracle bug explained in Doc ID 2145349.1, but Oracle support doesn't offer a solution, so I thought I should connect the two.

Symptoms



Mon Jul 17 21:50:44 2017 ERROR: Unable to get logical block size for spfile '+ORADATA3/PPEN/spfilePPEN.ora'.
Mon Jul 17 21:50:45 2017 ERROR: Unable to get logical block size for spfile '+ORADATA1/PPEN/spfilePPEN.ora'.
Cause


The problem is caused by the init*.ora file pointing to a non-existent spfile, while the spfile instance parameter points to the correct one.

Solution



Create an alias in ASM that matches the name in init*.ora file.

ASMCMD> pwd
+oradata1/PPEN
ASMCMD> ls PARAMETERFILE
spfile.268.948615175
ASMCMD> mkalias PARAMETERFILE/spfile.268.948615175 spfilePPEN.ora




17 May 2017

Error while using OEM to create RAC standby

When you use OEM to create a standby of a RAC database, it creates a single instance on one standby node, which you later need to convert to a clustered database.

In my example that instance is called PPEN (playpen).

I converted PPEN to a clustered database, and could then see two instances: PPEN1 and PPEN2. The previous "PPEN" instance was automatically removed from the cluster node, and I removed it manually from OEM. The problem is that it wasn't removed from the Data Guard configuration.

Symptoms


When you go to OEM and choose 'Availability'/'Verify Database Data Guard Configuration', you will get these warnings:

Initializing
Connected to instance host2:PPEN2
Starting alert log monitor...
Updating Data Guard link on database homepage...
WARNING: Cluster database instance PPEN is not discovered.
WARNING: Switchover or failover may not succeed as a result.
Skipping verification of fast-start failover static services check.

Data Protection Settings:
  Protection mode : Maximum Performance
  Redo Transport Mode settings:
    PPEN: ASYNC
    PPENDR_A: ASYNC
... etc...

Solution


The fix is to delete the initial PPEN instance from the Data Guard configuration.

DGMGRL>  show database verbose "PPENDR_A";
Database - PPENDR_A

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 4.00 KByte/s
  Active Apply Rate:  781.00 KByte/s
  Maximum Apply Rate: 1.01 MByte/s
  Real Time Query:    OFF
  Instance(s):
    PPEN  -- this instance shouldn't be here
    PPEN1 (apply instance)
    PPEN2

DGMGRL> remove instance "PPEN" on database "PPENDR_A";

This looks like a bug, but a search of Oracle support didn't show it, and googling didn't help me find this warning, so hopefully this helps other DBAs.



25 November 2016

Two after-12c-upgrade Oracle bugs

After upgrading to 12.1.0.2 on a RAC cluster, the SQL below, which reads column group statistics, became very slow:


     select count(*) from sys.col_group_usage$ where obj# = :1 and cols = :2
     and trunc(sysdate) = trunc(timestamp) and bitand(flags, :3) = :3
     (Doc ID 2013851.1)



According to the Oracle support site this was a known bug, and you could work around it by deleting the extended statistics or by setting cursor_sharing to FORCE. After experimenting with both options, we set cursor_sharing to 'FORCE'. That triggered another bug: the SQL below started taking 2.5 seconds per execution, waiting on "GV$: slave acquisition retry wait time".

     select inst_id, instance_name, host_name from gv$instance
     where instance_mode in (:"SYS_B_0",:"SYS_B_1") order by inst_id 



Solution

Oracle had a suggestion on how to work around this, but it didn't apply to us, as we already had parallel_execution_message_size=16K (Doc ID 1589592.1).

Oracle released a new merged patch for us: 
Bug 24434288 - MERGE REQUEST ON TOP OF DATABASE PSU 12.1.0.2.160719 FOR BUGS 19543384 20907061.


22 July 2015

OEM: create wallet and import trusted certificates


Create wallet

mkdir $ORACLE_HOME/wallets
cd  $ORACLE_HOME/wallets
orapki wallet create -wallet $ORACLE_HOME/wallets -pwd xxx -auto_login
orapki wallet add -wallet $ORACLE_HOME/wallets  -dn "CN=servername.domain.nz, OU=ICT, O=CompanyName, L=YourCity, ST=YourCity, C=NZ" -keysize 2048 -pwd xxx

Create certificate request file

orapki wallet export -wallet $ORACLE_HOME/wallets -dn "CN=servername.domain.nz, OU=ICT,  O=CompanyName, L=YourCity, ST=YourCity, C=NZ"  -request $ORACLE_HOME/wallets/user_cert.req -pwd xxx

Add trusted certificates

Send the certificate request file to the security team and wait for them to send the certs back.
orapki wallet add -wallet $ORACLE_HOME/wallets -trusted_cert -cert $ORACLE_HOME/wallets/Root_CA.cer -pwd xxx
orapki wallet add -wallet $ORACLE_HOME/wallets -trusted_cert -cert $ORACLE_HOME/wallets/Policy.cer -pwd xxx
orapki wallet add -wallet $ORACLE_HOME/wallets -trusted_cert -cert $ORACLE_HOME/wallets/Issuing.cer -pwd xxx
orapki wallet add -wallet $ORACLE_HOME/wallets -user_cert -cert $ORACLE_HOME/wallets/SSO_Cert.cer -pwd x

Secure console

emctl secure console -wallet  $ORACLE_HOME/wallets
Restart OMS

Secure OMS with third-party certificates 

Create a new file trusted_certs.txt and copy the Root, Policy and Issuing certificate text inside. Don't add the user cert.

emctl secure oms -wallet $ORACLE_HOME/wallets -trust_certs_loc $ORACLE_HOME/wallets/trusted_certs.txt
Restart OMS

Re-secure all agents

For each Agent:
emctl secure agent
emctl upload

Add SSL Self-Signed Certificate to master agent

Check the certificates:
emctl secdiag openurl -url https://servername:xxxx/empbs/upload
openssl s_client -connect servername:xxxx

Re-secure each agent with:
emctl pingOMS
emctl stop agent; emctl secure agent
.. registration password…
#Add all certificates to the agent (Oracle Doc ID 2220788.1 )
emctl secure add_trust_cert_to_jks -trust_certs_loc /opt/oracle/middleware13c/wallets/RootCA.cer -alias RootCA -password welcome
emctl secure add_trust_cert_to_jks -trust_certs_loc /opt/oracle/middleware13c/wallets/IssuingCA1.cer  -alias IssuingCA1 -password welcome
emctl secure add_trust_cert_to_jks -trust_certs_loc /opt/oracle/middleware13c/wallets/usersert.cer -alias usersert -password welcome
emctl start agent

emctl start agent; emctl upload agent



25 May 2015

Switch off parallelism in Tuning Advisor profile recommendations


BEGIN 
DBMS_AUTO_SQLTUNE.set_auto_tuning_task_parameter( 
parameter => '_sqltune_control', 
value => 15); 
END; 
/

30 April 2015

SQL Tuning Advisor - Switch off plan recommendations with parallelism


Symptoms

SQL Tuning Advisor recommends lots of plans with PX hints, claiming that parallelism lowers the cost. That may be the case, but if you don't want to use parallelism, there is a way to suppress those recommendations.

Solution

BEGIN
DBMS_AUTO_SQLTUNE.set_auto_tuning_task_parameter(
parameter => '_sqltune_control',
value => 1+2+4+8+32);
END;
/  

To switch it back:
value => 1+2+4+8+10+32);
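The value passed to _sqltune_control is just the sum of the individual flags written out above; evaluating the two sums (my reading of this undocumented underscore parameter, not official Oracle documentation):

```shell
#!/bin/sh
# Evaluate the flag sums used in the two posts above; the parameter
# ultimately receives the numeric total.
echo $((1+2+4+8+32))      # suppress-parallel value used above = 47
echo $((1+2+4+8+10+32))   # switch-it-back value used above    = 57
```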

10 April 2015

ASM Disk - Wrong Label


Symptoms

After a RAC node reboot, one of the ASM disks didn't get mounted.

We could see the disk (partition) /dev/> with "fdisk -l", but for some reason ASM either couldn't see it, or couldn't recognise it as an ASM disk.

We looked at the ASM header (with the kfed command). The header was correct. We checked the other blocks of the disk; they were all OK. Then we noticed that the ASM label was empty, which it is not supposed to be.

Errors


NOTE: Disk ORADATA2_0000 in mode 0x7f marked for de-assignment
NOTE: Disk  in mode 0x7f marked for de-assignment
ERROR: diskgroup ORADATA2 was not mounted
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "1" is missing from group number "3"

2015-04-09 14:13:21.421: [ default]failed to initialize skgp context
2015-04-09 14:13:21.422: [ default]slos op  :  sslssreghdlr
2015-04-09 14:13:21.422: [ default]slos dep :  Error 0 (0)
2015-04-09 14:13:21.422: [ default]slos loc :  sskgpinit1
2015-04-09 14:13:21.422: [ default]slos info:[   CLWAL]clsw_Initialize: OLR initlevel [30000]
2015-04-09 14:13:21.423: [ default]a_init: Unable to get log name. Retval:[-4]

Cause

The ASM disk lost its ASM label.
List the label:
$ asmtool -I -l "/dev/oracleasm" -n /dev/> -a label  
     asmtool: Disk "/dev/>" is marked an ASM disk with the label ""  

As you can see, the label is "", which is wrong. It should be "ORADATA22".

Fix

Stop the RAC on that node, change the label.

su - root
[root@bin]# crsctl stop crs
[root@bin]# /etc/init.d/oracleasm force-renamedisk /dev/> ORADATA22 
[root@bin]# asmtool -I -l "/dev/oracleasm" -n /dev/> -a label  
asmtool: Disk "/dev/>" is marked an ASM disk with the label "ORADATA22"
[root@bin]# oracleasm scandisks


We needed to change the label on the second node, too.

06 March 2015

Oracle RAC Listener relink "Linux Error: 32: Broken pipe"


Symptoms


TNS-12518: TNS:listener could not hand off client connection
TNS-12547: TNS:lost contact
TNS-12560: TNS:protocol adapter error
TNS-00517: Lost contact
Linux Error: 32: Broken pipe

Cause


$ORACLE_HOME/bin was removed by (we think) dbua
We copied the files back from the other node and re-ran root.sh. The instance started up fine, but we couldn't connect to it remotely.

Fix


Relink:
$ORACLE_HOME/oui/bin/runInstaller -relink -waitForCompletion -maketargetsxml $ORACLE_HOME/inventory/make/makeorder.xml -logLocation $ORACLE_HOME/install ORACLE_HOME=$ORACLE_HOME > $ORACLE_HOME/install/relink.log

30 January 2015

Oracle OEM 12c 2-system upgrade - Notes on bugs/issues


The upgrade hit quite a few bugs/problems for which we couldn't find solutions on either the Oracle support site or Google, so I'm listing them here:

Error


Error during "Repository upgrade": ORA-00955 in file severity_indexes.sql, failing when the command "create index em_violations_idx_08 ..." was run.

Workaround


drop index sysman.em_violations_idx_08;

Error


java.sql.SQLSyntaxErrorException "The host on which you are performing the Two system upgrade does not match with the host name you entered in the Enterprise Manager 12c Upgrade Console"

Workaround

Although the Oracle support site said to check the PRE_UPGC_MASTER_INFO table, the hostname in that table was correct. We'd had lots of bugs earlier in the installation, so somehow the whole installation got confused. We stopped that runInstaller, ran another, and the error didn't show up again.

Error


ORA-00600: internal error code, arguments: [qcisSetPlsqlCtx:tzi init], [], [], [], [], [], [], [], [], [], [], []

Workaround


Increase parameters to at least 3G SGA, 1G PGA, and session_cached_cursors=250.

Error


Empty temporary tablespace: TEMP

Workaround


Manually add a TEMP tablespace (we had to add TEMP1, switch to it, drop TEMP, recreate TEMP, and switch back).
Error

ORA-00942: SCHEMA_VERSION_REGISTRY is missing. One of the log files showed that OBG$MIG was missing, but that was a red herring.
Workaround

create or replace force view "SYSTEM"."SCHEMA_VERSION_REGISTRY"
(
  "COMP_ID",
  "COMP_NAME",
  "MRC_NAME",
  "MR_NAME",
  "MR_TYPE",
  "OWNER",
  "VERSION",
  "STATUS",
  "UPGRADED",
  "START_TIME",
  "MODIFIED"
)
as
    select comp_id,
           comp_name,
           mrc_name,
           mr_name,
           mr_type,
           owner,
           version,
           status,
           upgraded,
           start_time,
           modified
      from SYSTEM.SCHEMA_VERSION_REGISTRY$
  order by comp_id;

Error


MDS Schema Creation status: FAILURE 
MDS Schema Creation is failed
Failed to create OPSS schema

$more emmdscreate.err
ERROR:MDS: ORA-00955: name is already used by an existing object
File:/data1/opt/oracle/middleware/oms/rcu/integration/mds/sql/cremdcmtbs.sql
Statement:create table MDS_PARTITIONS (
 PARTITION_ID                  NUMBER NOT NULL,
 PARTITION_NAME                VARCHAR2(200) NOT NULL,
 PARTITION_LAST_PURGE_TIME     TIMESTAMP,
 PARTITION_MT_STATE            NUMBER
 )
Creation of MDS schema fails

Workaround


Press "Retry". This solution was on the Oracle Support site.
Error

ERROR: Invalid username and/or password
LOG: Local Authentication Failed...Attempt PAM authentication...PAM failed with error: No account present for user
Workaround

After you click "Deploy and Configure Agents", choose "Override Oracle Home Preferred Credentials" instead of the preferred credentials. This is probably due to the new OMS 12c not having default preferred credentials yet.

18 October 2014

TDP SAP for Oracle 5.5 library errors

I couldn't find this error anywhere on Google, and IBM Support had been looking into it for two days without success. I found a workaround, so it's worth sharing with the community.

The errors showed up during the upgrade of TSM TDP for SAP for Oracle from v5.5 to v6.3. We restored v5.5, deinstalled, reinstalled, and ended up with the errors below.

Error

When running tdpoconf or backup:

ld.so.1: tdpoconf: fatal: relocation error: file /opt/tivoli/tsm/client/oracle/bin64/tdpoconf: symbol tdpoQueryVersion: referenced symbol not found
Killed

RMAN-03009: failure of allocate command on sbt_1 channel at 10/18/2014 18:15:55

ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library

BKI9009W: The following products are not compatible: ESD_DataMover (5.5.0.0) and ESD_Prole (6.3.0.0).

Workaround 

(run as root, and make a backup first)
  rm /usr/lib/libobk.so
  rm /usr/lib/sparcv9/libobk.so
  rm /oracle/$ORACLE_SID/112_64/lib/libobk.so
  ln -s /opt/tivoli/tsm/client/oracle/bin64/libobk.so /usr/lib/libobk.so
  ln -s /opt/tivoli/tsm/client/oracle/bin64/libobk.so /usr/lib/sparcv9/libobk.so

The above links the libobk Media Management library to Oracle TDP instead of SAP.

During the TDP 6.3 install, /usr/lib/sparcv9/libobk.so got linked to the SAP library /opt/tivoli/tsm/tdp_r3/ora64/libtdp_r3.so. According to the documentation, that's how it is supposed to be, but either the libtdp library was corrupted or the documentation was wrong.


07 October 2014

INS-10331 Oracle 12c runInstaller on a Linux server from NFS mounted disk


I was installing a 12c database from an NFS-mounted share, and the installation failed at the second step with the INS-10331 error.

Oracle Support has a document that suggests you add space to /tmp (which wasn't my case), or copy the installation files to /tmp.

Workaround


Copy the installation files to any local disk. Something about running the Installer from the NFS-mounted disk causes this problem. Interestingly, another server has the same share mounted, and the install worked from there.

I don't have a solution to the problem, just a workaround, but as googling "Linux INS-01331" didn't produce anything, this article might help someone else.

On Linux RedHat the error reads:

Oracle INS-01331 Error

Cause - Failed to access the temporary location.

Action - Ensure that the current user has the required permissions to access the temporary location.



19 September 2014

Default value for _sys_logon_delay Oracle 12c parameter

I couldn't find this anywhere else on the Internet, therefore worth posting.

The default value of the _sys_logon_delay 12c parameter is 1.

A value of zero switches this feature off, which proves it is enabled by default. One Oracle blog said the higher the value, the bigger the delay.


NAME              TYPE  VALUE  ISDEFAULT  ISSES  ISSYS_MOD  DESCRIPTION
----------------  ----  -----  ---------  -----  ---------  --------------------------
_sys_logon_delay  3     1      TRUE       FALSE  FALSE      failed logon delay for sys

02 September 2014

Using OEM 12c - notes from Oracle course

2 products exist for managing: Cc + Ops Center (host management)

Oracle HTTP server is started implicitly when OMS is started; it's managed by the opmnctl process (if we need to run something explicitly).

Discovery
=======
Agent-based (self-explanatory) or agent-less (scans all IP network addresses; then you review and promote what you want)

Backup
=====
Back up the software home. That covers the WLS admin server, the whole WebLogic, and the OMS and webtier configuration files.
Back up the instance home with the emctl exportconfig oms command.

Monitor
=====
What happens to user-defined metrics during the upgrade? They are called metric extensions now.
Linux and Solaris hosts can be monitored but YaST needs to be installed.
Targets can be grouped in a "group". Notification can be done 'per group'.
Part of service Level Mgmt pack:
  -- Define "SYSTEM" for bunching up all targets that are the infrastructure for the services (like SAP, PS, etc...)
  -- One SYSTEM can be an infrastructure for multiple services.
Use SYSTEMs and beacons for end-to-end monitoring.
  -- Service test is end-to-end test.
  -- Availability can be defined per service

Some events are stateful and some stateless. Stateless events can be cleared with:
emcli clear_stateless_alerts -older_than=7 -target_type=oracle_database  (older than 7 days)

Events/Incidents/Problems
====================
For Enterprise Manager 12c, the concept and function of notification rules has been replaced with a two-tier system consisting of Event Rules and Incident Rule Sets.
Create rules on any of these, to email or do something.
A problem is a group of incidents with the same root cause.
An incident is something you need to act on.
Event Rules: operate at the lowest level of granularity (on discrete events) and perform the same role as notification rules from earlier releases.

Software Lifecycle Management
=======================
Roles can be more granular: admin, designer (designs workflows), operators
DBA needs to configure the software library, put gold images inside
  -- OMS and all agents need to be able to access the shared file system
Deployment:
  -- create a database profile
  -- create a procedure workflow (Like "Clone Database", "Patch database")
  -- publish it and Launch it
  -- Ability to pause and resume an upgrade (by setting breakpoints)

Reports
======
Lots of Predefined reports
!! We can replace our daily report with one of the OEM reports
https://zpgrid1:7799/public/reports

Oracle Enterprise Manager Connectors Integration
=====================================
Guide number E25163-4

OEM 12c Installation and Upgrade - notes from Oracle course

OMS is a J2EE app deployed via Web Logic. It has 3 apps:
1 - The Console serves up all the /em URLs.
2 - PBS (Platform Background Services) serves up all the /empbs URLs. It is where agents
upload their metrics.
3 - OCMRepeater is the link between EM CC and My Oracle Support for consolidating
configuration data collected from agents.

12 new:
Each agent has plug-ins. They are automatically discovered.

Backup/Restore OMS:
/u01/app/oracle/product/middleware/oms/bin/emctl exportconfig oms
emctl importconfig oms -file /.../EMGC_OMS1/sysman/backup/OMS_backups.bka

Default ports:
2889/2900 or 3872/3870 between agent and OMS
7788/7799  between OMS and CC
JDBC:1521 from OMS to Rep

-Installation from s/w distribution is possible.
-11g required a pre-existing WLS installation; 12c does not.
-12c metrics are gathered by agent plug-ins
-OMS replication means copying the configuration files only (OMS needs to be shut down)
-OMS and OMR can run on the same server. No advantage to separating it.

Enterprise Manager startup sequence:
1. OMR listener and database instance
2. OMS
3. Agent on OMS server
• Shutdown is in the reverse order.

Installation:
1. First install OREP database, as normal
2. Download software from http://www.oracle.com/technetwork/oem/grid-control/downloads/linuxx8664soft-085949.html
3. Unpack
4. Run the prerequisite checks on OREP
5. cd to it, ./runInstaller
6. Choose medium size db...
7. Backup OMS
8. Check that https://host:7799/em is working. Secure it with certificate from the trusted domain.

Upgrade:

1-system upgrade:
- The Upgrade Console is installed into your current OMS by applying Patch 14375077
- Follow the process.
- Grid is down all the time.
- Agents first, then OMS

2-system upgrade:
- No loss of metric data
- We can have 11g and 12c running in parallel
- The Upgrade Console is installed into your current OMS by applying Patch 14375077
- Execute post-installation scripts
$OH/bin/rcuJDBCEngine sys/Oracle123@host:1521:orep JDBC_SCRIPT post_install_script.sql $PWD $ORACLE_HOME
$OH/bin/rcuJDBCEngine  sys/Oracle123@host:1521:em11rep JDBC_SCRIPT post_install_patch.sql $PWD $ORACLE_HOME
Apply latest PSU
- Install the 12c agent software
   -- Put it in the directory where console can see it
   -- Copy the emkey from oms to repository
- Configure Upgrade console
  -- In "Identify Host.." specify new hostname
  -- In Manage Software, specify the link to the software
  -- In "Provide Backup.." put date of the templates, or date after
- Clone the current repository
  -- Create a template (with data) from your existing 11g repository
  -- Start the 11g OMS and remove the emkey from the repository
        ## The emkey is a random number generated during the installation of the repository and stored in a table. When the OMS is installed, the emkey is copied from the repository to the $OH/sysman/config/emkey.ora file. After the emkey has been copied, you must remove it from the repository, as keeping it there is not considered secure. If it is not removed, data such as database passwords, server passwords and other sensitive information can be easily decrypted.
  -- Scp the template across
  -- Use dbca to create a new repository from the template
  -- Other option is to clone it any other way, as long as data is the same #What about removing the emkey?
  -- grant execute on dbms_random to sysman, dbsnmp; #additional privileges needed
- Check the new 12c repository database
  -- emprereq command
  -- If changes are required, run the emprereqkit again and let it make changes that can be auto-corrected.
- Install OMS 12c
  -- ./runInstaller on the new host
     # Choose the option "Upgrade an Existing OEM with 2-Server", not "Create new OEM"
- In "Link to old OMR" part:
  --grep CONNECTDESCRIPTOR /u01/app/oracle/product/middleware/gc_inst/em/EMGC_OMS1/emgc.properties
    and attach that into the first field
- Use the Upgrade console to deploy 12c agents (check space first)
  --Agent on zpgrid1 can be skipped
  -- Add/Select other agents in “Deploy and Configure Agents”, follow the instructions and Submit a Job
- Agent Health check
  -- Click the “Generate Health Report of Deployed Agents” link, add the targets that have 12c agent deployed
  -- Sign off
- Switch to 12c agent
  -- "Switch Agents" and add the target
  -- This task supposedly also runs the Accrued Data job
- Access Post-Upgrade Console
  -- Navigate to Setup > Manage Cloud Control > Post Upgrade Tasks \ Accrued Data and Submit job for all Components
  -- This is just a read-only view (apparently).
  -- Run what you need in the Deferred Data.
  -- Sign off : This task also de-installs the 11g agent
- Wait for all agents to be activated on 12c before decommissioning the current OMS #if that's what you want to do
  -- While waiting, 11g agents will be active
- If all agents can't be activated, revert to 11g

How to discover targets:
1. Manually discover the host
2. Set up ssh to it from OMS host
3. Push the agent to it
4. Do auto discover

Maintenance
=========
Web Logic has to be patched separately

Security
======
- Back up the emkey: emctl exportconfig oms -dir
- Remove the emkey from the 11g repository # don't remove it until you have a backup
- Assuming that you use the default secure communications between the agents and the OMS,
disable insecure communications with the emctl command: emctl secure lock.
- Secure the console with an SSL certificate from a trusted authority # check what has been done
- Check OEM DR whether it's working

Software enhancement request for Oracle 12c +

Idea

Flashback to be enabled per pluggable database, instead of per the whole instance.

Why?


Because I like flashback and use it a lot for patching/upgrades/production releases/security against user errors. Because it makes DBA's life easier.

Can it be done?

As flashback logs contain whole block copies of the database, I'm sure somewhere in the block Oracle writes down the pluggable database name, simply because Oracle needs to know which database the block belongs to.

Action


Software enhancement request raised with Oracle.

12c New Features - Notes

Latest patch right now: 12.1.0.2

OEM CC

Chargeback based on capacity
App Mgmt: based on APIs: PS, VMware, ...
            Plug-ins are used instead of db2 (like for SAP)
                        an agent has to be installed
Default ports:
            4889/4900 (http/https) – agents
            7788/7801 – OMS
            1521 – JDBC – Repository
Agents can monitor PS suite
Homepage based on the role
EM Express is free: the web server runs within the XML DB of the instance. If the instance is down, there is no monitoring; it monitors only through SQL. https://host:5500/em (the browser requires a Flash plugin)
In 12c, XML DB is a mandatory option.
It uses a dispatcher, so you need shared server set up.

SQL Developer

It allows DBA operations

Multi Tenant

PDB$SEED (like a read-only model database in SQL Server) + CDB$ROOT (root container) + pluggable databases
They all share memory, redo logs, flashback, control files, undo, temp
CDB_ views; SHOW CON_NAME; SHOW CON_ID
Common vs. Local (Common users use C## prefix)
Don’t share: SYSTEM, SYSAUX

Creating

CREATE DB... ENABLE PLUGGABLE DATABASE ...
            SEED FILE_NAME_CONVERT = ('/dir1','/dir2')
or use DB_CREATE_FILE_DEST or PDB_FILE_NAME_CONVERT
Two containers are created by default: CDB$ROOT and PDB$SEED
Default service name is DBNAME.DOMAIN
A new PDB created from the seed will have an admin user with the PDB_DBA role:
CREATE PLUGGABLE DATABASE dbname
ADMIN USER admin1 IDENTIFIED BY pwd ROLES=(CONNECT) FILE_NAME_CONVERT=...
There is local and common DBA role
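Putting the above together, a minimal sketch of creating a PDB from the seed (the PDB name, admin user and paths are illustrative):

```sql
-- Create a PDB from PDB$SEED; pdb1, pdb_admin and the paths are assumptions.
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb_admin IDENTIFIED BY pwd ROLES = (CONNECT)
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/pdb1/');

ALTER PLUGGABLE DATABASE pdb1 OPEN;
```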

Plugging in

Violations are listed in PDB_PLUG_IN_VIOLATIONS
Non CDB --> PDB gets SYSTEM, SYSAUX, catalog, service
            EXEC DBMS_PDB.DESCRIBE('/tmp/ORCL.xml');
            CREATE PLUGGABLE DATABASE name USING '/tmp/ORCL.xml';
            ... then run @?/rdbms/admin/noncdb_to_pdb.sql
You can unplug PDB from one CDB and plug it into another
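A hedged sketch of the unplug/plug round trip (CDB names, paths and the NOCOPY choice are illustrative):

```sql
-- On the source CDB: a PDB must be closed before it can be unplugged.
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;

-- On the target CDB: check PDB_PLUG_IN_VIOLATIONS after plugging in.
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```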

Dropping

ALTER PLUGGABLE DATABASE name CLOSE; DROP PLUGGABLE DATABASE name [INCLUDING DATAFILES];

Migration 11G--> 12C

1) Upg and plug OR 2) Create empty PDB and exp/impdp

Managing

SQL> SHOW CON_NAME  # shows the container you are connected to
STARTUP MOUNT; ALTER PLUGGABLE DATABASE ALL OPEN/CLOSE;
ALTER SESSION SET CONTAINER=name;  # to switch to a container
2 new EVENT triggers: Before unplug and after;
You cannot create a local user in the root container.

Backup/Recovery

Software enhancement request raised for flashback per PDB, as you can currently only flash back the whole instance
New RC_PDBS

HEAT MAP or ILM – Advanced Compression option

Enable it to use ADO (Automatic Data Optimization): HEAT_MAP=ON
As data gets less used or the tablespace gets fuller, it compresses more and more
6 levels: BASIC, ADVANCED, QUERY (LOW or HIGH), ARCHIVE (LOW or HIGH)
Can be set at tablespace, segment, partition or row level
The lowest-level policy takes effect first
ALTER TABLE tab1 ILM ENABLE_ALL; - enable all policies
Disable at instance level: EXEC DBMS_ILM_ADMIN.DISABLE_ILM;
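As an illustration (not from the course notes), an ADO policy on a hypothetical sales table might look like this; it needs heat map tracking on and the Advanced Compression option:

```sql
-- Enable heat map tracking instance-wide.
ALTER SYSTEM SET heat_map = ON;

-- Compress the segment after 30 days without modification; sales is an assumption.
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;
```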

Move DF online

ALTER DATABASE MOVE DATAFILE '...' TO '...' [REUSE|KEEP];
Monitor progress in V$SESSION_LONGOPS

IN-DB ARCHIVING

ALTER TABLE... ROW ARCHIVAL;
A column ORA_ARCHIVE_STATE is added to the table (0 for active rows, 1 for archived)
ALTER SESSION SET ROW ARCHIVAL VISIBILITY=ALL;
This is not automatic; it has to be done manually
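A minimal sketch of the feature (the orders table and key are assumptions):

```sql
ALTER TABLE orders ROW ARCHIVAL;

-- Setting ORA_ARCHIVE_STATE to a non-zero value hides the row from normal queries.
UPDATE orders SET ora_archive_state = '1' WHERE order_id = 42;

-- Archived rows become visible again, for this session only.
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
```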

Temporal Validity

CREATE TABLE... ( ..., PERIOD FOR user_time);
It creates 2 hidden columns: user_time_start/user_time_end
DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF', date);
SELECT ... AS OF PERIOD FOR user_time ...
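A sketch of temporal validity under assumed table and column names:

```sql
-- PERIOD FOR user_time creates hidden user_time_start/user_time_end columns.
CREATE TABLE emp_hist (
  emp_id NUMBER,
  PERIOD FOR user_time
);

INSERT INTO emp_hist (emp_id, user_time_start, user_time_end)
VALUES (1, TIMESTAMP '2014-01-01 00:00:00', TIMESTAMP '2014-06-30 00:00:00');

-- Only rows valid at the given point in time are returned.
SELECT * FROM emp_hist
AS OF PERIOD FOR user_time TIMESTAMP '2014-03-01 00:00:00';
```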

Unified Auditing – an option

The GEN0 process flushes audit records from memory to the unified trail, or you can flush them manually from the SGA. View: SYS.UNIFIED_AUDIT_TRAIL.
            To flush: EXEC DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;
You can configure it in Immediate-Write mode or Queued-Write mode (from memory)
Create an audit policy for the "component" and enable it.
            Components: Data Pump, RMAN, FGA, OLS, DVault, Real Application Security
To switch it on: relink the libraries
Roles: AUDIT_ADMIN and AUDIT_VIEWER
A policy can be on: actions, privileges, roles, with a WHEN clause; and can be "BY user1, user2"

Privileges

New: SYSBACKUP, SYSDG, SYSKM (they are in the password file)
Privilege Analysis works with the DB Vault option
DBMS_PRIVILEGE_CAPTURE.ENABLE/DISABLE/CREATE_CAPTURE
DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
View DBA_[UN]USED_PRIVS
GRANT INHERIT PRIVILEGES ON USER tanja TO simon; -- this means Tanja trusts that Simon's code is honest enough to run stuff over Tanja's objects
Recreate the password file after the 12c upgrade to get the new password file format
Revoke INHERIT PRIVILEGES from PUBLIC, as every new user grants it to PUBLIC by default
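The privilege capture workflow above, sketched end to end (the capture name and the database-wide capture type are assumptions):

```sql
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'app_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('app_capture');
END;
/
-- ... run the application workload, then:
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('app_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('app_capture');
END;
/
SELECT username, sys_priv
FROM   dba_used_privs
WHERE  capture = 'app_capture';
```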

Redaction – option (hiding data) !

DBMS_REDACT pkg ADD/DROP/ALTER/ENABLE/DISABLE_POLICY
Full redaction, partial (xxxx-xxxx-xxxx-3424), random
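For example, a partial redaction policy on a hypothetical card-number column (schema, table and column names are assumptions; the mask follows the documented credit-card pattern, but treat the details as a sketch):

```sql
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'APP',
    object_name         => 'CUSTOMERS',
    column_name         => 'CARD_NO',
    policy_name         => 'redact_card_no',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => 'VVVVFVVVVFVVVVFVVVV,VVVV-VVVV-VVVV-VVVV,X,1,12',
    expression          => '1=1');   -- applies to every query
END;
/
```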

RMAN

We can run SQL statements from the RMAN prompt
In 11g, use multisection backup with the SECTION SIZE option.
DUPLICATE uses backup sets in 12c by default. You can leave the destination database mounted.
Transport databases across platforms by using backup sets. In 11g, we had to copy files and convert. The source database needs to be read-only.
Table Recovery – uses an auxiliary instance (1 GB SGA), called Bpsw.
            Space used: SYSTEM + recovered tablespace + undo + SYSAUX
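A sketch of 12c table recovery from RMAN (table name, point in time and the auxiliary destination are illustrative):

```
RMAN> RECOVER TABLE app.orders
      UNTIL TIME "TO_DATE('2014-09-01 12:00','YYYY-MM-DD HH24:MI')"
      AUXILIARY DESTINATION '/u01/aux'
      REMAP TABLE app.orders:orders_recovered;
```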

Monitoring

11.1+ Real-Time SQL Monitoring – see where in its execution plan a SQL statement currently is
DB Ops – Tuning Pack! – group statements by users or app

Schema Comparison DBLM Pack !

It propagates schema changes to different environments

Adaptive SQL Plan Mgmt – Tuning pack !

optimizer_use_sql_plan_baselines = TRUE
It takes the first plan P1 as baseline B1. When P2 comes along, it compares them. If P2 is at least 3 times better, it creates B2; otherwise, it waits for the DBA to approve.
In 11g, P2 was compared to P1 just once. In 12c, plans are reconsidered every time the comparison job runs.
            11g: DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE
            12c: SYS_AUTO_SPM_EVOLVE_TASK

Adaptive Executions Plan

V$SQL.IS_RESOLVED_ADAPTIVE_PLAN
Set OPTIMIZER_ADAPTIVE_REPORTING_ONLY=FALSE + compat>=12.1
Two techniques: Dynamic plans or re-optimization
            It doesn’t touch SQLs with profile or baseline
1) Just 2 sub-plans are considered, with the lowest costs
2) Checks whether stats have changed by some threshold
Cardinality feedback: _optimizer_use_feedback=TRUE
It saves stats on predicate conditions, e.g. (p>100 AND t='female') = 200 rows in real life, not the 400,000 the optimizer thinks it is

Stats

CREATE TABLE .. AS SELECT and INSERT INTO an empty table will gather stats automatically
Gathering stats can be concurrent (parallelised)
Each session has its own stats for shared GLOBAL TEMP table
Extended stats on column group or expressions
EXEC DBMS_STATS.SEED_COL_USAGE – Oracle does the rest automatically
Dynamic sampling=11 will consider any of the previous numbers

Histograms

Height-balanced histograms are not used any more
2 new types: Top-Frequency and Hybrid
1) Used when most data is in a few buckets
2) Top-Frequency picks only the top n popular values and puts them into n buckets
Example for the data abbbbbcccdd:
Hybrid: b5, c3, d2         Freq: a1, b6, c8, d10       Top-freq: b6, c8, d10

Emergency Monitoring – Diag pack

You don’t have to wait for ADDM. Just run it in real-time:
select dbms_addm.real_time_addm_report() from dual;
The report does cause and effect analysis.

ASH

MMON writes to a memory buffer. When full or periodically, MMNL empties it to AWR snapshots.

Multi-threaded architecture

It uses 1 main process that spawns threads instead of lots of processes.
ALTER SYSTEM SET threaded_execution=true
SCMN – spawns client processes instead of listener

Flash Cache

Add flash devices to extend the SGA (buffer cache). In 11g, only one device was possible

Parallel In-Memory Execution on RAC

parallel_degree_policy=AUTO
It divides segments into chunks and sends them to separate nodes

Temporary UNDO

In 11g, undo for temporary tables was logged to redo
In 12c, no redo at all: ALTER SYSTEM/SESSION SET TEMP_UNDO_ENABLED=TRUE

Multiple Indexes

You can create multiple indexes on the same columns, e.g. one B-tree and one bitmap, but only one of them can be visible at a time.
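For instance (table and index names are assumptions):

```sql
CREATE TABLE emp (empno NUMBER, deptno NUMBER);

CREATE INDEX emp_bt_ix ON emp (deptno);                    -- B-tree, visible
CREATE BITMAP INDEX emp_bm_ix ON emp (deptno) INVISIBLE;   -- bitmap, invisible

-- Swap which index the optimizer sees.
ALTER INDEX emp_bt_ix INVISIBLE;
ALTER INDEX emp_bm_ix VISIBLE;
```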

Invisible Columns

CREATE TABLE... (... col2 NUMBER INVISIBLE);
It can be accessed only by explicitly naming the column in DML commands; SET COLINVISIBLE ON in SQL*Plus makes it appear in DESCRIBE output.
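A quick sketch (table and column names are illustrative):

```sql
CREATE TABLE t1 (col1 NUMBER, col2 NUMBER INVISIBLE);

-- The invisible column must be named explicitly.
INSERT INTO t1 (col1, col2) VALUES (1, 2);

SELECT * FROM t1;           -- returns col1 only
SELECT col1, col2 FROM t1;  -- returns both columns
```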

Online Redefinition

You can add dml_lock_timeout to the REDEF procedure so it waits on other DMLs
It works better with VPD

Compression

Renamed: COMPRESS in 11g is ROW STORE COMPRESS in 12c
COMPRESS FOR OLTP is called ROW STORE COMPRESS ADVANCED.
The Compression Advisor is slightly more clever. – option!

ONLINE DDLs

DROP INDEX/CONSTRAINT; ALTER INDEX UNUSABLE; SET COLUMN UNUSED

ADR

New directories: Under SID/log/ddl and debug
ALTER SYSTEM SET enable_ddl_logging=TRUE – for DDL logging; FALSE for debug
adrci: show log

Network performance: Compression – option!

SDU is 8K by default for dedicated sessions, 64K for shared; the maximum allowed is 2 MB
SQLNET.COMPRESSION=ON

Data Pump, Loader & External Tables

11.2.0.3+: a whole database can be transported into another. It works like an enhanced version of transportable tablespaces:  expdp FULL=y TRANSPORTABLE=always ...
Put the tablespaces into read-only mode; expdp; copy (use rman convert or dbms_file_transfer for endian conversion); then impdp TRANSPORT_DATAFILES=...
Or you can use a db link from the destination: impdp FULL=Y NETWORK_LINK=sourcedb TRANSPORTABLE=always ...
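A hedged command-line sketch of the full transportable path (connect strings, directory object and file paths are assumptions):

```
# Source: full transportable export (tablespaces must be read-only first).
expdp system@srcdb FULL=y TRANSPORTABLE=always VERSION=12 \
      DIRECTORY=dp_dir DUMPFILE=full_tts.dmp LOGFILE=full_tts.log

# Copy the dump file and datafiles (rman convert / dbms_file_transfer for endian), then:
impdp system@dstdb FULL=y DIRECTORY=dp_dir DUMPFILE=full_tts.dmp \
      TRANSPORT_DATAFILES='/u01/oradata/dst/users01.dbf'
```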
Redo logging during import can be disabled
            impdp ... TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y|N[:TABLE|INDEX]
Views can be exported as tables; Constraints preserved. expdp VIEWS_AS_TABLES=...
Encryption password ENCRYPTION_PWD_PROMPT=Y – provided silently at run time
Compression can be defined at import time: TRANSFORM=TABLE_COMPRESSION_CLAUSE:'ROW STORE COMPRESS|...'
LOBs can be converted to SecureFiles during import: TRANSFORM=LOB_STORAGE:SECUREFILE
SQL*Loader supports insert into IDENTITY columns, even via direct path loads.
CREATE TABLE t (col1 NUMBER GENERATED [ALWAYS | BY DEFAULT ON NULL] AS IDENTITY);
SQL*Loader allows wildcards INFILE (‘emp?.sh*’)
SQL*Loader supports CSV files with embedded newlines:  FIELDS CSV WITH EMBEDDED
SQL*Loader: FIELDS ..., NULLIF = "value"
            FIELD NAMES FIRST FILE – skip the first record (it contains the field names)
            ALL FIELDS OVERRIDE – skip the columns with default values
            Express mode: $ sqlldr user TABLE=tablename – must have a tablename.dat file

Partitioning

ALTER TABLE ... MOVE PARTITION p1 [COMPRESS .../TABLESPACE t1]
UPDATE INDEXES ONLINE; – basic compression is free.
Reference partitioning was introduced in 11g. The child is partitioned in the same way as the parent (inherited through the foreign key), even if the child doesn't have the partition column.
            In 12c: INTERVAL partitioning on reference
In 12c: TRUNCATE/EXCHANGE PARTITION CASCADE
11g: only 2 partitions at a time; 12c: all at a time, for add/truncate/drop/merge/split
            Indexes are also affected in the same way
Partitioned indexes: each partition is a completely different index.
A local index can be created on a group of selected partitions:
... PARTITION p1 ...) INDEXING ON/OFF,
CREATE INDEX ... INDEXING PARTIAL ... – creates a global index only on partitions which have INDEXING ON.
DROP/TRUNCATE partition with UPDATE INDEXES marks the change in metadata only. Later, the maintenance job SYS.PMO_DEFERRED_GIDX_MAINT_JOB kicks in to clean up those ORPHANED_ENTRIES (a column in the DBA views). You can kick this off manually with DBMS_PART.CLEANUP_GIDX;

Various

TRUNCATE TABLE name CASCADE; works as long as there is a constraint with the ON DELETE CASCADE clause
select * from table(dbms_xplan.display_cursor(format=>'basic +note'));
[N]VARCHAR2 and RAW are now up to 32767 bytes: alter system set max_string_size=extended; run the utl32k.sql script, bounce the instance, rebuild indexes.
DB Migration Assistant for Unicode – migrates databases from a standard character set to Unicode.
SecureFiles are now default storage mechanism for LOBs
SELECT ... FETCH FIRST 5 ROWS ONLY; or FETCH FIRST 10 PERCENT ONLY;


SELECT ... OFFSET 100 ROWS FETCH NEXT 10 ROWS ONLY; -- fetches rows 101-110
