22 February 2007
Peoplesoft performance on Oracle 10g - our experience
This note highlights the actions we took to get acceptable performance from Peoplesoft 8.49 on Oracle 10g R2 (10.2.0.3, Solaris 64-bit). Database size is 700G, growing by 15-20G per month.
Hopefully, this note might help some other DBAs with Peoplesoft apps.
Actions taken to stabilize the performance:
- Switched bind variable peeking off (Metalink Note 387394.1)
- Switched the density recalculation off (Metalink Note 338113.1)
- Switched the default automatic statistics gathering off
- Re-analyzed with COMPUTE where possible, using "FOR ALL COLUMNS SIZE SKEWONLY"
- Saved the stable database statistics and stopped re-analyzing
- We only re-analyze the statistics when absolutely needed
- Collected system statistics over an averagely loaded day
- Switched the new subquery selectivity off (Bug 5762712)
- Used /*+ NO_UNNEST */ hints for sub-queries, especially the effective dating sub-queries (see the sketch after this list)
- Sometimes switched unnesting off at the session level, especially for batch jobs
- Occasionally used the LEADING hints
- Cached 150 cursors per session
- Introduced KEEP and RECYCLE buffer caches
- Two parameters were found to cause wrong costs for queries that contain sub-queries:
- alter system set "_optimizer_correct_sq_selectivity"=false scope=both;
- alter system set "_optimizer_squ_bottomup"=false scope=both;
- This was addressed by logging a bug with Oracle about wrong costing for inequality joins, filed against 10.2.0.3 in 2007. Will add the bug number in the future.
- Update in 2008: Oracle confirmed the bug as 4422100.8 and we applied the fix
- Enhancement request raised about the way Oracle handles the effective dating subqueries (1 index pass suggested over 3 index passes). ER accepted.
- Enhancement request raised about the wrong join selectivity for the unique equal predicates
- Introduced regular analyzing only where and when needed
- Pinned the relevant cursors in the shared pool
- Applied Peoplesoft-recommended patch bundles
- At the end of 2007, we set "_unnest_subquery"=false at the instance level
- Increased SGA to 6G and PGA to 700M
- Increased KGL latches to 17
- Increased the number of hash buckets considerably
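To make the list concrete, here is a sketch of the kinds of commands behind several of the items above. All sizes and values are illustrative rather than our exact production settings, and ps_some_hot_table is a placeholder name; test everything against your own workload first.

-- Collect system statistics over an averagely loaded day
-- (INTERVAL mode gathers for the given number of minutes; 480 = one working day)
execute dbms_stats.gather_system_stats(gathering_mode=>'INTERVAL', interval=>480);

-- Cache 150 cursors per session (takes effect after a restart)
alter system set session_cached_cursors=150 scope=spfile;

-- Introduce KEEP and RECYCLE buffer caches and assign a hot table to KEEP
alter system set db_keep_cache_size=512M scope=both;
alter system set db_recycle_cache_size=256M scope=both;
alter table ps_some_hot_table storage (buffer_pool keep);

-- Switch subquery unnesting off at the session level for a batch job
alter session set "_unnest_subquery"=false;

-- Pin a hot cursor in the shared pool; take address and hash_value
-- from v$sqlarea for the statement in question
execute sys.dbms_shared_pool.keep('&address,&hash_value','C');

-- NO_UNNEST hint on a classic effective dating sub-query
-- (PS_JOB and its keys are the standard Peoplesoft pattern)
select *
  from ps_job a
 where a.effdt = (select /*+ NO_UNNEST */ max(b.effdt)
                    from ps_job b
                   where b.emplid = a.emplid
                     and b.empl_rcd = a.empl_rcd
                     and b.effdt <= sysdate);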
Join selectivity in Oracle 10g
We have been having a few Peoplesoft queries whose plans produced wrong join cardinality estimates. I dug a bit deeper and found that Oracle could not make an accurate decision about how many rows the join would return.
Today I raised an enhancement request with Oracle. It will be interesting to hear what they say about it.
Test example
As a test, I created two tables, each with 10 unique rows, set up so that only 3 rows overlap. After joining those tables, Oracle kept returning 10 rows as the estimated cardinality (see the example below). I moved the tables' columns around and created and removed histograms, and nothing helped.
Yesterday I found Wolfgang Breitling's document about join selectivity with a similar example, so just to point out: I did not copy it, I came to my own conclusions about this. What I did not know was the join selectivity formula, which comes to this:
- Cardinality of T1 * Cardinality of T2 * 1/max(NDV of T1.ID, NDV of T2.ID)
- In my example below: 10 * 10 * 1/10 = 10
create table t1 (id number);
create table t2 (id number);
insert into t1 values (1);
insert into t1 values (2);
insert into t1 values (3);
insert into t1 values (4);
insert into t1 values (5);
insert into t1 values (6);
insert into t1 values (7);
insert into t1 values (8);
insert into t1 values (9);
insert into t1 values (10);
insert into t2 values (0);
insert into t2 values (3);
insert into t2 values (4);
insert into t2 values (5);
insert into t2 values (11);
insert into t2 values (12);
insert into t2 values (13);
insert into t2 values (14);
insert into t2 values (15);
insert into t2 values (16);
commit;
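-- size 11 gives one bucket per distinct value (10), i.e. full frequency histograms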
execute dbms_stats.gather_table_stats('SYS','T1', method_opt=>'for all columns size 11');
execute dbms_stats.gather_table_stats('SYS','T2',method_opt=>'for all columns size 11');
set autotrace on explain
select * from t1,t2 where t1.id=t2.id;
ID ID
---------- ----------
3 3
4 4
5 5
Execution Plan
----------------------------------------------------------
Plan hash value: 1819147781
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 50 | 5 (20)| 00:00:01 |
|* 1 | HASH JOIN | | 10 | 50 | 5 (20)| 00:00:01 |
| 2 | TABLE ACCESS FULL | T1 | 10 | 30 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | T2 | 10 | 20 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
As you can see, the estimated cardinality for the join is 10 (from the formula 10 * 10 * 1/10 = 10).
Obviously, it should be 3, since only the values 3, 4 and 5 appear in both tables.
Suggestion on how to fix this
If the optimizer in future versions can figure this out on its own, then great. Otherwise, Oracle should allow DBAs to set join statistics manually. For example, Oracle could create a new statistics table:

STATS$JOIN
(
  FIRST_TABLE_NAME,
  FIRST_COLUMN_NAME,
  SECOND_TABLE_NAME,
  SECOND_COLUMN_NAME,
  JOIN_SELECTIVITY
);
Then, a DBA could insert values into it:

insert into stats$join values ('T1','ID','T2','ID',3/100);
So, when the optimizer is determining the join selectivity for T1.ID = T2.ID, it would first look in STATS$JOIN and, if a row is there, use that selectivity instead of the general formula.
UPDATE: 26 Mar 2007
I tested this further and found that if I insert non-unique rows into the T1 and T2 tables, the cardinality estimates start coming out correctly. But if the ID columns hold only unique values, the cardinality is wrong. I can only assume that when the optimizer looks at a unique ID column, it sees that it is unique (NDV = number of rows) and then simply applies the general join selectivity formula.
On the other hand, if ID is not a unique column, the optimizer will check the histograms and produce a pretty good cardinality guess.
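To reproduce the non-unique case, here is a minimal sketch. T3 and T4 are my names for copies of the test tables (I gather into the current schema via user, while the original example ran as SYS), and the exact estimate you get may vary by version and histogram type:

-- Copy T1 and T2, then duplicate one value in each copy
-- so the ID columns are no longer unique (NDV < number of rows)
create table t3 as select * from t1;
create table t4 as select * from t2;
insert into t3 values (3);
insert into t4 values (3);
commit;

execute dbms_stats.gather_table_stats(user,'T3',method_opt=>'for all columns size 12');
execute dbms_stats.gather_table_stats(user,'T4',method_opt=>'for all columns size 12');

set autotrace on explain
-- With histograms on the now non-unique columns, the estimated Rows
-- for the join should track the real overlap much more closely
select * from t3, t4 where t3.id = t4.id;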
Join selectivity formula
The selectivity of a join is defined as the selectivity of the most selective join column, adjusted by the proportion of not-null values in each join column:

Sel = 1/max(NDV(t1.c1), NDV(t2.c2)) * ((Card t1 - # t1.c1 NULLs) / Card t1) * ((Card t2 - # t2.c2 NULLs) / Card t2)
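Plugging the test case above into this formula: both ID columns have NDV = 10 and no NULLs, so Sel = 1/max(10,10) * (10/10) * (10/10) = 1/10, and the estimated join cardinality is Card t1 * Card t2 * Sel = 10 * 10 * 1/10 = 10. That matches the plan exactly, and it is nowhere near the real 3 rows.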