30 January 2007
Were those statistics collected with dbms_stats or analyze command?
If the column dba_tables.global_stats is:
- "NO" => analyze command
- "YES" => dbms_stats
SQL> analyze table tanya compute statistics;
Table analyzed.
SQL> select OWNER, TABLE_NAME, GLOBAL_STATS
from dba_tables where table_name='TANYA';
OWNER TABLE_NAME GLO
----------- ---------------- ---
SYS TANYA NO
SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SYS','TANYA');
PL/SQL procedure successfully completed.
SQL> select OWNER, TABLE_NAME, GLOBAL_STATS
from dba_tables where table_name='TANYA';
OWNER TABLE_NAME GLO
----------- ---------------- ---
SYS TANYA YES
Thanks to Keith Harding for sharing this interesting information with me.
From Metalink Note 236935.1:
- When ANALYZE gathers statistics GLOBAL_STATS is always set to NO
- When DBMS_STATS gathers statistics, the GLOBAL_STATS column is populated thus:
- GLOBAL_STATS = YES - means global statistics have been gathered
- GLOBAL_STATS = NO - means statistics will be derived by aggregation from lower level objects
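If you want to check a whole schema at once, a query along these lines lists the tables whose statistics look like they came from the analyze command. This is just a sketch: the schema name SYSADM is an example, and remember that on partitioned tables GLOBAL_STATS = 'NO' can also simply mean the statistics were aggregated from lower level objects, as the note above explains.
-- Tables whose statistics appear to come from ANALYZE (or from aggregation)
select owner, table_name, last_analyzed, global_stats
from dba_tables
where owner = 'SYSADM'
and last_analyzed is not null
and global_stats = 'NO'
order by table_name;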
How to stabilize execution plans in 10g
This is a list of actions that can be taken to stabilize execution plans on a 10g database; a short SQL*Plus sketch of some of these actions follows the list.
- Switch off bind variable peeking
- Bind variable peeking means that when a cursor is compiled, the optimizer "peeks" at the actual values of the bind variables and uses the statistics for those values to create the plan. Of course, if those first values are "odd" ones, you get a plan that does not suit the rest of the executions.
- I have not yet seen a 10g database that benefited from this feature. I have raised a service enhancement request with Oracle to extend the peeking to all executions and change the plan only if required (or spawn a new plan if required).
- This parameter is "TRUE" by default in 10g.
- alter system set "_optim_peek_user_binds"=FALSE;
- Switch off the density recalculation in 10g
- By default, Oracle 10g recalculates the computed density when the cursor is reloaded. This can cause the plans to change even if no stats have been gathered in the meantime.
- alter system set event='10139 trace name context forever' scope=spfile;
- Check whether cursor_sharing=similar is causing the frequent plan changes
- This parameter can cause issues if your histograms are not correct. Check them.
- Re-gather the histograms with either SIZE SKEWONLY or SIZE 254 (the maximum number of buckets)
- Gather the histograms for ALL columns, not just the indexed ones. Histograms can be used for join columns and for predicate columns without indexes.
- If you regularly re-analyze, check whether your statistics are good
- I know this is a bit of work, but there are a few bugs that produce wrong statistics.
- It is always worth browsing the Metalink bug database for bugs and fixes relevant to your version.
- Temporarily stop analyzing, if you can afford it
- Create a "snapshot" of your plans once you stop analyzing
- Create a "snapshot" of your statistics with dbms_stats.export_*_stats
- Once you have a stable system, slowly start re-analyzing
- Manipulate your statistics
- In the end, statistics are just numbers that give you certain plans at a certain point in time.
- If you need to manipulate the statistics to produce good results, and you know what you are doing, go for it.
- Decrease dynamic sampling
- Sampling helps Oracle "improve" non-existent or stale statistics
- Try switching it off at the session level to see whether it changes the plan
- Keep the history of your execution plans and statistics for comparison
- Oracle 10g keeps the plan history in the AWR table WRH$_SQL_PLAN
- Create outlines for your statements
- Export the statistics with dbms_stats.export_schema_stats (or similar)
- Keep different STATIDs so you can compare them later
- Take 10053 traces for the SQLs you know are problematic
- See Metalink note 338113.1 and bug 4567767
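For reference, here is a minimal SQL*Plus sketch of a few of the actions above: histograms with SKEWONLY, a statistics snapshot kept under a STATID, switching off dynamic sampling for a session, reading the AWR plan history and creating an outline. The schema SYSADM, table JOBT, stats table STATS_SNAP, the sql_id and the outline name are all made-up examples, so adjust them before trying any of this.
-- 1. Histograms on all columns; SKEWONLY lets dbms_stats decide where they are needed
begin
  dbms_stats.gather_table_stats(
    ownname    => 'SYSADM',
    tabname    => 'JOBT',
    method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY',
    cascade    => true);
end;
/
-- 2. Snapshot of the current schema statistics, kept under a STATID for later comparison
begin
  dbms_stats.create_stat_table(ownname => 'SYSADM', stattab => 'STATS_SNAP');
  dbms_stats.export_schema_stats(ownname => 'SYSADM',
                                 stattab => 'STATS_SNAP',
                                 statid  => 'BEFORE_REANALYZE');
end;
/
-- 3. Switch off dynamic sampling for the current session only
alter session set optimizer_dynamic_sampling = 0;
-- 4. Plan history kept by AWR (WRH$_SQL_PLAN is exposed through DBA_HIST_SQL_PLAN)
select sql_id, plan_hash_value, timestamp, id, operation, options, object_name, cost
from dba_hist_sql_plan
where sql_id = '0abc123def456'   -- made-up sql_id
order by plan_hash_value, id;
-- 5. Stored outline for a problematic statement
create or replace outline jobt_outline for category default
on select eid, effdate from sysadm.jobt where status = 'A';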
Keywords:
"unstable plans" "plan changed" "stabilize execution plans" "execution plan changed" "changed SQL plans" "changed execution plans"
10 January 2007
Oracle 10g CBO Cost for unnesting queries
I noticed today that a 10g query had surprisingly high costs.
I ran a couple of similar queries and saw that anything with nested sub-queries gets a very high cost.
Query:
select /*+ index(c) */ c.eid, c.effdate, c.c, c.d, c.e
from sysadm.jobt c
where c.status in ('A','L','P','S')
and c.effdate = (select /*+ no_unnest index_desc(z) */ max(z.effdate)
from sysadm.JOBT z
where z.eid = c.eid
and z.ercd = c.ercd
and z.effdate <= sysdate)
and c.effseq = (select /*+ no_unnest */ max(z.effseq)
from sysadm.JOBT z
where z.eid = c.eid and z.ercd = c.ercd and z.j_date = c.j_date);
Those of you who use PeopleSoft will recognise the common unnesting for the maximum effective date and the corresponding maximum sequence.
The cost with optimizer_features_enable=10.2.0.1 is 717276, while it should have been around 2000:
SELECT STATEMENT Optimizer=ALL_ROWS (Cost=717276 Card=1 Bytes=32)
TABLE ACCESS (BY INDEX ROWID) OF JOBT (TABLE) (Cost=1893 Card=190 Bytes=6080)
INDEX (FULL SCAN) OF JOBT (INDEX (UNIQUE)) (Cost=1667 Card=238)
SORT (AGGREGATE)
INDEX (RANGE SCAN DESCENDING) OF AJOB (INDEX) (Cost=3 Card=22 Bytes=308)
SORT (AGGREGATE)
FIRST ROW (Cost=3 Card=1 Bytes=16)
INDEX (RANGE SCAN (MIN/MAX)) OF AJOB (INDEX) (Cost=3 Card=1 Bytes=16)
After experimenting with parameters, I found that the following three settings bring the transformation cost down to the "normal" level:
- alter session set "_optimizer_squ_bottomup" = false;
- alter session set "_optimizer_correct_sq_selectivity" = false; /* sometimes helps, depends */
- alter session set "_optimizer_cbqt_factor" = 1;
With those settings the plan looks like this:
SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1894 Card=317306 Bytes=10153792)
TABLE ACCESS (BY INDEX ROWID) OF JOBT (TABLE) (Cost=1893 Card=190 Bytes=6080)
INDEX (FULL SCAN) OF JOBT (INDEX (UNIQUE)) (Cost=1667 Card=238)
SORT (AGGREGATE)
INDEX (RANGE SCAN DESCENDING) OF AJOB (INDEX) (Cost=3 Card=22 Bytes=308)
SORT (AGGREGATE)
FIRST ROW (Cost=3 Card=1 Bytes=16)
INDEX (RANGE SCAN (MIN/MAX)) OF AJOB (INDEX) (Cost=3 Card=1 Bytes=16)
_optimizer_cbqt_factor appears to be a multiplication factor for any cost-based transformation; the default value is 50.
_optimizer_correct_sq_selectivity "forces the correct computation of subquery selectivity", and it is a bit puzzling why this parameter makes such a difference.
_optimizer_squ_bottomup "enables unnesting of subqueries in a bottom-up manner", instead of the old 9i top-down manner.
I'll investigate CBQT and SQU parameters further some other time and update the blog.
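In the meantime, if you want to see what these hidden parameters are set to on your system, a query along these lines against x$ksppi and x$ksppcv shows the current value, whether it is the default, and Oracle's own description. Treat it as a sketch: it has to be run as SYS (or with access to the X$ fixed tables).
-- Current values and descriptions of the three underscore parameters
select p.ksppinm  as parameter_name,
       v.ksppstvl as current_value,
       v.ksppstdf as is_default,
       p.ksppdesc as description
from   x$ksppi  p,
       x$ksppcv v
where  p.indx = v.indx
and    p.ksppinm in ('_optimizer_squ_bottomup',
                     '_optimizer_correct_sq_selectivity',
                     '_optimizer_cbqt_factor');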
Update (31 Jan 2007)
Oracle support reply:
...
Regarding costing below:
--------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
--------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 1827 | |
| 1 | INDEX FULL SCAN | TEST_QF_I| 274 | 6576 | 110 | 00:00:02 |
| 2 | SORT AGGREGATE | | 1 | 22 | | |
| 3 | FIRST ROW | | 274 | 6028 | 2 | 00:00:01 |
| 4 | INDEX RANGE SCAN (MIN/MAX) | TEST_QF_I| 274 | 6028 | 2 | 00:00:01 |
--------------------------------------------------+-----------------------------------+
Here is the input from our BDE:
=================================================================
Cardinality used in the cost calculation of subquery filter is the NDV of the subquery as per dynamic sampling which is 857.
The formula used for the final costing is
(cost of evaluating subquery * cbft * # rows after all joins) + cost of main query
In your case..
Final cost = (2 * 1 * 857) + 110 = 1714 + 110 =~ 1827 (cbft is 1, as this is the default).
=================================================================
Update (20 Feb 2007)
We had two issues here. The first was how the cost jumps from 110 to 1827; that was answered by the Oracle support analyst above.
The other was why the cost was so high with the bottom-up unnesting approach. I raised a separate SR with Oracle, and it appears we hit bug 5595091, for which the workaround is to turn off the "correct" subquery selectivity calculation:
alter system set "_optimizer_correct_sq_selectivity"=false;
Keywords:
"Oracle 10g CBO cost factors" "10g optimizer costs" "10g optimizer cost factors" "difference in costs between 9i and 10g"