Monday, December 07, 2009

Turn on INTERVAL PARTITIONING on a range partition table

Create a range partition table.

SQL:

drop table date_range_t purge;

CREATE TABLE date_range_t
(  
d1   DATE,
n1   NUMBER
) 
PARTITION BY RANGE ( d1 ) 
SUBPARTITION BY LIST ( n1 ) 
SUBPARTITION TEMPLATE ( 
SUBPARTITION s_1    VALUES ( 1,2 ) , 
SUBPARTITION s_2    VALUES ( 3,4 ) 
)
( PARTITION p200911  VALUES LESS THAN ( TO_DATE('2009/12/01', 'YYYY/MM/DD') )
)
;

Turn on INTERVAL PARTITIONING by month on a range partition table.

SQL:

ALTER TABLE date_range_t SET INTERVAL ( NUMTOYMINTERVAL ( 1, 'MONTH' ) );
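
To verify, insert a row beyond the highest range partition; with the interval set, Oracle creates a system-named partition (SYS_Pn) automatically. A sketch, assuming the date_range_t table above:

insert into date_range_t(d1, n1)
values ( TO_DATE('2010/01/15', 'YYYY/MM/DD'), 1 );

commit;

-- expect p200911 plus a system-generated interval partition,
select partition_name, high_value, interval
from user_tab_partitions
where table_name = 'DATE_RANGE_T';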

Wednesday, November 18, 2009

Effective table selectivity, index selectivity, and index column order

/*********
-- file://A:/SQL/CBO/index_selectivity.sql

| Modification History:
| Date Who What
| 18-Nov-2009: Charlie(Yi): Create the file,
|

Effective table selectivity, index selectivity, and index column order

Goal
----
Research the order of the index columns and the effective index selectivity.

Solution
--------
Create a table with a number column and a date column.
Create two indexes on the table: one with ID in the first position, one with date_col in the first position.
Benchmark a query with an equality predicate and a range predicate against each index.

Outcome
-------
Logical reads for an index access = blevel + ceil(leaf_blocks * effective index selectivity).

* LIO is 5 when accessing the index with ID in the first position; the effective index selectivity is the selectivity of id=3 (0.2, since id has 5 distinct values) times the date-range selectivity (40 days * 100 rows per day, out of 500,000 rows):
select 2 + Ceil(1520 * 0.2 * (40 * 100)/500000) from dual;

* LIO is 15 when accessing the index with date_col in the first position; only the date-range selectivity applies:
select 2 + Ceil(1520 * (40 * 100)/500000) from dual;


"Because as soon as we have a range scan on a column used somewhere in the
middle of an index definition or fail to supply a test on such a column, the predicates on later
columns don’t restrict the selection of index leaf blocks that we have to examine."

From page 74 of the book Cost-Based Oracle Fundamentals.

For the second query, the effective index selectivity has to be calculated from the predicate on just the
date_col column. Because the test on date_col is range-based, the predicate on ID does not
restrict the number of index leaf blocks we have to walk.

Note
----
In plan output, Buffers = Logical Reads.

*******/

Setup
-----

create table date_index_sel
(
id number,
date_col date,
n2 number
)
nologging;

insert --+ append
into date_index_sel(id,date_col)
select mod(rownum,5), sysdate + rownum/100
from dual
connect by level <= 500000;

commit;

create index date_index_sel_i1 on date_index_sel(id,date_col) nologging;
create index date_index_sel_i2 on date_index_sel(date_col,id) nologging;

exec dbms_stats.gather_table_stats(user,'date_index_sel');

select INDEX_NAME, BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS, CLUSTERING_FACTOR, NUM_ROWS
from user_indexes
where index_name like 'DATE_INDEX_SEL%';

INDEX_NAME          BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS  CLUSTERING_FACTOR  NUM_ROWS
DATE_INDEX_SEL_I1        2         1520         500000               5475    500000
DATE_INDEX_SEL_I2        2         1520         500000               1095    500000


set serveroutput off
set linesize 370
set pagesize 0

REM TEST1, date_col is NOT the first index column, range scan on date_col,

select --+ index(a,date_index_sel_i1) gather_plan_statistics
count(*) from date_index_sel a
where id=3 
and date_col between sysdate + 100 and sysdate + 140;

--SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'basic iostats memstats last partition'));
SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'basic cost allstats last partition'));

Plan hash value: 341633530

-------------------------------------------------------------------------------
| Id  | Operation          | Name              |Cost (%CPU)| A-Rows | Buffers |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |    3 (100)|      1 |       5 |
|   1 |  SORT AGGREGATE    |                   |           |      1 |       5 |
|*  2 |   FILTER           |                   |           |    800 |       5 |
|*  3 |    INDEX RANGE SCAN| DATE_INDEX_SEL_I1 |    3   (0)|    800 |       5 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SYSDATE+100<=SYSDATE+140)
   3 - access("ID"=3 AND "DATE_COL">=SYSDATE+100 AND "DATE_COL"<=SYSDATE+140)

REM TEST2, date_col is the first index column,

select --+ index(a,date_index_sel_i2) gather_plan_statistics
count(*) from date_index_sel a
where id=3 
and date_col between sysdate + 100 and sysdate + 140
;

SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'basic cost iostats memstats last partition'));

Plan hash value: 1386725119

-------------------------------------------------------------------------------
| Id  | Operation          | Name              | Cost (%CPU)|A-Rows | Buffers |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |    15 (100)|     1 |      15 |
|   1 |  SORT AGGREGATE    |                   |            |     1 |      15 |
|*  2 |   FILTER           |                   |            |   800 |      15 |
|*  3 |    INDEX RANGE SCAN| DATE_INDEX_SEL_I2 |    15   (0)|   800 |      15 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SYSDATE+100<=SYSDATE+140)
   3 - access("DATE_COL">=SYSDATE+100 AND "ID"=3 AND "DATE_COL"<=SYSDATE+140)
       filter("ID"=3)


Sunday, November 15, 2009

row level timestamp implementation

Goal
----
Identify the latest row within one bulk SQL INSERT,
e.g. JDBC update batching with a batch value of 1000,
or a PL/SQL FORALL statement.

Case
----
One bulk SQL INSERT loads 1000 rows selected from an Oracle external table;
de-duplicate by primary key and identify the latest row in the file.

Constraint: all rows get the same SCN and timestamp value when they are inserted by one INSERT statement.

Solution
--------
A plain Oracle sequence provides the row-level change ordering for the update-timestamp implementation.

drop table t purge;
create table t(id number, ts timestamp(6), seq number);
drop sequence s;
create sequence s;

declare
lt_id dbms_sql.number_table;
begin
select rownum bulk collect into lt_id
from dual
connect by level <= 10;

forall i in 1..lt_id.count
insert into t(id, ts)
values(lt_id(i), systimestamp);

commit;

end;
/

select id, ts from t;


--
-- With update_change_sequence

declare
lt_id dbms_sql.number_table;
begin
select rownum bulk collect into lt_id
from dual
connect by level <= 10;

forall i in 1..lt_id.count
insert into t(id, ts, seq)
values(lt_id(i), systimestamp, s.nextval );

commit;

end;
/

select id, seq, ts from t;
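
With the sequence column populated, the latest row per id inside one batch can be identified even though all the timestamps are equal. A sketch using an analytic function, assuming the table t above:

select id, ts, seq
from (
select t.*,
row_number() over (partition by id order by seq desc) rn
from t
)
where rn = 1;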

Wednesday, November 11, 2009

Tune SQL with SQL plan baseline

Goal
----
Tune the SQL execution plan without changing the source code (SQL text).

A:\SQL\CBO\tune_sql_by_baseline01.sql

| Modification History:
| Date Who What
| 11-Nov-2009: Charlie(Yi): Create the file,
| 18-Nov-2009: Charlie(Yi): Show binding variable value,


Solution
--------
* Tune the SQL execution plan with a SQL plan baseline: use DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE to
load the hinted plan(s) into the SQL plan baseline of the original SQL statement.

Step
----

REM display current "bad" sql plan,

SQL>
set serveroutput off
set linesize 300

var l_bad_sql_id varchar2(30);
var l_sql_text varchar2(200);
exec :l_bad_sql_id := 'cbzfmb3mc9047';
exec :l_sql_text := q'{FROM WANTS WHERE CLIENTID = :1 AND ROWDF = 'F' AND UPPERTITLE >= :2}';

REM 2nd option to get SQL_ID,
SELECT b.executions EXEC, INST_ID, b.sql_id, CHILD_NUMBER,
ROUND(b.ROWS_PROCESSED/b.executions,1) rows_per,
ROUND(b.BUFFER_GETS/b.executions,1) gets_per,
ROUND(b.ELAPSED_TIME/b.executions/1000000,3) ela_per,
b.module, parse_calls, first_load_time, disk_reads,
b.sql_text
FROM gv$sql b
WHERE UPPER(b.sql_text) LIKE UPPER('%'||:l_sql_text||'%')
AND b.executions > 0
and b.sql_text not like 'SELECT b.executions EXEC%'
ORDER BY b.executions DESC;


select * from table(dbms_xplan.display_awr(sql_id=>:l_bad_sql_id, format=>'basic rows cost partition note last'));

or

select * from table(dbms_xplan.display_cursor(sql_id=>:l_bad_sql_id, format=>'basic rows cost partition note last'));


PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------

SQL_ID cbzfmb3mc9047
--------------------
SELECT * FROM WANTS WHERE CLIENTID = :1 AND ROWDF = 'F' AND UPPERTITLE >= :2 ORDER BY UPPERTITLE;

Plan hash value: 1289812832

----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)|
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 363 (100)|
| 1 | TABLE ACCESS BY INDEX ROWID| WANTS | 38233 | 363 (0)|
| 2 | INDEX RANGE SCAN | WANTSUPPERTITLE | 530 | 12 (0)|
----------------------------------------------------------------------------

REM - show binding variable value,
select name, position,datatype_string, last_captured, value_string, b.child_address, b.address
from v$sql_bind_capture b
where sql_id = :l_bad_sql_id
and was_captured <> 'NO';


REM create sql plan baseline for the SQL,

declare
l_pls number;
begin
l_pls := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
sql_id => :l_bad_sql_id,
plan_hash_value => NULL,
fixed => 'NO',
enabled => 'NO'
);
u$err.trc('Number of plans loaded:'||l_pls);

end;
/

REM get sql text, sql_handle
select sql_text, sql_handle, plan_name, enabled, accepted, fixed, autopurge from dba_sql_plan_baselines
where sql_text like '%'||:l_sql_text||'%';

var l_sql_handle varchar2(30);
exec :l_sql_handle := 'SYS_SQL_ce7588e9ccd926d0';

REM change sql hint to tune SQL execution plan,
set serveroutput off
set linesize 300
var l_sql_id varchar2(30);
var l_sql_plan_hash_value number;
var l_sql_child_number number;

var B1 number;
exec :B1 := 123;
var B2 varchar2(30);
exec :B2 := '1S & 2ND KINGS';

SELECT --+ index_rs_asc(wants WANTSCLIENTID)
* FROM WANTS WHERE CLIENTID = :B1 AND ROWDF = 'F' AND UPPERTITLE >= :B2 ORDER BY UPPERTITLE;

REM capture new SQL_ID and plan_hash_value,
begin
with m as (select sid from v$mystat where rownum <= 1)
select prev_sql_id, prev_child_number
into :l_sql_id, :l_sql_child_number
from m, v$session s
where s.sid = m.sid;
select plan_hash_value into :l_sql_plan_hash_value from v$sql
where sql_id = :l_sql_id
and rownum <= 1;

end;
/

print :l_sql_id;
print :l_sql_plan_hash_value;
print :l_sql_child_number;

REM verify the new SQL plan,
select * from table(dbms_xplan.display_cursor(sql_id=>:l_sql_id, cursor_child_no=>:l_sql_child_number,
format=>'basic rows cost partition note last'));

REM load new sql plan into plan baseline, with same sql_handle,
declare
l_pls number;
begin

l_pls := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
sql_id => :l_sql_id,
plan_hash_value => :l_sql_plan_hash_value,
sql_handle => :l_sql_handle,
fixed => 'NO',
enabled => 'NO'
);
u$err.trc('Number of plans:'||l_pls);
end;
/

REM display sql plan baseline,
SELECT *
FROM table(dbms_xplan.display_sql_plan_baseline(
sql_handle => :l_sql_handle,
format => 'typical'
))
;


PLAN_TABLE_OUTPUT

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_ce7588e9ccd926d0
SQL text: SELECT * FROM WANTS WHERE CLIENTID = :1 AND ROWDF = 'F' AND UPPERTITLE
>= :2 ORDER BY UPPERTITLE
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_ccd926d0604f1815
Enabled: NO Fixed: NO Accepted: YES Origin: MANUAL-LOAD
--------------------------------------------------------------------------------

Plan hash value: 1289812832

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 217 | 17021 (1)| 00:03:08 |
|* 1 | TABLE ACCESS BY INDEX ROWID| WANTS | 1 | 217 | 17021 (1)| 00:03:08 |
|* 2 | INDEX RANGE SCAN | WANTSUPPERTITLE | 19756 | | 361 (0)| 00:00:04 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("CLIENTID"=TO_NUMBER(:1) AND "ROWDF"='F')
2 - access("UPPERTITLE">=:2)

--------------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_ccd926d0ff894fe1
Enabled: YES Fixed: YES Accepted: YES Origin: MANUAL-LOAD
--------------------------------------------------------------------------------

Plan hash value: 1555025420

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 217 | 8 (13)| 00:00:01 |
| 1 | SORT ORDER BY | | 1 | 217 | 8 (13)| 00:00:01 |
|* 2 | TABLE ACCESS BY INDEX ROWID| WANTS | 1 | 217 | 7 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | WANTSCLIENTID | 7 | | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("UPPERTITLE">=:2 AND "ROWDF"='F')
3 - access("CLIENTID"=TO_NUMBER(:1))


REM enable new plan in plan baseline,
REM changes an attribute of a single plan or all plans,
declare
l_pls number;
begin

l_pls := dbms_spm.ALTER_SQL_PLAN_BASELINE(
sql_handle => :l_sql_handle,
plan_name => 'SYS_SQL_PLAN_ccd926d0ff894fe1',
attribute_name => 'enabled',
attribute_value => 'YES'
);
u$err.trc('Number of plans:'||l_pls);
end;
/

REM fixed new plan in plan baseline,
REM if it is really a good plan,
declare
l_pls number;
begin

l_pls := dbms_spm.ALTER_SQL_PLAN_BASELINE(
sql_handle => :l_sql_handle,
plan_name => 'SYS_SQL_PLAN_ccd926d0ff894fe1',
attribute_name => 'fixed',
attribute_value => 'YES'
);
u$err.trc('Number of plans:'||l_pls);
end;
/

REM verify attribute ENABLED and FIXED,
select sql_text, sql_handle, plan_name, enabled, accepted, fixed, autopurge
from dba_sql_plan_baselines
where sql_handle = :l_sql_handle;


REM verify the new plan is picked up by the optimizer,
explain plan for
SELECT * FROM WANTS WHERE CLIENTID = :1 AND ROWDF = 'F' AND UPPERTITLE >= :2 ORDER BY UPPERTITLE;

select * from table(dbms_xplan.display());
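
At run time, the Note section of the DBMS_XPLAN output reports when a baseline plan is used; the plan name can also be checked in V$SQL. A sketch, reusing the bind variable above:

REM SQL_PLAN_BASELINE in v$sql shows the baseline plan name picked for each child cursor,
select sql_id, child_number, plan_hash_value, sql_plan_baseline
from v$sql
where sql_id = :l_bad_sql_id;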

Monday, June 01, 2009

single INSERT vs array BULK INSERT

Goal
----
Compare the performance of single INSERT vs array BULK INSERT.

Result
------
After you run the benchmark a few more times, you will see about the same results as I did.

Bulk DML spends about 20% more CPU time.
Latches are about the same; no big difference.
db block gets, physical reads, consistent gets, and session logical reads are all about the same.
Bulk DML taking about 2MB more session PGA memory is expected, to cache the associative array (1200 bytes * 1000 rows); no big deal.

Conclusion
-----------
Consider a single SQL statement first.
Only when the data-processing logic is too complicated should we go with bulk DML, accepting a small performance impact, e.g. about 20% more CPU.

Note: thanks to Tom Kyte for pointing out the design strategy.
(http://mujiang.blogspot.com/2011/06/question-to-asktom-in-oracle-magazine.html)

Reference
---------
Oracle® Database PL/SQL Language Reference
11g Release 1 (11.1)
12 Tuning PL/SQL Applications for Performance

Reducing Loop Overhead for DML Statements and Queries with Bulk SQL.

* Running One DML Statement Multiple Times (FORALL Statement)
* Retrieving Query Results into Collections (BULK COLLECT Clause)

Setup
-----
drop table listing_wide purge;
create table listing_wide
(
vendor_id   number(10,0)  Not Null ,
listing_id   number(20,0)  Not Null ,
match_col               varchar2(1)     Not Null ,
vendor_listing_id  VarChar2(40)  Not Null ,
hash_lite         varchar2(32) ,
hash_code  VarChar2(32) ,
qty   number(5,0)  Not Null  ,
price  number(10,2) Not Null  ,
create_date Date default sysdate Not Null ,
update_date Date default sysdate Not Null ,
bsa_vend  Varchar2(30)     ,
isbn_raw  Varchar2(25)             ,
subject  Varchar2(2000)    ,
author   varchar2(750)     ,
title  varchar2(750)   ,
vend_catalog varchar2(255)   ,
description varchar2(4000) 
)
NOLOGGING
PARTITION BY LIST (vendor_id) 
(  
PARTITION v2134736  values( 2134736 ),
PARTITION v8408184  values( 8408184 ),
PARTITION v1  values( 1 ),
PARTITION v2  values( 2 ),
PARTITION vendor_NULL  VALUES (NULL),
PARTITION vendor_small VALUES (DEFAULT)
)
NOCOMPRESS 
NOPARALLEL
ENABLE ROW MOVEMENT;

drop table listing_purge;
create table listing_purge
NOLOGGING
PARTITION BY LIST (vendor_id) 
(  
PARTITION v2134736  values( 2134736 ),
PARTITION v8408184  values( 8408184 ),
PARTITION v1  values( 1 ),
PARTITION v2  values( 2 ),
PARTITION vendor_NULL  VALUES (NULL),
PARTITION vendor_small VALUES (DEFAULT)
)
as
select a.* 
from listing_wide a
where 1=0;

alter table listing_purge modify (
create_date Date default sysdate ,
update_date Date default sysdate 
);


Test
----

truncate table listing_wide;
truncate table listing_purge;

insert /*+ append */ into listing_purge 
(listing_id, vendor_id, price, qty,hash_lite, hash_code, vendor_listing_id, author, title,subject,isbn_raw, bsa_vend
,match_col)
select
rownum listing_id, 
mod(rownum,2) + 1 vendor_id,
1.11 price,
10 qty,
rpad('x',31,'x') hash_lite,
rpad('x',31,'x') hash_code,
rownum vendor_listing_id,
rpad(rownum,400,'x') author,
rpad('x',400,'x') title,
rpad('x',300,'x') subject,
rownum isbn_raw,
mod(rownum,100)||','||mod(Round(rownum/10),100) bsa_vend
,'x' match_col
from dual
connect by level <= 80100;

commit;

set serveroutput on
set linesize 200
begin
tmp_benchmark;
end;
/

select vendor_id, count(*) from listing_wide
group by vendor_id;

select vendor_id, count(*) from listing_purge
group by vendor_id;


Code
----
CREATE OR REPLACE procedure tmp_benchmark
as
l_cnt number := 0; l_upd_rows number := 0;
l_fetch_limit PLS_INTEGER := 1000;

cursor c2 is
select listing_id, vendor_id, price, qty,hash_lite, hash_code, vendor_listing_id, author, title,subject,isbn_raw, bsa_vend, match_col
from listing_purge partition (v2);

TYPE listing_purge_tab is table of c2%rowtype;
lt_book listing_purge_tab;

BEGIN

runStats_pkg.rs_start;

INSERT INTO listing_wide
(listing_id, vendor_id, price, qty,hash_lite, hash_code, vendor_listing_id, author, title,subject,isbn_raw, bsa_vend, match_col)
select listing_id, vendor_id, price, qty,hash_lite, hash_code, vendor_listing_id, author, title,subject,isbn_raw, bsa_vend, match_col
from listing_purge partition (v1);

commit;

runStats_pkg.rs_middle;

-- Open cursor 
OPEN c2;

LOOP
fetch c2 BULK COLLECT INTO lt_book LIMIT l_fetch_limit;

FORALL j IN lt_book.FIRST..lt_book.LAST
INSERT into listing_wide
(listing_id, vendor_id, price, qty,hash_lite, hash_code, vendor_listing_id, author, title,subject,isbn_raw, bsa_vend, match_col)
values (lt_book(j).listing_id, lt_book(j).vendor_id, lt_book(j).price, lt_book(j).qty, lt_book(j).hash_lite, lt_book(j).hash_code, 
lt_book(j).vendor_listing_id, lt_book(j).author, lt_book(j).title, lt_book(j).subject, lt_book(j).isbn_raw, lt_book(j).bsa_vend
, lt_book(j).match_col)
;

l_upd_rows := l_upd_rows + SQL%RowCount;

exit when lt_book.COUNT < l_fetch_limit;
--exit when c2%notfound;

END LOOP;
close c2;

commit;

runStats_pkg.rs_stop;

EXCEPTION
WHEN others THEN
--u$err.err;
raise;
END;
/

RunStats
--------
Run1 ran in 669 hsecs
Run2 ran in 914 hsecs
run 1 ran in 73.19% of the time

Name                                  Run1        Run2        Diff
STAT...parse count (hard)                2           3           1
LATCH.DML lock allocation              328         327          -1
STAT...sorts (memory)                    1           0          -1
STAT...parse time cpu                    2           0          -2
LATCH.internal temp table obje           4           2          -2
LATCH.shared pool simulator              6           8           2
LATCH.ASM db client latch                4           6           2
STAT...enqueue releases              1,222       1,224           2
LATCH.space background task la           7           5          -2
STAT...redo log space requests           0           2           2
STAT...Heap Segment Array Upda         146         148           2
STAT...Heap Segment Array Inse          12          14           2
STAT...HSC Heap Segment Block       50,222      50,224           2
STAT...index scans kdiixs1              98          96          -2
STAT...enqueue requests              1,220       1,224           4
STAT...application wait time             0           4           4
STAT...physical read total mul         153         148          -5
STAT...redo ordering marks           1,121       1,126           5
STAT...messages sent                    97         105           8
STAT...session logical reads       147,017     147,029          12
STAT...commit cleanouts              3,364       3,376          12
STAT...consistent changes               24          38          14
STAT...db block changes            154,297     154,313          16
LATCH.Consistent RBA                    96          80         -16
STAT...parse time elapsed               67          51         -16
LATCH.lgwr LWN SCN                      97          78         -19
STAT...redo entries                102,505     102,524          19
STAT...redo buffer allocation            0          22          22
LATCH.post/wait queue                    1          31          30
STAT...recursive cpu usage              91         121          30
STAT...redo log space wait tim           0          30          30
STAT...CPU used by this sessio          91         125          34
LATCH.session allocation             7,536       7,574          38
LATCH.In memory undo latch               9          48          39
STAT...physical read total IO          275         235         -40
LATCH.row cache objects              2,688       2,642         -46
STAT...execute count                   261         310          49
STAT...consistent gets              19,683      19,632         -51
LATCH.shared pool                      138         196          58
STAT...db block gets               127,334     127,397          63
LATCH.undo global data               8,608       8,537         -71
STAT...active txn count during       8,180       8,107         -73
STAT...recursive calls               3,858       3,941          83
LATCH.enqueues                       2,034       2,132          98
STAT...user I/O wait time              564         670         106
LATCH.enqueue hash chains            2,525       2,634         109
STAT...table scan rows gotten       50,445      50,315        -130
LATCH.MinActiveScn Latch            16,931      16,796        -135
LATCH.redo allocation                  295         467         172
STAT...physical reads               10,010      10,194         184
STAT...Number of read IOs issu           0         184         184
LATCH.cache table scan latch           231          19        -212
STAT...Elapsed Time                    669         920         251
STAT...undo change vector size   3,185,292   3,185,592         300
LATCH.redo writing                     295         709         414
LATCH.multiblock read objects          464          38        -426
LATCH.object queue header heap           3         451         448
LATCH.active checkpoint queue            2         457         455
LATCH.messages                         368       1,250         882
LATCH.cache buffer handles          15,010      13,714      -1,296
STAT...free buffer inspected        10,054      11,522       1,468
LATCH.cache buffers lru chain       20,881      22,578       1,697
LATCH.simulator hash latch          13,132      10,817      -2,315
STAT...hot buffers moved to he         560       3,105       2,545
STAT...IMU Redo allocation siz       8,236      11,044       2,808
STAT...dirty buffers inspected           1       4,860       4,859
LATCH.checkpoint queue latch           201       7,974       7,773
LATCH.cache buffers chains         545,026     536,670      -8,356
LATCH.object queue header oper      34,953      44,292       9,339
STAT...physical reads cache pr       9,735         133      -9,602
STAT...free buffer requested        20,720      10,987      -9,733
STAT...consistent gets from ca      10,729         959      -9,770
STAT...physical reads cache         10,010         184      -9,826
STAT...physical reads direct             0      10,010      10,010
STAT...consistent gets direct            0      10,010      10,010
STAT...consistent gets from ca      19,683       9,622     -10,061
STAT...redo size                78,097,020  78,123,668      26,648
STAT...IMU undo allocation siz      52,228         908     -51,320
STAT...physical read total byt  82,001,920  83,509,248   1,507,328
STAT...physical IO disk bytes   82,001,920  83,509,248   1,507,328
STAT...cell physical IO interc  82,001,920  83,509,248   1,507,328
STAT...physical read bytes      82,001,920  83,509,248   1,507,328
STAT...session pga memory                0   2,162,688   2,162,688
STAT...Effective IO time                 0   6,195,022   6,195,022

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
672,749     681,506       8,757     98.72%

PL/SQL procedure successfully completed.

Wednesday, April 15, 2009

Benchmark row-by-row and set based process

Goal
----
Compare the performance benchmark for row-by-row update and single SQL set based update.
The benchmark matrix supplies supporting information to the Product Manager to prove why we should move some
row-by-row "real time" (really near-real-time) processes to asynchronous set-based bulk processing.

file://T:/NextAction/benchmark_update_listing.sql

Solution
--------
Generate testing data, update some rows, benchmark with RunStats package.

Result
------
From benchmark, you can see

- Set-based process vs. row-by-row process, commit outside the loop:
the row-by-row process takes 3 times the elapsed time, 3 times the CPU, 3 times the logical I/O,
and uses 258% of the latches. Latches are a type of lock, locks are serialization devices,
and serialization devices inhibit scalability.

- Set-based process vs. row-by-row process, commit inside the loop:
the row-by-row process takes 5 times the elapsed time, 5 times the CPU, 5 times the logical I/O,
and uses 1,019.26% of the latches.

- If we add blocking row-by-row processing from OIMS, introducing more latch contention from concurrent users,
it will be even worse.
A latch is a serialization device; the more sessions that request the same latch at the same time, the
longer the run times. As the number of concurrent users goes up, so do the observed run times and CPU times
(as sessions spin on the latch trying to acquire it) while we experience contention.

So, the preference order is:

SQL SET BASED > BULK PROCESSING > SLOW BY SLOW

You did well by turning slow-by-slow into bulk processing.

You could have done better by turning bulk processing into a set operation.



Stats
-----
*) commit out the loop

Run1 ran in 79 hsecs
Run2 ran in 25 hsecs
run 1 ran in 316% of the time

Name                                              Run1          Run2          Diff
STAT...db block changes                          4,555         4,568            13
STAT...db block gets                             2,418         2,444            26
STAT...Elapsed Time                                 80            26           -54
STAT...CPU used by this session                     81            26           -55
LATCH.simulator hash latch                         308           181          -127
STAT...table scan rows gotten                    2,728         2,222          -506
STAT...redo size                             5,841,592     5,842,304           712
LATCH.checkpoint queue latch                     2,223           102        -2,121
STAT...execute count                             2,224             4        -2,220
STAT...sorts (memory)                            2,222             0        -2,222
LATCH.MinActiveScn Latch                         2,428           198        -2,230
LATCH.object queue header operation              2,862           393        -2,469
STAT...undo change vector size               2,837,256     2,839,872         2,616
LATCH.cache buffers lru chain                    4,445           195        -4,250
STAT...session logical reads                     9,292         2,751        -6,541
STAT...consistent gets                           6,874           307        -6,567
LATCH.cache buffers chains                      23,525        12,415       -11,110

Run1 latches total versus runs -- difference and pct
Run1          Run2          Diff        Pct
36,466        14,118       -22,348    258.29%

PL/SQL procedure successfully completed.


*) commit in the loop

Run1 ran in 129 hsecs
Run2 ran in 24 hsecs
run 1 ran in 537.5% of the time

Name                                              Run1          Run2          Diff
STAT...CPU used by this session                    130            26          -104
STAT...Elapsed Time                                132            25          -107
LATCH.Consistent RBA                             1,260             8        -1,252
STAT...commit cleanouts                          2,224           100        -2,124
LATCH.redo writing                               3,782            25        -3,757
LATCH.cache buffers lru chain                    4,454           195        -4,259
STAT...db block changes                          8,916         4,572        -4,344
LATCH.session allocation                         4,481            52        -4,429
STAT...db block gets                             8,917         2,456        -6,461
STAT...consistent gets                           6,882           408        -6,474
STAT...recursive calls                           6,714            35        -6,679
LATCH.redo allocation                            8,225            29        -8,196
LATCH.In memory undo latch                       8,893            11        -8,882
LATCH.DML lock allocation                        8,891             7        -8,884
LATCH.undo global data                          11,136           129       -11,007
STAT...session logical reads                    15,799         2,864       -12,935
LATCH.enqueue hash chains                       13,351            25       -13,326
LATCH.cache buffers chains                      60,632        12,552       -48,080
STAT...undo change vector size               2,989,432     2,841,148      -148,284
STAT...redo size                             6,391,412     5,844,136      -547,276
STAT...IMU undo allocation size              4,193,320         1,080    -4,192,240

Run1 latches total versus runs -- difference and pct
Run1          Run2          Diff        Pct
145,815        14,306      -131,509  1,019.26%

PL/SQL procedure successfully completed.


Setup
-----

drop table listing_wide purge;
create table listing_wide
(
vendor_id   number(10,0)  Not Null ,
listing_id   number(20,0)  Not Null ,
vendor_listing_id  Varchar(40)  Not Null ,
hash_code  VarChar2(32)  Not Null  ,
qty   number(5,0)  Not Null  ,
price  number(10,2) Not Null  ,
currency  varchar2(3) default 'USD' Not Null,
listingtype  number(4,0) default 1001  Not Null,
book_lang varchar2(3) default ' ' Not Null ,
create_date Date default sysdate Not Null ,
update_date Date default sysdate Not Null ,
abe_cat0 varchar2(100)     ,
abe_cat1  varchar2(100)      ,
abe_cat2  varchar2(100)      ,
vend_bsa  Varchar2(30)     ,
abe_bsa  Varchar2(30)     ,
binding  varchar2(30)    ,
booktype varchar2(30)     ,
bookcondition varchar2(30)     ,
cat_providerid Number(5,0)    ,
cat_providerkey Varchar2(50)    ,
edition_num  number(10)      ,
edition_txt  varchar2(40)      ,
editor  varchar(255)    ,
illustrator varchar2(255)    ,
inscription varchar2(50)    ,
isbn_raw  Varchar2(25)             ,
isbn_vendor13 Number(13,0)     ,
isbn_matched13 Number(13,0)     ,
jacketcondition Varchar2(30)    ,
pictures Varchar2(255)    ,
printing  Varchar2(20)      ,
publisher_name  varchar2(750)      ,
publisher_year  Number(4)      ,
publisher_location  varchar2(750)     ,
item_size   Varchar2(50)     ,
subject  Varchar2(2000)     ,
synopsis Varchar2(255)    ,
author   varchar(750)      ,
title  varchar2(750)    ,
vend_cat varchar2(255)    ,
volume  Varchar2(50)    ,
weight  Varchar2(40)    ,
description varchar2(4000)
--, constraint listing_wide_pk primary key(listing_id)
)
-- TABLESPACE data_auto
NOLOGGING
PARTITION BY LIST (vendor_id) 
(  
PARTITION v2134736  values( 2134736 ),
PARTITION v8408184  values( 8408184 ),
PARTITION v999  values( 999 ),
PARTITION vendor_NULL  VALUES (NULL),
PARTITION vendor_small VALUES (DEFAULT)
)
NOCOMPRESS 
NOPARALLEL
ENABLE ROW MOVEMENT;


create index listing_wide_i_hs on listing_wide(vendor_listing_id, hash_code) local nologging;
create index listing_wide_i_id on listing_wide(listing_id) local nologging;

truncate table listing_wide;

insert /*+ append */ into listing_wide 
(listing_id, vendor_id, price, qty, hash_code, vendor_listing_id, author, title,description,isbn_raw, vend_bsa)
select
rownum listing_id, 
mod(rownum,8) vendor_id,
10.55 price,
30 qty,
rpad('x',31,'x') hash_code,
rownum vendor_listing_id,
rpad(rownum,400,'x') author,
rpad('x',400,'x') title,
rpad('x',300,'x') description,
rownum isbn_raw,
mod(rownum,100)||','||mod(Round(rownum/10),100) bsa
from dual
connect by level <= 100100;

commit;

select vendor_id, isbn_raw, hash_code, author
from listing_wide where rownum <= 3;

exec dbms_stats.gather_table_stats(user,'listing_wide');

drop table listing_file;

create table listing_file
PARTITION BY LIST (time_part_key)
SUBPARTITION BY LIST (match_col)
SUBPARTITION TEMPLATE 
(SUBPARTITION vendor_id_part  VALUES (1) ,
SUBPARTITION hash_code_part  VALUES (2) ,
SUBPARTITION listing_id_part VALUES (3) ,
SUBPARTITION other_part VALUES (DEFAULT)
)
(
PARTITION data_part_0 VALUES (0),
PARTITION data_part_1 VALUES (1),
PARTITION data_part_2 VALUES (2),
PARTITION data_part_3 VALUES (3),
PARTITION data_part_4 VALUES (4),
PARTITION data_part_5 VALUES (5),
--PARTITION dummy_null  VALUES (NULL),
PARTITION dummy_other VALUES (DEFAULT)
)
--tablespace data_auto
nologging
as
select a.* 
--, cast(null as timestamp(3)) update_date
, cast(null as number(2))    time_part_key
, 9 dest_queue
, 9 match_col
from listing_wide a
where 1=0;

alter table listing_file modify (
currency  varchar2(3) default 'USD' ,
listingtype  number(4,0) default 1001  ,
book_lang varchar2(3) default ' '  ,
create_date Date default sysdate  ,
update_date Date default sysdate 
);

alter table listing_file modify
(
update_date    default systimestamp,
time_part_key  default mod(floor(to_number(to_char(sysdate,'mi'))/10),10)
);
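
The time_part_key default above rotates rows across the data_part_0..5 partitions by 10-minute window of the current time. A quick sanity check of the expression (an illustrative query, not part of the original setup):

```sql
-- minutes 00-09 -> 0, 10-19 -> 1, ..., 50-59 -> 5
select to_char(sysdate,'mi') cur_minute,
       mod(floor(to_number(to_char(sysdate,'mi'))/10),10) time_part_key
from dual;
```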

truncate table listing_file;

insert /*+ append */ into listing_file
(listing_id, vendor_id, price, qty, hash_code, vendor_listing_id, author, title,description,isbn_raw, vend_bsa
, match_col)
select
rownum listing_id, 
mod(rownum,8) vendor_id,
10.55 price,
30 qty,
rpad('x',31,'x') hash_code,
rownum vendor_listing_id,
rpad(rownum,400,'x') author,
rpad('x',400,'x') title,
rpad('x',300,'x') description,
rownum isbn_raw,
mod(rownum,100)||','||mod(Round(rownum/10),100) bsa,
3 match_col
from dual
connect by level <= 2222;

commit;

exec dbms_stats.gather_table_stats(user,'listing_file');

Test
----

set linesize 120
set serveroutput on

Begin
runstats_pkg.rs_start;

for s in (select * from listing_file)
loop
update  listing_wide d
set
d.price      = s.price    ,
d.qty       = s.qty    ,
d.hash_code      = s.hash_code   ,
d.vendor_listing_id = s.vendor_listing_id,
d.author      = s.author    ,
d.title      = s.title   ,
d.description     = s.description  ,
d.isbn_raw      = s.isbn_raw   ,
d.vend_bsa     = s.vend_bsa   
where s.vendor_id = d.vendor_id and s.listing_id = d.listing_id
;
--commit;

end loop;
commit;

runstats_pkg.rs_middle;

MERGE INTO listing_wide d
using listing_file s
on (s.vendor_id = d.vendor_id and s.listing_id = d.listing_id)
when matched then
update set
d.price      = s.price    ,
d.qty       = s.qty    ,
d.hash_code      = s.hash_code   ,
d.vendor_listing_id = s.vendor_listing_id,
d.author      = s.author    ,
d.title      = s.title   ,
d.description     = s.description  ,
d.isbn_raw      = s.isbn_raw   ,
d.vend_bsa     = s.vend_bsa   
;
commit;

runstats_pkg.rs_stop(5);
End;
/

Reference
---------
http://tkyte.blogspot.com/2006/10/slow-by-slow.html

  • bulking it up with BULK COLLECT,  bulk-update them back into the database, using the FORALL statement with the UPDATE

http://www.oracle.com/technetwork/issue-archive/2011/11-may/o31asktom-354139.html
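
The BULK COLLECT / FORALL approach mentioned above could look roughly like this against the test tables (a sketch only; the LIMIT size and the restriction to two columns are my assumptions, and separate scalar collections are used because FORALL cannot reference record fields on this Oracle version):

```sql
declare
  type t_num is table of number index by pls_integer;
  l_vid   t_num;
  l_lid   t_num;
  l_price t_num;
  l_qty   t_num;
  cursor c is select vendor_id, listing_id, price, qty from listing_file;
begin
  open c;
  loop
    fetch c bulk collect into l_vid, l_lid, l_price, l_qty limit 1000;  -- fetch in batches
    exit when l_vid.count = 0;
    forall i in 1 .. l_vid.count     -- one bulk UPDATE round trip per batch
      update listing_wide d
         set d.price = l_price(i),
             d.qty   = l_qty(i)
       where d.vendor_id  = l_vid(i)
         and d.listing_id = l_lid(i);
  end loop;
  close c;
  commit;
end;
/
```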

Tuesday, April 14, 2009

table stats tool script

After I saw Ningoo show off his dandy tool tbsql, which made my mouth water, and since he was reluctant to share his favorite tool, I had to create one myself.
The good news is he promised to open-source it someday.

Purpose:
Display a table and its index statistics by specifying Owner and Table_Name.
It can help you investigate SQL query optimizer issues, in cooperation with the 10053 trace tool. And it can do more...

Reference:
http://www.ningoo.net/html/2009/how_to_get_extent_id_from_rowid.html

Here is the script:


/*

-- File name: table.sql
-- Goal: display table and index statistics by specifying Owner and Table_Name
--
-- Author: Yi (Charlie) Zhu
-- update: 29-Jan-2009
-- Blog: http://mujiang.blogspot.com/
--
-- Usage: @table [owner] [table_name]

29-Jan-2009: Charlie: Show popular value chart for height balance histogram


Reference
--
http://www.ningoo.net/html/2009/how_to_get_extent_id_from_rowid.html

*/

define owner='&1'
define table_name='&2'

set serveroutput off
set timing off
set verify off
set linesize 200
set define &

COLUMN SEGMENT_NAME FORMAT A40 HEADING 'Segment Name'
COLUMN SEGMENT_TYPE FORMAT A25 HEADING 'Segment Type'
COLUMN SEGMENT_SUBTYPE FORMAT A25 HEADING 'Segment SubType'
COLUMN Tablespace_Name FORMAT A30 HEADING 'Tablespace_Name'
COLUMN Partition_NAME FORMAT A25 HEADING 'Partition Name'
COLUMN subpartition_name FORMAT A22 HEADING 'SubPartition Name'
COLUMN INDEX_NAME FORMAT A30 HEADING 'INDEX Name'
COLUMN COLUMN_NAME FORMAT A30 HEADING 'COLUMN NAME'
COLUMN DATA_TYPE FORMAT A14 HEADING 'DATA TYPE'
COLUMN LOW_VALUE FORMAT A14 HEADING 'LOW_VALUE'
COLUMN HIGH_VALUE FORMAT A14 HEADING 'HIGH_VALUE'
COLUMN owner FORMAT A14
COLUMN table_name FORMAT A30
COLUMN HISTOGRAM FORMAT A9
COLUMN endpoint_actual_value format A30

COLUMN BLOCKS FORMAT 999,999,999 HEADING 'Blocks'
COLUMN MBytes FORMAT 999,999,999 HEADING 'MBytes'
COLUMN TOTAL_BYTES FORMAT 999,999 HEADING 'TotByte_MB'
COLUMN extents FORMAT 999,999
COLUMN LAST_ANALYZED FORMAT A11 HEADING 'LAST|ANALYZED'
COLUMN pop_chart FORMAT A11 HEADING 'Pop Chart'

COLUMN unf FORMAT 999,999.99 HEADING 'unf_K'
COLUMN fs1 FORMAT 999,999.99 HEADING 'fs1_K'
COLUMN fs2 FORMAT 999,999.99 HEADING 'fs2_K'
COLUMN fs3 FORMAT 999,999.99 HEADING 'fs3_K'
COLUMN fs4 FORMAT 999,999.99 HEADING 'fs4_K'
COLUMN full FORMAT 999,999.99 HEADING 'full_K'
COLUMN total_blocks FORMAT 999,999.99 HEADING 'total_blocks_K'
COLUMN unused_blocks FORMAT 999,999.99 HEADING 'unused_blocks_K'
COLUMN unfb FORMAT 999,999 HEADING 'unfb_M'
COLUMN fs1b FORMAT 999,999 HEADING 'fs1b_M'
COLUMN fs2b FORMAT 999,999 HEADING 'fs2b_M'
COLUMN fs3b FORMAT 999,999 HEADING 'fs3b_M'
COLUMN fs4b FORMAT 999,999 HEADING 'fs4b_M'
COLUMN fullb FORMAT 999,999 HEADING 'fullb_M'
COLUMN total_bytes FORMAT 999,999 HEADING 'total_bytes_M'
COLUMN unused_bytes FORMAT 999,999 HEADING 'unused_bytes_M'
COLUMN last_extf FORMAT 999,999 HEADING 'last_extf_id'
COLUMN last_extb FORMAT 9999999999 HEADING 'last_extb_id'
COLUMN lastusedblock FORMAT 9999999999 HEADING 'lastusedblock'

col num_rows new_value l_num_rows;


spool table_info.log

prompt table info

SELECT owner,table_name,pct_free,num_rows,blocks,chain_cnt,
avg_row_len,to_char(last_analyzed,'yyyy-mon-dd') last_analyzed,buffer_pool
FROM dba_tables
WHERE owner = (Upper('&&owner'))
and table_name IN (Upper('&&table_name'))
ORDER BY blocks desc;

prompt table stats

SELECT partition_name,subpartition_name,num_rows, blocks, empty_blocks, avg_space, chain_cnt, avg_row_len
FROM dba_tab_statistics
WHERE owner = (Upper('&&owner'))
and table_name IN (Upper('&&table_name'))
order by table_name,partition_position;

-- IOT come with Primary Key index name
select owner||'.'||segment_name segment_name,partition_name,round(bytes/1024/1024) mbytes,
blocks,extents,segment_type,tablespace_name
from dba_segments
WHERE owner = (Upper('&&owner'))
and segment_name IN (Upper('&&table_name'))
;

prompt column data type

SELECT --OWNER,TABLE_NAME,COLUMN_ID,
COLUMN_NAME,DATA_TYPE,DATA_LENGTH,DATA_PRECISION
FROM dba_tab_cols
WHERE owner = (Upper('&&owner'))
and TABLE_NAME IN (Upper('&&table_name'))
order by owner,table_name,column_name
;

prompt column stats

SELECT
column_name AS "NAME",
num_distinct AS "#DST",
low_value,
high_value,
density AS "DENS",
num_nulls AS "#NULL",
avg_col_len AS "AVGLEN",
histogram,
num_buckets AS "#BKT"
FROM dba_tab_col_statistics
WHERE owner = (Upper('&&owner'))
and TABLE_NAME IN (Upper('&&table_name'))
order by column_name
;

prompt column frequency histogram

SELECT column_name, endpoint_value, endpoint_actual_value, endpoint_number,
endpoint_number - lag(endpoint_number,1,0) OVER (partition by column_name ORDER BY endpoint_number) AS frequency
FROM dba_tab_histograms
WHERE owner = (Upper('&&owner'))
and table_name = (Upper('&&table_name'))
and column_name in
(select column_name FROM dba_tab_col_statistics
where owner = (Upper('&&owner'))
and TABLE_NAME IN (Upper('&&table_name'))
and histogram = 'FREQUENCY'
)
ORDER BY column_name, endpoint_number;

prompt column height-balanced histogram

SELECT column_name, endpoint_value, endpoint_number
, Max(endpoint_number) OVER (partition by column_name) Max_EPN
, endpoint_number - lag(endpoint_number,1,0) OVER (partition by column_name ORDER BY endpoint_number) AS popular_count
, &l_num_rows * (endpoint_number - lag(endpoint_number,1,0) OVER (partition by column_name ORDER BY endpoint_number))
/(Max(endpoint_number) OVER (partition by column_name)) AS Est_Card
, rpad('.',endpoint_number - lag(endpoint_number,1,0) OVER (partition by column_name ORDER BY endpoint_number) - 1,'x') pop_chart
FROM dba_tab_histograms
WHERE owner = (Upper('&&owner'))
and table_name = (Upper('&&table_name'))
and column_name in
(select column_name FROM dba_tab_col_statistics
where owner = (Upper('&&owner'))
and TABLE_NAME IN (Upper('&&table_name'))
and histogram = 'HEIGHT BALANCED'
)
ORDER BY column_name, endpoint_number;

prompt Index name,physical size,including columns and selectivity

SELECT m.INDEX_NAME,COLUMN_NAME,
COLUMN_POSITION,DISTINCT_KEYS,LEAF_BLOCKS,NUM_ROWS, COLUMN_LENGTH
FROM DBA_INDEXES m,dba_ind_columns d
WHERE m.owner = (Upper('&&owner'))
AND m.INDEX_NAME = d.INDEX_NAME
AND m.OWNER = d.TABLE_OWNER
AND m.TABLE_NAME = d.TABLE_NAME
AND m.TABLE_NAME IN (Upper('&&table_name'))
ORDER BY index_name,column_position;

prompt index stats

SELECT
index_name,
blevel,
leaf_blocks AS leaf_blks,
distinct_keys AS dst_keys,
num_rows,
clustering_factor AS clust_fact,
avg_leaf_blocks_per_key AS leaf_per_key,
avg_data_blocks_per_key AS data_per_key
FROM dba_ind_statistics
WHERE owner = (Upper('&&owner'))
AND TABLE_OWNER = (Upper('&&owner'))
AND TABLE_NAME IN (Upper('&&table_name'));

VARIABLE total_blocks NUMBER
VARIABLE total_bytes NUMBER
VARIABLE unused_blocks NUMBER
VARIABLE unused_bytes NUMBER
VARIABLE last_extf NUMBER
VARIABLE last_extb NUMBER
VARIABLE lastusedblock NUMBER
begin
DBMS_SPACE.UNUSED_SPACE(Upper('&&owner'), Upper('&&table_name'), 'TABLE',:total_blocks,
:total_bytes,:unused_blocks,:unused_bytes,:last_extf,
:last_extb,:lastusedblock);
end;
/

select
:total_bytes /1024/1024 total_bytes ,
:unused_bytes /1024/1024 unused_bytes ,
:total_blocks /1024 total_blocks ,
:unused_blocks/1024 unused_blocks,
:lastusedblock/1024 lastusedblock ,
:last_extf last_extf ,
:last_extb last_extb
from dual;

variable unf number;
variable unfb number;
variable fs1 number;
variable fs1b number;
variable fs2 number;
variable fs2b number;
variable fs3 number;
variable fs3b number;
variable fs4 number;
variable fs4b number;
variable full number;
variable fullb number;

-- shows the space usage of data blocks under the segment High Water Mark

begin
dbms_space.space_usage(Upper('&&owner'),Upper('&&table_name'),
'TABLE',
:unf,:unfb,
:fs1,:fs1b,
:fs2,:fs2b,
:fs3,:fs3b,
:fs4,:fs4b,
:full,:fullb);
end;
/

select
:unf /1024 unf
,:fs1 /1024 fs1
,:fs2 /1024 fs2
,:fs3 /1024 fs3
,:fs4 /1024 fs4
,:full /1024 full
,:unfb /1024/1024 unfb
,:fs1b /1024/1024 fs1b
,:fs2b /1024/1024 fs2b
,:fs3b /1024/1024 fs3b
,:fs4b /1024/1024 fs4b
,:fullb /1024/1024 fullb
from dual;

spool off

/*
-- unf: unformatted_blocks

print unf ;
print unfb ;
print fs4 ;
print fs4b;
print fs3 ;
print fs3b;
print fs2 ;
print fs2b;
print fs1 ;
print fs1b;
print full;
print fullb;


-- Returns the list of segments that are associated with the object
select * from Table(
DBMS_SPACE.OBJECT_DEPENDENT_SEGMENTS(
objowner => Upper('&&owner'),
objname => Upper('&&table_name'),
partname => Null,
objtype => 1)
);

SQL> print

TOTAL_BLOCKS TOTAL_BYTES UNUSED_BLOCKS UNUSED_BYTES LASTEXTF LAST_EXTB LASTUSEDBLOCK
------------ ----------- ------------- ------------ ---------- ---------- -------------
1536 12582912 0 0 5 1545 128


UNF UNFB
---------- ----------
0 0

FreeSpace1: 0 to 25% free space
FreeSpace2: 25 to 50% free space
...

FS1 FS2 FS3 FS4 FULL
---------- ---------- ---------- ---------- ----------
386 25 16 945 132


FS1B FS2B FS3B FS4B FULLB
---------- ---------- ---------- ---------- ----------
3162112 204800 131072 7741440 1081344

*/

Wednesday, April 08, 2009

debug.f to instrument PL/SQL code

Purpose
-------
Code instrumentation is the fine art of making every other line of code a debug statement.
Here we show you how to install debug.f and use it in your PL/SQL code.

Download
--------

Download debugf.zip, which contains the code, then extract and save all files to a local folder.
http://asktom.oracle.com/~tkyte/debugf -- This URL has expired;
use this one to download debugf instead:  https://drive.google.com/file/d/0B9XqiVdd05wXaEhDMTNYX1E5dWM/edit?usp=sharing


Discussion
----------
"Instrumentation overhead"
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:10445025326812

"The 10 (Oracle Development) Commandments"
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:51960184066540

Setup
-----
Suggested steps:

create a utility user:
create user utility identified by utility;
grant create session, create procedure, create table, create trigger to utility;
alter user utility default tablespace users quota unlimited on users;

connect sys as sysdba
grant execute on utl_file to utility;
grant create public synonym, create any DIRECTORY to utility;

Run install_debug in that schema,
create a public synonym for debug,
and then you can use it.


connect utility/utility
@install_debug.sql
--@A:\plsql\debugf\install_debug.sql

create public synonym debug for utility.debug;
grant execute on utility.debug to public;

DROP DIRECTORY debug_dir;
CREATE or replace DIRECTORY debug_dir AS '/data1/debug';
CREATE or replace DIRECTORY debug_dir AS '/u01/debug';

CREATE or replace DIRECTORY debug_dir AS 'c:/temp/log';

--GRANT READ, WRITE ON DIRECTORY debug_dir TO PUBLIC; 
--exec debug.clear;
--exec debug.init(p_file=>'DEBUG_DIR/system.dbg');

-- Enable debug trace for current login user
declare
  ls_user varchar2(30) DEFAULT USER;
begin
 debug.clear;
 debug.init(p_file=>'DEBUG_DIR/'||ls_user||'.dbg');
 debug.status;
end;
/


-- Set module name to trace, p_modules=>''
-- Enable trace for a user, p_user=>'SYSTEM'
declare
 ls_trace_user varchar2(30) := 'CS_USER';
BEGIN
 debug.clear(p_user=>ls_trace_user);
 debug.init(p_modules=>'ALL', p_user=>ls_trace_user,
   --p_date_format=>'YYYY/MM/DD HH24:MI:SS',
   p_show_sesid=>'YES',
   p_file=>'DEBUG_DIR/'||ls_trace_user||'.dbg');
 debug.status(p_user=>ls_trace_user);
end;
/

-- batch setup logging users
declare
 ls_trace_user ct_type;
BEGIN

 ls_trace_user := ct_type ('SYS','SYSTEM','ABEDBA','ABELISTING');

 FOR i IN ls_trace_user.FIRST .. ls_trace_user.LAST
 LOOP
  debug.clear(p_user=>ls_trace_user(i));
  debug.init(p_modules=>'ALL', p_user=>ls_trace_user(i),
    --p_date_format=>'YYYY/MM/DD HH24:MI:SS',
    p_show_sesid=>'NO',
    p_file=>'DEBUG_DIR/'||ls_trace_user(i)||'.dbg');
  debug.status(p_user=>ls_trace_user(i));
 End Loop;
end;
/

Demo 1
------

Begin
  debug.f('start %s : %s', 1, sysdate);
  debug.f(SubStr(TO_CHAR(SQLCODE)||': '||SQLERRM,1,100));
Exception
    WHEN OTHERS THEN
      debug.f(SubStr(TO_CHAR(SQLCODE)||': '||SQLERRM,1,100));
End;
/

Demo 2
------

@connect "/ as sysdba"
grant execute on dbms_lock to ops$tkyte;
@connect /

set echo on
/*
first we'll start by enabling debug - this would be done once, sets up a 
table with values...

After that, we'll watch the debug....
*/
exec debug.init
exec debug.f( 'hello world' );
pause
set define off
host xterm -e tail -f /tmp/OPS\$TKYTE.dbg &
set define on

create or replace procedure want_to_trace
as
begin
    for i in 1 .. 10
    loop
        if ( mod(i,2) = 0 )
        then
            debug.f( 'processing step %s of 10 - %s', i , 'odd');
            dbms_lock.sleep(1);
        else
            debug.f( 'processing step %s of 10', i );
            dbms_lock.sleep(1);
        end if;
    end loop;
end;
/

exec want_to_trace

monitor
-------

To trace it:
tail -f [DEBUG_DIR]/[LOGIN USER].dbg

For example:
tail -f /data1/debug/SYSTEM.dbg

Tuesday, April 07, 2009

New efficient HASH FULL OUTER join in 11g

Purpose:
--
Oracle 11g query optimizer introduced a new HASH JOIN FULL OUTER execution method.

Starting with Oracle Database 11g Release 1 (11.1), Oracle Database automatically uses a native execution method based on a hash join for executing full outer joins whenever possible. When the new method is used to execute a full outer join, the execution plan for the query contains HASH JOIN FULL OUTER.

To instruct the optimizer not to consider using the hash full outer join execution method, apply the NO_NATIVE_FULL_OUTER_JOIN hint. The NO_NATIVE_FULL_OUTER_JOIN hint instructs the optimizer to exclude the native execution method when joining each specified table. Instead, the full outer join is executed as a union of a left outer join and an anti-join.
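
Applied to the benchmark tables below, the hint could look like this (an illustrative sketch; it should push the optimizer back to the pre-11g union plan):

```sql
select /*+ no_native_full_outer_join */ t1.id, t2.id, t1.c1, t2.c1
from t1 full outer join t2
on t1.id = t2.id;
```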

Result
--
With the new HASH JOIN FULL OUTER execution method, the logical reads (Buffers in the execution plan output) equal the sum of the two base-table full scans' logical I/O, nothing more.
The old full outer join plan costs about 4 times more LIO, because it scans each base table twice.


Benchmark
---------

drop table t1 purge;
drop table t2 purge;

create table t1
as
select
rownum + 5 id,
rpad('x',500,'x') c1
from dual
connect by level <= 5000;

create table t2
as
select
rownum + 15 id,
rpad('x',500,'x') c1
from dual
connect by level <= 5000;

set serveroutput off
set linesize 170
set pagesize 0

select max(t1c1), max(t2c1)
from (
select /*+ gather_plan_statistics */ t1.id, t2.id, t1.c1 t1c1, t2.c1 t2c1
from t1 full outer join t2
on t1.id = t2.id
);

-- 10g way
SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'runstats_last'));

-- 11g way
SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'iostats memstats last partition'));

---------------------------------------------------------------------------------------------
| Id  | Operation                  | Name | A-Rows | Buffers | Reads | Writes | A-Time      |
---------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE            |      |      1 |     732 |   101 |      0 | 00:00:00.15 |
|   2 |   VIEW                     |      |   5010 |     732 |   101 |      0 | 00:00:00.06 |
|   3 |    UNION-ALL               |      |   5010 |     732 |   101 |      0 | 00:00:00.05 |
|*  4 |     HASH JOIN RIGHT OUTER  |      |   5000 |     366 |   101 |      0 | 00:00:00.04 |
|   5 |      TABLE ACCESS FULL     | T2   |   5000 |     183 |    56 |      0 | 00:00:00.01 |
|   6 |      TABLE ACCESS FULL     | T1   |   5000 |     183 |    45 |      0 | 00:00:00.01 |
|*  7 |     HASH JOIN RIGHT ANTI   |      |     10 |     366 |     0 |      0 | 00:00:00.01 |
|   8 |      TABLE ACCESS FULL     | T1   |   5000 |     183 |     0 |      0 | 00:00:00.01 |
|   9 |      TABLE ACCESS FULL     | T2   |   5000 |     183 |     0 |      0 | 00:00:00.01 |
---------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | A-Rows | A-Time      | Buffers |  OMem |  1Mem | Used-Mem  |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |      1 | 00:00:00.09 |     184 |       |       |           |
|   1 |  SORT AGGREGATE         |          |      1 | 00:00:00.09 |     184 |       |       |           |
|   2 |   VIEW                  | VW_FOJ_0 |   5010 | 00:00:00.03 |     184 |       |       |           |
|*  3 |    HASH JOIN FULL OUTER |          |   5010 | 00:00:00.03 |     184 | 3239K | 1058K | 3539K (0) |
|   4 |     TABLE ACCESS FULL   | T2       |   5000 | 00:00:00.01 |      92 |       |       |           |
|   5 |     TABLE ACCESS FULL   | T1       |   5000 | 00:00:00.01 |      92 |       |       |           |
--------------------------------------------------------------------------------------------------------

Reference
--
Oracle® Database Performance Tuning Guide 11g Release 1 (11.1)
- 11 The Query Optimizer, 11.6.7 Outer Joins
OracleQueryOptimizer11g_20081202.pdf by Christian Antognini

Wednesday, April 01, 2009

PL/SQL instrument utility u$err package

Purpose
-------
1. PL/SQL utility to Instrument and Trace.
2. Err_Trap.

debug.f() writes to a DB server trace file.
u$err.trc() writes to the utility.error_log table, using an autonomous transaction.

u$err.err() writes the error stack to the utility.error_log table, and to std_io via DBMS_OUTPUT.

File: file://A:/PLSQL/debugf/u$err_pkg.sql

Setup
-----

drop table utility.error_log;
create table utility.error_log
(err_code varchar2(10), err_module varchar2(30), err_action varchar2(30),
create_date date default SYSDATE, create_user varchar2(30) default USER,
err_text varchar2(4000)
)
partition by range (create_date) INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
(
PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2009', 'DD-MM-YYYY'))
)
;

grant select on utility.error_log to qa_admin;


Code
----

CREATE OR REPLACE PACKAGE utility.u$err AS

/***
DEBUG.F : log trace information
***/

PROCEDURE trc (p_msg IN VARCHAR2);

PROCEDURE err (p_msg IN VARCHAR2 default '');

END;
/

CREATE OR REPLACE PACKAGE BODY utility.u$err AS

/***
Goal: 1. Trace, 2. Err_Trap

Who| Date | What
Charlie(Yi 木匠) | 12-June-2008 | combine FORMAT_ERROR_BACKTRACE and DBMS_UTILITY.FORMAT_ERROR_STACK to one string

DEBUG.F

set serveroutput on size 200000

alter package utility.u$err compile;
grant execute on utility.u$err to public;
create or replace public synonym u$err for utility.u$err;

File: A:\PLSQL\debugf\u$err_pkg.sql

***/

PROCEDURE trc (p_msg IN VARCHAR2)
IS
PRAGMA AUTONOMOUS_TRANSACTION;
ls_module varchar2(32767);
ls_action varchar2(32767);
ls_code varchar2(10);
BEGIN
DBMS_APPLICATION_INFO.READ_MODULE (
module_name => ls_module,
action_name => ls_action);
ls_code := to_char(sqlcode);

INSERT INTO error_log (err_code,err_module,err_action,err_text)
VALUES(ls_code,substr(ls_module,1,30),substr(ls_action,1,30), p_msg);
COMMIT;
DBMS_OUTPUT.PUT_LINE('trc:'||ls_code||':'||ls_action||':'||ls_module||':'||p_msg);
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
DBMS_OUTPUT.PUT_LINE('Error in log_err: '||SUBSTR(SQLERRM,1,120));
END;

PROCEDURE err (p_msg IN VARCHAR2)
IS
ls_err varchar2(32767);
BEGIN
ls_err := SubStr(p_msg||'.'||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE||'<---'||
chr(10)||DBMS_UTILITY.FORMAT_ERROR_STACK, 1, 2000);
debug.f(ls_err);
u$err.trc (ls_err);
DBMS_APPLICATION_INFO.SET_MODULE(Null, Null);
EXCEPTION
WHEN OTHERS THEN
u$err.trc (SubStr(DBMS_UTILITY.FORMAT_ERROR_STACK,1,500));
DBMS_OUTPUT.PUT_LINE('Error stack in log_err: '||SUBSTR(SQLERRM,1,120));
END;

END;
/

alter package utility.u$err compile;
grant execute on utility.u$err to public;
create or replace public synonym u$err for utility.u$err;



Test u$err.err and u$err.trc
----
exec u$err.err;

declare
li pls_integer;
procedure p_in
is
begin
li := 5 / 0;
exception
when others then
u$err.err;
raise;
end;

begin
u$err.trc('start');
p_in;
u$err.trc('end');
exception
when others then
u$err.err('outer error');
u$err.trc('before raise');
raise;
u$err.trc('after raise');
end;
/

-- Query data
select * from
(
select * from utility.error_log
-- PARTITION FOR(to_date('11-mar-2009','dd-mon-yyyy'))
order by create_date desc
) where rownum < 10;


Housekeeping
----
--Purge old log
delete utility.error_log where create_date < sysdate - 100;
commit;

ALTER TABLE utility.error_log DROP PARTITION FOR(to_date('11-mar-2009','dd-mon-yyyy'));



Reference
---------
Oracle PL/SQL Best Practices - Steven Feuerstein
http://www.oreilly.com/catalog/9780596514105

Friday, March 13, 2009

How to display SQL code in text format exactly

In a blog post, we often need to display program code (e.g. PL/SQL and SQL), plain text output, and trace file contents exactly as they are, including spaces.

It had been bothering me for a long time. Finally, I found the solution: use the < pre > and < /pre > HTML tags.

Here is an example for RunStats output:
(Tested on http://www.blogger.com)

Without format:

..............................Fixed-Length...Delimited........Diff
STAT...CPU used by this sessio 179 202 23 ***
STAT...Elapsed Time 179 91 -88 ***
STAT...DB time 0 421 421 xxx
LATCH.process queue reference 1 58,878 58,877 ***
STAT...physical read bytes 212,992 0 -212,992
STAT...session uga memory max 604,728 115,320 -489,408
STAT...session pga memory max 720,896 131,072 -589,824

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
7,629 64,444 56,815 11.84%


With < pre > and < /pre > tags to format text:

..............................Fixed-Length...Delimited........Diff
STAT...CPU used by this sessio           179           202            23  ***
STAT...Elapsed Time                      179            91           -88  ***
STAT...DB time                             0           421           421  xxx
LATCH.process queue reference              1        58,878        58,877  ***
STAT...physical read bytes           212,992             0      -212,992
STAT...session uga memory max        604,728       115,320      -489,408
STAT...session pga memory max        720,896       131,072      -589,824

Run1 latches total versus runs -- difference and pct
Run1          Run2          Diff        Pct
7,629        64,444        56,815     11.84%
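
In the blog post HTML, the wrapping is simply (the tag pair is all that is needed; any whitespace between the tags is rendered as-is):

```html
<pre>
Run1          Run2          Diff        Pct
7,629        64,444        56,815     11.84%
</pre>
```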

I learned something new today, I'm so happy.

Reference


http://www.echoecho.com/htmltext04.htm

Thursday, February 12, 2009

NO_INDEX hint and deprecated ROWID hint

Scope
-----
Many people have asked how to specify the ROWID hint for a SQL statement after Oracle 10g, and searching Google turns up no answer.
The ROWID access path is helpful when you do iterative updates (query the rows, make some changes, then update them back).

Goal
----
Guide SQL access TABLE BY ROWID where there is a possible index access path.

Solution
--------

*) approach 1: NO_INDEX sql hint
*) approach 2: Suppress the index scan by adding an expression/function to the predicate column.

Result
------
The benchmark shows that the SQL optimizer and run-time engine choose the "TABLE ACCESS BY USER ROWID" access path once we apply either of the solutions.
The logical reads (Buffers in the execution plan) drop from 2 to 1.
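
The iterative-update pattern that this access path serves could look roughly like this (a sketch only; the predicate on mycol and the rownum cap are my assumptions for illustration):

```sql
begin
  -- fetch the rows once, remembering each rowid ...
  for r in (select rowid rid, myid from yi01
            where mycol is null and rownum <= 100)
  loop
    -- ... then update each row back by its rowid;
    -- the hint keeps the optimizer off the primary key index
    update /*+ no_index(a) */ yi01 a
       set a.mycol = 'F'
     where a.rowid = r.rid
       and a.myid + 0 = r.myid;
  end loop;
  commit;
end;
/
```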

Setup
-----

drop table yi01;
create table yi01
(myid number,
mycol varchar2(30),
constraint yi01_pk primary key(myid) using index
);

truncate table yi01;

insert into yi01(myid, mycol) values(1, Null);
insert into yi01(myid, mycol) values(2, Null);
insert into yi01(myid, mycol) values(3, Null);
insert into yi01(myid, mycol) values(4, 'a');
insert into yi01(myid, mycol) values(5, Null);
insert into yi01(myid, mycol) values(6, 'b');
insert into yi01(myid, mycol) values(7, Null);

insert into yi01(myid, mycol)
select rownum + 7, Null
from dual
connect by level <= 5010;

commit;

exec dbms_stats.gather_table_stats(user,'YI01');

Benchmark
---------

SQL>

set linesize 120
set pagesize 0
set serveroutput off

COLUMN rid NEW_VALUE l_rid
COLUMN myid NEW_VALUE l_id

select rowid rid, myid from yi01
where rownum <= 1;

*) index range/unique scan

update /*+ gather_plan_statistics index(a) */ YI01 a
set MYCOL = 'F'
WHERE rowid = '&l_rid' and MYID = &l_id;

SELECT * FROM table(CAST (dbms_xplan.display_cursor(NULL,NULL, 'runstats_last') AS SYS.DBMS_XPLAN_TYPE_TABLE));

---------------------------------------------------------------------
| Id  | Operation            | Name    | E-Rows | A-Rows | Buffers |
---------------------------------------------------------------------
|   1 |  UPDATE              |         |        |      1 |       2 |
|*  2 |   INDEX UNIQUE SCAN  | YI01_PK |      1 |      1 |       2 |
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MYID"=1165)
       filter(ROWID='AAAYH+AAEAAAAhoASM')

*) approach 1: NO_INDEX sql hint

update /*+ gather_plan_statistics no_index(a) */ YI01 a
set MYCOL = 'F'
WHERE rowid = 'AAA5znAG2AAAPm0AAF' and MYID = 7;

SELECT * FROM table(CAST (dbms_xplan.display_cursor(NULL,NULL, 'runstats_last') AS SYS.DBMS_XPLAN_TYPE_TABLE));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
| Id  | Operation                    | Name | E-Rows | A-Rows | Buffers |
--------------------------------------------------------------------------
|   1 |  UPDATE                      |      |        |      1 |       1 |
|*  2 |   TABLE ACCESS BY USER ROWID | YI01 |      1 |      1 |       1 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("MYID"=:SYS_B_2)

rollback;

*) approach 2: Suppress the index scan by adding an expression/function to the predicate column.

update /*+ gather_plan_statistics */ YI01 a
set MYCOL = 'F'
WHERE rowid = '&l_rid' and MYID + 0 = &l_id;

SELECT * FROM table(CAST (dbms_xplan.display_cursor(NULL,NULL, 'runstats_last') AS SYS.DBMS_XPLAN_TYPE_TABLE));

--------------------------------------------------------------------------
| Id  | Operation                    | Name | E-Rows | A-Rows | Buffers |
--------------------------------------------------------------------------
|   1 |  UPDATE                      |      |        |      1 |       1 |
|*  2 |   TABLE ACCESS BY USER ROWID | YI01 |      1 |      1 |       1 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("MYID"+0=1954)




Reference
---------

*) To get the current SQL execution plan in shared_pool.

Oracle 11g
SELECT * FROM table(dbms_xplan.display_cursor);

Oracle 10.1
SELECT * FROM table(CAST (dbms_xplan.display_cursor(NULL,NULL, 'runstats_last') AS SYS.DBMS_XPLAN_TYPE_TABLE));

-- 10gR1
SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'runstats_last'));

-- 11g
SELECT * FROM table(dbms_xplan.display_cursor(NULL,NULL, 'iostats memstats last partition'));

Tuesday, February 03, 2009

update and undo

Goal
----
Research how to skip updating some columns.
Logically, UPDATE table SET column = column is an option.
We need to check its resource usage.

Result
------
Updating a column to its current value still generates the same amount of undo and redo as a real change.
The undo volume is proportional to the size of the updated column values.

Tested on Oracle 11.1.0.7
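
Given this result, one way to actually avoid the undo/redo is to skip rows whose values would not change. A sketch (the :id and :new_c1 binds are hypothetical; DECODE is used because, unlike "=", it treats NULL = NULL as a match):

```sql
-- Only touch rows where the new value differs from the current one;
-- unchanged rows are filtered out and generate no undo or redo.
update t
   set c1 = :new_c1
 where id = :id
   and decode(c1, :new_c1, 1, 0) = 0;
```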

Setup
-----

drop table t;
drop table t1;

create table t nologging
as
select rownum id, lpad('x',500,'x') c1
from dual
connect by level <= 50000;

create table t1 nologging
as
select rownum id, lpad('x',500,'x') c1
from dual
connect by level <= 50000;

set serveroutput on

Benchmark
---------
exec runstats_pkg.rs_start;
update t set id = id, c1 = c1;
commit;
exec runstats_pkg.rs_middle;
update t1 set id = id;
commit;
exec runstats_pkg.rs_stop;


Run1 ran in 414 hsecs
Run2 ran in 119 hsecs
run 1 ran in 347.9% of the time

Name Run1 Run2 Diff
LATCH.shared pool 117 120 3
STAT...redo synch time 23 7 -16
STAT...redo buffer allocation 16 0 -16
LATCH.DML lock allocation 22 5 -17
STAT...physical reads 1,786 1,808 22
LATCH.redo writing 62 22 -40
LATCH.redo allocation 78 23 -55
STAT...CPU used by this sessio 73 16 -57
LATCH.enqueues 152 84 -68
STAT...commit cleanouts 979 1,120 141
STAT...rollback changes - undo 142 0 -142
LATCH.cache buffers lru chain 4,909 5,184 275
STAT...Elapsed Time 414 119 -295
STAT...DB time 418 119 -299
STAT...IMU Redo allocation siz 9,412 15,396 5,984
STAT...consistent changes 48,013 30 -47,983
STAT...consistent gets 51,861 1,878 -49,983
STAT...redo entries 54,046 3,579 -50,467
STAT...db block gets 56,883 2,111 -54,772
STAT...IMU undo allocation siz 63,992 1,080 -62,912
STAT...db block changes 110,023 7,330 -102,693
STAT...session logical reads 108,744 3,989 -104,755
STAT...physical read bytes 29,261,824 29,622,272 360,448
LATCH.cache buffers chains 487,509 26,861 -460,648
STAT...undo change vector size 32,362,024 2,399,528 -29,962,496
STAT...redo size 69,525,648 5,092,152 -64,433,496

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
521,401 45,711 -475,690 1,140.65%
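The undo figures above can be sanity-checked with a little arithmetic. A sketch using the numbers runstats reported (50,000 rows; Run1 rewrites both id and the 500-byte c1 column, Run2 rewrites only id):

```python
# Undo change vector sizes taken from the runstats output above.
run1_undo = 32_362_024   # UPDATE t  SET id = id, c1 = c1  (id + 500-byte c1)
run2_undo = 2_399_528    # UPDATE t1 SET id = id           (id only)
rows = 50_000

print(round(run1_undo / rows))          # 647 bytes of undo per row
print(round(run2_undo / rows))          # 48 bytes of undo per row
print(round(run1_undo / run2_undo, 1))  # 13.5x, tracking the column-size difference
```

The per-row undo grows from ~48 bytes to ~647 bytes once the 500-byte c1 column is touched, which is what "undo is proportional to the updated value size" means here.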

Monday, February 02, 2009

External table benchmark fixed-length vs. delimited format

Goal
----
Compare external table read performance for fixed-length format vs. tab- (or any single-character-) delimited format.

Result
------
The fixed-length flat file used about 10% less CPU and roughly 8 times fewer latches than the delimited format.
But the delimited format took about 50% less elapsed time in my test case (its external table was created PARALLEL, the fixed-length one NOPARALLEL).
One interesting point is that DB time for the fixed-length run was reported as 0.

Tested on Oracle 11.1.0.7


..............................Fixed-Length...Delimited........Diff
STAT...CPU used by this sessio 179 202 23 ***
STAT...Elapsed Time 179 91 -88 ***
STAT...DB time 0 421 421 xxx
LATCH.process queue reference 1 58,878 58,877 ***
STAT...physical read bytes 212,992 0 -212,992
STAT...session uga memory max 604,728 115,320 -489,408
STAT...session pga memory max 720,896 131,072 -589,824

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
7,629 64,444 56,815 11.84%


# Fixed-length records are processed faster than records terminated by a string.
# Fixed-length fields are processed faster than delimited fields.

Advice
------
When running a single batch job, the delimited format was about twice as fast, using a little more CPU and less PGA memory.
When running multiple concurrent batch jobs, the fixed-length format is better and scales out well,
but it requires more PGA memory and more physical reads, since the padded file is larger.
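To see why the fixed-length file is larger, here is a hypothetical Python sketch that renders one emp record in both layouts, mirroring the rpad() widths used by the emp.txt spool query above (10/20/20/8+1/21/25):

```python
# One sample record (values are illustrative, not from the real table).
empno, ename, job = 7369, "SMITH", "CLERK"
sal = "00800.00"                        # trim(to_char(sal,'00000.00'))
hiredate = "12/17/1980 00:00:00"
dml_time = "02/02/2009 14:56:51.766"

# Fixed-length: every field padded to its full column width.
fixed = (str(empno).ljust(10) + ename.ljust(20) + job.ljust(20)
         + sal + " " + hiredate.ljust(21) + dml_time.ljust(25))

# Delimited: only the data plus five tab separators.
delim = "\t".join([str(empno), ename, job, sal, hiredate, dml_time])

print(len(fixed))  # 105 bytes per record, regardless of content
print(len(delim))  # 69 bytes for the same data
```

Every fixed-length record costs the full 105 bytes, so the file (and the physical reads to scan it) grows with the declared widths rather than the actual data.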

Solution
--------
Set up a typical emp test table, generate 700,000 rows, and run the benchmark.

setup
-----

drop TABLE EMP;
CREATE TABLE EMP
(
EMPNO NUMBER(4) NOT NULL,
ENAME VARCHAR2(10 BYTE),
JOB VARCHAR2(9 BYTE),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2),
DML_TIME TIMESTAMP(3) DEFAULT systimestamp
)
;

alter table emp add CONSTRAINT EMP_PK
PRIMARY KEY
(EMPNO) using INDEX NOLOGGING
;

SET DEFINE OFF;

Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7369, 'SMITH', 'CLERK', 7902, TO_DATE('12/17/1980 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 30847.56, 20, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO, DML_TIME)
Values (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('02/20/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 1600, 301, 30, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO, DML_TIME)
Values (7521, 'WARD', 'SALESMAN', 7698, TO_DATE('02/22/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 1250, 501, 30, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7566, 'JONES', 'MANAGER', 7839, TO_DATE('04/02/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 2975, 20, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO, DML_TIME)
Values (7654, 'MARTIN', 'SALESMAN', 7698, TO_DATE('09/28/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 1250, 1401, 30, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE('05/01/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 2850, 30, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7782, 'CLARK', 'MANAGER', 7839, TO_DATE('06/09/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 2450, 10, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7788, 'SCOTT', 'ANALYST', 7566, TO_DATE('12/09/1982 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 3000, 20, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7839, 'KING', 'PRESIDENT', TO_DATE('11/17/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 5000, 10, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO, DML_TIME)
Values (7844, 'TURNER', 'SALESMAN', 7698, TO_DATE('09/08/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 1500, 1, 30, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7876, 'ADAMS', 'CLERK', 7788, TO_DATE('01/12/1983 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 1100, 20, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7900, 'JAMES', 'CLERK', 7698, TO_DATE('12/03/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 950, 30, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7902, 'FORD', 'ANALYST', 7566, TO_DATE('12/03/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 3000, 20, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into SCOTT.EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO, DML_TIME)
Values (7934, 'MILLER', 'CLERK', 7782, TO_DATE('01/23/1982 00:00:00', 'MM/DD/YYYY HH24:MI:SS'), 1300, 10, TO_TIMESTAMP('2/2/2009 2:56:51.766 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
COMMIT;

CREATE OR REPLACE DIRECTORY data_dir AS '/u01/exp';

set linesize 1000
set pagesize 0
set serveroutput off
set trimspool on
set trimout on
set head off
set echo off
set term off
set verify off
set feedback off

-- Fixed length format external table

spool emp.txt
with a as (select rownum from dual connect by level <= 50000)
select
rpad(EMPNO, 10, ' ')||rpad(ENAME, 20, ' ')||rpad(job, 20, ' ')||trim(to_char(sal,'00000.00'))||' '
||rpad(to_char(HIREDATE,'MM/DD/YYYY HH24:MI:SS'), 21, ' ')
||rpad(to_char(systimestamp,'MM/DD/YYYY HH24:MI:SS.FF3'), 25, ' ')
from scott.emp, a;
spool off

drop table emp_ext_fix;
create table emp_ext_fix
(
EMPNO NUMBER(4) ,
ENAME VARCHAR2(10 BYTE),
JOB VARCHAR2(9 BYTE),
SAL Number(7,2),
HIREDATE date,
DML_Time TIMESTAMP
)
organization external
(type oracle_loader
default directory data_dir
access parameters
(
records delimited by newline
DATE_CACHE 3000
badfile data_dir:'emp.bad'
logfile data_dir:'emp.log'
fields LRtrim
missing field values are null
REJECT ROWS WITH ALL NULL FIELDS
(
empno position(1:10) INTEGER EXTERNAL,
ename position(*:+20) CHAR ,
job position(*:+20) char ,
sal position(*:+9) char ,
HIREDATE position(*:+21) DATE "MM/DD/YYYY HH24:MI:SS",
DML_TIME position(*:+25) CHAR(25) date_format TIMESTAMP MASK "MM/DD/YYYY HH24:MI:SS.FF3"
)
)
LOCATION ('emp.txt')
)
NoPARALLEL
REJECT LIMIT UNLIMITED;

select * from EMP_ext_fix where rownum < 5;


--Tab delimited or csv file

spool emp.csv
with a as (select rownum from dual connect by level <= 50000)
select EMPNO||chr(9)||ENAME||chr(9)||job||chr(9)||sal||chr(9)
||to_char(HIREDATE,'MM/DD/YYYY HH24:MI:SS')||chr(9)
||to_char(systimestamp,'MM/DD/YYYY HH24:MI:SS.FF3')
from scott.emp,a;
spool off

drop TABLE SCOTT.EMP_ext_csv;
CREATE TABLE SCOTT.EMP_ext_csv
(
EMPNO NUMBER(4) ,
ENAME VARCHAR2(10 BYTE),
JOB VARCHAR2(9 BYTE),
SAL Number(7,2),
HIREDATE date,
DML_Time TIMESTAMP
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir
ACCESS PARAMETERS
(
records delimited by newline
badfile data_dir:'emp.bad'
logfile data_dir:'emp.log'
fields terminated by 0x'09'
missing field values are null
(
EMPNO, ENAME, JOB, SAL, HIREDATE DATE "MM/DD/YYYY HH24:MI:SS",
dml_time char(25) date_format TIMESTAMP MASK "MM/DD/YYYY HH24:MI:SS.FF3"
)
)
LOCATION ('emp.csv')
)
PARALLEL
REJECT LIMIT UNLIMITED;
select /* */ * from emp_ext_csv a where rownum < 5;

alter system flush buffer_cache;

Benchmark
---------

exec runstats_pkg.rs_start;
select count(HIREDATE) from emp_ext_csv;
exec runstats_pkg.rs_middle;
select count(HIREDATE) from emp_ext_fix;
exec runstats_pkg.rs_stop;

begin
runstats_pkg.rs_start;
for c in (select ename,sal,hiredate from emp_ext_csv)
loop
Null;
end loop;
runstats_pkg.rs_middle;
for c in (select ename,sal,hiredate from emp_ext_fix)
loop
Null;
end loop;
runstats_pkg.rs_stop;
end;
/

begin
runstats_pkg.rs_start;
for c in (select * from emp_ext_fix)
loop
Null;
end loop;
runstats_pkg.rs_middle;
for c in (select * from emp_ext_csv)
loop
Null;
end loop;
runstats_pkg.rs_stop;
end;
/

*) Output for last test case:

Run1 ran in 179 hsecs
Run2 ran in 91 hsecs
run 1 ran in 196.7% of the time

Name Run1 Run2 Diff
STAT...opened cursors current 1 0 -1
STAT...user I/O wait time 3 2 -1
STAT...db block changes 36 37 1
STAT...shared hash latch upgra 4 3 -1
STAT...calls to kcmgas 0 1 1
STAT...redo ordering marks 0 1 1
STAT...redo subscn max counts 0 1 1
STAT...Batched IO single block 1 0 -1
STAT...Batched IO block miss c 1 0 -1
STAT...Batched IO double miss 1 0 -1
STAT...active txn count during 0 1 1
STAT...cleanout - number of kt 0 1 1
STAT...queries parallelized 0 1 1
STAT...DFO trees parallelized 0 1 1
STAT...Parallel operations not 0 1 1
LATCH.process allocation 0 1 1
LATCH.ksuosstats global area 0 1 1
LATCH.OS process allocation 1 0 -1
LATCH.KMG MMAN ready and start 1 0 -1
LATCH.mostly latch-free SCN 0 1 1
LATCH.lgwr LWN SCN 0 1 1
LATCH.Consistent RBA 0 1 1
LATCH.active checkpoint queue 1 0 -1
LATCH.archive process latch 0 1 1
LATCH.redo allocation 3 4 1
LATCH.list of block allocation 0 1 1
LATCH.Change Notification Hash 1 0 -1
LATCH.space background task la 1 0 -1
LATCH.session timer 1 0 -1
LATCH.In memory undo latch 3 1 -2
LATCH.job_queue_processes para 2 0 -2
LATCH.ASM db client latch 0 2 2
STAT...db block gets 30 27 -3
STAT...db block gets from cach 30 27 -3
STAT...shared hash latch upgra 0 3 3
LATCH.undo global data 1 4 3
LATCH.channel operations paren 7 11 4
LATCH.redo writing 1 5 4
STAT...table scans (short tabl 1 6 5
STAT...lob reads 0 5 5
STAT...undo change vector size 2,472 2,480 8
STAT...session cursor cache hi 1 10 9
STAT...free buffer requested 17 7 -10
LATCH.messages 16 6 -10
LATCH.cache buffers lru chain 12 1 -11
STAT...physical read total IO 13 0 -13
STAT...physical reads 13 0 -13
STAT...physical reads cache 13 0 -13
STAT...physical read IO reques 13 0 -13
LATCH.enqueues 82 68 -14
LATCH.object queue header heap 16 1 -15
LATCH.resmgr group change latc 0 16 16
LATCH.resmgr:actses change gro 1 17 16
LATCH.compile environment latc 0 16 16
LATCH.PL/SQL warning settings 0 16 16
LATCH.Real-time plan statistic 0 17 17
LATCH.kokc descriptor allocati 2 20 18
LATCH.parallel query stats 1 20 19
STAT...CPU used by this sessio 179 202 23 ***
STAT...recursive cpu usage 171 199 28
LATCH.object queue header oper 34 6 -28
STAT...enqueue releases 5 34 29
LATCH.session state list latch 3 32 29
LATCH.simulator hash latch 14 45 31
STAT...enqueue conversions 0 32 32
LATCH.parameter table manageme 0 32 32
LATCH.resmgr:free threads list 0 32 32
LATCH.dummy allocation 1 33 32
LATCH.resmgr:active threads 0 32 32
LATCH.error message lists 0 33 33
STAT...session cursor cache co 7 43 36
LATCH.JS queue state obj latch 36 0 -36
STAT...enqueue requests 5 50 45
STAT...parse count (total) 7 53 46
STAT...execute count 7 53 46
LATCH.enqueue hash chains 86 133 47
LATCH.checkpoint queue latch 17 65 48
STAT...workarea memory allocat -26 37 63
STAT...user calls 0 64 64
LATCH.query server freelists 1 68 67
STAT...redo size 2,976 3,044 68
STAT...opened cursors cumulati 7 78 71
LATCH.active service list 8 82 74
LATCH.SQL memory manager worka 5 82 77
STAT...Elapsed Time 179 91 -88 ***
STAT...index scans kdiixs1 18 108 90
STAT...table scan blocks gotte 18 114 96
LATCH.client/application info 1 97 96
LATCH.process queue 1 99 98
LATCH.parallel query alloc buf 1 161 160
LATCH.session idle bit 3 167 164
LATCH.session allocation 3 189 186
STAT...recursive calls 7,043 7,300 257
STAT...no work - consistent re 77 474 397
STAT...consistent gets from ca 75 490 415
STAT...DB time 0 421 421 xxx
LATCH.shared pool 14 517 503
STAT...table fetch by rowid 1,695 186 -1,509
STAT...calls to get snapshot s 1,697 129 -1,568
LATCH.MinActiveScn Latch 1,689 106 -1,583
STAT...rows fetched via callba 1,676 78 -1,598
STAT...index fetch by key 1,676 68 -1,608
LATCH.row cache objects 104 1,881 1,777
STAT...table scan rows gotten 360 2,280 1,920
STAT...buffer is not pinned co 3,415 533 -2,882
LATCH.cache buffers chains 5,332 1,320 -4,012
STAT...PX local messages recv 0 4,262 4,262
STAT...PX local messages sent 0 4,262 4,262
STAT...consistent gets 5,112 706 -4,406
STAT...consistent gets from ca 5,112 706 -4,406
STAT...session logical reads 5,142 733 -4,409
STAT...consistent gets - exami 5,028 210 -4,818
LATCH.process queue reference 1 58,878 58,877 ***
STAT...cell physical IO interc 212,992 0 -212,992
STAT...physical IO disk bytes 212,992 0 -212,992
STAT...physical read total byt 212,992 0 -212,992
STAT...physical read bytes 212,992 0 -212,992
STAT...session uga memory max 604,728 115,320 -489,408
STAT...session uga memory 604,728 84,664 -520,064
STAT...session pga memory 720,896 131,072 -589,824
STAT...session pga memory max 720,896 131,072 -589,824

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
7,629 64,444 56,815 11.84%

PL/SQL procedure successfully completed.


Reference
---------
Oracle® Database Utilities 11g Release 1 (11.1)
12 External Tables Concepts: Performance Hints When Using External Tables



Thursday, January 22, 2009

Output from 10053 trace and dbms_xplan.display_cursor

Oracle 10.1.0.4 on Red Hat Enterprise Linux 3

*************************************
PARAMETERS WITH ALTERED VALUES
******************************
parallel_threads_per_cpu = 4
db_file_multiblock_read_count = 32
optimizer_mode = first_rows_10
cursor_sharing = similar
optimizer_index_caching = 25
query_rewrite_enabled = false
query_rewrite_integrity = stale_tolerated
optimizer_dynamic_sampling = 0
*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
optimizer_features_enable = 10.1.0.4

***************************************
BASE STATISTICAL INFORMATION
***********************
Table stats Table: PO_ITEM Alias: PO_ITEM
TOTAL :: CDN: 81294 NBLKS: 3257 AVG_ROW_LEN: 132
Index stats
Index: PO_ITEM_POID COL#: 2
TOTAL :: LVLS: 2 #LB: 365 #DK: 80100 LB/K: 1 DB/K: 1 CLUF: 3191
Index: PO_ITEM_STATUS_RANDOM_UPDDT COL#: 4 6
TOTAL :: LVLS: 2 #LB: 507 #DK: 80100 LB/K: 1 DB/K: 1 CLUF: 45647
Index: PO_ITEM_STATUS_UPDDT COL#: 3 6
TOTAL :: LVLS: 2 #LB: 507 #DK: 80100 LB/K: 1 DB/K: 1 CLUF: 80100
***************************************
SINGLE TABLE ACCESS PATH
COLUMN: POID(NUMBER) Col#: 2 Table: PO_ITEM Alias: PO_ITEM
Size: 5 NDV: 80100 Nulls: 0 Density: 1.2484e-05 Min: 40 Max: 88988
COLUMN: STATUS_RAN(CHARACTER) Col#: 4 Table: PO_ITEM Alias: PO_ITEM
Size: 3 NDV: 20 Nulls: 0 Density: 6.1505e-06
Histogram: Freq #Bkts: 20 UncompBkts: 3985 EndPtVals: 20
TABLE: PO_ITEM Alias: PO_ITEM
Original Card: 81294 Rounded: 1 Computed: 0.00 Non Adjusted: 0.00
Access Path: table-scan Resc: 393 Resp: 393
Access Path: index (equal)
Index: PO_ITEM_POID
rsc_cpu: 24314 rsc_io: 4
ix_sel: 1.2670e-05 ix_sel_with_filters: 1.2670e-05
Access Path: index (scan)
Index: PO_ITEM_STATUS_RANDOM_UPDDT
rsc_cpu: 23778 rsc_io: 4
ix_sel: 6.2422e-06 ix_sel_with_filters: 6.2422e-06
Access Path: index (equal)
Index: PO_ITEM_POID
rsc_cpu: 17832 rsc_io: 3
ix_sel: 1.2670e-05 ix_sel_with_filters: 1.2670e-05
Access Path: index (index-only)
Index: PO_ITEM_STATUS_RANDOM_UPDDT
rsc_cpu: 17632 rsc_io: 3
ix_sel: 6.2422e-06 ix_sel_with_filters: 6.2422e-06
SORT resource Sort statistics
Sort width: 148 Area size: 524288 Max Area size: 26214400
Degree: 1
Blocks to Sort: 1 Row size: 21 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 15225730
Total Temp space used: 0
BEST_CST: 4.00 PATH: 4 Degree: 1
***************************************
GENERAL PLANS
***********************
Join order[1]: PO_ITEM[PO_ITEM]#0
ORDER BY sort
SORT resource Sort statistics
Sort width: 148 Area size: 524288 Max Area size: 26214400
Degree: 1
Blocks to Sort: 1 Row size: 156 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 15225730
Total Temp space used: 0
Best so far: TABLE#: 0 CST: 5 CDN: 1 BYTES: 132
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:80000
SORT resource Sort statistics
Sort width: 148 Area size: 524288 Max Area size: 26214400
Degree: 1
Blocks to Sort: 1 Row size: 156 Total Rows: 1
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 15225730
Total Temp space used: 0
Final - All Rows Plan:
JOIN ORDER: 1
CST: 5 CDN: 1 RSC: 5 RSP: 5 BYTES: 132
IO-RSC: 4 IO-RSP: 4 CPU-RSC: 15249507 CPU-RSP: 15249507
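The rsc_io figures in this trace line up with the index cost formula from Cost Based Oracle Fundamentals: blevel + ceil(ix_sel * leaf_blocks) + ceil(ix_sel_with_filters * clustering_factor), where the last term drops out for index-only access. A sketch plugging in the index statistics printed above:

```python
import math

def index_io_cost(blevel, leaf_blocks, clustering_factor,
                  ix_sel, ix_sel_with_filters, index_only=False):
    """I/O cost of an index access: branch levels, plus leaf blocks
    visited, plus table blocks visited (skipped for index-only)."""
    cost = blevel + math.ceil(ix_sel * leaf_blocks)
    if not index_only:
        cost += math.ceil(ix_sel_with_filters * clustering_factor)
    return cost

# PO_ITEM_POID: LVLS 2, #LB 365, CLUF 3191, ix_sel 1.2670e-05
print(index_io_cost(2, 365, 3191, 1.2670e-05, 1.2670e-05))        # 4 (rsc_io: 4)
print(index_io_cost(2, 365, 3191, 1.2670e-05, 1.2670e-05, True))  # 3 (rsc_io: 3)

# PO_ITEM_STATUS_RANDOM_UPDDT: LVLS 2, #LB 507, CLUF 45647, ix_sel 6.2422e-06
print(index_io_cost(2, 507, 45647, 6.2422e-06, 6.2422e-06))        # 4 (rsc_io: 4)
print(index_io_cost(2, 507, 45647, 6.2422e-06, 6.2422e-06, True))  # 3 (rsc_io: 3)
```

With such tiny selectivities each ceil() term rounds up to 1, so both indexes cost 4 with the table visit and 3 index-only, exactly as the trace reports.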


SQL> SELECT /*+ gather_plan_statistics */ * FROM po_item WHERE poid = :a AND STATUS_random = :b ORDER BY poitemid;

SQL> SELECT * FROM table (Cast(dbms_xplan.display_cursor(NULL,NULL, 'runstats_last') as SYS.DBMS_XPLAN_TYPE_TABLE));

PLAN_TABLE_OUTPUT
-----------------

SQL_ID cp44zcmqma34f, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM po_item WHERE poid = :a AND STATUS_random = :b ORDER BY poitemid

Plan hash value: 3970202189

------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | E-Rows | A-Rows | Buffers | Reads | Writes | A-Time |
------------------------------------------------------------------------------------------------------------------------------
| 1 | SORT ORDER BY | | 1 | 0 | 2377 | 0 | 0 |00:00:00.01 |
|* 2 | TABLE ACCESS BY INDEX ROWID| PO_ITEM | 1 | 0 | 2377 | 0 | 0 |00:00:00.01 |
|* 3 | INDEX RANGE SCAN | PO_ITEM_STATUS_RANDOM_UPDDT | 1 | 4153 | 29 | 0 | 0 |00:00:00.02 |
------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
2 - SEL$1 / PO_ITEM@SEL$1
3 - SEL$1 / PO_ITEM@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("POID"=:A)
3 - access("STATUS_RANDOM"=:B)


27 rows selected.

SQL> set autot on
SQL>
SELECT /* */ * FROM po_item WHERE poid = :a AND STATUS_random = :b ORDER BY poitemid;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 3115544074

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 132 | 5 (20)| 00:00:01 |
| 1 | SORT ORDER BY | | 1 | 132 | 5 (20)| 00:00:01 |
|* 2 | TABLE ACCESS BY INDEX ROWID| PO_ITEM | 1 | 132 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | PO_ITEM_POID | 1 | | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("STATUS_RANDOM"=:B)
3 - access("POID"=TO_NUMBER(:A))