
Tuning an Oracle insert SQL March 4, 2010

Posted by sendtoshailesh in Uncategorized.

One of my ETL team members reported that his insert statement was taking a very long time to execute. It normally completes in about 5 minutes as part of an ETL job, but it was now taking 2 hours. This presented an opportunity for DML tuning, and inserts have their own execution characteristics. While searching I found a good note, which I am blogging here for my ready reference.

There are many factors that affect Oracle insert SQL performance and many things you can do to tune an insert statement. There are many types of Oracle inserts, each with distinct performance characteristics.

The fastest Oracle table insert rate I have ever seen was 400,000 rows per second, about 24 million rows per minute, using super-fast RAM disk (SSD), but Greg Rahn of Oracle notes SQL insert rates of upwards of 6 million rows per second using the Exadata firmware:

“One of the faster bulk (parallel nologging direct path from external table using direct path compression) load rates I’ve seen is just over 7.7 billion rows in under 20 minutes which equates to around 385,000,000 per minute or about 6,416,666 per second.

All the CPUs are running at around 99% user CPU during that load. That was loading to spinning rust (Exadata Storage). It would be even faster had compression not been used. That was on a HP Oracle DB Machine (64 Intel Harpertown CPU cores).”

Here are some general guidelines for tuning insert statements.

a – Manage segment header contention for parallel inserts – Make sure to define multiple freelists (or freelist groups) to remove contention on the table's segment header. Multiple freelists add additional segment header blocks, removing the bottleneck. You can also use Automatic Segment Space Management (bitmap freelists) to support parallel DML.
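As a sketch of both approaches (the table, tablespace, and file names here are illustrative, not from the original post):

```sql
-- Manually managed segment with multiple freelists
create table etl_stage (id number, payload varchar2(100))
storage (freelists 4 freelist groups 2);

-- Or let Automatic Segment Space Management (bitmap freelists) handle it:
-- any segment created in an ASSM tablespace qualifies
create tablespace etl_data
  datafile '/u01/oradata/etl_data01.dbf' size 500m
  segment space management auto;
```

Freelist groups mainly matter for RAC; on a single instance, multiple freelists (or simply ASSM) are usually enough.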

b – Parallelize the load – You can invoke parallel DML (i.e. using the PARALLEL and APPEND hint) to have multiple inserts into the same table. For this INSERT optimization, make sure to define multiple freelists and use the SQL “APPEND” option. If you submit parallel jobs to insert against the table at the same time, using the APPEND hint may cause serialization, removing the benefit of parallel jobstreams.
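Parallel DML must be explicitly enabled in the session before the insert runs, otherwise the PARALLEL hint applies only to the query side. A minimal sketch (target_tab and source_tab are illustrative names):

```sql
alter session enable parallel dml;

insert /*+ append parallel(target_tab, 4) */ into target_tab
select /*+ parallel(source_tab, 4) */ * from source_tab;

commit;  -- direct-path-inserted rows are not visible until commit
```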

c – APPEND into tables – By using the APPEND hint, you ensure that Oracle always grabs “fresh” data blocks by raising the high-water-mark for the table. If you are doing parallel insert DML, the Append mode is the default and you don’t need to specify an APPEND hint. If you’re going w/ APPEND, consider putting the table into NOLOGGING mode, which will allow Oracle to avoid almost all redo logging.

insert /*+ append */ into customer values ('hello','there');
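Building on the example above, a sketch of combining APPEND with NOLOGGING (customer_staging is an illustrative source table, not from the original post):

```sql
alter table customer nologging;

insert /*+ append */ into customer
select * from customer_staging;

commit;  -- the loading session cannot query the table until it commits

alter table customer logging;  -- restore normal logging afterwards
```

Remember that blocks loaded under NOLOGGING cannot be recovered from the redo stream, so take a backup after the load.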

d – Use a large blocksize – By defining large (i.e. 32k) blocksizes for the target table, you reduce I/O because more rows fit onto a block before a “block full” condition (as set by PCTFREE) unlinks the block from the freelist.
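Oracle requires a matching buffer cache for any non-default block size before such a tablespace can be created. A sketch (the file path and object names are illustrative):

```sql
-- allocate a buffer cache for the 32k block size first
alter system set db_32k_cache_size=64m scope=both;

create tablespace big_blocks
  datafile '/u01/oradata/big_blocks01.dbf' size 500m
  blocksize 32k;

create table load_target tablespace big_blocks
  as select * from dba_objects;
```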

e – Disable/drop indexes – It's far faster to rebuild indexes after the data load, all at once. Indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
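Marking an index unusable avoids having to re-create its definition; a sketch (etl_target_idx is an illustrative index name):

```sql
alter index etl_target_idx unusable;
alter session set skip_unusable_indexes = true;

-- ... run the bulk insert here ...

alter index etl_target_idx rebuild nologging parallel 4;
alter index etl_target_idx noparallel;  -- reset the degree afterwards
```

Note that a unique index cannot simply be skipped this way; a unique constraint must be disabled explicitly before the load.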

f – RAM disk – You can use high-speed solid-state disk (RAM-SAN) to make Oracle inserts run up to 300x faster than platter disk.

Blocksize and insert performance

Here is my small single-CPU, single-user benchmark showing the performance of loads into a larger blocksize:

alter system set db_2k_cache_size=64m scope=spfile;
alter system set db_16k_cache_size=64m scope=spfile;
startup force
create tablespace twok blocksize 2k;        -- using ASM, defaults to 100m
create tablespace sixteenk blocksize 16k;
create table load2k tablespace twok as select * from dba_objects;  -- creates 8k rows
drop table load2k;                          -- first create was to preload buffers

set timing on;
create table load2k tablespace twok as select * from dba_objects;
create table load16k tablespace sixteenk as select * from dba_objects;

For a larger sample, I re-issued the create processes with:

select * from dba_source;  -- (80k rows)

Even with this super-tiny sample on Linux using Oracle10g (with ASM) the results were impressive, with a significant performance improvement using large blocksizes.

                  2k blksze    16k blksze
8k table size     4.33 secs    4.16 secs
80k table size    8.74 secs    8.31 secs

