Oracle

Author: 유건데이타   Posted: 2015-05-18
Title: Faster throughput on full table scans
Subject: Faster throughput on full table scans
------------------------------------------------
db_file_multiblock_read_count only affects the performance of full table scans.
Oracle has a maximum I/O size of 64KB, hence db_block_size *
db_file_multiblock_read_count must be less than or equal to 64KB.
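
As a rough sketch of how the two parameters combine (the 8KB block size and the
read count of 8 below are illustrative values, not settings taken from this
post), an init.ora fragment that respects the 64KB ceiling might look like:

    # init.ora sketch: keep db_block_size * db_file_multiblock_read_count <= 64KB
    db_block_size                 = 8192    # fixed when the database is created
    db_file_multiblock_read_count = 8       # 8 x 8KB = 64KB per multiblock read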

If your query is really doing an index range scan then the performance
of full scans is irrelevant. In order to improve the performance of this
type of query it is important to reduce the number of blocks that
the 'interesting' part of the index is contained within.
Obviously db_block_size has the most impact here.
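
One way to see how many blocks an index actually occupies (a sketch;
ix_customer_name is a hypothetical index name) is to validate its structure and
then read INDEX_STATS, which describes the index just analysed:

    ANALYZE INDEX ix_customer_name VALIDATE STRUCTURE;

    SELECT name, height, lf_blks, br_blks, pct_used
      FROM index_stats;   -- one row, for the index analysed in this session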

Historically Informix has not been able to modify its database block size,
and has had a fixed 2KB block.

On most Unix platforms Oracle can use up to 8KB.
(Some, e.g. Sequent, allow 16KB.)
This means that for the same size of B-Tree index, Oracle with an 8KB block
size needs only a quarter as many block reads, so it can read the index's
contents in roughly 1/4 of the time that Informix with a 2KB block can.

You should also consider whether the PCTFREE value used for your index is
appropriate. If it is too large then you will be wasting space
in each index block. (It is too large if you are not going to get any
entry-size extension or any new rows for existing index values.
NB: this is usually only a real consideration for large indexes - 10,000 entries is small.)
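
As an illustration (the table and index names below are hypothetical), PCTFREE
is set when the index is created or rebuilt; an index whose entries will not
grow and whose existing key values will not gain new rows can use a very low
value:

    -- Hypothetical names; PCTFREE 2 leaves almost no free space in each
    -- index block, which suits keys that never grow or gain new entries.
    CREATE INDEX ix_orders_custno
        ON orders (customer_no)
        PCTFREE 2;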

db_file_simultaneous_writes has no direct relevance to index re-balancing.
(PS: In the U.K. we benchmarked against Informix, Sybase, Unify and
HP/Allbase for the database server application that HP uses internally to
monitor and control its tape drive manufacturing lines. They chose
Oracle because:
  - We outperformed Informix.
  - Sybase was too slow AND too unreliable.
  - Unify was short on functionality and SLOW.
  - HP/Allbase couldn't match the availability requirements and wasn't as functional.
Informix had problems demonstrating the ability to do hot backups without
severely affecting the system throughput.
HP benchmarked all DB vendors on both 9000/800 and 9000/700 machines with
different disks (ie: HP-IB and SCSI). Oracle came out ahead in all
configurations.

NNB: It's always worth throwing in a simulated system failure whilst the
benchmark is in progress. Informix has a history of not coping gracefully;
that is, they usually need some manual intervention to perform the database
recovery.)


I have a prospective client who is running a stripped-down, souped-up version
of Informix with no catalytic converter. One of their queries boils down to an
index range scan on 10,000 records. How can I achieve better throughput
on a single-drive, single-CPU machine (HP-UX) without using raw devices?
I had heard that rebuilding the database with a block size greater than
the OS block size would yield better performance. I also tried changing
db_file_multiblock_read_count to 32 without much improvement.
Adjusting db_writers to two did not help either.
Also, will adjusting db_file_simultaneous_writes help with
the maintenance of an index during rebalancing operations?