
Archive for the ‘Performance Tuning’ Category

Note: One of our visitors and my friend Kavita Yadav asked this question by posting a comment. Thanks, Kavita, for your contribution. Keep visiting/commenting!

There are over 800 wait events in Oracle, but you will frequently come across only a few of them. Having worked on performance tuning for more than 4 years, I have seen the same handful of wait events again and again. In this post I try to cover the most common of them.

db file sequential read

Possible Causes :
· Use of an unselective index
· Fragmented indexes
· High I/O on a particular disk or mount point
· Bad application design
· Index read performance can be affected by a slow I/O subsystem and/or poor database file layout, which result in a higher average wait time

Actions :
· Check indexes on the table to ensure that the right index is being used

· Check the column order of the index with the WHERE clause of the top SQL statements

· Rebuild indexes with a high clustering factor

· Use partitioning to reduce the amount of blocks being visited

· Make sure optimizer statistics are up to date

· Relocate ‘hot’ datafiles

· Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool

· Inspect the execution plans of the SQL statements that access data through indexes

· Is it appropriate for the SQL statements to access data through index lookups?

· Would full table scans be more efficient?

· Do the statements use the right driving table?

· The optimization goal is to minimize both the number of logical and physical I/Os.

Remarks:
· The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
· Significant db file sequential read wait time is most likely an application issue.
· If the DBA_INDEXES.CLUSTERING_FACTOR of the index approaches the number of blocks in the table, then most of the rows in the table are ordered. This is desirable.

· However, if the clustering factor approaches the number of rows in the table, it means the rows in the table are randomly ordered and thus it requires more I/Os to complete the operation. You can improve the index’s clustering factor by rebuilding the table so that rows are ordered according to the index key and rebuilding the index thereafter.

· The OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING initialization parameters can influence the optimizer to favour the nested loops operation and choose an index access path over a full table scan.
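
As a quick check, you can compare an index’s clustering factor with the number of blocks and rows in its table. The following is a minimal sketch against the standard dictionary views; ‘SCOTT’ is a placeholder schema. A clustering factor close to BLOCKS is good, while one approaching NUM_ROWS indicates randomly ordered rows.

SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
FROM dba_indexes i, dba_tables t
WHERE i.table_owner = t.owner
AND i.table_name = t.table_name
AND i.table_owner = 'SCOTT' -- placeholder schema
ORDER BY i.clustering_factor DESC;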

db file scattered read

Possible Causes :
· The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT) to be read into the SGA from disk.
· Full Table scans

· Fast Full Index Scans

Actions :
· Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT

· Partition pruning to reduce number of blocks visited

· Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool
· Optimize the SQL statement that initiated most of the waits. The goal is to minimize the number of physical
and logical reads.
· Should the statement access the data by a full table scan or index FFS? Would an index range or unique scan
 be more efficient? Does the query use the right driving table?
· Are the SQL predicates appropriate for hash or merge join?
· If full scans are appropriate, can parallel query improve the response time?
· The objective is to reduce the demands for both the logical and physical I/Os, and this is best
achieved through SQL and application tuning.
· Make sure all statistics are representative of the actual data. Check the LAST_ANALYZED date.

Remarks:
· If an application that has been running fine for a while suddenly clocks a lot of time on the db file scattered read event and there hasn’t been a code change, check whether one or more indexes has been dropped or become unusable, or whether the statistics have gone stale.
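
To check both conditions, a minimal sketch against the dictionary views (‘SCOTT’ is a placeholder schema):

SELECT index_name, status
FROM dba_indexes
WHERE owner = 'SCOTT' -- placeholder schema
AND status = 'UNUSABLE';

SELECT table_name, num_rows, last_analyzed
FROM dba_tables
WHERE owner = 'SCOTT' -- placeholder schema
ORDER BY last_analyzed NULLS FIRST;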

log file parallel write

Possible Causes :
· LGWR waits while writing the contents of the redo log buffer to the online redo log files on disk
· I/O wait on the subsystem holding the online redo log files

Actions :
· Reduce the amount of redo being generated

· Do not leave tablespaces in hot backup mode for longer than necessary

· Do not use RAID 5 for redo log files

· Use faster disks for redo log files

· Ensure that the disks holding the archived redo log files and the online redo log files are separate so as to avoid contention

· Consider using NOLOGGING or UNRECOVERABLE options in SQL statements
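
For example, bulk maintenance work can be done with NOLOGGING to cut redo generation. This is a sketch with hypothetical object names; remember that NOLOGGING operations are not recoverable from the redo, so take a backup afterwards.

CREATE INDEX ord_date_ix ON orders (order_date) NOLOGGING;

ALTER TABLE sales_hist NOLOGGING;
INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales_stage; -- direct-path load
COMMIT;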

log file sync:


Possible Causes :
· Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete
Actions :
· Tune LGWR to get good throughput to disk, e.g. do not put redo logs on RAID 5

· Reduce overall number of commits by batching transactions so that there are fewer distinct COMMIT operations


buffer busy waits:

Possible Causes :
· Buffer busy waits are common in an I/O-bound Oracle system.
· The two main cases where this can occur are:
· Another session is reading the block into the buffer
· Another session holds the buffer in an incompatible mode to our request
· These waits indicate read/read, read/write, or write/write contention.
· The Oracle session is waiting to pin a buffer. A buffer must be pinned before it can be read or modified. Only one process can pin a buffer at any one time.

· This wait can be intensified by a large block size, as more rows can be contained within the block

· This wait happens when a session wants to access a database block in the buffer cache but it cannot, as the buffer is “busy”

· It is also often due to several processes repeatedly reading the same blocks (e.g. lots of sessions scanning the same index or data block)

Actions :
· The main way to reduce buffer busy waits is to reduce the total I/O on the system

· Depending on the block type, the actions will differ
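
V$WAITSTAT breaks buffer busy waits down by block class, which tells you whether the contention is on data blocks, segment headers, or undo:

SELECT class, count, time
FROM v$waitstat
ORDER BY time DESC;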

Data Blocks

· Eliminate HOT blocks from the application. Check for repeatedly scanned / unselective indexes.

· Try rebuilding the object with a higher PCTFREE so that you reduce the number of rows per block.

· Check for ‘right-hand indexes’ (indexes that get inserted into at the same point by many processes).

· Increase INITRANS and MAXTRANS and reduce PCTUSED. This will make the table less dense.

· Reduce the number of rows per block

Segment Header

· Increase the number of FREELISTs and FREELIST GROUPs

Undo Header

· Increase the number of Rollback Segments

free buffer waits:

Possible Causes :
· This means we are waiting for a free buffer but there are none available in the cache because there are too many dirty buffers in the cache

· Either the buffer cache is too small or the DBWR is slow in writing modified buffers to disk

· DBWR is unable to keep up with the write requests

· Checkpoints happening too fast – maybe due to high database activity and undersized online redo log files

· Large sorts and full table scans are filling the cache with modified blocks faster than the DBWR is able to write to disk
· If the number of dirty buffers that need to be written to disk is larger than the number that DBWR can write per batch, then these waits can be observed

Actions :
Reduce checkpoint frequency – increase the size of the online redo log files

Examine the size of the buffer cache – consider increasing the size of the buffer cache in the SGA

Set DISK_ASYNCH_IO = TRUE

If not using asynchronous I/O, increase the number of db writer processes or dbwr slaves

Ensure hot spots do not exist by spreading datafiles over disks and disk controllers

Pre-sorting or reorganizing data can help
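
As a minimal sketch of the parameter changes above (the sizes are placeholders; DB_WRITER_PROCESSES is static and takes effect only after a restart):

ALTER SYSTEM SET db_cache_size = 2G SCOPE=BOTH; -- if manually sizing the buffer cache
ALTER SYSTEM SET db_writer_processes = 2 SCOPE=SPFILE; -- static; needs a restart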

enqueue waits

Possible Causes :
· This wait event indicates a wait for a lock that is held by another session (or sessions) in an incompatible mode to the requested mode.

 TX Transaction Lock

· Generally due to table or application set up issues

· This indicates contention for row-level lock. This wait occurs when a transaction tries to update or delete rows that are currently
 locked by another transaction.

· This usually is an application issue.

TM DML enqueue lock

· Generally due to application issues, particularly if foreign key constraints have not been indexed.

ST lock

· Database actions that modify the UET$ (used extent) and FET$ (free extent) tables require the ST lock, which includes actions such as drop, truncate, and coalesce.

· Contention for the ST lock indicates there are multiple sessions actively performing dynamic disk space allocation or deallocation in dictionary managed tablespaces

Actions :
· Reduce waits and wait times

· The action to take depends on the lock  type which is causing the most problems

· Whenever you see an enqueue wait event for the TX enqueue, the first step is to find out who the blocker is and if there are multiple waiters for the same resource

· Waits for TM enqueue in Mode 3 are primarily due to unindexed foreign key columns.

· Create indexes on foreign keys (particularly important in releases prior to 10g)

· Following are some of the things you can do to minimize ST lock contention in your database:

· Use locally managed tablespaces
· Recreate all temporary tablespaces using the CREATE TEMPORARY TABLESPACE … TEMPFILE … command.
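
To find the blocker and its waiters for an enqueue (the first step for TX waits mentioned above), the classic V$LOCK self-join works:

SELECT l1.sid blocker, l2.sid waiter, l1.type, l1.id1, l1.id2
FROM v$lock l1, v$lock l2
WHERE l1.block = 1
AND l2.request > 0
AND l1.id1 = l2.id1
AND l1.id2 = l2.id2;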

Cache buffers lru chain latch

Possible Causes :
· Processes need to get this latch when they need to move buffers based on the LRU block replacement policy in the buffer cache

· The cache buffers lru chain latch is acquired in order to introduce a new block into the buffer cache and when writing a buffer back to disk, specifically when trying to scan the LRU (least recently used) chain containing all the dirty blocks in the buffer cache.

· Competition for the cache buffers lru chain latch is symptomatic of intense buffer cache activity caused by inefficient SQL statements. Statements that repeatedly scan large unselective indexes or perform full table scans are the prime culprits.

· Heavy contention for this latch is generally due to heavy buffer cache activity caused, for example, by repeatedly scanning large unselective indexes

Actions :
Contention for this latch can be avoided by implementing multiple buffer pools or by increasing the number of LRU latches with the parameter DB_BLOCK_LRU_LATCHES (the default value is generally sufficient for most systems).

It is also possible to reduce contention for the cache buffers lru chain latch by increasing the size of the buffer cache and thereby reducing the rate at which new blocks are introduced into the buffer cache.

 Direct Path Reads

Possible Causes :
· These waits are associated with direct read operations which read data directly into the session’s PGA, bypassing the SGA

· The “direct path read” and “direct path write” wait events are related to operations that are performed in the PGA, such as sorts, GROUP BY operations, and hash joins

· In DSS-type systems, or during heavy batch periods, waits on “direct path read” are quite normal. However, for an OLTP system these waits are significant
· These wait events can occur during sorting operations, which is not surprising as direct path reads and writes usually occur in connection with temporary segments
· SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA
Actions :
Ensure the OS asynchronous IO is configured correctly.
Check for IO heavy sessions / SQL and see if the amount of IO can be reduced.
Ensure no disks are IO bound.
Set PGA_AGGREGATE_TARGET to an appropriate value (if the parameter WORKAREA_SIZE_POLICY = AUTO), or set the *_AREA_SIZE parameters manually (such as SORT_AREA_SIZE), in which case you have to set WORKAREA_SIZE_POLICY = MANUAL
Whenever possible use UNION ALL instead of UNION, and where applicable use HASH JOIN instead of SORT MERGE and NESTED LOOPS instead of HASH JOIN.
 Make sure the optimizer selects the right driving table. Check to see if the composite index’s columns can be rearranged to match the ORDER BY clause to avoid sort entirely.

Also, consider automating the SQL work areas using PGA_AGGREGATE_TARGET in Oracle9i Database.

Query V$SESSTAT to identify sessions with high “physical reads direct”.
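
For example, joining the standard statistics views:

SELECT s.sid, s.value "physical reads direct"
FROM v$sesstat s, v$statname n
WHERE s.statistic# = n.statistic#
AND n.name = 'physical reads direct'
ORDER BY s.value DESC;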

Remark:
· Default size of HASH_AREA_SIZE  is twice that of SORT_AREA_SIZE

· Larger HASH_AREA_SIZE will influence optimizer to go for hash joins instead of nested loops

· The hidden parameter _DB_FILE_DIRECT_IO_COUNT can impact direct path read performance. It sets the maximum I/O buffer size of direct read and write operations. The default is 1 MB in 9i.

Direct Path Writes:

Possible Causes :
· These are waits that are associated with direct write operations that write data from users’ PGAs to data files or temporary tablespaces
· Direct load operations (e.g. Create Table As Select (CTAS)) may use this
· Parallel DML operations
· Sort I/O (when a sort does not fit in memory)

Actions :
If the file indicates a temporary tablespace, check for unexpected disk sort operations.
Ensure DISK_ASYNCH_IO is TRUE. This is unlikely to reduce wait times from the wait event timings but
may reduce sessions’ elapsed times (as synchronous direct IO is not accounted for in wait event timings).
Ensure the OS asynchronous IO is configured correctly.
Ensure no disks are IO bound

Latch Free Waits
Possible Causes :
· This wait indicates that the process is waiting for a latch that is currently busy (held by another process).
· When you see a latch free wait event in the V$SESSION_WAIT view, it means the process failed to obtain the latch in the
willing-to-wait mode after spinning _SPIN_COUNT times and went to sleep. When processes compete heavily for latches, they will also consume more CPU resources because of spinning. The result is a higher response time.

Actions :
· If the TIME spent waiting for latches is significant then it is best to determine which latches are suffering from contention.
Remark:
· A latch is a kind of low level lock. Latches apply only to memory structures in the SGA. They do not apply to database objects. An Oracle SGA has many latches, and they exist to protect various memory structures from potential corruption  by concurrent access.

· The time spent on latch waits is an effect, not a cause; the cause is that you are doing too many block gets, and block gets require cache buffer chain latching

 Library cache latch

Possible Causes :
· The library cache latches protect the cached SQL statements and object definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache.

· The application is making heavy use of literal SQL – use of bind variables will reduce this latch contention considerably

Actions :
· This latch exists to ensure that the application is reusing SQL statement representations as much as possible. Use bind variables whenever possible in the application.

· You can reduce the library cache latch hold time by properly setting the SESSION_CACHED_CURSORS parameter.
· Consider increasing shared pool.
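
You can gauge how well statements are being reused from V$LIBRARYCACHE; GETHITRATIO and PINHITRATIO close to 1 for the SQL AREA namespace indicate good sharing:

SELECT namespace, gets, gethitratio, pins, pinhitratio, reloads
FROM v$librarycache;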
Remark:
· Larger shared pools tend to have long free lists, and processes that need to allocate space in them must spend extra time scanning the long free lists while holding the shared pool latch

· If your database is not yet on Oracle9i Database, an oversized shared pool can increase contention for the shared pool latch.
Shared pool latch

Possible Causes :
The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool

Contention for the shared pool and library cache latches is mainly due to intense hard parsing. A hard parse applies to new cursors and to cursors that have been aged out and must be hard parsed again

The cost of parsing a new SQL statement is expensive both in terms of CPU requirements and the number of times  the library cache and shared pool latches  may need to be acquired and released.

Actions :
· Ways to reduce the shared pool latch are: avoid hard parses when possible; parse once, execute many.

· Eliminating literal SQL is also useful to avoid the shared pool latch. The size of the shared pool and the use of MTS (shared server option) also greatly influence the shared pool latch.
· A workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ only in literal values but are otherwise identical to share a cursor, and therefore reduces latch contention, memory usage, and hard parsing.
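
To see how much hard parsing is going on, compare the parse counts with the execute count; changing CURSOR_SHARING is then a single command (test FORCE before using it in production):

SELECT name, value
FROM v$sysstat
WHERE name IN ('parse count (total)', 'parse count (hard)', 'execute count');

ALTER SYSTEM SET cursor_sharing = FORCE;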

Row cache objects latch

Possible Causes :
This latch comes into play when user processes are attempting to  access the cached data dictionary values.

Actions :
· It is not common to have contention in this latch and the only way to reduce contention for this latch is by increasing the size of the shared pool (SHARED_POOL_SIZE).

· Use Locally Managed tablespaces for your application objects especially indexes 

· Review and amend your database logical design; a good example is to merge or decrease the number of indexes on tables with heavy inserts
Remark:
· Configuring the library cache to an acceptable size usually ensures that the data  dictionary cache is also properly sized. So tuning Library Cache will tune Row Cache indirectly.
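
Dictionary cache efficiency can be checked from V$ROWCACHE; a high ratio of GETMISSES to GETS suggests an undersized shared pool:

SELECT parameter, gets, getmisses,
ROUND(getmisses * 100 / gets, 2) miss_pct
FROM v$rowcache
WHERE gets > 0
ORDER BY getmisses DESC;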

I hope this article helped you. I am expecting your suggestions/feedback.
It will help to motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Note: One of our visitors and my friend Kavita Yadav asked this question by posting a comment. Thanks, Kavita, for your contribution. Keep visiting/commenting!

Performance of the SQL queries of an application often plays a big role in the overall performance of the underlying application. The response time may at times be really irritating for the end users if the application doesn’t have fine-tuned SQL queries. SQL statements are used to retrieve data from the database, and we can get the same results by writing different SQL queries. But use of the best query is important when performance is considered, so you need to do SQL query tuning based on the requirement. Here is the list of queries which we use regularly and how these SQL queries can be optimized for better performance.

 There are several ways of tuning SQL statements, a few of which are:

  • Understanding of the Data, Business, and Application – it’s almost impossible to fine-tune SQL statements without having a proper understanding of the data managed by the application and the business handled by the application. The understanding of the application is of course of utmost importance. By knowing these things better, we may identify several instances where the data retrieval/modification by many SQL queries can simply be avoided as the same data might be available somewhere else, maybe in the session of some other integrating application, and we can simply use that data in such cases. The better understanding will help you identify the queries which could be written better, either by changing the tables involved or by establishing relationships among available tables.
  • Using realistic test data – if the application is not being tested in the development/testing environments with the volume and type of data, which the application will eventually face in the production environment, then we can’t be very sure about how the SQL queries of the application will really perform in actual business scenarios. Therefore, it’s important to have the realistic data for development/testing purposes as well.
  • Using Bind Variables, Stored Procs, and Packages – Using identical SQL statements (of course wherever applicable) will greatly improve the performance as the parsing step will get eliminated in such cases. So, we should use bind variables, stored procedures, and packages wherever possible to re-use the same parsed SQL statements.
  • Using the indexes carefully – Having indexes on columns is the most common method of enhancing performance, but having too many of them may degrade performance as well. So, it’s very critical to decide wisely about which columns of a table we should create indexes on. A few common guidelines are: create indexes on the columns which are frequently used either in the WHERE clause or to join tables; avoid creating indexes on columns which are used only by functions or operators; avoid creating indexes on columns which change quite frequently; etc.
  • Making available the access path – the optimizer will not use an access path that uses an index only because we have created that index. We need to explicitly make that access path available to the optimizer. We may use SQL hints to do that.
  • Using EXPLAIN PLAN – this tool can be used to fine-tune SQL queries to a great extent. EXPLAIN PLAN shows the complete access path which will be used by the particular SQL statement during execution (see the usage sketch after this list).
  • Optimizing the WHERE clause – there are many cases where the index access path of a column in the WHERE clause is not used even if the index on that column has already been created. Avoid such cases to make the best use of the indexes, which will ultimately improve the performance. Some of these cases are: COLUMN_NAME IS NOT NULL (the ROWID for a null is not stored by an index), COLUMN_NAME NOT IN (value1, value2, value3, …), COLUMN_NAME != expression, COLUMN_NAME LIKE ‘%pattern’ (whereas COLUMN_NAME LIKE ‘pattern%’ uses the index access path), etc. Usage of expressions or functions on indexed columns will prevent the index access path from being used. So, use them wisely!
  • Using the leading index columns in WHERE clause – the WHERE clause may use the complex index access path in case we specify the leading index column(s) of a complex index otherwise the WHERE clause won’t use the indexed access path.
  • Indexed Scan vs Full Table Scan – An indexed scan is faster only if we are selecting only a few rows of a table; otherwise a full table scan should be preferred. It’s estimated that an indexed scan is slower than a full table scan if the SQL statement is selecting more than 15% of the rows of the table. So, in all such cases use SQL hints to force a full table scan and suppress the use of pre-defined indexes. Okay… any guesses why a full table scan is faster when a large percentage of rows are accessed? Because an indexed scan causes multiple reads per row accessed, whereas a full table scan can read all rows contained in a block in a single logical read operation.
  • Using ORDER BY for an indexed scan – the optimizer uses the indexed scan if the column specified in the ORDER BY clause has an index defined on it. It’ll use indexed scan even if the WHERE doesn’t contain that column (or even if the WHERE clause itself is missing). So, analyze if you really want an indexed scan or a full table scan and if the latter is preferred in a particular scenario then use ‘FULL’ SQL hint to force the full table scan.
  • Minimizing table passes – it normally results in a better performance for obvious reasons.
  • Joining tables in the proper order – the order in which tables are joined normally affects the number of rows processed by that JOIN operation, and hence proper ordering of tables in a JOIN operation may result in the processing of fewer rows, which will in turn improve the performance. The key to deciding the proper order is to have the most restrictive filtering condition in the early phases of a multiple table JOIN. For example, in case we are using a master table and a details table, it’s better to connect to the master table first, as connecting to the details table first may result in more rows getting joined.
  • Using ROWID and ROWNUM wherever possible – these special columns can be used to improve the performance of many SQL queries. The ROWID search is the fastest for Oracle database and this luxury must be enjoyed wherever possible. ROWNUM comes really handy in the cases where we want to limit the number of rows returned.
  • Usage of explicit cursors is better – explicit cursors perform better as the implicit cursors result in an extra fetch operation. Implicit cursors are opened by the Oracle Server for INSERT, UPDATE, DELETE, and SELECT statements, whereas explicit cursors are opened by the writers of the query by explicitly using DECLARE, OPEN, FETCH, and CLOSE statements.
  • Reducing network traffic – Arrays and PL/SQL blocks can be used effectively to reduce the network traffic especially in the scenarios where a huge amount of data requires processing. For example, a single INSERT statement can insert thousands of rows if arrays are used. This will obviously result into fewer DB passes and it’ll in turn improve performance by reducing the network traffic. Similarly, if we can club multiple SQL statements in a single PL/SQL block then the entire block can be sent to Oracle Server involving a single network communication only, which will eventually improve performance by reducing the network traffic.
  • Using Oracle parallel query option – Since Oracle 8, even the queries based on indexed range scans can use this parallel query option if the index is partitioned. This feature can result in an improved performance in certain scenarios.
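
As mentioned in the EXPLAIN PLAN point above, the basic usage is a two-step sketch (student_details is one of the example tables used below):

EXPLAIN PLAN FOR
SELECT id, first_name, age FROM student_details WHERE age > 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);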

 SQL Tuning/SQL Optimization Techniques:

  1. The SQL query becomes faster if you use the actual column names in the SELECT statement instead of ‘*’.

For Example: Write the query as

SELECT id, first_name, last_name, age, subject FROM student_details;

Instead of:

SELECT * FROM student_details;

 2. Sometimes you may have more than one subquery in your main query. Try to minimize the number of subquery blocks in your query.

 For Example: Write the query as

SELECT name
FROM employee
WHERE (salary, age) = (SELECT MAX(salary), MAX(age)
FROM employee_details)
AND dept = 'Electronics';

Instead of:

SELECT name
FROM employee
WHERE salary = (SELECT MAX(salary) FROM employee_details)
AND age = (SELECT MAX(age) FROM employee_details)
AND emp_dept = 'Electronics';

 

3. Use operator EXISTS, IN and table joins appropriately in your query.
a) Usually IN has the slowest performance.
b) IN is efficient when most of the filter criteria is in the sub-query.
c) EXISTS is efficient when most of the filter criteria is in the main query.

For Example: Write the query as

Select * from product p
where EXISTS (select * from order_items o
where o.product_id = p.product_id);

Instead of:

Select * from product p
where product_id IN
(select product_id from order_items);

 

4. Be careful while using conditions in WHERE clause.
For Example: Write the query as

SELECT id, first_name, age FROM student_details WHERE age > 10;

Instead of:

SELECT id, first_name, age FROM student_details WHERE age != 10;

Write the query as

SELECT id, first_name, age
FROM student_details
WHERE first_name LIKE 'Chan%';

Instead of:

SELECT id, first_name, age
FROM student_details
WHERE SUBSTR(first_name,1,3) = 'Cha';

Write the query as

SELECT product_id, product_name
FROM product
WHERE unit_price BETWEEN 100 AND 500;

Instead of:

SELECT product_id, product_name
FROM product
WHERE unit_price >= 100
AND unit_price <= 500;

Write the query as

SELECT id, name, salary
FROM employee
WHERE salary < 25000;

Instead of:

SELECT id, name, salary
FROM employee
WHERE salary + 10000 < 35000;

Write the query as

SELECT id, first_name, age
FROM student_details
WHERE age > 10;

Instead of:

SELECT id, first_name, age
FROM student_details
WHERE age != 10; -- also, for integer columns, instead of ‘>= 5’ you can use ‘> 4’, which is one and the same thing…. 🙂

 

5. To write queries which provide efficient performance, follow the general SQL standard rules:

a)  Use single case for all SQL verbs
b) Begin all SQL verbs on a new line
c) Separate all words with a single space
d) Right or left aligning verbs within the initial SQL verb

  6. Use table aliasing whenever you are using more than one table, and don’t forget to prefix the column names with the alias name.
  7. Use EXISTS in place of DISTINCT (if possible)

Example:

SELECT DISTINCT d.deptno, d.dname
FROM dept d, emp e
WHERE d.deptno = e.deptno;

The following SQL statement is a better alternative.

SELECT d.deptno, d.dname
FROM dept d
WHERE EXISTS (SELECT e.deptno
FROM emp e
WHERE d.deptno = e.deptno);

I hope this article helped you. I am expecting your suggestions/feedback.
It will help to motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Dear friends,
     Today I saw the Top 5 timed wait events in an AWR report. One of the wait events was “read by other session”.
The read by other session wait event and the buffer busy wait event are essentially the same.

In Oracle 9i this wait was called a buffer busy wait; in Oracle 10g and later it is called “read by other session”.

About “Read by other session wait event”

When a user issues a query in a database, Oracle server processes read the database blocks from disk into the database buffer cache. When two or more sessions issue the same or related queries (accessing the same database blocks), the first session reads the data into the database buffer cache while the other sessions wait.

The resolution of “buffer busy wait” events is one of the most confounding problems with Oracle. In an I/O-bound Oracle system, buffer busy waits are common, as evidenced by any system with read (sequential/scattered) waits in the top five waits.

Simply put, several concurrent sessions read the same blocks of the same table or the same index block.

How can we find the block contention?

The top 5 wait events section of an AWR/Statspack report shows the read by other session or buffer busy wait event, and we can also see it in the wait events section.

SELECT p1 "file#", p2 "block#", p3 "class#"
FROM v$session_wait
WHERE event = 'read by other session';

SELECT p1 "file#", p2 "block#", p3 "wait class#"
FROM v$session_wait
WHERE event = 'buffer busy waits';

Also using v$segment_statistics or v$system_event we can see the buffer busy wait event.

How can we tune the Read by other session wait event?

Hot Objects/Blocks:

A block in an object that a number of concurrent sessions access at the same time is known as a hot block (and the object as a hot object).

The “Segment statistics” section of an AWR report shows the HOT objects list.

Or using below query we find the hot objects.

SELECT relative_fno, owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = &file
AND &block BETWEEN block_id AND block_id + blocks - 1;

Why do buffer busy wait/read by other session events happen? How do we reduce buffer busy waits?

Increasing INITRANS value method:

First we should know how concurrent sessions access a single block in an object.

Each database block has 3 layers:

1. Cache layer
2. Transaction layer
3. Data layer

The transaction layer plays a vital role in block contention.

Each block has ITL (INTERESTED TRANSACTION LIST) slots. An ITL slot is required for any session that needs to modify a block in an object.

The default INITRANS value for a table segment is 1, and for an index segment it is 2.

The default MAXTRANS value is 255.
If there is no free ITL slot in a block, transactions wait serially for a free ITL slot. Increasing the INITRANS value avoids this serial waiting, so that a number of sessions can perform DML operations concurrently in a single block.
Each ITL requires approximately 23 bytes in the block header.

Increasing PCTFREE method:

Suppose a single 8 KB block contains 1000 rows. Reducing the number of rows in a block can easily reduce buffer busy waits.

PCTFREE space is used for future updates only. We have an 8 KB data block, and the default PCTFREE value is 10%. If we increase the PCTFREE value to 20%, the number of rows inserted into a block is automatically reduced.
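
Both attributes can be raised with ALTER statements. This is a sketch with hypothetical object names; note that new PCTFREE and INITRANS values apply only to newly formatted blocks, so a move/rebuild is needed for existing data.

ALTER TABLE orders PCTFREE 20 INITRANS 4;
ALTER INDEX orders_pk INITRANS 4;

ALTER TABLE orders MOVE; -- reformat existing blocks with the new settings
ALTER INDEX orders_pk REBUILD; -- indexes become unusable after the move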

Reducing database block size method:

It’s similar to the PCTFREE method. Suppose a single 8 KB block contains 1000 rows. Using the multiple block size feature, we can use a 4 KB data block instead; the same 1000 rows would then be stored in two 4 KB blocks.

Our goal is to reduce the number of records stored in a block.

Tune the inefficient queries:

Reduce the number of blocks accessed for an object in the buffer cache. Tune the query to minimize the number of blocks read from disk into the database buffer cache.

Example: I have one huge table and it contains 10000 blocks, with no index on the table. If we do any operation against this table, it goes for a full table scan and accesses all the blocks in the table (the server process reads all 10000 blocks from disk into the database buffer cache). We can put a proper index on this table and avoid the full table scan.

Conclusion:

Tune inefficient queries
Review the execution plan and make sure the plan chosen by Oracle is that read the fewest blocks possible. Optimize the SQL statement to reduce the number of physical and logical reads.

Adjust PCTFREE and PCTUSED
Adjusting the PCTFREE value downward for an object will reduce the number of rows physically stored in a block. Adjusting the PCTUSED value for an object keeps that object from getting prematurely put back on the freelist. Increase the number of FREELISTS and FREELIST GROUPS. Depending on the type of contention, adjusting these values could help distribute data among more blocks and reduce the hot block problem. Be careful to optimize these parameters so blocks do not move in and out of the freelist too frequently.

Reduce the Block Size
This is very similar to adjusting the PCTFREE and PCTUSED parameters in that the goal is to reduce the amount of data stored within one block. In Oracle 9i and higher this can be achieved by storing the hot object in a tablespace with a smaller block size. In databases prior to Oracle 9i the entire database must be rebuilt with a smaller block size.
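
A sketch of this, assuming a hypothetical datafile path and object name; the matching 4 KB buffer cache must be configured before the tablespace can be created:

ALTER SYSTEM SET db_4k_cache_size = 64M;

CREATE TABLESPACE hot_4k
DATAFILE '/u01/oradata/orcl/hot_4k01.dbf' SIZE 500M
BLOCKSIZE 4K;

ALTER TABLE orders MOVE TABLESPACE hot_4k; -- relocate the hot object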

Optimize indexes
A low cardinality index has a relatively small number of unique values, e.g. a column containing state data with only 50 values. Similar to inefficient queries, the use of a low cardinality index could cause an excessive number of blocks to be read into the buffer cache and cause premature aging out of “good” blocks.

I hope this article helped you. I am expecting your suggestions/feedback.
It will help to motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Checking PGA for each session

You can check session-level PGA usage using the V$SESSTAT and V$SESSION views, and you can also check the username of the user consuming that memory.

SELECT s.value, s.sid, a.username
FROM v$sesstat s, v$statname n, v$session a
WHERE n.statistic# = s.statistic#
AND n.name = 'session pga memory'
AND s.sid = a.sid
ORDER BY s.value;

VALUE SID USERNAME
———- ———- ——————————
487276 1070 APPS
552812 1068 SYS
552812 1088
618348 1009 APPS_READ_ONLY
683884 1091
749420 846 MOBILEADMIN
749420 1090
749420 1051 APPLSYSPUB
749420 1000 APPLSYSPUB
749420 929 APPLSYSPUB
790412 1093

To check the total PGA in use and hit ratio for PGA

SQL> SELECT * FROM V$PGASTAT;

NAME VALUE UNIT
————————————————– ———- ————
aggregate PGA target parameter 4294967296 bytes
aggregate PGA auto target 3674290176 bytes
global memory bound 201252864 bytes
total PGA inuse 218925056 bytes
total PGA allocated 433349632 bytes
maximum PGA allocated 1526665216 bytes
total freeable PGA memory 86835200 bytes
process count 113
max processes count 250
PGA memory freed back to OS 8.3910E+10 bytes
total PGA used for auto workareas 6505472 bytes

NAME VALUE UNIT
————————————————– ———- ————
maximum PGA used for auto workareas 70296576 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 4292608 bytes
over allocation count 0
bytes processed 2.1553E+11 bytes
extra bytes read/written 10403840 bytes
cache hit percentage 99.99 percent
recompute count (total) 205474

19 rows selected.

The ideal way to perform sorts is by doing the entire job in memory. A sort job that Oracle performs entirely in memory is said to be an optimal sort. If you set the PGA_AGGREGATE_TARGET too low, some of the sort data is written out directly to disk (temporary tablespace) because the sorts are too large to fit in memory. If only part of a sort job spills over to disk, it’s called a 1-pass sort. If the instance performs most of the sort on disk instead of in memory, the response time will be high. This is called a multipass sort.

Another method of checking the efficiency of PGA memory is to check V$SQL_WORKAREA_HISTOGRAM.

V$SQL_WORKAREA_HISTOGRAM displays the cumulative work area execution statistics (cumulated since instance startup) for different work area groups. The work areas are split into 33 groups based on their optimal memory requirements with the requirements increasing in powers of two. That is, work areas whose optimal requirement varies from 0 KB to 1 KB, 1 KB to 2 KB, 2 KB to 4 KB, … and 2 TB to 4 TB.

For each work area group, the V$SQL_WORKAREA_HISTOGRAM view shows how many work areas in that group were able to run in optimal mode, how many were able to run in one-pass mode, and finally how many ran in multi-pass mode. The DBA can take a snapshot at the beginning and the end of a desired time interval to derive the same statistics for that interval.

SELECT
low_optimal_size/1024 "Low (K)",
(high_optimal_size + 1)/1024 "High (K)",
optimal_executions "Optimal",
onepass_executions "1-Pass",
multipasses_executions ">1 Pass"
FROM v$sql_workarea_histogram
WHERE total_executions <> 0;

Low (K) High (K) Optimal 1-Pass >1 Pass
———- ———- ———- ———- ———-
2 4 6254487 0 0
64 128 110568 0 0
128 256 20041 0 0
256 512 86399 0 0
512 1024 145082 0 0
1024 2048 31207 0 0
2048 4096 104 0 0
4096 8192 79 2 0
8192 16384 116 0 0
16384 32768 30 0 0
32768 65536 4 0 0

Low (K) High (K) Optimal 1-Pass >1 Pass
———- ———- ———- ———- ———-
65536 131072 2 0 0

12 rows selected.

You can check the proportion of work areas executed since you started the Oracle instance using optimal, 1-pass, and multipass PGA memory sizes.

SELECT name PROFILE, cnt COUNT,
DECODE(total, 0, 0, ROUND(cnt*100/total)) PERCENTAGE
FROM (SELECT name, value cnt, (sum(value) over ()) total
FROM V$SYSSTAT
WHERE name like 'workarea exec%');


PROFILE COUNT PERCENTAGE
————————————————– ———- ———-
workarea executions – optimal 6650608 100
workarea executions – onepass 2 0
workarea executions – multipass 0 0

Since almost all the sort and temporary operations are carried out in the optimal category, we can conclude that our PGA is sized correctly.


When a SQL statement is submitted by a user to the Oracle database, it never happens that Oracle executes the SQL continuously in one go. An Oracle process never gets to work on the execution of a statement without any interruptions; often the process has to pause or wait for some event or some other resource to be released. Thus an active Oracle process is doing one of the following things at any point of time:

The process is executing the SQL statement.

The process is waiting for something (for example, a resource such as a database buffer or a latch). It could be waiting for an action such as a write to the buffer cache to complete.
That’s why the response time – the total time taken by Oracle to finish work – is correctly
defined as follows:

response time = service time + wait time

So only part of the time is spent by the Oracle process to actually “do” something. The rest of the time the process just waits for some resource to get freed up. It can be waiting for the log writer process, the database writer process, or any other resource.

The wait event may also be due to unavailable buffers or latches.

Four dynamic performance views contain wait information: V$SESSION, V$SYSTEM_EVENT, V$SESSION_EVENT, and V$SESSION_WAIT. These four views list just about all the events the instance was waiting for and the duration of these waits. Understanding these wait events is essential for resolving performance issues.

There are different wait classes defined in database. Each class will contain different wait events. There are around 860 wait events defined in Oracle database 10g and are classified under different wait classes.

Some of the main wait classes includes:

  • Administrative: Waits caused by administrative commands, such as rebuilding an index, for example.
  • Application: Waits due to the application code.
  • Cluster: Waits related to Real Application Cluster management.
  • Commit: Consists of the single wait event log file sync, which is a wait caused by commits in the database.
  • Concurrency: Waits for database resources that are used for locking; for example, latches.
  • Configuration: Waits caused by database or instance configuration problems, including a low shared-pool memory size, for example.
  • Idle: Idle wait events indicate waits that occur when a session isn’t active; for example, the ‘SQL*Net message from client’ wait event.

You can see the complete list of wait classes using V$SESSION_WAIT_CLASS dynamic performance view.

Analyzing Instance Performance

You can check the percentage of time spent by the database waiting for resources and the percentage of time spent in actual execution.

SQL> SELECT METRIC_NAME, VALUE
             FROM V$SYSMETRIC
            WHERE METRIC_NAME IN ('Database CPU Time Ratio',
           'Database Wait Time Ratio') AND
           INTSIZE_CSEC =
           (select max(INTSIZE_CSEC) from V$SYSMETRIC);

METRIC_NAME                                                           VALUE
—————————————————————- ———-
Database Wait Time Ratio                                         15.6260647
Database CPU Time Ratio                                          84.3739353

If the database performance shows that ‘Database Wait Time Ratio’ is greater than ‘Database CPU Time Ratio’, or the value of ‘Database Wait Time Ratio’ is quite significant, then you need to dig inside to get information about where exactly Oracle is waiting. You basically need to find the type of wait. This will give you the root cause; once you have the root cause, you can work to fix it.

You can determine the total waits and the percentage of waits by wait class.

SELECT WAIT_CLASS,
TOTAL_WAITS,
round(100 * (TOTAL_WAITS / SUM_WAITS),2) PCT_TOTWAITS,
ROUND((TIME_WAITED / 100),2) TOT_TIME_WAITED,
round(100 * (TIME_WAITED / SUM_TIME),2) PCT_TIME
FROM
(select WAIT_CLASS,
TOTAL_WAITS,
TIME_WAITED
FROM V$SYSTEM_WAIT_CLASS
WHERE WAIT_CLASS != 'Idle'),
(select sum(TOTAL_WAITS) SUM_WAITS,
sum(TIME_WAITED) SUM_TIME
from V$SYSTEM_WAIT_CLASS
where WAIT_CLASS != 'Idle')
ORDER BY PCT_TIME DESC;

WAIT_CLASS           TOTAL_WAITS PCT_TOTWAITS TOT_TIME_WAITED   PCT_TIME
——————– ———– ———— ————— ———-
System I/O                180300        19.96          3008.8      49.53
Commit                     67855         7.51         1302.46      21.44
User I/O                  291565        32.28         1056.55      17.39
Application                 3637           .4          596.66       9.82
Other                      15388          1.7            67.4       1.11
Concurrency                 1264          .14           38.12        .63
Network                   343169        37.99            3.86        .06
Configuration                 22            0               1        .02

8 rows selected.

In the above output the percentage of time waited (the last column) is more important and gives the correct picture of the impact due to waits. For example, the total number of Network waits is large, but the actual percentage of time contributed by these waits is very small (0.06%).

The key dynamic performance tables for finding wait information are the V$SYSTEM_EVENT, V$SESSION_EVENT, V$SESSION_WAIT, and the V$SESSION views. The first two views show the waiting time for different events.

V$SYSTEM_EVENT

The V$SYSTEM_EVENT view shows the total time waited for all the events for the entire system since the instance started up. The view doesn’t focus on the individual sessions experiencing waits, and therefore it gives you a high-level view of waits in the system. You can use this view to find out what the top instance-wide wait events are. You can calculate the top n waits in the system by dividing the event’s wait time by the total wait time for all events.

select EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS from V$SYSTEM_EVENT
where wait_class != 'Idle'
order by time_waited desc;

EVENT                          TOTAL_WAITS TIME_WAITED WAIT_CLASS
—————————— ———– ———– ——————–
log file parallel write             128953      210308 System I/O
log file sync                        67904      130313 Commit
db file sequential read             259065       73686 User I/O
enq: TX – row lock contention          226       59080 Application
control file parallel write          28282       57929 System I/O
db file parallel write               19155       32924 System I/O
db file scattered read               31841       30925 User I/O
os thread startup                       95        3262 Concurrency
rdbms ipc reply                        485        2116 Other
PX Deq: Signal ACK                    1971        1103 Other
local write wait                       245         864 User I/O

We can get the session-level waits for each event using the V$SESSION_EVENT view. In this view the TIME_WAITED is the wait time per session.

V$SESSION_EVENT

select sid, EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS
from V$SESSION_EVENT
where WAIT_CLASS != 'Idle'
order by TIME_WAITED;

  SID EVENT                          TOTAL_WAITS TIME_WAITED WAIT_CLASS
———- —————————— ———– ———– ————
390 os thread startup                       55      1918   Concurrency
393 db file sequential read              10334      4432   User I/O
396 db file parallel write                8637      14915  System I/O
397 db file parallel write               10535      18035  System I/O
394 control file parallel write          28294      57928  System I/O
395 log file parallel write             129020      210405 System I/O

As we can see from the above output, session 395 has the maximum wait time, because of system I/O. Here system I/O is the I/O caused by background processes like DBWR and LGWR.

You can get all the database wait events from V$EVENT_NAME, and the meaning of each wait event is described in the Oracle 10g documentation.

V$SESSION_WAIT

The third dynamic view is the V$SESSION_WAIT view, which shows the current waits or just completed waits for sessions. The information on waits in this view changes continuously based on the types of waits that are occurring in the system. The real-time information in this view provides you with tremendous insight into what’s holding up things in the database right now. The
V$SESSION_WAIT view provides detailed information on the wait event, including details such as file number, latch numbers, and block number. This detailed level of information provided by the V$SESSION_WAIT view enables you to probe into the exact bottleneck that’s slowing down the database. The low-level information helps you zoom in on the root cause of performance problems.

SQL> select sid, event, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE from v$session_wait
2  where wait_class != 'Idle';

 SID EVENT                     WAIT_CLASS WAIT_TIME SECONDS_IN_WAIT STATE
---- ------------------------- ---------- --------- --------------- -----------------
 337 SQL*Net message to client Network           -1               0 WAITED SHORT TIME

Here a wait time of -1 means that the session has waited for less than 1/100th of a second.
You can get the complete wait information for a particular session using the V$SESSION view, including the SQL_ID of the SQL that is causing the wait.

V$SESSION

To gather wait statistics you can follow this methodology:

  • First, look at the V$SYSTEM_EVENT view and rank the top wait events by the total amount of time waited, as well as the average wait time for that event.
  • Next, find out more details about the specific wait event that’s at the top of the list. You can check V$WAITSTAT view for the same. Check the type of wait this view is showing. If the wait is due to undo header or undo block then wait is related to undo segment.
  • Finally, use the V$SESSION view to find out the exact objects that may be the source of a problem. For example, if you have a high amount of db file scattered read-type waits, the V$SESSION view will give you the file number and block number involved in the wait events.

In V$SESSION we have a column called BLOCKING_SESSION_STATUS. If this column’s value is VALID, then we can presume that the corresponding session is being blocked.
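
For example:

SELECT sid, serial#, username, blocking_session, event
FROM v$session
WHERE blocking_session_status = 'VALID';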

V$SESSION_WAIT_HISTORY

The V$SESSION_WAIT_HISTORY view holds information about the last ten wait events for each active session. The other wait-related views, such as the V$SESSION and the V$SESSION_WAIT, show you only the wait information for the most recent wait. This may be a short wait, thus escaping your scrutiny.

SQL> select seq#, event, p1, p2, p3, wait_time
2  from v$session_wait_history
3  where sid = &sid;

      SEQ# EVENT                            P1         P2         P3  WAIT_TIME
———- ———————— ———- ———- ———- ———-
1 rdbms ipc message            180000          0          0     175787
2 rdbms ipc message            180000          0          0     175787
3 rdbms ipc message            180000          0          0      60782
4 rdbms ipc message            180000          0          0     175787
5 rdbms ipc message            180000          0          0     138705
6 db file sequential read           1        368          1          0
7 rdbms ipc message            180000          0          0     158646
8 db file sequential read           1        368          1          0
9 db file sequential read           1         73          1          0
10 db file sequential read           1         30          1          0

Note that a zero value under the WAIT_TIME column means that the session is waiting for a specific wait event. A nonzero value represents the time waited for the last event.

V$ACTIVE_SESSION_HISTORY

The V$SESSION_WAIT view tells you what resource a session is waiting for. The V$SESSION view also provides significant wait information for active sessions. However, neither of these views provides you with historical information about the waits in your instance. Once the wait is over, you can no longer view the wait information using the V$SESSION_WAIT view. The waits are so fleeting that by the time you query the views, the wait in most cases is over.

The new Active Session History (ASH) feature, by recording session information, enables you to go back in time and review the history of a performance bottleneck in your database. Although the AWR provides hourly snapshots of the instance by default, you won’t be able to analyze events that occurred five or ten minutes ago based on AWR data. This is where the ASH information comes in handy. ASH samples the V$SESSION view every second and collects the wait information for all active sessions. An active session is defined as a session that’s on the CPU or waiting for a resource. You can view the ASH session statistics through the view V$ACTIVE_SESSION_HISTORY, which contains a single row for each active session in your instance.

ASH is a rolling buffer in memory, with older information being overwritten by new session data. Every 60 minutes, the MMON background process flushes filtered ASH data to disk, as part of the hourly AWR snapshots. If the ASH buffer is full, the MMNL background process performs the flushing of data. Once the ASH data is flushed to disk, you won’t be able to see it in the V$ACTIVE_SESSION_HISTORY view. You’ll then have to use the DBA_HIST_ACTIVE_SESS_HISTORY view to look at the historical data.

Obtaining the objects with the highest waits

SELECT a.current_obj#, o.object_name, o.object_type, a.event,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a,
dba_objects o
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.current_obj# = o.object_id
GROUP BY a.current_obj#, o.object_name, o.object_type, a.event
ORDER BY total_wait_time;

 OBJECT_NAME                OBJECT_TYPE         EVENT                  TOTAL_WAIT_TIME
——————–       ——————- ———————– —————-
FND_LOGINS                 TABLE               db file sequential read   47480
KOTTB$                     TABLE               db file sequential read   48077
SCHEDULER$_WINDOW          TABLE               db file sequential read   49205
ENG_CHANGE_ROUTE_STEPS_TL  TABLE               db file sequential read   52534
JDR_PATHS_N1               INDEX               db file sequential read   58888
MTL_ITEM_REVISIONS_B       TABLE               SQL*Net more data to client

select p1text, p1, p2text, p2, p3text, p3, a.event
from v$active_session_history a
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.current_obj# = 1938000;

P1TEXT     P1          P2TEXT     P2   P3TEXT          P3  EVENT
——–   —         ——- ——- ———— —— ————————
file#      71          block#    4389  blocks           1  db file sequential read
file#      187         block#   89977  blocks           1  db file sequential read
file#      80          block#   79301  blocks           1  db file sequential read
driver id  675562835   #bytes       1  0
file#      11          block#     831  blocks           1  db file sequential read
driver id  675562835   #bytes       1  0

So we can see a few historical wait events for a particular object in the database. We can get the segment statistics for this object, and finally we can come to some conclusion and implement a solution to reduce the wait. For example, if it is a ‘db file sequential read’ wait, then:

  • Increase the buffer cache size, though this alone won’t help much; to decide, you need to check cache miss percentages.
  • Check the query and optimize it, so that it reads a smaller number of blocks
  • Increase freelists for that segment

Some important wait events

The following query lists the most important wait events in your database in the last 15 minutes:

SELECT a.event,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a
WHERE a.sample_time between
sysdate - 30/2880 and sysdate
GROUP BY a.event
ORDER BY total_wait_time DESC;

EVENT                                                            TOTAL_WAIT_TIME
—————————————————————- —————
enq: TX – row lock contention                                          877360289
816854999
TCP Socket (KGAS)                                                       13787430
SQL*Net break/reset to client                                            6675324
db file sequential read                                                  2318850
control file parallel write                                              1790011
log file parallel write                                                  1411201
db file scattered read                                                     62132
os thread startup                                                          39640
null event                                                                     0

Users with the Most Waits

The following query lists the users with the highest wait times within the last 15 minutes:

SELECT s.sid, s.username,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a,
v$session s
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.session_id=s.sid
GROUP BY s.sid, s.username
ORDER BY total_wait_time DESC;

       SID USERNAME                       TOTAL_WAIT_TIME
———- —————————— —————
773 APPS                                 877360543
670 APPS                                 374767126
797                                       98408003
713 APPS                                  97655307
638 APPS                                  53719218
726 APPS                                  39072236
673 APPS                                  29353667
762 APPS                                  29307261
746 APPS                                  29307183
653 APPS                                  14677170
675 APPS                                  14676426

Identifying SQL with the Highest Waits

Using the following query, you can identify the SQL statements that waited the most in your instance within the last 15 minutes:

SELECT a.user_id,d.username,s.sql_text,
SUM(a.wait_time + a.time_waited) total_wait_time
FROM v$active_session_history a,
v$sqlarea s,
dba_users d
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.sql_id = s.sql_id
AND a.user_id = d.user_id
GROUP BY a.user_id, s.sql_text, d.username
ORDER BY total_wait_time DESC;


Automatic memory management began in 10g: with that release Oracle came up with a new parameter called sga_target, which automatically manages the memory inside the SGA (a feature known as Automatic Shared Memory Management).
The components managed by sga_target are db_cache_size, shared_pool_size, large_pool_size, java_pool_size and streams_pool_size.
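As a minimal sketch (the sizes are illustrative only), enabling ASMM in 10g looks like this:

-- let Oracle tune the SGA components automatically
ALTER SYSTEM SET sga_target = 2G SCOPE=BOTH;
-- a component set to 0 is fully auto-tuned; a non-zero value acts as a floor
ALTER SYSTEM SET db_cache_size = 0 SCOPE=BOTH;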

With 11g, Oracle went a step further to manage both the SGA and the PGA automatically. Oracle Database 11g introduced two new parameters: memory_target and memory_max_target.

The memory_target Parameter

The memory_target parameter is, in effect, the combination of the sga_target and pga_aggregate_target parameters, representing the total amount of memory that Oracle can allocate between the various SGA and PGA structures. The memory_target parameter is dynamic and can be changed up to and including the value of memory_max_target, which we discuss next.

The memory_max_target Parameter

The memory_max_target parameter sets the ceiling up to which memory_target can be changed dynamically; memory_max_target itself is static and requires an instance restart to change. Within that ceiling, you can adjust the total amount of memory available to the database as a whole at any time.

NOTE: The memory_target and memory_max_target parameters cannot be used when LOCK_SGA has been set. Also memory_target and memory_max_target cannot be used in conjunction with huge pages on Linux.

A figure in Robert Freeman's book, Oracle Database 11g: New Features for Administrators (not reproduced here), shows the hierarchy of these memory parameters and the components each one tunes; a second figure from the same book shows the effect of setting memory_target, memory_max_target and sga_target.

While setting these new parameters (memory_target and memory_max_target), one has to be careful. General advice would be to set sga_target and pga_aggregate_target to minimum fixed values and then set memory_target; Oracle will automatically grow sga_target and pga_aggregate_target to the desired levels.

If you are upgrading an existing 10g database to 11g and want to keep the current values of sga_target and pga_aggregate_target, then make sure you set memory_target >= (sga_target + pga_aggregate_target).

With the new hierarchy in memory management, Oracle has also introduced a new memory advisory. You can query the V$MEMORY_TARGET_ADVICE view to get advice on the optimal value of the memory_target parameter. This view shows advisory data only after you have set the memory_target parameter.
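A minimal sketch of enabling AMM and then consulting the advisory (the sizes are illustrative, not recommendations):

-- memory_max_target is static, so it goes to the spfile and needs a restart
ALTER SYSTEM SET memory_max_target = 4G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target = 3G SCOPE=SPFILE;

-- after a restart and some representative workload:
SELECT memory_size, memory_size_factor,
       estd_db_time, estd_db_time_factor
FROM v$memory_target_advice
ORDER BY memory_size_factor;

Rows with a memory_size_factor below 1 estimate the cost of shrinking memory_target; rows above 1 estimate the benefit of growing it.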

Hope this helps !!

Experts are always welcome to leave their valuable comments or suggestions on the above post.


Yesterday my junior team member was confused about the fragmentation and high watermark concepts. There was also a good comment on my fragmentation post, so it inspired me to write something about the high watermark and the Oracle 10gR1 new feature, segment shrinking. Fragmentation itself I have already discussed in my previous post.

The high watermark is the maximum fill grade a table has ever reached.
Above the high watermark there are only empty blocks.
These blocks can be formatted or unformatted.

First let's have a look at when space is allocated:

– when you create a table, at least one extent (a set of contiguous blocks) is allocated to the table
– if you have specified MINEXTENTS, that number of extents
will be allocated immediately to the table
– if you have not specified MINEXTENTS, then exactly one extent
will be allocated.

Immediately after creation of the segment (table) the high watermark will be at the first block of the first extent as long as there are no inserts made.

When you insert rows into the table, the high watermark is bumped up step by step.
This is done by the server process that performs the inserts.

Now let us take a look at when space is released again from a segment like a table or index:

Let's assume that we have filled a table with 100,000 rows,
and that we deleted 50,000 rows afterwards.
In this case the high watermark will have reached the 100,000-row level and stayed there, which means that we now have empty blocks below the high watermark.
Oracle has a good reason for this: it may well happen that you delete rows and immediately afterwards insert rows into the same table. In that case it is good that the space was not released by the deletes, because it would have to be reallocated for the following inserts, which would mean permanent changes to the data dictionary
(=> dba_free_space, dba_extents, dba_segments …).
Furthermore, the physical addresses of the deleted rows can be recycled by new rows.

These empty blocks below the high watermark can become annoying in a number of situations, because they are not used by direct loads and direct path loads:

1. serial direct load:
INSERT /*+ APPEND */
INTO hr.employees
SELECT *
FROM oe.emps;
(Note: NOLOGGING is a table attribute, not an INSERT keyword; for an unlogged load, issue ALTER TABLE hr.employees NOLOGGING before the insert.)

2. parallel direct load:
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(employees, 2) */
INTO hr.employees
SELECT *
FROM oe.emps;

3. direct path load with SQL*Loader:
sqlldr hr/hr control=lcaselutz.ctl … direct=y (the default is direct=n)
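The control file lcaselutz.ctl itself is not shown above; a minimal control file for such a load (file and column names are made up) might look like:

LOAD DATA
INFILE 'employees.dat'
APPEND
INTO TABLE hr.employees
FIELDS TERMINATED BY ','
(employee_id, first_name, last_name)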

All of the above actions cause the inserts to bypass the buffer cache in the SGA and use the PGA instead:
temporary segments are filled and dumped into newly formatted blocks above the high watermark.

So we might want to bring the high watermark down before we load data into the table, in order to use the free empty blocks for the loading.

So how can we release unused space from a table?

There are a number of options which were already available before Oracle 10g:
– We could always export and import the segment.
After an import the table will have only one extent,
the rows will have new physical addresses, and
the high watermark will be adjusted.
– Another option is to TRUNCATE the table.
With this we would lose all rows in the table,
so it cannot be used if we want to keep existing records.

With Oracle 9i another possibility was implemented:
ALTER TABLE emp MOVE TABLESPACE users;
This statement also causes
– the rows to get new physical addresses, and
– the high watermark to be adjusted.
But for this:
– we need a full (exclusive) table lock, and
– the indexes will be left in status UNUSABLE (because they contain the old rowids) and must be rebuilt.

Starting with Oracle 10gR1 we can use a new feature for adjusting the high watermark.
It is called segment shrinking and is only possible for segments which use ASSM, in other words, which are located in tablespaces using Automatic Segment Space Management.
In such a tablespace a table does not really have a single high watermark!
It uses two watermarks instead:
– the high high watermark, referred to as HHWM, above which all blocks are unformatted;
– the low high watermark, referred to as LHWM, below which all blocks are formatted.
We can now have unformatted blocks in the middle of a segment!

ASSM was introduced in Oracle 9iR2 and it was made the default for tablespaces in Oracle 10gR2.
With the table shrinking feature we can get Oracle
to move rows which are located in the middle or at the end of a segment
further down towards the beginning of the segment, and by
this make the segment more compact.
For this we must first allow Oracle to change the ROWIDs of these rows by issuing:
ALTER TABLE emp ENABLE ROW MOVEMENT;
ROWIDs are normally assigned to a row at insert time, for the lifetime of the row.

After we have given Oracle permission to change the ROWIDs,
we can issue the shrink statement:
ALTER TABLE emp SHRINK SPACE;

This statement proceeds in two steps:
– The first step makes the segment compact
by moving rows down into free blocks at the beginning of the segment.
– The second step adjusts the high watermark. For this Oracle needs an exclusive table lock,
but only for a very short moment.

Table shrinking…
– adjusts the high watermark
– can be done online
– causes only row locks during the operation and just a very short full table lock at the end of the operation
– maintains the indexes, which remain usable
– can be done in one go
– can be done in two steps
(this can be useful if you cannot get a full table lock during certain hours:
you perform only the first step and adjust the high watermark later,
when it is more convenient)

The statement variants are:

– ALTER TABLE emp SHRINK SPACE; – shrinks only the emp table
– ALTER TABLE emp SHRINK SPACE CASCADE; – shrinks all dependent objects as well
– ALTER TABLE emp SHRINK SPACE COMPACT; – performs only the first step (moves the rows)

A complete sequence is sketched below.
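As a minimal sketch of that sequence (emp stands in for any shrink-capable heap table):

-- allow Oracle to change the ROWIDs of moved rows
ALTER TABLE emp ENABLE ROW MOVEMENT;

-- step 1, online, row locks only: compact the segment
ALTER TABLE emp SHRINK SPACE COMPACT;

-- step 2, later, very short exclusive lock: bring the high watermark down
ALTER TABLE emp SHRINK SPACE;

-- optional tidy-up
ALTER TABLE emp DISABLE ROW MOVEMENT;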

How are the indexes maintained?
In the first phase Oracle scans the segment from the back to find the position of the last row.
It then scans the segment from the beginning to find the position of the first free slot in a block in this segment. If the two positions are the same, there is nothing to shrink. If the two positions differ, Oracle deletes the row from the back and inserts it into the free position at the front of the segment. Oracle then scans the segment from the back and the front again and again, until the two positions meet.
Since the rows are moved with ordinary DML statements, the indexes are maintained at the same time. Only row-level locks are used for these operations during this first phase of the SHRINK TABLE statement.

The following restrictions apply to table shrinking:

1.) It is only possible in tablespaces with ASSM.
2.) You cannot shrink:
– UNDO segments
– temporary segments
– clustered tables
– tables with a column of datatype LONG
– LOB indexes
– IOT mapping tables and IOT overflow segments
– tables with ON COMMIT materialized views
– tables with materialized views based on ROWIDs

Oracle 10g comes with a Segment Advisor utility, and Enterprise Manager Database Control even has a wizard which can search for shrink candidates.

This advisor is run automatically by an autotask job on a regular basis in the default maintenance window.

You can also work with the advisor manually: a Segment Advisor task is run through DBMS_ADVISOR, and the built-in package DBMS_SPACE lets you read its recommendations back.
I will blog about this in detail later on some time.
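In the meantime, here is a sketch of reading back what the automatic Segment Advisor runs have found, using DBMS_SPACE.ASA_RECOMMENDATIONS (a pipelined function, hence the TABLE() wrapper):

SELECT tablespace_name, segment_name, segment_type,
       allocated_space, used_space, reclaimable_space, recommendations
FROM TABLE(dbms_space.asa_recommendations());

Segments with a large reclaimable_space value are the natural shrink candidates.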

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Related Post:

https://samadhandba.wordpress.com/2011/01/20/table-fragmentation-how-to-avoid-same/


In case of index scan a row is retrieved by traversing the index, using the indexed column values specified by the statement. An index scan retrieves data from an index based on the value of one or more columns in the index. To perform an index scan, Oracle searches the index for the indexed column values accessed by the statement. If the statement accesses only columns of the index, then Oracle reads the indexed column values directly from the index, rather than from the table.

The index contains not only the indexed value, but also the rowids of rows in the table having that value. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle can find the rows in the table by using either a table access by rowid or a cluster scan.

An index scan can be one of the following types:

1)Index Unique Scans
2)Index Range Scans
3)Index Range Scans Descending
4)Index Skip Scans
5)Full Scans
6)Fast Full Index Scans
7)Index Joins
8)Bitmap Indexes


1)Index Unique Scans
———————————————-

This scan returns, at most, a single rowid. Oracle performs a unique scan if a statement contains a UNIQUE or a PRIMARY KEY constraint that guarantees that only a single row is accessed.

This access path is used when all columns of a unique (B-tree) index, or an index created as a result of a primary key constraint, are specified with equality conditions. There is an example later in this section.

2)Index Range Scans
—————————————————-

•An index range scan is a common operation for accessing selective data.

•Data is returned in the ascending order of index columns. Multiple rows with identical values are sorted in ascending order by rowid.

•If data must be returned in sorted order, use the ORDER BY clause rather than relying on an index; if an index can satisfy the ORDER BY clause, the optimizer uses this option and avoids a sort.

The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions such as col1 = 1, col1 < 1, col1 > 1, or (col1 = 1 AND col2 = 99 AND …).

•Range scans can use unique or non-unique indexes. Range scans avoid sorting when the index columns constitute the ORDER BY/GROUP BY clause (a sketch follows the wildcard example below).

•The hint INDEX(table_alias index_name) instructs the optimizer to use a specific index.

•Note that a leading wildcard such as %text does not result in a range scan, but text% may.
Look at the following examples: '88%' used a range scan, but '%88' did not.

SQL> create table table_a(n number ,k varchar2(15));

Table created.

SQL> begin
  2  for i in 1 .. 10000
  3  loop
  4  insert into table_a values(i, 'pc-'||round(dbms_random.value(1,20000),0));
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> create index table_a_I_K on table_a(k);

Index created.

SQL> select * from table_a where k like '88%';

no rows selected

Execution Plan
———————————————————-
Plan hash value: 1124802227

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 22 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_A | 1 | 22 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_A_I_K | 1 | | 1 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '88%')
filter("K" LIKE '88%')

Note
—–
– dynamic sampling used for this statement

SQL> select * from table_a where k like '%88';

102 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1923776651

—————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————–
| 0 | SELECT STATEMENT | | 102 | 2244 | 8 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TABLE_A | 102 | 2244 | 8 (0)| 00:00:01 |
—————————————————————————–

Predicate Information (identified by operation id):
—————————————————

1 - filter("K" LIKE '%88')

Note
—–
– dynamic sampling used for this statement
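As a side note to the range scan bullets above, here is a hedged sketch (reusing table_a and its index from the example) of an ORDER BY being satisfied by the index:

-- the ORDER BY matches the indexed column, so the range scan already
-- returns the rows in order and the plan needs no SORT ORDER BY step
SELECT k
FROM table_a
WHERE k LIKE 'pc-1%'
ORDER BY k;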

3)Index Range Scans Descending
——————————————————

•An index range scan descending is identical to an index range scan, except that the data is returned in descending order.

•Indexes, by default, are stored in ascending order.

•The optimizer uses index range scan descending when an order by descending clause can be satisfied by an index.

•The hint INDEX_DESC(table_alias index_name) is used for index range scan descending.

Example:
————————————-
SQL> select * from table_a where k like '8888%';

8 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1124802227

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_A | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_A_I_K | 1 | | 2 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '8888%')
filter("K" LIKE '8888%')

SQL> select /*+ index_desc(table_a) */ * from table_a where k like '8888%';

8 rows selected.

Execution Plan
———————————————————-
Plan hash value: 3364135956

——————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | TABLE_A | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN DESCENDING| TABLE_A_I_K | 1 | | 2 (0)| 00:00:01 |
——————————————————————————————–

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '8888%')
filter("K" LIKE '8888%' AND "K" LIKE '8888%')

4)Index Skip Scans, 5)Full Scans, 6)Fast Full Index Scans, 7)Index Joins and 8)Bitmap Indexes
———————————————-
These scan types are discussed in a separate post; see the Related Posts list at the end of this article.
Now, to illustrate the index unique scan example promised earlier, create a table and make one of its columns the primary key. Then put the indexed column in the WHERE clause with an equality operator, and note that an index unique scan is used.


SQL> create table test_tab2 as select level col1, level col2 from dual connect by level<=100;

Table created.
Case 1: No index, so a full table scan will be performed.
——————————————————————————–
SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 700767796

——————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TEST_TAB2 | 1 | 26 | 3 (0)| 00:00:01 |
——————————————————————————-

Predicate Information (identified by operation id):
—————————————————

1 - filter("COL1"=99)

Note
—–
– dynamic sampling used for this statement

Create a non-unique index on the table.

SQL> create index test_tab2_I on test_tab2(col1);
Index created.

Case 2: Since there is a non-unique index on col1, an index range scan will be performed.
————————————————————————————————-
SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 465564947

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_TAB2 | 1 | 26 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_TAB2_I | 1 | | 1 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("COL1"=99)

Note
—–
– dynamic sampling used for this statement

Now drop the index and add a primary key to the table.
SQL> drop index test_tab2_I;
Index dropped.

SQL> alter table
2 test_tab2 add primary key(col1);

Table altered.

Case 3: With a primary key and an equality predicate on the column, an index unique scan is used.
——————————————————————————

SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 1384425796

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_TAB2 | 1 | 26 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | SYS_C006487 | 1 | | 0 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("COL1"=99)

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Related Post:

https://samadhandba.wordpress.com/2011/01/04/oversize-of-datatype-varchar2-causes-performance-issue/

https://samadhandba.wordpress.com/2011/02/16/what-and-when-index-scans-is-used/

https://samadhandba.wordpress.com/2011/02/16/index-skip-full-fast-full-index-index-joins-bitmap-indexes-scan/

https://samadhandba.wordpress.com/2011/02/14/sample-table-scans-in-oracle/

https://samadhandba.wordpress.com/2011/02/14/when-you-would-make-index-and-when-not/

https://samadhandba.wordpress.com/2011/02/14/optimize-data-access-path-in-oracle-2/

https://samadhandba.wordpress.com/2011/02/13/troubleshoot-unusable-index-in-oracle/

https://samadhandba.wordpress.com/2011/02/13/the-possible-causes-for-excessive-undo-generation-2/

https://samadhandba.wordpress.com/2011/02/13/three-basic-steps-of-sql-tuning/

https://samadhandba.wordpress.com/2011/02/13/goals-for-tuning-2/

https://samadhandba.wordpress.com/2011/02/04/ora-12054-cannot-set-the-on-commit-refresh-attribute-for-the-materialized-view/

