
Today I faced this issue after an Oracle 11g upgrade, and I am posting it for you as it is.

Problem:

The local_listener parameter has been set, the listener is running, but when attempting to start the instance an ORA-00119 is reported:

SQL*Plus: Release 11.2.0.2.0 Production on Fri Sep 28 11:34:29 2012

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_SID'

Reason :

By default, Oracle only checks for listeners running on the default port (1521); it would have to spend all day trying every possible port number otherwise. When LOCAL_LISTENER points at a network alias, you'll need to give Oracle some help to resolve that alias and find your listener.

Solution:

Simply add an entry to the server's tnsnames.ora pointing at the listener, as shown below.

LISTENER_SID.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (COMMUNITY = SAP.WORLD)(PROTOCOL = TCP)(HOST = sidb00)(PORT = 1527))
    )
  )

Then verify that the alias resolves with tnsping:

tnsping LISTENER_SID.WORLD

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.2.0.2.0 – Production on 28-SEP-2012 13:13:36

Copyright (c) 1997, 2010, Oracle. All rights reserved.

Used parameter files:
/oracle/SID/112_64/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (SDU = 2768) (ADDRESS_LIST = (ADDRESS = (COMMUNITY = SAP.WORLD) (PROTOCOL = TCP) (HOST = sidb00) (PORT = 1527))) (CONNECT_DATA = (SID = SID) (GLOBAL_NAME = SID.WORLD)))
OK (20 msec)

Now it is working. I also want to add one more point:

Make sure the parameter *.local_listener='LISTENER_SID' is present in the pfile exactly as mentioned.
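For reference, this is the shape of the pfile entry and, if you run from an spfile instead, the equivalent dynamic change (a minimal sketch assuming the same alias name as above):

*.local_listener='LISTENER_SID'

-- or, when using an spfile:
SQL> alter system set local_listener='LISTENER_SID' scope=both;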

I hope this article helped you. I am expecting your suggestions/feedback.
It will help motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Note: One of our visitors and my friend Kavita Yadav asked this question by posting a comment. Thanks Kavita for your contribution. Keep visiting/commenting!

There are over 800 wait events in Oracle, but you will frequently come across only a few of them. Having worked on performance tuning for more than 4 years, I keep seeing the same handful again and again. In this post I try to cover the most popular of them.
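If you want to see which events actually dominate your own system, a simple starting point is V$SYSTEM_EVENT (a minimal sketch; the WAIT_CLASS column exists from 10g onwards):

select event, total_waits, time_waited
from   v$system_event
where  wait_class <> 'Idle'
order  by time_waited desc;

The top few rows of this query are usually the events worth tuning first.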

db file sequential read

Possible Causes :
· Use of an unselective index 
· Fragmented Indexes
· High I/O on a particular disk or mount point
· Bad application design 
· Index read performance can be affected by a slow I/O subsystem and/or poor database file layout, which result in a higher average wait time

Actions :
· Check indexes on the table to ensure that the right index is being used

· Check the column order of the index  with the WHERE clause of the Top SQL statements

· Rebuild indexes with a high clustering factor

· Use partitioning to reduce the amount of blocks being visited

· Make sure optimizer statistics are up to date

· Relocate ‘hot’ datafiles

· Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool

· Inspect the execution plans of the SQL statements that access data through indexes

· Is it appropriate for the SQL statements to access data through index lookups?

· Would full table scans be more efficient?

· Do the statements use the right driving  table?

· The optimization goal is to minimize  both the number of logical and physical I/Os.

Remarks:
· The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
· Significant db file sequential read wait time is most likely an application issue.
· If the DBA_INDEXES.CLUSTERING_FACTOR of the index approaches the number of blocks in the table, then most of the rows in the table are ordered. This is desirable.

· However, if the clustering factor approaches the number of rows in the table, it means the rows in the table are randomly ordered and thus it requires more I/Os to complete the operation. You can improve the index’s clustering factor by rebuilding the table so that rows are ordered according to the index key and rebuilding the index thereafter.

· The OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING initialization parameters can influence the optimizer to favour the nested loops operation and choose an index access path over a full table scan.
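To compare an index's clustering factor with the table's blocks and rows, as described in the remarks above, a query like the following can be used (a sketch; MY_TABLE is a placeholder for your own table):

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
from   dba_indexes i, dba_tables t
where  i.table_owner = t.owner
and    i.table_name  = t.table_name
and    t.table_name  = 'MY_TABLE';

A clustering factor close to BLOCKS is good; a value close to NUM_ROWS means the rows are scattered with respect to the index key.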

db file scattered read

Possible Causes :
· The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT)  to be  read into the SGA from disk.
· Full Table scans

· Fast Full Index Scans

Actions :
· Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT

· Partition pruning to reduce number of blocks visited

· Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool
· Optimize the SQL statement that initiated most of the waits. The goal is to minimize the number of physical
and logical reads.
· Should the statement access the data by a full table scan or index FFS? Would an index range or unique scan
 be more efficient? Does the query use the right driving table?
· Are the SQL predicates appropriate for hash or merge join?
· If full scans are appropriate, can  parallel query improve the response time?
· The objective is to reduce the demands for both the logical and physical I/Os, and this is best
achieved through SQL and application tuning.
· Make sure all statistics are representative of the actual data. Check the LAST_ANALYZED date

Remarks:
· If an application that has been running fine for a while suddenly clocks a lot of time on the db file scattered read event and there hasn’t been a code change, you might want to check to see if one or more indexes has been dropped or become unusable.
· Or whether the statistics have become stale.
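A quick way to check both possibilities (a sketch; MY_TABLE is a placeholder):

select index_name, status, last_analyzed
from   dba_indexes
where  table_name = 'MY_TABLE';

select table_name, last_analyzed, stale_stats
from   dba_tab_statistics
where  table_name = 'MY_TABLE';

STATUS = 'UNUSABLE' points to a broken index; STALE_STATS = 'YES' (10g onwards) points to stale statistics.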

log file parallel write

Possible Causes :
· LGWR waits while writing contents of the redo log buffer cache to the online log files on disk
· I/O wait on sub system holding the online  redo log files

Actions :
· Reduce the amount of redo being generated

· Do not leave tablespaces in hot backup mode for longer than necessary

· Do not use RAID 5 for redo log files

· Use faster disks for redo log files

· Ensure that the disks holding the archived redo log files and the online redo log files are separate so as to avoid contention

· Consider using NOLOGGING or UNRECOVERABLE options in SQL statements

log file sync:


Possible Causes :
· Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete
Actions :
· Tune LGWR to get good throughput to  disk eg: Do not put redo logs on  RAID5

· Reduce overall number of commits by batching transactions so that there are fewer distinct COMMIT operations

buffer busy waits:

Possible Causes :
· Buffer busy waits are common in an I/O-bound Oracle system.
· The two main cases where this can occur are:
· Another session is reading the block into the buffer
· Another session holds the buffer in an incompatible mode to our request
· These waits indicate read/read, read/write, or write/write contention.
· The Oracle session is waiting to pin a buffer. A buffer must be pinned before it can be read or modified. Only one process can pin a buffer at any one time.

· This wait can be intensified by a large block  size as more rows can be contained within the block

· This wait happens when a session wants to access a database block in the buffer cache but it cannot, as the buffer is “busy”

· It is also often due to several processes repeatedly reading the same blocks (eg: lots of sessions scanning the same index or data block)

Actions :
· The main way to reduce buffer busy waits is to reduce the total I/O on the system

· Depending on the block type, the actions will differ

Data Blocks

· Eliminate HOT blocks from the application. Check for repeatedly scanned / unselective indexes.

· Try rebuilding the object with a higher PCTFREE so that you reduce the number of rows per block.

· Check for ‘right-hand indexes’ (indexes that get inserted into at the same point by many processes).

· Increase INITRANS and MAXTRANS and reduce PCTUSED. This will make the table less dense.

· Reduce the number of rows per block

Segment Header

· Increase the number of FREELISTs and FREELIST GROUPs

Undo Header

· Increase the number of Rollback Segments
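To see which block class is actually behind the buffer busy waits (data block, segment header, undo header, and so on), V$WAITSTAT breaks the waits down by class:

select * from v$waitstat order by time desc;

The class with the highest TIME tells you which of the actions above applies.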

free buffer waits:

Possible Causes :
· This means we are waiting for a free buffer but there are none available in the cache because there are too many dirty buffers in  the cache

· Either the buffer cache is too small or the DBWR is slow in writing modified buffers to disk

· DBWR is unable to keep up with the write requests

· Checkpoints happening too fast – maybe due  to high database activity and under-sized  online redo log files

· Large sorts and full table scans are filling the cache with modified blocks faster than the  DBWR is able to write to disk
· If the  number of dirty buffers that need to be  written to disk is larger than the number that DBWR can write per batch, then these waits  can be observed

Actions :
Reduce checkpoint frequency  – increase the size of the online redo log files

Examine the size of the buffer cache – consider increasing the size of the buffer cache in the SGA

Set disk_asynch_io = true

If not using asynchronous I/O increase the number of db writer processes or dbwr slaves

Ensure hot spots do not exist by spreading datafiles over disks and disk controllers

Pre-sorting or reorganizing data can help

enqueue waits

Possible Causes :
· This wait event indicates a wait for a lock  that is held by another session (or sessions) in an incompatible mode to the requested mode.

 TX Transaction Lock

· Generally due to table or application set up issues

· This indicates contention for row-level lock. This wait occurs when a transaction tries to update or delete rows that are currently
 locked by another transaction.

· This usually is an application issue.

TM DML enqueue lock

· Generally due to application issues, particularly if foreign key constraints have not been indexed.

ST lock

· Database actions that modify the UET$ (used extent) and FET$ (free extent) tables require the ST lock, which includes actions such as drop, truncate, and coalesce.

· Contention for the ST lock indicates there are multiple sessions actively performing dynamic disk space allocation or deallocation in dictionary managed tablespaces

Actions :
· Reduce waits and wait times

· The action to take depends on the lock  type which is causing the most problems

· Whenever you see an enqueue wait event for the TX enqueue, the first step is to find out who the blocker is and if there are multiple waiters for the same resource

· Waits for TM enqueue in Mode 3 are primarily due to unindexed foreign key columns.

· Create indexes on foreign keys (especially important on releases < 10g)

· Following are some of the things you can do to minimize ST lock contention in your database:

· Use locally managed tablespaces
· Recreate all temporary tablespaces  using the CREATE TEMPORARY TABLESPACE TEMPFILE… command.
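As mentioned above, the first step for TX waits is to find the blocker. From 10g onwards the BLOCKING_SESSION column of V$SESSION makes this a one-liner (a minimal sketch):

select sid, serial#, blocking_session, event, seconds_in_wait
from   v$session
where  blocking_session is not null;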

Cache buffers lru chain latch

Possible Causes :
· Processes need to get this latch when they  need to move buffers based on the LRU block replacement policy in the buffer cache
· The cache buffers lru chain latch is acquired in order to introduce a new block into the buffer cache and when writing a buffer back to disk, specifically when trying to scan the LRU (least recently used) chain containing all the dirty blocks in the buffer cache.

· Competition for the cache buffers lru chain latch is symptomatic of intense buffer cache activity caused by inefficient SQL statements. Statements that repeatedly scan large unselective indexes or perform full table scans are the prime culprits.

· Heavy contention for this latch is generally due to heavy buffer cache activity which can be caused, for example, by repeatedly scanning large unselective indexes.

Actions :
Contention on this latch can be avoided by implementing multiple buffer pools or by increasing the number of LRU latches with the parameter DB_BLOCK_LRU_LATCHES (the default value is generally sufficient for most systems).

It is possible to reduce contention for the cache buffers lru chain latch by increasing the size of the buffer cache and thereby reducing the rate at which new blocks are introduced into the buffer cache.

 Direct Path Reads

Possible Causes :
· These waits are associated with direct read operations which read data directly into the session's PGA, bypassing the SGA

· The “direct path read” and “direct path write” wait events are related to operations that are performed in the PGA, like sorting, GROUP BY operations, and hash joins

· In DSS-type systems, or during heavy batch periods, waits on “direct path read” are quite normal. However, for an OLTP system these waits are significant
· These wait events can occur during sorting operations, which is not surprising as direct path reads and writes usually occur in connection with temporary segments
· SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA
Actions :
Ensure the OS asynchronous IO is configured correctly.
Check for IO heavy sessions / SQL and see if the amount of IO can be reduced.
Ensure no disks are IO bound.
Set your PGA_AGGREGATE_TARGET to an appropriate value (if the parameter WORKAREA_SIZE_POLICY = AUTO), or set the *_area_size parameters manually (like SORT_AREA_SIZE, in which case you have to set WORKAREA_SIZE_POLICY = MANUAL).
Whenever possible use UNION ALL instead of UNION, and where applicable use HASH JOIN instead of SORT MERGE and NESTED LOOPS instead of HASH JOIN.
 Make sure the optimizer selects the right driving table. Check to see if the composite index’s columns can be rearranged to match the ORDER BY clause to avoid sort entirely.

Also, consider automating the SQL work areas using PGA_AGGREGATE_TARGET in Oracle9i Database.

Query V$SESSTAT to identify sessions with high “physical reads direct” values, as sketched below.
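A sketch of that query, joining V$SESSTAT to V$STATNAME for the statistic name:

select s.sid, s.value
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'physical reads direct'
order  by s.value desc;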

Remark:
· Default size of HASH_AREA_SIZE  is twice that of SORT_AREA_SIZE

· Larger HASH_AREA_SIZE will influence optimizer to go for hash joins instead of nested loops

· The hidden parameter _DB_FILE_DIRECT_IO_COUNT can impact direct path read performance. It sets the maximum I/O buffer size of direct read and write operations. The default is 1M in 9i.

Direct Path Writes:

Possible Causes :
· These are waits that are associated with direct write operations that write data from users’ PGAs to data files or temporary  tablespaces
· Direct load operations (eg: Create Table As Select (CTAS)) may use this
· Parallel DML operations
· Sort IO (when a sort does not fit in memory)

Actions :
If the file indicates a temporary tablespace, check for unexpected disk sort operations.
Ensure DISK_ASYNCH_IO is TRUE. This is unlikely to reduce wait times from the wait event timings but may reduce sessions' elapsed times (as synchronous direct IO is not accounted for in wait event timings).
Ensure the OS asynchronous IO is configured correctly.
Ensure no disks are IO bound.

Latch Free Waits
Possible Causes :
· This wait indicates that the process is waiting for a latch that is currently busy  (held by another process).
· When you see a latch free wait event in the V$SESSION_WAIT view, it means the process failed to obtain the latch in the
willing-to-wait mode after spinning  _SPIN_COUNT times and went to sleep. When processes compete heavily for  latches, they will also consume more CPU resources because of spinning. The result is a higher response time

Actions :
· If the TIME spent waiting for latches is significant then it is best to determine which latches are suffering from contention.
Remark:
· A latch is a kind of low level lock. Latches apply only to memory structures in the SGA. They do not apply to database objects. An Oracle SGA has many latches, and they exist to protect various memory structures from potential corruption  by concurrent access.

· The time spent on latch waits is an effect, not a cause; the cause is that you are doing too many block gets, and block gets require cache buffer chain latching
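To determine which latches are suffering from contention, compare misses and sleeps per latch in V$LATCH (a minimal sketch):

select name, gets, misses, sleeps
from   v$latch
where  sleeps > 0
order  by sleeps desc;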

 Library cache latch

Possible Causes :
· The library cache latches protect the  cached SQL statements and objects definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache.

· The application is making heavy use of literal SQL – use of bind variables will reduce this latch contention considerably

Actions :
· This latch ensures that the application reuses SQL statement representations as much as possible. Use bind variables wherever possible in the application.

· You can reduce the library cache latch hold time by properly setting the SESSION_CACHED_CURSORS parameter.
· Consider increasing shared pool.
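As a quick illustration of the bind variable point (emp/empno are just the classic demo table and column):

-- literal SQL: every value produces a distinct statement and its own parse
select * from emp where empno = 7369;
select * from emp where empno = 7499;

-- bind variable: one shared cursor, reused for every value
select * from emp where empno = :empno;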
Remark:
· Larger shared pools tend to have  long free lists and processes that need to allocate space in them must  spend extra time scanning the long free lists while holding the shared pool latch

· If your database is not yet on Oracle9i Database, an oversized shared pool can increase the contention for the shared pool latch.
Shared pool latch

Possible Causes :
The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool

Contentions for the shared pool and library cache latches are mainly due to intense hard  parsing. A hard parse applies to new cursors and cursors that are aged out and must be re-executed

The cost of parsing a new SQL statement is expensive both in terms of CPU requirements and the number of times  the library cache and shared pool latches  may need to be acquired and released.

Actions :
· Ways to reduce shared pool latch contention are to avoid hard parses when possible: parse once, execute many.

· Eliminating literal SQL is also useful to avoid the shared pool latch. The size  of the shared_pool and use of MTS  (shared server option) also greatly  influences the shared pool latch.
· The workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ in literal values but are otherwise identical to share a cursor, and therefore reduces latch contention, memory usage, and hard parsing.
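A sketch of applying that workaround dynamically (choose the scope appropriate for your environment):

SQL> alter system set cursor_sharing = FORCE scope=both;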

Row cache objects latch

Possible Causes :
This latch comes into play when user processes are attempting to  access the cached data dictionary values.

Actions :
· It is not common to have contention in this latch and the only way to reduce contention for this latch is by increasing the size of the shared pool (SHARED_POOL_SIZE).

· Use Locally Managed tablespaces for your application objects especially indexes 

· Review and amend your database logical design; a good example is to merge or decrease the number of indexes on tables with heavy inserts
Remark:
· Configuring the library cache to an acceptable size usually ensures that the data  dictionary cache is also properly sized. So tuning Library Cache will tune Row Cache indirectly.

I hope this article helped you. I am expecting your suggestions/feedback.
It will help motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”

Note: One of our visitors, Lokesh Suryawanshi, asked this question by posting a comment. Thanks Lokesh for your contribution. As promised, I did the patch upgrade yesterday and here are the details. Keep visiting/commenting!

Required software and installables are available in the “*/*/Patch” directory.

$HOME = /oracle/SAM

$ORACLE_HOME = /oracle/SAM/102_64

1. 10.2.0.4 Patchset (patchset file is already extracted)

2. OPatch version 10.2.0.5.0

3. MOPatch version 2.1.7

Note:- All required software is kept in the $HOME/Patch directory; extract the files in the following order:

1) Mopatch-2-1-7.zip in the $ORACLE_HOME directory. Note:- unzip after the 10.2.0.4 patchset installation.

2) OPatch10205_Generic_v0.zip in the $ORACLE_HOME directory. Note:- unzip after the 10.2.0.4 patchset installation.

3) Patchset_10204.zip in the current directory, i.e. in the patch path.

Step 1: Check upgrade prerequisites

1.1 Stop SAP. (No need while applying on BCV)

1.2 Check the oratab and oraInventory locations at “/var/opt/oracle” and the entry “SAM:/oracle/SAM/102_64:N” in the oratab file. If it is missing, add the entry to the oratab file.

1.3 To check invalid objects and tablespace prerequisites, run:

Sql> @?/rdbms/admin/utlu102i.sql (the ? sign indicates the ORACLE_HOME path)

Important Note:- After running the prerequisites script you will see that some public objects are invalid in the SYS schema and other schemas (SAPSR3). Here we are only concerned about the SYS objects, which are known and need to be compiled. The next step is to run the utlrp script to compile the invalid objects.

1.4 Run utlrp.sql to compile invalid objects.

Purpose:- To ensure there are no invalid objects in the SYS schema; otherwise the patch upgrade will probably fail. Run the script below:

 Sql>@?/rdbms/admin/utlrp.sql

Result:- In most cases the result should report 0 invalid objects in the SYS schema. IF NOT, ABORT OR STOP THE INSTALLATION PROCESS.
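You can verify the count yourself before proceeding:

Sql> select count(*) from dba_objects where owner='SYS' and status='INVALID';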

Run the prerequisites script again:

Sql>@?/rdbms/admin/utlu102i.sql (? Sign indicates ORACLE_HOME path)

Note:- No objects should now be invalid in the SYS and SAPSR3 schemas, and the DB is ready for the upgrade. The next step is to shut down the database and related components.

Step 2: Shut down the database, listener, OEM and all components related to Oracle

2.1 Gracefully stop the database instance and listener process (don't abort or kill Oracle processes)

2.2 OEM does not apply when you test on the BCV server, as it is not configured there, but in production you have to stop it gracefully.

 Sql>shutdown immediate;

Log off from sql prompt

Orap22>lsnrctl stop

Check the Oracle processes by executing the command: ps -ef | grep -i ora_

Check the listener process by executing the command: ps -ef | grep -i LIS

Step 3: Run OPatch

3.1 Run opatch lsinventory to confirm the oraInventory location:

Orap22>/oracle/SAM/102_64/OPatch/opatch lsinventory

Step 4: Start the 10.2.0.4 patch binary upgrade process.

Go to the Patch directory in the Disk1 directory.

4.1 Start XBrowser

4.2 Set the DISPLAY environment variable: export DISPLAY=<your desktop/laptop IP>:0.0

4.3 Run /oracle/SAM/Patch/Disk1/runInstaller

4.4 Verify ORACLE_HOME

Purpose:- To validate the correct ORACLE_HOME where the 10.2.0.4 binaries are going to be installed.

Workaround:- If the correct ORACLE_HOME is not shown, abort and verify the oraInventory location.

4.5 Installer summary

4.6 Run sh /oracle/SAM/102_64/root.sh from another session as the root user:

a) Go to ORACLE_HOME and run the command from the $ prompt: sh root.sh

Step 5: Run MOPatch for CPU patches (SBP)

5.1) Unzip Mopatch-2-1-7.zip into the $ORACLE_HOME directory from the patch location:

Orap22>unzip /oracle/SAM/Patch/Mopatch-2-1-7.zip -d /oracle/SAM/102_64

5.2) Unzip OPatch10205_Generic_v0.zip into the $ORACLE_HOME directory from the patch location:

Orap22>unzip /oracle/SAM/Patch/OPatch10205_Generic_v0.zip -d /oracle/SAM/102_64

Note:- Run opatch version to confirm OPatch version 10.2.0.5.0, which is required to run the SAP-supplied bundle patch.

CAUTION:- Before running MOPatch, ensure the OPatch version is 10.2.0.5.0. Otherwise MOPatch will run but any patches that require OUI version 10.2.0.5.0 will fail.

5.3) Run MOPatch to apply the SAP bundle patch:

Orap22>sh /oracle/SAM/102_64/MOPatch/mopatch.sh -v -s SAP_102048_201105_SOL64.zip

Afterwards, verify with opatch lsinventory to confirm the inventory of patches we just applied. The next step is to upgrade the database.

Step 6: Upgrade the database

6.1 Start the database in upgrade mode:

Sql>startup upgrade

Check that the Oracle version is 10.2.0.4 after logging in to the SQL prompt.

6.2 Run catupgrd.sql

Execute SQL from sql command prompt

Sql>spool catupgrd.log

Sql> @?/rdbms/admin/catupgrd.sql

Step 7: Shut down the database and restart it in normal mode.

Sql>shutdown immediate;

Sql>startup

Step 8: Run utlrp to recompile any objects invalidated during the upgrade:

Sql> @?/rdbms/admin/utlrp.sql

Step 9: Shut down and restart the database so the new parameters take effect.

Step 10: Start the listener and OEM, test, and release to users.

The coming post will be on Oracle's most popular wait events and their solutions; till then, enjoy the Oracle journey….. 🙂

I hope this article helped you. I am expecting your suggestions/feedback.
It will help motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”

Note: One of our visitors and my friend Kavita Yadav asked this question by posting a comment. Thanks Kavita for your contribution. Keep visiting/commenting!

Performance of the SQL queries of an application often plays a big role in the overall performance of the underlying application. The response time may at times be really irritating for the end users if the application doesn't have fine-tuned SQL queries. SQL statements are used to retrieve data from the database, and we can get the same results by writing different SQL queries; but use of the best query is important when performance is considered, so you need to tune SQL queries based on the requirement. Here is a list of queries which we use regularly, and how these SQL queries can be optimized for better performance.


There are several ways of tuning SQL statements, a few of which are:

  • Understanding of the Data, Business, and Application – it’s almost impossible to fine-tune the SQL statements without having a proper understanding of the data managed by the application and the business handled by the application. The understanding of the application is of course of utmost importance. By knowing these things better, we may identify several instances where the data retrieval/modification by many SQL queries can simply be avoided as the same data might be available somewhere else, maybe in the session of some other integrating application, and we can simply use that data in such cases. The better understanding will help you identify the queries which could be written better either by changing the tables involved or by establishing relationships among available tables.
  • Using realistic test data – if the application is not being tested in the development/testing environments with the volume and type of data, which the application will eventually face in the production environment, then we can’t be very sure about how the SQL queries of the application will really perform in actual business scenarios. Therefore, it’s important to have the realistic data for development/testing purposes as well.
  • Using Bind Variables, Stored Procs, and Packages – Using identical SQL statements (of course wherever applicable) will greatly improve the performance as the parsing step will get eliminated in such cases. So, we should use bind variables, stored procedures, and packages wherever possible to re-use the same parsed SQL statements.
  • Using the indexes carefully – Having indexes on columns is the most common method of enhancing performance, but having too many of them may degrade the performance as well. So, it’s very critical to decide wisely which columns of a table we should create indexes on. A few common guidelines are: create indexes on the columns which are frequently used either in the WHERE clause or to join tables, avoid creating indexes on columns which are used only by functions or operators, avoid creating indexes on columns which are required to change quite frequently, etc.
  • Making available the access path – the optimizer will not use an access path that uses an index only because we have created that index. We need to explicitly make that access path available to the optimizer. We may use SQL hints to do that.
  • Using EXPLAIN PLAN – this tool can be used to fine-tune SQL queries to a great extent. EXPLAIN PLAN explains the complete access path which will be used by the particular SQL statement during execution (see the sketch after this list).
  • Optimizing the WHERE clause – there are many cases where the index access path of a column of the WHERE clause is not used even if the index on that column has already been created. Avoid such cases to make best use of the indexes, which will ultimately improve the performance. Some of these cases are: COLUMN_NAME IS NOT NULL (the ROWID for a null is not stored by an index), COLUMN_NAME NOT IN (value1, value2, value3, …), COLUMN_NAME != expression, COLUMN_NAME LIKE ‘%pattern’ (whereas COLUMN_NAME LIKE ‘pattern%’ uses the index access path), etc. Usage of expressions or functions on indexed columns will prevent the index access path from being used. So, use them wisely!
  • Using the leading index columns in WHERE clause – the WHERE clause may use the complex index access path in case we specify the leading index column(s) of a complex index otherwise the WHERE clause won’t use the indexed access path.
  • Indexed Scan vs Full Table Scan – Indexed scan is faster only if we are selecting only a few rows of a table; otherwise a full table scan should be preferred. It’s estimated that an indexed scan is slower than a full table scan if the SQL statement is selecting more than 15% of the rows of the table. So, in all such cases use the SQL hints to force a full table scan and suppress the use of pre-defined indexes. Okay… any guesses why full table scan is faster when a large percentage of rows are accessed? Because an indexed scan causes multiple reads per row accessed whereas a full table scan can read all rows contained in a block in a single logical read operation.
  • Using ORDER BY for an indexed scan – the optimizer uses the indexed scan if the column specified in the ORDER BY clause has an index defined on it. It’ll use indexed scan even if the WHERE doesn’t contain that column (or even if the WHERE clause itself is missing). So, analyze if you really want an indexed scan or a full table scan and if the latter is preferred in a particular scenario then use ‘FULL’ SQL hint to force the full table scan.
  • Minimizing table passes – it normally results in a better performance for obvious reasons.
  • Joining tables in the proper order – the order in which tables are joined normally affects the number of rows processed by that JOIN operation, and hence proper ordering of tables in a JOIN operation may result in the processing of fewer rows, which will in turn improve the performance. The key to deciding the proper order is to have the most restrictive filtering condition in the early phases of a multiple table JOIN. For example, in case we are using a master table and a details table, it’s better to connect to the master table first, as connecting to the details table first may result in more rows getting joined.
  • Using ROWID and ROWNUM wherever possible – these special columns can be used to improve the performance of many SQL queries. The ROWID search is the fastest for Oracle database and this luxury must be enjoyed wherever possible. ROWNUM comes really handy in the cases where we want to limit the number of rows returned.
  • Usage of explicit cursors is better – explicit cursors perform better as the implicit cursors result in an extra fetch operation. Implicit cursors are opened by the Oracle Server for INSERT, UPDATE, DELETE, and SELECT statements whereas the explicit cursors are opened by the writers of the query by explicitly using DECLARE, OPEN, FETCH, and CLOSE statements.
  • Reducing network traffic – Arrays and PL/SQL blocks can be used effectively to reduce the network traffic especially in the scenarios where a huge amount of data requires processing. For example, a single INSERT statement can insert thousands of rows if arrays are used. This will obviously result into fewer DB passes and it’ll in turn improve performance by reducing the network traffic. Similarly, if we can club multiple SQL statements in a single PL/SQL block then the entire block can be sent to Oracle Server involving a single network communication only, which will eventually improve performance by reducing the network traffic.
  • Using Oracle parallel query option – Since Oracle 8, even the queries based on indexed range scans can use this parallel query option if the index is partitioned. This feature can result in an improved performance in certain scenarios.
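As a small worked example of the EXPLAIN PLAN point above (student_details is the same sample table used in the examples below; DBMS_XPLAN.DISPLAY is available from 9i Release 2):

EXPLAIN PLAN FOR
SELECT id, first_name FROM student_details WHERE age > 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);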

 SQL Tuning/SQL Optimization Techniques:

  1. The SQL query becomes faster if you use the actual column names in the SELECT statement instead of ‘*’.

For Example: Write the query as

SELECT id, first_name, last_name, age, subject FROM student_details;

Instead of:

SELECT * FROM student_details;

 2. Sometimes you may have more than one subquery in your main query. Try to minimize the number of subquery blocks in your query.

 For Example: Write the query as

SELECT name
FROM employee
WHERE (salary, age) = (SELECT MAX(salary), MAX(age)
                       FROM employee_details)
AND dept = 'Electronics';

Instead of:

SELECT name
FROM employee
WHERE salary = (SELECT MAX(salary) FROM employee_details)
AND age = (SELECT MAX(age) FROM employee_details)
AND dept = 'Electronics';

 

3. Use operator EXISTS, IN and table joins appropriately in your query.
a) Usually IN has the slowest performance.
b) IN is efficient when most of the filter criteria is in the sub-query.
c) EXISTS is efficient when most of the filter criteria is in the main query.

For Example: Write the query as

Select * from product p
where EXISTS (select * from order_items o
where o.product_id = p.product_id)

Instead of:

Select * from product p
where product_id IN
(select product_id from order_items);

 

4. Be careful while using conditions in WHERE clause.
For Example: Write the query as

SELECT id, first_name, age FROM student_details WHERE age > 10;

Instead of:

SELECT id, first_name, age FROM student_details WHERE age != 10;

Write the query as

SELECT id, first_name, age
FROM student_details
WHERE first_name LIKE 'Chan%';

Instead of:

SELECT id, first_name, age
FROM student_details
WHERE SUBSTR(first_name,1,3) = 'Cha';

Write the query as

SELECT product_id, product_name
FROM product
WHERE unit_price BETWEEN 100 AND 1000;

Instead of:

SELECT product_id, product_name
FROM product
WHERE unit_price >= 100
AND unit_price <= 1000;

Write the query as

SELECT id, name, salary
FROM employee
WHERE salary < 25000;

Instead of:

SELECT id, name, salary
FROM employee
WHERE salary + 10000 < 35000;

Write the query as

SELECT id, first_name, age
FROM student_details
WHERE age > 10;

Instead of:

SELECT id, first_name, age
FROM student_details
WHERE age != 10; —- also, instead of ‘>= 5’ try to use ‘> 4’, which is one and the same thing for integer values…. 🙂

 

5. To write queries which provide efficient performance follow the general SQL standard rules.

a) Use a single case for all SQL verbs
b) Begin all SQL verbs on a new line
c) Separate all words with a single space
d) Right- or left-align verbs within the initial SQL verb

  1. Use table aliasing whenever you are using more than one table, and don't forget to prefix the column names with the alias name.
  2. Use EXISTS in place of DISTINCT (if possible).

Example:

SELECT DISTINCT d.deptno, d.dname
FROM dept d, emp e
WHERE d.deptno = e.deptno;

The following SQL statement is a better alternative:

SELECT d.deptno, d.dname
FROM dept d
WHERE EXISTS ( SELECT e.deptno
               FROM emp e
               WHERE d.deptno = e.deptno );

I hope this article helped you. I am expecting your suggestions/feedback.
It will help motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”

Dear Friends,

          Back to work after a long time; I was busy trekking. Today we did a database migration from 9i to 10g using Export/Import.

To upgrade a database using the Export/Import utilities kindly follow below mention steps.

  • Keep the database in restricted mode and export the data from the current database:
    exp  FILE=exp_20092011.dmp FULL=y GRANTS=y BUFFER=4096  ROWS=y CONSISTENT=y

 

  • Now, once the export is done, prerequisites for import:

1) Please verify that the SYSAUX tablespace is available, and also that the 10g parameters are properly set.

2) Please create the required users and tablespaces as per the requirement.

Query to check users with their respective privileges:

select
  lpad(' ', 2*level) || granted_role "User, his roles and privileges"
from
  (
  /* THE USERS */
  select null grantee, username granted_role
  from dba_users
  where username like upper('%&enter_username%')
  /* THE ROLES TO ROLES RELATIONS */
  union
  select grantee, granted_role
  from dba_role_privs
  /* THE ROLES TO PRIVILEGE RELATIONS */
  union
  select grantee, privilege
  from dba_sys_privs
  )
start with grantee is null
connect by grantee = prior granted_role;

 
Query to check tablespaces and their sizes:

set line 120
col host_name for a20;

select w.name, Q.host_name, a.tablespace_name, round(b.total_mb/1024/1024,2) total_mb,
       round(x.maxbytes_mb/1024/1024,2) maxbytes_mb,
       round(nvl(c.used_mb/1024/1024,0),2) used_mb,
       round(nvl(d.free_mb/1024/1024,0),2) free_mb,
       round(((nvl(used_mb,1)/decode(maxbytes_mb,NULL,total_mb,0,total_mb,maxbytes_mb))*100),2) used_percent
from   dba_tablespaces a,
       (select tablespace_name, bytes total_mb from sys.sm$ts_avail) b,
       (select tablespace_name, bytes used_mb from sys.sm$ts_used) c,
       (select tablespace_name, bytes free_mb from sys.sm$ts_free) d,
       v$database w, v$instance Q,
       (select tablespace_name, sum(maxbytes) maxbytes_mb from dba_data_files group by tablespace_name) x
where  a.tablespace_name=b.tablespace_name(+)
and    a.tablespace_name=c.tablespace_name(+)
and    a.tablespace_name=x.tablespace_name(+)
and    a.tablespace_name=d.tablespace_name(+)
order by 8 desc;

 

You can get the DDL of the tablespaces using the script below:

set pagesize 0
set escape on
spool tablespace.sql

select 'create tablespace ' || df.tablespace_name || chr(10)
    || ' datafile ''' || df.file_name || ''' size ' || df.bytes
    || decode(autoextensible, 'N', null, chr(10) || ' autoextend on maxsize ' || maxbytes)
    || chr(10)
    || 'default storage ( initial ' || initial_extent
    || decode(next_extent, null, null, ' next ' || next_extent)
    || ' minextents ' || min_extents
    || ' maxextents ' || decode(max_extents, '2147483645', 'unlimited', max_extents)
    || ') ;'
from dba_data_files df, dba_tablespaces t
where df.tablespace_name = t.tablespace_name;

set pagesize 100
set escape off
spool off

 

You can get the DDL of the users using the script below:

set pagesize 0
set escape on
spool users.sql

select 'create user ' || U.username || ' identified ' ||
       decode(password,
              NULL, 'EXTERNALLY',
              ' by values ' || '''' || password || '''')
    || chr(10) ||
       'default tablespace ' || default_tablespace || chr(10) ||
       'temporary tablespace ' || temporary_tablespace || chr(10) ||
       ' profile ' || profile || chr(10) ||
       'quota ' ||
       decode(Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
       ' on ' || default_tablespace ||
       decode(account_status, 'LOCKED', ' account lock',
                              'EXPIRED', ' password expire',
                              'EXPIRED \& LOCKED', ' account lock password expire',
                              null)
    || ';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM',
'SCOTT','DBSNMP','OUTLN','WKPROXY','WMSYS','ORDSYS','ORDPLUGINS','MDSYS',
'CTXSYS','XDB','ANONYMOUS','OWNER','WKSYS','ODM_MTR','ODM','OLAPSYS',
'HR','OE','PM','SH','QS_ADM','QS','QS_WS','QS_ES','QS_OS','QS_CBADM',
'QS_CB','QS_CS','PERFSTAT')
and U.username = Q.username(+)
and U.default_tablespace = Q.tablespace_name(+);

set pagesize 100
set escape off
spool off

 

Now run tablespace.sql and users.sql to create the tablespaces and users in 10g.
[Note: Actually we need not create the users and tablespaces while importing the DB into 10g.]

3) Make sure that the 10g DB is in NOARCHIVELOG mode.

 imp FILE=exp_20092011.dmp LOG=imp_20092011.log FULL=Y GRANTS=Y BUFFER=4096 ROWS=Y

  • Once the import is done, please follow the steps below:

1) Check the import logs for errors.

2) Make sure users are pointing to the proper tablespaces.

 select USERNAME,ACCOUNT_STATUS,DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users;

3) Please check the total object count and the respective schema object counts, and confirm them against the original.

 select OBJECT_TYPE, count(*) from dba_objects where owner='CROSSLNK' group by OBJECT_TYPE;

4) Please check the roles and privileges for the respective users, and make any required changes.

select
  lpad(' ', 2*level) || granted_role "User, his roles and privileges"
from
  (
  /* THE USERS */
  select null grantee, username granted_role
  from dba_users
  where username like upper('%&enter_username%')
  /* THE ROLES TO ROLES RELATIONS */
  union
  select grantee, granted_role
  from dba_role_privs
  /* THE ROLES TO PRIVILEGE RELATIONS */
  union
  select grantee, privilege
  from dba_sys_privs
  )
start with grantee is null
connect by grantee = prior granted_role;

 
 
5) Please check the invalid object count. Recompile invalid objects (run utlrp.sql):

   select count(*) from dba_objects where STATUS='INVALID';
 
6) Gather statistics for the entire database.

DATABASE LEVEL

begin
  dbms_stats.gather_database_stats(options => 'GATHER AUTO');
end;
/

7) Put the 10g DB back into ARCHIVELOG mode if required.

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup mount

ORACLE instance started.

 

Total System Global Area  272629760 bytes

Fixed Size                   788472 bytes

Variable Size             103806984 bytes

Database Buffers          167772160 bytes

Redo Buffers                 262144 bytes

Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

8) Verify the backup is configured properly.

9) Monitor the alert log and dump directories for possible issues.

I hope this article helped you. I am expecting your suggestions/feedback.
It will help motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”

Dear Friends,

          Today we faced a new issue related to the listener, so I wanted to share it with you. We imported the dump into a fresh, blank copy of the database and just changed the IP of the server so that the application configuration would not be impacted. But when we swapped the IPs of both servers, we got the following error while starting the listener.

Problem:

LSNRCTL for Linux: Version 10.2.0.1.0 – Production on 21-SEP-2011 12:54:12

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Welcome to LSNRCTL, type “help” for information.

LSNRCTL> start
Starting /u01/app/oracle/oracle/product/10.2.0/db_1//bin/tnslsnr: please wait…

TNSLSNR for Linux: Version 10.2.0.2.0 – Production
System parameter file is /u01/app/oracle/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/oracle/product/10.2.0/db_1/network/log/listener.log
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
TNS-12555: TNS:permission denied
TNS-12560: TNS:protocol adapter error
TNS-00525: Insufficient privilege for operation
Linux Error: 1: Operation not permitted

Listener failed to start. See the error message(s) above…

LSNRCTL>

Cause:

1) Ensure that the /tmp/.oracle or /var/tmp/.oracle directory exists.

2) Confirm that the DBA user who is trying to start the listener has adequate read and write permissions on the directory specified above. The permissions should be 777.

3) If the /tmp directory has reached full capacity, this would cause the listener to fail to write the socket files.
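A quick way to check all three conditions from the shell before applying the solution (a sketch; paths as above):

ls -ld /tmp/.oracle /var/tmp/.oracle
df -k /tmp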

Solution

To implement the solution, please use the following example:

1. cd /var/tmp

2. Check the whether the .oracle directory exists:

cd .oracle

3. If the directory does not exist, request the System Administrator create the directory and set the ownership as root:root with the permissions set to 01777

mkdir /var/tmp/.oracle
chmod 01777 /var/tmp/.oracle
chown root /var/tmp/.oracle
chgrp root /var/tmp/.oracle

4. Next try starting the TNS Listener using the ‘lsnrctl start <listener_name>’ command.

I hope this article helped you. I am expecting your suggestions/feedback.
It will help motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”