Archive for February, 2011

Recently we upgraded one of our databases from Oracle 10g to Oracle 11g. I would like to share that activity with you.

Pre-Requisites:

You should have an existing Oracle Database 10g installation, which you want to migrate. Here we are upgrading to Oracle Database 11g Release 1 (11.1.0.6).

Step 1) Installing Oracle 11g Home

We cannot upgrade the existing Oracle Home, since 11g is not a patchset. We have to install the 11g Oracle Home as a separate ORACLE_HOME, in parallel to the 10g Oracle Home.

For example, my 10g Oracle Home is: /u01/app/oracle/oracle/product/10.2.0

then my 11g Oracle Home is: /u01/app/oracle/oracle/product/11.1.0

A parallel 11.1.0 directory can simply be created, and we can install the new Oracle Home in this location.

Start the installation using the command below:

./runInstaller -invPtrLoc /u01/app/oracle/oracle/product/11.1.0/oraInst

Screen 1 – Select Product Install
Select “Oracle Database 11g”

Screen 2 – Select Installation Method
Choose “Advanced Installation”

Screen 3 – Specify Inventory directory and credentials
Note: We are providing local inventory here inside the corresponding ORACLE_HOME location.

Screen 4 – Select Installation Type
Choose “Enterprise Edition”

Screen 5 – Installation Location
Select the Oracle Base as the parent directory of the ORACLE_HOME.

Screen 6 – Product Specific Pre-requisite Checks
It may give some pre-requisite warnings here; we can ignore them and proceed further.

Screen 7 – Upgrade an Existing Database
Choose “No”

Screen 8 – Select Configuration Option
Choose “Install Software Only”

Screen 9 – Privileged system groups

These values have to be set based on the OS groups of the oracle user.

Screen 10 – Summary
Click on “Install”

At the end of the installation, the installer will ask you to run the root.sh script. Do not press the OK button yet.
Run root.sh as the root user and, once done, press OK. This will complete the software installation for Oracle Database 11g.

Step 2) Pre-Upgrade Utility

In the 11g Home you just installed, go to $ORACLE_HOME/rdbms/admin and copy the file utlu111i.sql to a temporary location (/tmp here).

[oracle]$ cd $ORACLE_HOME
[oracle]$ cd rdbms/admin/
[oracle]$ pwd
/u01/app/oracle/oracle/product/11.1.0/db_1/rdbms/admin
[oracle]$ cp utlu111i.sql /tmp

Then log in to the 10g database and run the script you copied. The utility gives its output in the form of recommendations to be implemented before starting the upgrade. Unless these requirements are met, the upgrade will fail. Most of the time, the issues that come up are with the time zone file version.
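
For example, assuming the script was copied to /tmp as above, it can be run as SYSDBA with the output spooled to a log (the log file name here is just an illustration):

SQL> SPOOL /tmp/pre_upgrade_info.log
SQL> @/tmp/utlu111i.sql
SQL> SPOOL OFF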

Oracle Database 11.1 Pre-Upgrade Information Tool 23-02-2011 01:34:07
.
**********************************************************************
Database:
**********************************************************************
--> name: ORCL
--> version: 10.2.0.1.0
--> compatible: 10.2.0.1.0
--> blocksize: 8192
--> platform: Linux IA (32-bit)
--> timezone file: V2
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.
.
.
.
WARNING: --> Database contains schemas with objects dependent on network
packages.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
.... USER SYSMAN has dependent objects.
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... 11g Upgrade Guide for instructions to save the EM data prior to upgrade.
.

PL/SQL procedure successfully completed.


Step 3) Executing the recommended steps

Following are the critical steps to be executed based on the above warnings. These commands are to be executed while connected to the database from the 10g Oracle Home.

WARNING: --> Database is using an old timezone file version.
.... Patch the 10.2.0.1.0 database to timezone file version 4
.... BEFORE upgrading the database. Re-run utlu111i.sql after
.... patching the database to record the new timezone file version.

Find the version of the existing time zone files:

SQL> select * from v$timezone_file;

FILENAME VERSION
———— ———-
timezlrg.dat 2

SQL> SELECT CASE COUNT(DISTINCT(tzname))
WHEN 183 then 1
WHEN 355 then 1
WHEN 347 then 1
WHEN 377 then 2
WHEN 186 then case COUNT(tzname) WHEN 636 then 2 WHEN 626 then 3 ELSE 0 end
WHEN 185 then 3
WHEN 386 then 3
WHEN 387 then case COUNT(tzname) WHEN 1438 then 3 ELSE 0 end
WHEN 391 then case COUNT(tzname) WHEN 1457 then 4 ELSE 0 end
WHEN 392 then case COUNT(tzname) WHEN 1458 then 4 ELSE 0 end
WHEN 188 then case COUNT(tzname) WHEN 637 then 4 ELSE 0 end
WHEN 189 then case COUNT(tzname) WHEN 638 then 4 ELSE 0 end
ELSE 0 end VERSION
FROM v$timezone_names;


VERSION
———-
2

If the version of the existing time zone files is less than 4, then apply the patch for the version 4 time zone files.

Check the database version

SQL> select banner from v$version;

BANNER
—————————————————————-
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Prod
PL/SQL Release 10.2.0.1.0 – Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 – Production
NLSRTL Version 10.2.0.1.0 – Production

For 10.2.0.1, check MetaLink note ID 413671.1, which has a table defining the patch to be applied.

Always try to use the official patch. The script (and, on 10g, also the csv file) is normally delivered through the installation of a patch in the Oracle Home. Before using this note, you are advised to double-check that time zone patches are not available for your patchset: applying the correct patch through opatch is always preferable to the manual method described in the note.

If there is no official patch for the exact patchset you are running, you can identify the utltzuv2.sql & timezdif.csv combination patch for a different patchset of the same release.

For example, if you run 10.2.0.1 and are trying to find the utltzuv2.sql script & timezdif.csv file, you can use patch 5632264 for 10.2.0.2, which is applicable to 10.2.0.1 as well.

Please follow the metalink note ID 396387.1

Once you identify the correct patch (5632264 for 10.2.X), download it and unzip it.
[oracle]$ unzip p5632264_10202_LINUX.zip
[oracle]$ ls
etc files README.txt
[oracle]$ cd files/oracore/zoneinfo
[oracle]$ ls

readme.txt timezlrg.dat timezone.dat

Back up the $ORACLE_HOME/oracore/zoneinfo directory:

[oracle]$ cp -R $ORACLE_HOME/oracore/zoneinfo $ORACLE_HOME/oracore/zoneinfo_backup

Copy the new .dat files into place:

[oracle]$ cp timezone.dat timezlrg.dat $ORACLE_HOME/oracore/zoneinfo

Bounce the database and check the TIMEZONE version again

SQL> select * from v$timezone_file;

FILENAME VERSION
———— ———-
timezlrg.dat 4

SQL> SELECT CASE COUNT(DISTINCT(tzname))
WHEN 183 then 1
WHEN 355 then 1
WHEN 347 then 1
WHEN 377 then 2
WHEN 186 then case COUNT(tzname) WHEN 636 then 2 WHEN 626 then 3 ELSE 0 end
WHEN 185 then 3
WHEN 386 then 3
WHEN 387 then case COUNT(tzname) WHEN 1438 then 3 ELSE 0 end
WHEN 391 then case COUNT(tzname) WHEN 1457 then 4 ELSE 0 end
WHEN 392 then case COUNT(tzname) WHEN 1458 then 4 ELSE 0 end
WHEN 188 then case COUNT(tzname) WHEN 637 then 4 ELSE 0 end
WHEN 189 then case COUNT(tzname) WHEN 638 then 4 ELSE 0 end
ELSE 0 end VERSION
FROM v$timezone_names;

VERSION
———-
4

WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 11g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
.... SYS
.... SYSMAN

Gather Dictionary stats:

Connect as the SYS user and gather the statistics:

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYSMAN');

PL/SQL procedure successfully completed.

Step 4) Run Pre-Upgrade Utility again

After executing the recommended steps, run the pre-upgrade utility once again to make sure you don't get any critical warnings.

Run the pre-upgrade utility script on the 10g database while connected from the 10g Oracle Home. This time, make sure there are no critical warnings left, such as the TIMEZONE version warning.

If everything looks fine, shut down the database from the 10g Oracle Home.
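
A minimal sketch of that clean shutdown from the 10g home:

SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE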

Step 5) Starting Upgrade

Set the following environment variables for the 11g Oracle Home:

[oracle]$ export ORACLE_HOME=/u01/app/oracle/oracle/product/11.1.0/db_1
[oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle]$ export ORACLE_SID=orcl
[oracle]$ export TNS_ADMIN=$ORACLE_HOME/network/admin

Connect to the database as SYS with the SYSDBA privilege:

[oracle]$ sqlplus "/ as sysdba"

This connects to the idle instance. Now start the database in upgrade mode:

SQL> startup upgrade
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.

Total System Global Area 611000320 bytes
Fixed Size 1301588 bytes
Variable Size 201327532 bytes
Database Buffers 402653184 bytes
Redo Buffers 5718016 bytes
Database mounted.
Database opened.

SQL> SPOOL upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql

Once the upgrade finishes, it will shut down the database automatically.
Log in again as SYSDBA and start it up in normal mode.

Check dba_registry for the components and their status.
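
For example, the registry can be queried like this; all components should show VALID after a successful upgrade:

SQL> SELECT comp_name, version, status FROM dba_registry;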

Step 6) Post-Upgrade Steps

Once the upgrade completes, restart the instance to reinitialize the system parameters for normal operation.

SQL> STARTUP

Run utlu111s.sql to display the results of the upgrade:

SQL> @?/rdbms/admin/utlu111s.sql
.
Oracle Database 11.1 Post-Upgrade Status Tool 23-02-2011 05:22:40
.
Component Status Version HH:MM:SS
.
Oracle Server
. VALID 11.1.0.6.0 00:19:02
JServer JAVA Virtual Machine
. VALID 11.1.0.6.0 00:02:55
Oracle Workspace Manager
. VALID 11.1.0.6.0 00:00:54
OLAP Analytic Workspace
. VALID 11.1.0.6.0 00:00:26
OLAP Catalog
. VALID 11.1.0.6.0 00:00:58
Oracle OLAP API
. VALID 11.1.0.6.0 00:00:25
Oracle Enterprise Manager
. VALID 11.1.0.6.0 00:11:00
Oracle XDK
. VALID 11.1.0.6.0 00:00:53
Oracle Text
. VALID 11.1.0.6.0 00:00:50
Oracle XML Database
. VALID 11.1.0.6.0 00:03:52
Oracle Database Java Packages
. VALID 11.1.0.6.0 00:00:21
Oracle Multimedia
. VALID 11.1.0.6.0 00:04:25
Spatial
. VALID 11.1.0.6.0 00:05:18
Oracle Expression Filter
. VALID 11.1.0.6.0 00:00:13
Oracle Rules Manager
. VALID 11.1.0.6.0 00:00:12
Gathering Statistics
. 00:04:03
Total Upgrade Time: 00:55:57

PL/SQL procedure successfully completed.

Run catuppst.sql, located in the ORACLE_HOME/rdbms/admin directory, to perform upgrade actions that do not require the database to be in UPGRADE mode:

SQL> @?/rdbms/admin/catuppst.sql

Run utlrp.sql to recompile invalid objects. Check the invalid object count before and after:

SQL> select count(*) from dba_objects
2 where status = 'INVALID';

COUNT(*)
———-
1576

SQL> @?/rdbms/admin/utlrp.sql

SQL> select count(*) from dba_objects
2 where status = 'INVALID';

COUNT(*)
———-
0

This completes the upgrade.


Introduction

This post describes the basic architecture of Oracle 9i. It is the fundamental concept behind Oracle and the backbone of the Oracle database, so I think it makes a fitting first post related to Oracle.

An Oracle database is a combination of an Oracle instance and datafiles on the file system:

Oracle Database = Oracle Instance + Datafiles

An Oracle instance, in turn, is nothing but the memory structures plus the background processes. Let's start the discussion with the memory architecture.

Memory Architecture

Oracle database uses memory for its operation. The total memory allocated to the Oracle database can be broadly categorized into SGA (System Global Area) and PGA (Program Global Area).

The SGA contains the following data structures:

  • Database buffer cache
  • Redo log buffer
  • Shared pool
  • Java pool
  • Large pool (optional)
  • Data dictionary cache
  • Other miscellaneous information

We can also categorize the SGA into fixed SGA and variable SGA. The fixed SGA is a component of the SGA that varies in size from platform to platform and release to release. It is compiled into the database. The fixed SGA contains a set of variables that point to the other components of the SGA, and variables that hold the values of various parameters. The size of the fixed SGA is something over which we have no control, and it is generally very small. Think of this area as a bootstrap section of the SGA, something Oracle uses internally to find the other bits and pieces of the SGA.

The variable SGA contains the four main components listed above: the Database Buffer Cache, the Redo Log Buffer, the Shared Pool, and the Large Pool. We call it the variable SGA because we can alter the size of each of these components manually using the ALTER SYSTEM command. The size of each component of the variable SGA is determined by INIT.ORA parameters. Following are the INIT.ORA parameters for each component:

  • Database Buffer Cache – db_block_buffers
  • Redo Log Buffer – log_buffer
  • Shared Pool – shared_pool_size
  • Large Pool – Large_pool_size

We cannot however alter the size of Fixed SGA.

Database Buffer Cache: This is used to hold data in memory. Whenever a user accesses data, it is fetched into the database buffer cache and managed there according to an LRU (least recently used) algorithm. Advantages: if a user requests data that has been fetched into the buffer cache, then the next time the same data is requested within a short period, it is read from the buffer cache, and the Oracle process does not have to fetch it from disk again; reading data from the buffer cache is a faster operation. Another advantage is that if a user is modifying data, it can be modified in the buffer cache, which is faster than modifying the data directly on disk.

Redo Log Buffer: This memory area holds the redo entries that are going to be written to the redo log files. Why do we need this data? To replay the changes during recovery, if the need arises. Instead of writing the entries directly to the redo log files, they are first written to the log buffer, which improves performance; then, on the occurrence of certain events, they are written out to the redo log file.

Shared Pool: This contains two memory sections, 1) the library cache and 2) the dictionary cache. The library cache holds parsed SQL statements, execution plans, and parsed PL/SQL code. The dictionary cache holds information about user privileges, table and column definitions, passwords, and so on. Both components are included in the size of the shared pool.
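
As an illustration, the effectiveness of the library cache can be observed per namespace through the v$librarycache view:

SQL> SELECT namespace, gets, gethitratio FROM v$librarycache;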

Large Pool (optional): If defined, it is used for heavy operations such as bulk copying during backup or restore operations.

The total size of the SGA is determined by the parameter SGA_MAX_SIZE. Below is a simple calculation of the memory sizes.

SQL> show sga

Total System Global Area  577574308 bytes
Fixed Size                   452004 bytes
Variable Size             402653184 bytes
Database Buffers          163840000 bytes
Redo Buffers               10629120 bytes

This shows the fixed and variable SGA sizes. The fixed SGA, as I said, is not in our control. However, we can verify the size of the variable SGA and the other memory values shown above.

Database Buffers          163840000 bytes

SQL> show parameters db_block_buffer

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_block_buffers                     integer     20000

This value is in terms of blocks. We can find the size of a block using the DB_BLOCK_SIZE parameter:

SQL> show parameters db_block_size

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_block_size                        integer     8192

So Database Buffers = db_block_buffers X db_block_size = 20000 X 8192 = 163840000 bytes

Also, Variable Size = “Shared Pool Size” + “Large Pool Size” + “Java Pool Size” (when the Java pool is defined):

SQL> SELECT pool, sum(bytes) from v$sgastat group by pool;

POOL        SUM(BYTES)
———– ———-
java pool     50331648
shared pool  352321536
11069860

Variable size = 352321536 + 50331648 = 402653184 bytes

Program Global Area

The PGA contains information about bind variables, sort areas, and other aspects of cursor handling. This is not a shared area; every user session has its own PGA. But why is a PGA required for every user? Because even though the parse information for a SQL or PL/SQL statement may be available in the library cache of the shared pool, the values with which the user wants to execute the SELECT or UPDATE statement cannot be shared. These values are stored in the PGA. It is also called the Private Global Area. The PGA plays a critical role in sort operations, reducing the burden on the temporary tablespace.
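
As a quick illustration, when automatic PGA memory management is in use, the overall PGA consumption can be checked through the v$pgastat view:

SQL> SELECT name, value FROM v$pgastat WHERE name IN ('total PGA allocated', 'total PGA inuse');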

Going still deeper into the memory structure…

The database buffer cache is further divided into three different types of cache:

  1. Default Cache
  2. Keep Cache
  3. Recycle Cache

If we define the cache size using DB_CACHE_SIZE (or DB_BLOCK_BUFFERS together with the block size), this becomes the default cache. The cache has a limited size, so not all the data on disk can fit in it. When the cache is full, subsequent cache misses cause Oracle to write dirty data already in the cache to disk to make room for the new data.

You can configure the database buffer cache with separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks.

  • The KEEP buffer pool retains the schema object’s data blocks in memory. This is defined using the INIT.ORA parameter DB_KEEP_CACHE_SIZE
  • The RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed. This is defined using the INIT.ORA parameter DB_RECYCLE_CACHE_SIZE
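
For example, once DB_KEEP_CACHE_SIZE has been sized, a frequently accessed table (a hypothetical name here) can be assigned to the KEEP pool with:

SQL> ALTER TABLE hr.employees STORAGE (BUFFER_POOL KEEP);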

You can also define multiple DB block sizes using the following parameters. For example, if you have defined a standard block size of 4K, the following parameters can be used to define caches for 2K, 8K, 16K, and 32K blocks.

DB_2K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Note that the KEEP and RECYCLE pools can be defined only for the standard block size, and the total buffer cache size is the sum of the sizes of all these pools.

Shared Pool Reserved Size

The shared pool, as we have seen previously, contains parsed SQL statements and execution plans. With continuous use of the database, the shared pool gets fragmented over time. New parsed SQL and execution plans come in, and old ones get aged out and overwritten. This also leads to larger packages being aged out as new entries go into the shared pool, so subsequent access to such large packages takes time to re-parse and re-create execution plans. This can cause performance issues.

To avoid this situation, you can define the parameter SHARED_POOL_RESERVED_SIZE. This reserves some additional space beyond SHARED_POOL_SIZE. If an object (a parsed SQL statement or execution plan) is stored in the reserved shared pool area, it will not age out.
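
Usage of the reserved area can be monitored through the v$shared_pool_reserved view, for example:

SQL> SELECT free_space, used_space, request_misses, request_failures FROM v$shared_pool_reserved;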

For large allocations, the order in which Oracle attempts to allocate space in the shared pool is the following:

    1. From the unreserved part of the shared pool.
    2. If there is not enough space in the unreserved part of the shared pool, and if the allocation is large, then Oracle checks whether the reserved pool has enough space.
    3. If there is not enough space in the unreserved and reserved parts of the shared pool, then Oracle attempts to free enough memory for the allocation. It then retries the unreserved and reserved parts of the shared pool.

Process Architecture

Oracle has several processes running in the background for the proper functioning of the database. Following are the main categories of processes:

    1. Server Process
    2. Background Process

Server Process – to handle the requests of user processes connected to the instance. Server processes (or the server portion of combined user/server processes) created on behalf of each user’s application can perform one or more of the following:

    • Parse and execute SQL statements issued through the application
    • Read necessary data blocks from datafiles on disk into the shared database buffers of the SGA, if the blocks are not already present in the SGA
    • Return results in such a way that the application can process the information

Background Process – An Oracle instance can have many background processes; not all are always present. The background processes in an Oracle instance include the following:

  • Database Writer (DBW0 or DBWn)
  • Log Writer (LGWR)
  • Checkpoint (CKPT)
  • System Monitor (SMON)
  • Process Monitor (PMON)
  • Archiver (ARCn)
  • Recoverer (RECO)
  • Lock Manager Server (LMS) – Real Application Clusters only
  • Queue Monitor (QMNn)
  • Dispatcher (Dnnn)
  • Server (Snnn)

On many operating systems, background processes are created automatically when an instance is started.
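
The background processes actually running in an instance can be listed with a query like the following (PADDR <> '00' filters out processes that have not been started):

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';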

Database writer (DBWn) – The database writer process (DBWn) writes the contents of buffers to datafiles. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes (DBW1 through DBW9) to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems.

Log Writer (LGWR) – The log writer process (LGWR) is responsible for redo log buffer management: writing the redo log buffer to a redo log file on disk. LGWR writes all redo entries that have been copied into the buffer since the last time it wrote.

Checkpoint (CKPT) – When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

System Monitor (SMON) –  The system monitor process (SMON) performs crash recovery, if necessary, at instance startup.  SMON is also responsible for cleaning up temporary segments that are no longer in use and for coalescing contiguous free extents within dictionary-managed tablespaces. If any dead transactions were skipped during crash and instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON wakes up regularly to check whether it is needed.

Process Monitor (PMON) –

The process monitor (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes.

PMON periodically checks the status of dispatcher and server processes, and restarts any that have died (but not any that Oracle has terminated intentionally). PMON also registers information about the instance and dispatcher processes with the network listener.

Archiver Process (ARCn) –

The archiver process (ARCn) copies online redo log files to a designated storage device after a log switch  has occurred. ARCn processes are present only when the database is in ARCHIVELOG mode, and automatic archiving is enabled.

An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload. The ALERT file keeps a record of when LGWR starts a new ARCn process.

Recoverer (RECO) – The recoverer process (RECO) is a background process used with the distributed database configuration that automatically resolves failures involving distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When the RECO process reestablishes a connection between involved database servers, it automatically resolves all in-doubt transactions, removing from each database’s pending transaction table any rows that correspond to the resolved in-doubt transactions.

The RECO process is present only if the instance permits distributed transactions and if the DISTRIBUTED_TRANSACTIONS parameter is greater than zero. If this initialization parameter is zero, RECO is not created during instance startup.

Lock Manager Server (LMS) – In Oracle9i Real Application Clusters, a Lock Manager Server process (LMS) provides inter-instance resource management.

Queue Monitor (QMNn) – The queue monitor process is an optional background process for Oracle Advanced Queuing, which monitors the message queues. You can configure up to 10 queue monitor processes. These processes, like the Jnnn processes, are different from other Oracle background processes in that process failure does not cause the instance to fail.

The ones explained above are the main background processes. Please refer to the Oracle documentation for the detailed Oracle 9i architecture.

Hope this helps you guys! Very soon I will come up with the 10g architecture and how a SQL statement is processed in Oracle!

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Related Post:

https://samadhandba.wordpress.com/2011/05/16/sql-statement-execution-pathin-oracle/


Yesterday my junior team member was confused about the fragmentation and high watermark concepts. There was also a good comment on my fragmentation post, so it inspired me to write something about the high watermark and the Oracle 10gR1 new feature, segment shrinking. Fragmentation itself I have already discussed in my previous post.

The high watermark is the maximum fill grade a table has ever reached. Above the high watermark there are only empty blocks; these blocks can be formatted or unformatted.

First let's have a look at when space is allocated:

– when you create a table, at least one extent (a set of contiguous blocks) is allocated to it
– if you have specified MINEXTENTS, that number of extents will be allocated immediately to the table
– if you have not specified MINEXTENTS, then exactly one extent will be allocated

Immediately after creation of the segment (table), the high watermark is at the first block of the first extent, as long as no inserts have been made.

When you insert rows into the table, the high watermark is bumped up step by step. This is done by the server process which performs the inserts.

Now let us take a look at when space is released again from a segment like a table or index:

Let's assume that we have filled a table with 100,000 rows, and that we deleted 50,000 rows afterwards. In this case the high watermark will have reached the 100,000-row level and will have stayed there, which means that we now have empty blocks below the high watermark.
Oracle has a good reason for this: it may happen that you delete rows and immediately afterwards insert rows into the same table. In that case it is good that the space was not released by the deletes, because it would have to be reallocated for the following inserts, which would mean permanent changes to the data dictionary
(=> dba_free_space, dba_extents, dba_segments …).
Furthermore, the physical addresses of the deleted rows get recycled by new rows.
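
How many empty blocks sit above the high watermark can be checked with the DBMS_SPACE.UNUSED_SPACE procedure; a minimal sketch, assuming a table SCOTT.EMP:

SET SERVEROUTPUT ON
DECLARE
  l_total_blocks  NUMBER;
  l_total_bytes   NUMBER;
  l_unused_blocks NUMBER;
  l_unused_bytes  NUMBER;
  l_file_id       NUMBER;
  l_block_id      NUMBER;
  l_last_block    NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'SCOTT',
    segment_name              => 'EMP',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  -- blocks allocated to the segment but never used, i.e. above the HWM
  DBMS_OUTPUT.PUT_LINE('Blocks above the HWM: ' || l_unused_blocks);
END;
/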

These empty blocks below the high watermark can get annoying in a number of situations because they are not used by DIRECT LOADs and DIRECT PATH LOADs:

1. serial direct load:

INSERT /*+ APPEND */ INTO hr.employees
SELECT * FROM oe.emps;

2. parallel direct load:

ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(hr.employees,2) */ INTO hr.employees
SELECT * FROM oe.emps;

(With the NOLOGGING attribute set on the target table, these direct loads generate minimal redo.)

3. direct path load with SQL*Loader:
sqlldr hr/hr control=lcaselutz.ctl … direct=y (the default is direct=n)

All the above actions cause the inserts to bypass the buffer cache in the SGA and use the PGA instead: temporary segments are filled and dumped into newly formatted blocks above the high watermark.

So we might want to bring the high watermark down before we load data into the table, in order to use the free empty blocks for the loading.

So how can we release unused space from a table?

There are a number of possible options which were already available before Oracle 10g:
– What we always could do is export and import the segment. After an import the table will have only one extent, the rows will have new physical addresses, and the high watermark will be adjusted.
– Another option would be to TRUNCATE the table. With this we would lose all rows in the table, so it cannot be used if we want to keep existing records.

With Oracle 9i another possibility was implemented:

ALTER TABLE emp MOVE TABLESPACE users;

This statement will also cause the rows to get new physical addresses and the high watermark to be adjusted. But for this:
– we need a full (exclusive) table lock
– the indexes will be left in status UNUSABLE (because they contain the old rowids) and must be rebuilt
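
Finding and rebuilding the unusable indexes afterwards might look like this (the index name is hypothetical):

SQL> SELECT index_name FROM user_indexes WHERE status = 'UNUSABLE';
SQL> ALTER INDEX emp_pk REBUILD;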

Starting with Oracle 10gR1 we can use a new feature for adjusting the high watermark. It is called segment shrinking and is only possible for segments which use ASSM, in other words, which are located in tablespaces using Automatic Segment Space Management. In such a tablespace a table does not really have a single high watermark! It uses two watermarks instead:
– the High High Watermark, referred to as HHWM, above which all blocks are unformatted
– the Low High Watermark, referred to as LHWM, below which all blocks are formatted

We can now have unformatted blocks in the middle of a segment!

ASSM was introduced in Oracle 9iR2 and it was made the default for tablespaces in Oracle 10gR2.
With the table shrinking feature we can get Oracle to move rows which are located in the middle or at the end of a segment further down towards the beginning of the segment, and thereby make the segment more compact. For this we must first allow Oracle to change the ROWIDs of these rows by issuing:

ALTER TABLE emp ENABLE ROW MOVEMENT;

ROWIDs are normally assigned to a row for its lifetime, at insert time.

After we have given Oracle permission to change the ROWIDs, we can issue the shrink statement:

ALTER TABLE emp SHRINK SPACE;

This statement proceeds in two steps:
– The first step makes the segment compact by moving rows further down to free blocks at the beginning of the segment.
– The second step adjusts the high watermark. For this Oracle needs an exclusive table lock, but only for a very short moment.

Table shrinking…
– will adjust the high watermark
– can be done online
– will cause only row locks during the operation and just a very short full table lock at the end of the operation
– indexes will be maintained and remain usable
– can be done in one go
– can be done in two steps (this can be useful if you cannot get a full table lock during certain hours: you only perform the first, compacting step and adjust the high watermark later, when it is more convenient):

– ALTER TABLE emp SHRINK SPACE; – only for the emp table
– ALTER TABLE emp SHRINK SPACE CASCADE; – for all dependent objects as well
– ALTER TABLE emp SHRINK SPACE COMPACT; – only performs the first step (moves the rows)

How are the indexes maintained?
In the first phase Oracle scans the segment from the back to find the position of the last row. Then it scans the segment from the front to find the position of the first free slot in a block. If the two positions are the same, there is nothing to shrink. If the two positions are different, Oracle deletes the row from the back and inserts it into the free position at the front of the segment. Oracle then scans the segment from the back and the front again and again, until the two positions meet.
Since ordinary DML statements are performed to move the rows, the indexes are maintained at the same time. Only row-level locks are used for these operations in the first phase of the SHRINK SPACE statement.

The following restrictions apply to table shrinking:

1.) It is only possible in tablespaces with ASSM.
2.) You cannot shrink:
– UNDO segments
– temporary segments
– clustered tables
– tables with a column of datatype LONG
– LOB indexes
– IOT mapping tables and IOT overflow segments
– tables with MVIEWS with ON COMMIT
– tables with MVIEWS which are based on ROWIDs

Oracle 10g comes with a Segment Advisor utility. Enterprise Manager Database Control even has a wizard which can search for shrink candidates.

This advisor is run automatically by an autotask job on a regular basis in the default maintenance window.

You can use the built-in package DBMS_SPACE to run the advisor manually as well. I will blog about this some time later.

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Related Post:

https://samadhandba.wordpress.com/2011/01/20/table-fragmentation-how-to-avoid-same/


When implementing a backup and recovery strategy, you have the following solutions available:

– Recovery Manager (RMAN)

This tool integrates with sessions running on an Oracle database to perform a
range of backup and recovery activities, including maintaining an RMAN
repository of historical data about backups. You can access RMAN through the command line or through Oracle Enterprise Manager.

– User-managed backup and recovery

In this solution, you perform backup and recovery with a mixture of host
operating system commands and SQL*Plus recovery commands.

Both of the preceding solutions are supported by Oracle and are fully documented, but RMAN is the preferred solution for database backup and recovery.

RMAN performs the same types of backup and recovery available through user-managed techniques more easily, provides a common interface for backup tasks across different host operating systems, and offers a number of backup techniques not available through user-managed methods.
Most of this manual focuses on RMAN-based backup and recovery.

User-managed backup and recovery techniques are covered in Section VIII, “Performing User-Managed Backup and Recovery.” RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:

– Incremental backups

An incremental backup stores only blocks changed since a previous backup.
Thus, they provide more compact backups and faster recovery, thereby reducing the need to apply redo during datafile media recovery. If you enable block change tracking, then you can improve performance by avoiding full scans of every input datafile. You use the BACKUP INCREMENTAL command to perform incremental backups.

– Block media recovery

You can repair a datafile with only a small number of corrupt data blocks without taking it offline or restoring it from backup. You use the RECOVER command to perform block media recovery.

– Unused block compression

In unused block compression, RMAN can skip data blocks that have never been used and, in some cases, used blocks that are currently unused.

– Binary compression

A binary compression mechanism integrated into Oracle Database reduces the size of backups.

– Encrypted backups

RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format. To create encrypted backups on disk, the database must use the Advanced Security Option. To create encrypted backups directly on tape, RMAN must use the Oracle Secure Backup SBT interface, but does not require the Advanced Security Option.

Whether you use RMAN or user-managed methods, you can supplement physical backups with logical backups of schema objects made with the Data Pump Export utility.

You can later use Data Pump Import to re-create data after restore and recovery. Logical backups are for the most part beyond the scope of the backup and recovery documentation.

– Oracle Flashback Technology

As explained in Oracle Database Concepts, Oracle Flashback Technology complements your physical backup and recovery strategy. This set of features provides an additional layer of data protection. Specifically, you can use flashback features to view past states of data and rewind your database without restoring backups or performing point-in-time recovery. In general, flashback features are more efficient and less disruptive than media recovery in most situations in which they apply.

– Logical Flashback Features

Most of the flashback features of Oracle operate at the logical level, enabling you to view and manipulate database objects. The logical-level flashback features of Oracle do not depend on RMAN and are available whether or not RMAN is part of your backup strategy. With the exception of Flashback Drop, the logical flashback features rely on undo data, which are records of the effects of each database update and the values overwritten in the update.

Oracle Database includes the following logical flashback features:

– Oracle Flashback Query

You can specify a target time and run queries against a database, viewing results as they would have appeared at the target time. To recover from an unwanted change like an update to a table, you could choose a target time before the error and run a query to retrieve the contents of the lost rows.

Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
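
A minimal sketch of such a query, assuming an unwanted change to hr.employees happened within the last ten minutes:

SELECT employee_id, salary
FROM hr.employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE employee_id = 100;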

– Oracle Flashback Version Query

You can view all versions of all rows that ever existed in one or more tables in a specified time interval. You can also retrieve metadata about the differing versions of the rows, including start and end time, operation, and transaction ID of the transaction that created the version. You can use this feature to recover lost data values and to audit changes to the tables queried.

Oracle Database Advanced Application Developer’s Guide explains how to use this feature.

– Oracle Flashback Transaction Query

You can view changes made by a single transaction, or by all transactions during a period of time. Oracle Database Advanced Application Developer’s Guide explains how to use this feature.

– Oracle Flashback Transaction

You can reverse a transaction. Oracle Database determines the dependencies between transactions and in effect creates a compensating transaction that reverses the unwanted changes. The database rewinds to a state as if the transaction, and any transactions that could be dependent on it, had never happened. Oracle
Database Advanced Application Developer’s Guide explains how to use this feature.

– Oracle Flashback Table

You can recover a table or set of tables to a specified point in time in the past
without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations. Flashback Table restores tables while automatically maintaining
associated attributes such as current indexes, triggers, and constraints, and in this way enabling you to avoid finding and restoring database-specific properties.
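
For example (row movement must be enabled on the table before it can be flashed back):

SQL> ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE hr.employees TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);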

– Oracle Flashback Drop

You can reverse the effects of a DROP TABLE statement. “Rewinding a DROP
TABLE Operation with Flashback Drop” on page 16-7 explains how to use this
feature.

A flashback data archive enables you to use some of the logical flashback features to access data from far back in the past. A flashback data archive consists of one or more tablespaces or parts of tablespaces. When you create a flashback data archive, you specify the name, retention period, and tablespace. You can also specify a default flashback data archive. The database automatically purges old historical data the day after the retention period expires.

You can turn flashback archiving on and off for individual tables. By default, flashback archiving is turned off for every table.
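
Turning flashback archiving on or off for a table is simple DDL; a sketch, assuming a flashback data archive named fda1 has already been created:

SQL> ALTER TABLE hr.employees FLASHBACK ARCHIVE fda1;
SQL> ALTER TABLE hr.employees NO FLASHBACK ARCHIVE;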

– Flashback Database

At the physical level, Oracle Flashback Database provides a more efficient data protection alternative to database point-in-time recovery (DBPITR). If the current datafiles have unwanted changes, then you can use the RMAN command FLASHBACK DATABASE to revert the datafiles to their contents at a past time. The end product is much like the result of a DBPITR, but is generally much faster because it does not require restoring datafiles from backup and requires less redo than media recovery.

Flashback Database uses flashback logs to access past versions of data blocks and some information from archived redo logs. Flashback Database requires that you configure a flash recovery area for a database because the flashback logs can only be stored there. Flashback logging is not enabled by default. Space used for flashback logs is managed automatically by the database and balanced against space required for other files in the flash recovery area.

Oracle Database also supports restore points in conjunction with Flashback Database and backup and recovery. A restore point is an alias corresponding to a system change number (SCN). You can create a restore point at any time if you anticipate needing to return part or all of a database to its contents at that time. A guaranteed restore point ensures that you can use Flashback Database to return a database to the time of the restore point.
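
For example, a guaranteed restore point can be created before a risky change like this:

SQL> CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;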

– Data Recovery Advisor

Oracle Database includes a Data Recovery Advisor tool that automatically diagnoses persistent data failures, presents appropriate repair options, and executes repairs at your request. Data Recovery Advisor provides a single point of entry for Oracle backup and recovery solutions. You can use Data Recovery Advisor through the Enterprise Manager Database Control or Grid Control console or through the RMAN command-line client.

A database failure usually manifests itself as a set of symptoms: error messages, alerts, trace files and dumps, and failed data integrity checks. Data Recovery Advisor automatically diagnoses and informs you of these failures. Within the context of Data Recovery Advisor, a failure is a persistent data corruption that can be directly mapped to a set of repair actions. Each failure has a status of open or closed. Each failure also has a priority of critical, high, or low.

Failures are detected by data integrity checks, which are diagnostic procedures executed to assess the health of the database or its components. If a data integrity check reveals a failure, then Data Recovery Advisor automatically assesses the effect of a set of failures and maps it to a set of repair options. In most cases, Data Recovery Advisor presents both automated and manual repair options.

Data Recovery Advisor determines the best automated repair option and its effect on the database. The repair option may include repairs such as datafile restore and recovery, media recovery, Flashback Database, and so on. Before presenting an automated repair option, Data Recovery Advisor validates it with respect to the specific environment and the availability of media components required to complete the proposed repair.

If you choose an automated repair option, then RMAN coordinates sessions on the Oracle database to perform the repair for you. The Data Recovery Advisor tool verifies the repair success and closes the appropriate failures.


In an index scan, a row is retrieved by traversing the index, using the indexed column values specified by the statement. An index scan retrieves data from an index based on the value of one or more columns in the index. To perform an index scan, Oracle searches the index for the indexed column values accessed by the statement. If the statement accesses only columns of the index, then Oracle reads the indexed column values directly from the index, rather than from the table.

The index contains not only the indexed value, but also the rowids of rows in the table having that value. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle can find the rows in the table by using either a table access by rowid or a cluster scan.

An index scan can be of various types:

1)Index Unique Scans
2)Index Range Scans
3)Index Range Scans Descending
4)Index Skip Scans
5)Full Scans
6)Fast Full Index Scans
7)Index Joins
8)Bitmap Indexes


1)Index Unique Scans
———————————————-

This scan returns, at most, a single rowid. Oracle performs a unique scan if a statement contains a UNIQUE or a PRIMARY KEY constraint that guarantees that only a single row is accessed.

This access path is used when all columns of a unique (B-tree) index, or an index created as a result of a primary key constraint, are specified with equality conditions. There is an example later in this section.

2)Index Range Scans
—————————————————-

•An index range scan is a common operation for accessing selective data.

•Data is returned in the ascending order of index columns. Multiple rows with identical values are sorted in ascending order by rowid.

•If data must be sorted by order, then use the ORDER BY clause, and do not rely on an index. If an index can be used to satisfy an ORDER BY clause, then the optimizer uses this option and avoids a sort.

The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions such as col1 = 1, col1 < 1, col1 > 1, or combinations like (col1 = 1 AND col2 = 99 AND …).

•Range scans can use unique or non-unique indexes. Range scans avoid sorting when index columns constitute the ORDER BY/GROUP BY clause.

•The hint INDEX(table_alias index_name) instructs the optimizer to use a specific index.

•Note that a leading wildcard like %text does not result in a range scan, but text% might. Look at the following examples: the predicate LIKE ‘88%’ used a range scan, but LIKE ‘%88’ did not.

SQL> create table table_a(n number ,k varchar2(15));

Table created.

SQL> begin
  2  for i in 1 .. 10000
  3  loop
  4  insert into table_a values(i,'pc-'||round(dbms_random.value(1,20000),0));
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> create index table_a_I_K on table_a(k);

Index created.

SQL> select * from table_a where k like '88%';

no rows selected

Execution Plan
———————————————————-
Plan hash value: 1124802227

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 22 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_A | 1 | 22 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_A_I_K | 1 | | 1 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '88%')
filter("K" LIKE '88%')

Note
—–
– dynamic sampling used for this statement

SQL> select * from table_a where k like '%88';

102 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1923776651

—————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————–
| 0 | SELECT STATEMENT | | 102 | 2244 | 8 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TABLE_A | 102 | 2244 | 8 (0)| 00:00:01 |
—————————————————————————–

Predicate Information (identified by operation id):
—————————————————

1 - filter("K" LIKE '%88')

Note
—–
– dynamic sampling used for this statement

3)Index Range Scans Descending
——————————————————

•An index range scan descending is identical to an index range scan, except that the data is returned in descending order.

•Indexes, by default, are stored in ascending order.

•The optimizer uses index range scan descending when an order by descending clause can be satisfied by an index.

•The hint INDEX_DESC(table_alias index_name) is used for index range scan descending.

Example:
————————————-
SQL> select * from table_a where k like '8888%';

8 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1124802227

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_A | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_A_I_K | 1 | | 2 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '8888%')
filter("K" LIKE '8888%')

SQL> select /*+index_desc(table_a)*/ * from table_a where k like '8888%';

8 rows selected.

Execution Plan
———————————————————-
Plan hash value: 3364135956

——————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | TABLE_A | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN DESCENDING| TABLE_A_I_K | 1 | | 2 (0)| 00:00:01 |
——————————————————————————————–

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '8888%')
filter("K" LIKE '8888%' AND "K" LIKE '8888%')

Scan types 4) Index Skip Scans, 5) Full Scans, 6) Fast Full Index Scans, 7) Index Joins, and 8) Bitmap Indexes are discussed in the related post on index skip, full, fast full, index join, and bitmap index scans (linked under Related Post below).

To illustrate the index unique scan, create a table and make one of its columns the primary key. Then put the indexed column in the WHERE clause with an equality operator, and note that an index unique scan is used.


SQL> create table test_tab2 as select level col1, level col2 from dual connect by level<=100;

Table created.
Case 1: No index, so a full table scan will be performed.
——————————————————————————–
SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 700767796

——————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TEST_TAB2 | 1 | 26 | 3 (0)| 00:00:01 |
——————————————————————————-

Predicate Information (identified by operation id):
—————————————————

1 - filter("COL1"=99)

Note
—–
– dynamic sampling used for this statement

Create a non-unique index on the table:

SQL> create index test_tab2_I on test_tab2(col1);
Index created.

Case 2: Since there is a non-unique index on col1, an index range scan will be performed.
————————————————————————————————-
SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 465564947

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_TAB2 | 1 | 26 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_TAB2_I | 1 | | 1 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("COL1"=99)

Note
—–
– dynamic sampling used for this statement

Now drop the index and add a primary key to the table.
SQL> drop index test_tab2_I;
Index dropped.

SQL> alter table
2 test_tab2 add primary key(col1);

Table altered.

Case 3: With a primary key on the column and an equality predicate, an index unique scan is used.
——————————————————————————

SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 1384425796

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_TAB2 | 1 | 26 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | SYS_C006487 | 1 | | 0 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("COL1"=99)

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Related Post:

https://samadhandba.wordpress.com/2011/01/04/oversize-of-datatype-varchar2-causes-performance-issue/

https://samadhandba.wordpress.com/2011/02/16/what-and-when-index-scans-is-used/

https://samadhandba.wordpress.com/2011/02/16/index-skip-full-fast-full-index-index-joins-bitmap-indexes-scan/

https://samadhandba.wordpress.com/2011/02/14/sample-table-scans-in-oracle/

https://samadhandba.wordpress.com/2011/02/14/when-you-would-make-index-and-when-not/

https://samadhandba.wordpress.com/2011/02/14/optimize-data-access-path-in-oracle-2/

https://samadhandba.wordpress.com/2011/02/13/troubleshoot-unusable-index-in-oracle/

https://samadhandba.wordpress.com/2011/02/13/the-possible-causes-for-excessive-undo-generation-2/

https://samadhandba.wordpress.com/2011/02/13/three-basic-steps-of-sql-tuning/

https://samadhandba.wordpress.com/2011/02/13/goals-for-tuning-2/

https://samadhandba.wordpress.com/2011/02/04/ora-12054-cannot-set-the-on-commit-refresh-attribute-for-the-materialized-view/


A)Index Skip Scan
—————————————-

As the name suggests, an index skip scan does not scan the complete index. Instead, it scans logical subindexes.

Index skip scan lets a composite index be split logically into smaller subindexes. In skip scanning, the initial column of the composite index is not specified in the query. In other words, it is skipped.

The number of logical subindexes is determined by the number of distinct values in the initial column. Skip scanning is advantageous if there are few distinct values in the leading column of the composite index and many distinct values in the nonleading key of the index.

Suppose I create a composite index with two columns, sex and id. The leading column sex contains only two distinct values. Now if I query on the non-leading column, that is, the id column, then an index skip scan will be used.

Example:
——————

SQL> create table test_skip_scan (sex varchar2(1), id number, address varchar2(20));
Table created.

SQL> create index test_skip_scan_I on test_skip_scan(sex,id);

Index created.

SQL> begin
for i in 1 .. 10000
loop
insert into test_skip_scan values(decode(remainder(abs(round(dbms_random.value(2,20),0)),2),0,'M','F'),i,null);
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> analyze table test_skip_scan estimate statistics;
Table analyzed.

SQL> select * from test_skip_scan where id=1;

Execution Plan
———————————————————-
Plan hash value: 2410156502

————————————————————————————————
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————————
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_SKIP_SCAN | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX SKIP SCAN | TEST_SKIP_SCAN_I | 1 | | 3 (0)| 00:00:01 |
————————————————————————————————

Predicate Information (identified by operation id):
—————————————————

2 - access("ID"=1)
filter("ID"=1)

B)Index Fast Full Scan
—————————————

Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint.

A fast full scan accesses the data in the index itself, without accessing the table.

It cannot be used to eliminate a sort operation, because the data is not ordered by the index key.

It reads the entire index using multiblock reads, unlike a full index scan, and can be parallelized.

Fast full index scans cannot be performed against bitmap indexes.

You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint.

Example:
—————–
SQL> select /*+INDEX_FFS(test_skip_scan)*/ sex,id from test_skip_scan;

10000 rows selected.

Execution Plan
———————————————————-
Plan hash value: 4280781105

—————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————————–
| 0 | SELECT STATEMENT | | 10000 | 40000 | 7 (0)| 00:00:01 |
| 1 | INDEX FAST FULL SCAN| TEST_SKIP_SCAN_I | 10000 | 40000 | 7 (0)| 00:00:01 |
—————————————————————————————–


C)Index Joins
————————-

An index join is a hash join of several indexes that together contain all the table columns that are referenced in the query. If an index join is used, then no table access is needed, because all the relevant column values can be retrieved from the indexes. An index join cannot be used to eliminate a sort operation.

You can specify an index join with the INDEX_JOIN hint.

SQL> select sex,id from test_skip_scan where id in (select col1 from test_tab);

1000 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1059662925

—————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————————-
| 0 | SELECT STATEMENT | | 999 | 6993 | 9 (12)| 00:00:01 |
|* 1 | HASH JOIN RIGHT SEMI | | 999 | 6993 | 9 (12)| 00:00:01 |
| 2 | INDEX FAST FULL SCAN| TEST_TAB_I | 1000 | 3000 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | TEST_SKIP_SCAN | 10000 | 40000 | 6 (0)| 00:00:01 |
—————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

1 - access("ID"="COL1")

D)Bitmap Indexes
———————————–

A bitmap index uses a bitmap for key values and a mapping function that converts each bit position to a rowid. Bitmaps can efficiently merge indexes that correspond to several conditions in a WHERE clause, using Boolean operations to resolve AND and OR conditions.

Bitmap indexes and bitmap join indexes are available only if you have purchased the Oracle Enterprise Edition.

Experts are always welcome to leave their valuable comments or suggestions on the above post.



•A sample table scan retrieves a random sample of data from a simple table or a complex SELECT statement, such as a statement involving joins and views.

•This access path is used when a statement’s FROM clause includes the SAMPLE clause or the SAMPLE BLOCK clause.

•To perform a sample table scan when sampling by rows with the SAMPLE clause, Oracle reads a specified percentage of rows in the table.

•To perform a sample table scan when sampling by blocks with the SAMPLE BLOCK clause, Oracle reads a specified percentage of table blocks.

Example:
——————-


SQL> select * from test_skip_scan sample(.2);

20 rows selected.

Execution Plan
———————————————————-
Plan hash value: 571935661

————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————————–
| 0 | SELECT STATEMENT | | 20 | 180 | 6 (0)| 00:00:01 |
| 1 | TABLE ACCESS SAMPLE| TEST_SKIP_SCAN | 20 | 180 | 6 (0)| 00:00:01 |

Experts are always welcome to leave their valuable comments or suggestions on the above post.



* Create an index if you frequently want to retrieve less than 15% of the rows in a large table.
* To improve performance on joins of multiple tables, index columns used for joins.

* Small tables do not require indexes.

Some columns are strong candidates for indexing. Columns with one or more of the following characteristics are candidates for indexing:

* Values are relatively unique in the column.
* There is a wide range of values (good for regular indexes).
* There is a small range of values (good for bitmap indexes).
* The column contains many nulls, but queries often select all rows having a value. In this case, use the following phrase:

WHERE COL_X > -9.99 * power(10,125)

Using the preceding phrase is preferable to:

WHERE COL_X IS NOT NULL

This is because the first uses an index on COL_X (assuming that COL_X is a numeric column).

Columns with the following characteristics are less suitable for indexing:

* There are many nulls in the column and you do not search on the not null values.

The size of a single index entry cannot exceed roughly one-half (minus some overhead) of the available space in the data block.

Other Considerations:

1. The order of columns in the CREATE INDEX statement can affect query performance. In general, specify the most frequently used columns first. If you create a single index across columns to speed up queries that access, for example, col1, col2, and col3, then queries that access just col1, or just col1 and col2, are also sped up. But a query that accesses just col2, just col3, or just col2 and col3 does not use the index. (See the sketch after this list.)

2. There is a trade-off between the speed of retrieving data from a table and the speed of updating the table. For example, if a table is primarily read-only, having more indexes can be useful; but if a table is heavily updated, having fewer indexes could be preferable.

3. Drop Index that are no longer required.

4. Using different tablespaces (on different disks) for a table and its index produces better performance than storing the table and index in the same tablespace. Disk contention is reduced.
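
To illustrate point 1, a sketch with a hypothetical three-column index:

SQL> CREATE INDEX emp_ix ON emp (deptno, job, hiredate);

-- these queries can use emp_ix (the leading column is present):
SQL> SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM emp WHERE deptno = 10 AND job = 'CLERK';

-- this one cannot use emp_ix as a regular range scan (leading column missing):
SQL> SELECT * FROM emp WHERE job = 'CLERK';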

Experts are always welcome to leave their valuable comments or suggestions on the above post.


