In an index scan, a row is retrieved by traversing the index using the indexed column values specified by the statement. An index scan retrieves data from an index based on the value of one or more columns in the index. To perform an index scan, Oracle searches the index for the indexed column values accessed by the statement. If the statement accesses only columns of the index, then Oracle reads the indexed column values directly from the index, rather than from the table.

The index contains not only the indexed value, but also the rowids of rows in the table having that value. Therefore, if the statement accesses other columns in addition to the indexed columns, then Oracle can find the rows in the table by using either a table access by rowid or a cluster scan.
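For instance, the following hypothetical sketch (using the standard SCOTT.EMP demo table, assumed to exist) illustrates the difference: when only indexed columns are selected, Oracle can answer the query from the index alone; when other columns are selected, a table access by rowid is expected as well.

SQL> create index emp_ename_i on emp(ename);

SQL> select ename from emp where ename = 'KING';        -- expected: INDEX RANGE SCAN only
SQL> select ename, sal from emp where ename = 'KING';   -- expected: INDEX RANGE SCAN + TABLE ACCESS BY INDEX ROWID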

An index scan can be one of the following types:

1)Index Unique Scans
2)Index Range Scans
3)Index Range Scans Descending
4)Index Skip Scans
5)Full Scans
6)Fast Full Index Scans
7)Index Joins
8)Bitmap Indexes


1)Index Unique Scans
———————————————-

This scan returns, at most, a single rowid. Oracle performs a unique scan if a statement contains a UNIQUE or a PRIMARY KEY constraint that guarantees that only a single row is accessed.

This access path is used when all columns of a unique (B-tree) index, or of an index created as a result of a primary key constraint, are specified with equality conditions. An example appears later in this post.
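For instance (a hypothetical one-liner against the standard SCOTT.EMP demo table, whose primary key is EMPNO; the full worked example at the end of this post shows the same behaviour):

SQL> select * from emp where empno = 7788;   -- expected: INDEX UNIQUE SCAN on the primary key index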

2)Index Range Scans
—————————————————-

•An index range scan is a common operation for accessing selective data.

•Data is returned in the ascending order of index columns. Multiple rows with identical values are sorted in ascending order by rowid.

•If data must be sorted by order, then use the ORDER BY clause, and do not rely on an index. If an index can be used to satisfy an ORDER BY clause, then the optimizer uses this option and avoids a sort.

The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions such as col1 = 1, col1 < 1, col1 > 1, or an AND combination of such conditions on the leading columns (for example col1 = 1 AND col2 = 99).

•Range scans can use unique or non-unique indexes. Range scans avoid sorting when index columns constitute the ORDER BY/GROUP BY clause.

•The hint INDEX(table_alias index_name) instructs the optimizer to use a specific index (a hedged sketch of the hint, and of ORDER BY being satisfied by the index, appears after the example below).

•Note that a leading wildcard such as LIKE '%text' does not result in a range scan, but LIKE 'text%' can. Look at the following examples: the predicate LIKE '88%' used a range scan, while LIKE '%88' did not.

SQL> create table table_a(n number ,k varchar2(15));

Table created.

SQL> begin
  2    for i in 1 .. 10000
  3    loop
  4      insert into table_a values(i, 'pc-'||round(dbms_random.value(1,20000),0));
  5    end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> create index table_a_I_K on table_a(k);

Index created.

SQL> select * from table_a where k like '88%';

no rows selected

Execution Plan
———————————————————-
Plan hash value: 1124802227

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 22 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_A | 1 | 22 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_A_I_K | 1 | | 1 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '88%')
filter("K" LIKE '88%')

Note
—–
– dynamic sampling used for this statement

SQL> select * from table_a where k like '%88';

102 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1923776651

—————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————–
| 0 | SELECT STATEMENT | | 102 | 2244 | 8 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TABLE_A | 102 | 2244 | 8 (0)| 00:00:01 |
—————————————————————————–

Predicate Information (identified by operation id):
—————————————————

1 - filter("K" LIKE '%88')

Note
—–
– dynamic sampling used for this statement
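As a hedged sketch of the two bullets above, the INDEX hint can be used to steer the optimizer onto TABLE_A_I_K, and an ORDER BY on the indexed column can be satisfied by the index without a separate sort step (the exact plans and costs will vary):

SQL> select /*+ index(a table_a_I_K) */ * from table_a a where k like 'pc-1%';
SQL> select * from table_a where k like 'pc-1%' order by k;   -- range scan expected; no SORT ORDER BY step needed for k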

3)Index Range Scans Descending
——————————————————

•An index range scan descending is identical to an index range scan, except that the data is returned in descending order.

•Indexes, by default, are stored in ascending order.

•The optimizer uses index range scan descending when an order by descending clause can be satisfied by an index.

•The hint INDEX_DESC(table_alias index_name) is used for index range scan descending.

Example:
————————————-
SQL> select * from table_a where k like '8888%';

8 rows selected.

Execution Plan
———————————————————-
Plan hash value: 1124802227

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_A | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_A_I_K | 1 | | 2 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '8888%')
filter("K" LIKE '8888%')

SQL> select /*+index_desc(table_a)*/ * from table_a where k like '8888%';

8 rows selected.

Execution Plan
———————————————————-
Plan hash value: 3364135956

——————————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 9 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | TABLE_A | 1 | 9 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN DESCENDING| TABLE_A_I_K | 1 | | 2 (0)| 00:00:01 |
——————————————————————————————–

Predicate Information (identified by operation id):
—————————————————

2 - access("K" LIKE '8888%')
filter("K" LIKE '8888%' AND "K" LIKE '8888%')

4)Index Skip Scans
—————————–

5)Full Scans
——————————

6)Fast Full Index Scans
—————————————–

7)Index Joins
——————————————

8)Bitmap Indexes
———————————————

Scan types 4 to 8 are discussed in a separate post, "Index Skip, Full, Fast Full Index, Index Joins, Bitmap Indexes Scan" (see the Related Post links below).
To illustrate the index unique scan, create a table and make one of its columns the primary key. Then reference the indexed column in the WHERE clause with an equality operator and note that an index unique scan is used.


SQL> create table test_tab2 as select level col1, level col2 from dual connect by level<=100;

Table created.
Case 1: There is no index, so a full table scan is performed.
——————————————————————————–
SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 700767796

——————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TEST_TAB2 | 1 | 26 | 3 (0)| 00:00:01 |
——————————————————————————-

Predicate Information (identified by operation id):
—————————————————

1 - filter("COL1"=99)

Note
—–
– dynamic sampling used for this statement

Create a non-unique index on the table.

SQL> create index test_tab2_I on test_tab2(col1);
Index created.

Case 2: There is a non-unique index on col1, so an index range scan is performed.
————————————————————————————————-
SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 465564947

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_TAB2 | 1 | 26 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TEST_TAB2_I | 1 | | 1 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("COL1"=99)

Note
—–
– dynamic sampling used for this statement

Now drop the index and add a primary key to the table.
SQL> drop index test_tab2_I;
Index dropped.

SQL> alter table
2 test_tab2 add primary key(col1);

Table altered.

Case 3: With a primary key on col1, an equality condition on the column results in an index unique scan.
——————————————————————————

SQL> select * from test_tab2 where col1=99;

Execution Plan
———————————————————-
Plan hash value: 1384425796

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 26 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TEST_TAB2 | 1 | 26 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | SYS_C006487 | 1 | | 0 (0)| 00:00:01 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————

2 - access("COL1"=99)

Experts are always welcome to provide valuable comments or suggestions on the above post.

Related Post:

https://samadhandba.wordpress.com/2011/01/04/oversize-of-datatype-varchar2-causes-performance-issue/

https://samadhandba.wordpress.com/2011/02/16/what-and-when-index-scans-is-used/

https://samadhandba.wordpress.com/2011/02/16/index-skip-full-fast-full-index-index-joins-bitmap-indexes-scan/

https://samadhandba.wordpress.com/2011/02/14/sample-table-scans-in-oracle/

https://samadhandba.wordpress.com/2011/02/14/when-you-would-make-index-and-when-not/

https://samadhandba.wordpress.com/2011/02/14/optimize-data-access-path-in-oracle-2/

https://samadhandba.wordpress.com/2011/02/13/troubleshoot-unusable-index-in-oracle/

https://samadhandba.wordpress.com/2011/02/13/the-possible-causes-for-excessive-undo-generation-2/

https://samadhandba.wordpress.com/2011/02/13/three-basic-steps-of-sql-tuning/

https://samadhandba.wordpress.com/2011/02/13/goals-for-tuning-2/

https://samadhandba.wordpress.com/2011/02/04/ora-12054-cannot-set-the-on-commit-refresh-attribute-for-the-materialized-view/


* Create an index if you frequently want to retrieve less than 15% of the rows in a large table.
* To improve performance on joins of multiple tables, index columns used for joins.

* Small tables do not require indexes.

Some columns are strong candidates for indexing. Columns with one or more of the following characteristics are candidates for indexing:

* Values are relatively unique in the column.
* There is a wide range of values (good for regular indexes).
* There is a small range of values (good for bitmap indexes).
* The column contains many nulls, but queries often select all rows having a value. In this case, use the following phrase:

WHERE COL_X > -9.99 * power(10,125)

Using the preceding phrase is preferable to:

WHERE COL_X IS NOT NULL

This is because the first uses an index on COL_X (assuming that COL_X is a numeric column).
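A minimal sketch of why this works (the table and index names below are hypothetical): single-column B-tree indexes do not store NULL keys, so the IS NOT NULL form cannot be resolved from the index, while the open-ended range predicate can be.

SQL> create table t_null_demo (id number, col_x number);
SQL> create index t_null_demo_i on t_null_demo(col_x);

SQL> select * from t_null_demo where col_x is not null;               -- typically a full table scan
SQL> select * from t_null_demo where col_x > -9.99 * power(10,125);   -- can use an INDEX RANGE SCAN on t_null_demo_i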

Columns with the following characteristics are less suitable for indexing:

* There are many nulls in the column and you do not search on the not null values.

The size of a single index entry cannot exceed roughly one-half (minus some overhead) of the available space in the data block.

Other Considerations:

1. The order of columns in the CREATE INDEX statement can affect query performance. In general, specify the most frequently used columns first. If you create a single index across columns to speed up queries that access, for example, col1, col2, and col3, then queries that access just col1, or just col1 and col2, are also sped up. But a query that accesses just col2, just col3, or just col2 and col3 does not use the index.
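A hedged sketch of this leading-column behaviour (table and index names are hypothetical):

SQL> create table t_comp (col1 number, col2 number, col3 number, payload varchar2(30));
SQL> create index t_comp_i on t_comp(col1, col2, col3);

SQL> select * from t_comp where col1 = 1;                 -- can use t_comp_i (leading column present)
SQL> select * from t_comp where col1 = 1 and col2 = 2;    -- can use t_comp_i
SQL> select * from t_comp where col2 = 2 and col3 = 3;    -- no leading column; a normal range scan on t_comp_i is not used

(In some cases the optimizer may still choose an index skip scan for the last query, as discussed in the separate post on skip scans.)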

2. There is a trade-off between the speed of retrieving data from a table and the speed of updating the table. For example, if a table is primarily read-only, having more indexes can be useful; but if a table is heavily updated, having fewer indexes could be preferable.

3. Drop indexes that are no longer required.
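One hedged way to identify such indexes is Oracle's index usage monitoring (the index name below is hypothetical):

SQL> alter index t_comp_i monitoring usage;
-- ... let the normal workload run for a representative period ...
SQL> select index_name, monitoring, used from v$object_usage;
SQL> alter index t_comp_i nomonitoring usage;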

4. Using different tablespaces (on different disks) for a table and its index produces better performance than storing the table and index in the same tablespace. Disk contention is reduced.
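A hedged sketch (the tablespaces tbs_data and tbs_idx are assumed to already exist on different disks):

SQL> create table orders_demo (order_id number, order_dt date) tablespace tbs_data;
SQL> create index orders_demo_i on orders_demo(order_id) tablespace tbs_idx;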

Experts are always welcome to provide valuable comments or suggestions on the above post.


If a procedural object, such as a stored PL/SQL function or a view, becomes invalid, the DBA does not necessarily have to do anything: the first time it is accessed, Oracle will attempt to recompile it, and this may well succeed.

But if an index becomes unusable for any reason, it must always be repaired explicitly before it can be used.

This is because an index consists of the index key values, sorted into order, each with the relevant
rowid. The rowid is the physical pointer to the location of the row to which the index key refers.

If the rowids of the table are changed, then the index will be marked as unusable. This could occur for a number of reasons. Perhaps the most common is that the table has been moved, with the ALTER TABLE…MOVE command. This will change the physical placement of all the rows, and therefore the index entries will be pointing to the wrong place. Oracle will be aware of this and will therefore not permit use of the index.

Identifying Unusable Indexes
———————————

In earlier releases of the Oracle database, when executing SQL statements, if the session attempted to use an unusable index it would immediately return an error, and the statement would fail. Release 10g of the database changes this behavior. If a statement attempts to use an unusable index, the statement will revert to an execution plan that does not require the index.

Thus, statements will always succeed—but perhaps at the cost of greatly reduced performance. This behavior is controlled by the instance parameter SKIP_UNUSABLE_INDEXES, which defaults to TRUE. The exception to this is if the index is necessary to enforce a constraint: if the index on a primary key column becomes unusable, the table will be locked for DML.
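To check or change this behaviour for the current session (a small sketch; the parameter can also be set at the instance level):

SQL> show parameter skip_unusable_indexes

SQL> alter session set skip_unusable_indexes = false;   -- revert to the old behaviour: statements fail with ORA-01502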

To detect indexes which have become unusable, query the DBA_INDEXES view:
SQL> select owner, index_name from dba_indexes where status='UNUSABLE';

Repairing Unusable Indexes
—————————————-

To repair the index, it must be re-created with the ALTER INDEX…REBUILD command.

This will make a pass through the table, generating a new index with correct rowid pointers
for each index key. When the new index is completed, the original unusable index is dropped.

The syntax of the REBUILD command has several options. The more important ones are TABLESPACE, ONLINE, and NOLOGGING. By default, the index is rebuilt within its current tablespace, but by specifying a tablespace with the TABLESPACE keyword, it can be moved to a different one.

Also by default, during the course of the rebuild the table will be locked for DML. This can be avoided by using the ONLINE keyword.
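A hedged sketch combining these options (the index and tablespace names are hypothetical):

SQL> alter index test_tab_i rebuild tablespace tbs_idx online nologging;

ONLINE allows DML against the table while the rebuild runs, and NOLOGGING reduces redo generation for the rebuild itself (so take a fresh backup afterwards).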

1)Create Table and insert row in it:
—————————————-
SQL> create table test ( a number primary key);

Table created.

SQL> insert into test values(1);
1 row created.

SQL> commit;
Commit complete.

2)Check the Index Status
————————–
SQL> select index_name, status from user_indexes;

INDEX_NAME STATUS
—————————— ——–
SYS_C0044513 VALID

3)Move the Table and Check Status:
————————————
SQL> alter table test move;

Table altered.

SQL> select index_name, status from user_indexes;

INDEX_NAME STATUS
—————————— ——–
SYS_C0044513 UNUSABLE

4)Rebuild The Index:
———————–
SQL> alter index SYS_C0044513 rebuild online;

Index altered.

SQL> select index_name, status from user_indexes;

INDEX_NAME STATUS
—————————— ——–
SYS_C0044513 VALID

Experts are always welcome to provide valuable comments or suggestions on the above post.


I would like to share my database design experience from one of our projects with you. At my previous company I worked on the development side, and we were working on a fresh project where we needed to design the database for one of our bank clients.
The approach for designing it should be:
• Establish the Scope of the Database
• Identify the 'Things of Interest' that are within the Scope (called Entities or Tables in a Database).
• Determine the Relationships between them.

1. Define the Scope as the Area of Interest.
2. Define the “Things of Interest”,(e.g. Customer), in the Area of Interest.
3. Analyze the Things of Interest and identify the corresponding Tables. For this, go through the client's requirement document.
4. Produce the list of Things of Interest. At the same time, decide the primary key and the respective foreign keys.
5. Establish the relationships between the Tables.
For example, “A Customer can place many Orders”, and “A Product can be purchased many times and appear in many Orders.”
6. Determine the characteristics of each Table (e.g. an Employee has a Date-of-Birth). Maintain the Data Dictionary of all the objects, which we need to provide to the client at the final release.
7. Identify the Static and Reference Data, such as Country Codes or Customer Types.
8. Obtain a small set of Sample Data.
9. Review Code or Type Data which is (more or less) constant, which can be classified as Reference Data.
10. Look for ‘has a’ relationships. These can become Foreign Keys, or ‘Parent-Child’ relationships.
11. You need to define a Primary Key for all Tables.
For Reference Tables, use the 'Code' as the Key, often with only one other field, which is the Description field. For all non-Reference Data Tables, I suggest that you simply assign an auto-increment Integer for each Primary Key (a hedged sketch appears after this list).
12. At the same time, collect the required procedures, functions, packages, etc. from the developers.
13. Once we have decided all the tables and indexes, we need to segregate those objects into Master, Transaction, History, etc. We segregate those objects so that we can plan a different logical segment (Tablespace) for the respective objects. Depending upon the object category, we will calculate the required size for the logical segment (I will discuss this size calculation for the logical segment in another post). While segregating, the naming convention should be specific: for example, master table names should start with MST_, History tables with HST_, etc.
14. At last, combine all those object scripts (procedures, functions, packages) into a single .sql script, and implement this script in the given schema with the required privileges.
15. Confirm the first draft of the Database design against the Sample Data.
16. Review the Business Rules with Users,(if you can find any Users).
17. Obtain from the Users some representative enquiries for the Database.
18. Review the Results of Steps 1 to 8 with appropriate people, such as Users, Managers,
Development staff, etc. and repeat until the final Database design is reached.
19. Define User Scenarios and step through them with some sample data to check that the Database supports the required functionality.
Test the project from a performance point of view until the requirements are met. Review the procedures and functions for SQL query tuning wherever there is a performance issue.
Once the requirements are met, release the product to the client as version 1. At the same time, provide them with the Data Dictionary for all the objects and the ER-Diagram (we will discuss how to design the ER-Diagram in another post).
In the above discussion, designing the ER-Diagram is another topic, which we will cover in the next post.
But this is a short and sweet way to design a database.
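A small hedged sketch of points 10, 11 and 13 above (all object names are hypothetical, following the reference-table and MST_ naming ideas):

SQL> create table ref_customer_type (cust_type_code varchar2(10) primary key, description varchar2(100));

SQL> create sequence mst_customer_seq;

SQL> create table mst_customer (customer_id number primary key, customer_name varchar2(100), cust_type_code varchar2(10) references ref_customer_type);

SQL> insert into ref_customer_type values ('RETAIL', 'Retail banking customer');

SQL> insert into mst_customer values (mst_customer_seq.nextval, 'ABC Traders', 'RETAIL');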
