• A sample table scan retrieves a random sample of data from a simple table or from a complex SELECT statement, such as a statement involving joins and views.

• This access path is used when a statement's FROM clause includes the SAMPLE clause or the SAMPLE BLOCK clause.

• To sample by rows with the SAMPLE clause, Oracle reads a specified percentage of the table's rows.

• To sample by blocks with the SAMPLE BLOCK clause, Oracle reads a specified percentage of the table's blocks.

Example:
--------

SQL> select * from test_skip_scan sample(.2);

20 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 571935661

---------------------------------------------------------------------------------
| Id  | Operation            | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                |    20 |   180 |     6   (0)| 00:00:01 |
|   1 | TABLE ACCESS SAMPLE  | TEST_SKIP_SCAN |    20 |   180 |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------
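
Block sampling works the same way; here is a minimal sketch reusing the test_skip_scan table from above (the percentage and the SEED value are chosen arbitrarily; SEED makes the otherwise random sample repeatable):

SQL> select * from test_skip_scan sample block(10);
SQL> select * from test_skip_scan sample block(10) seed(42);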

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Related Posts:

https://samadhandba.wordpress.com/2011/01/04/oversize-of-datatype-varchar2-causes-performance-issue/

https://samadhandba.wordpress.com/2011/02/16/what-and-when-index-scans-is-used/

https://samadhandba.wordpress.com/2011/02/16/index-skip-full-fast-full-index-index-joins-bitmap-indexes-scan/

https://samadhandba.wordpress.com/2011/02/14/sample-table-scans-in-oracle/

https://samadhandba.wordpress.com/2011/02/14/when-you-would-make-index-and-when-not/

https://samadhandba.wordpress.com/2011/02/14/optimize-data-access-path-in-oracle-2/

https://samadhandba.wordpress.com/2011/02/13/troubleshoot-unusable-index-in-oracle/

https://samadhandba.wordpress.com/2011/02/13/the-possible-causes-for-excessive-undo-generation-2/

https://samadhandba.wordpress.com/2011/02/13/three-basic-steps-of-sql-tuning/

https://samadhandba.wordpress.com/2011/02/13/goals-for-tuning-2/

https://samadhandba.wordpress.com/2011/02/04/ora-12054-cannot-set-the-on-commit-refresh-attribute-for-the-materialized-view/


* Create an index if you frequently want to retrieve less than 15% of the rows in a large table.
* To improve performance on joins of multiple tables, index columns used for joins.

* Small tables do not require indexes.

Columns with one or more of the following characteristics are strong candidates for indexing:

* Values are relatively unique in the column.
* There is a wide range of values (good for regular indexes).
* There is a small range of values (good for bitmap indexes).
* The column contains many nulls, but queries often select all rows having a value. In this case, use the following phrase:

WHERE COL_X > -9.99 * power(10,125)

Using the preceding phrase is preferable to:

WHERE COL_X IS NOT NULL

This is because the first predicate can use an index on COL_X (assuming that COL_X is a numeric column), whereas the IS NOT NULL form cannot.
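
A minimal sketch of this trick (table, column, and index names are hypothetical):

SQL> CREATE TABLE orders (order_id NUMBER, discount NUMBER);
SQL> CREATE INDEX orders_discount_idx ON orders (discount);
SQL> -- True for every non-null NUMBER value, so it can drive a range scan on ORDERS_DISCOUNT_IDX:
SQL> SELECT * FROM orders WHERE discount > -9.99 * POWER(10,125);

Entirely null key values are not stored in a single-column B-tree index, which is why the IS NOT NULL form cannot be answered from the index alone.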

Columns with the following characteristics are less suitable for indexing:

* There are many nulls in the column and you do not search on the non-null values.

The size of a single index entry cannot exceed roughly one-half (minus some overhead) of the available space in the data block.

Other Considerations:

1. The order of columns in the CREATE INDEX statement can affect query performance. In general, specify the most frequently used columns first. If you create a single index across col1, col2, and col3, then queries that access col1, or col1 and col2, are also sped up; but a query that accesses only col2, only col3, or only col2 and col3 does not use the index (see the sketch after this list).

2. There is a trade-off between the speed of retrieving data from a table and the speed of updating the table. For example, if a table is primarily read-only, having more indexes can be useful; but if a table is heavily updated, having fewer indexes could be preferable.

3. Drop indexes that are no longer required.

4. Using different tablespaces (on different disks) for a table and its index produces better performance than storing the table and index in the same tablespace, because disk contention is reduced.
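
As a sketch of consideration 1 above (table, column, and index names are hypothetical):

SQL> CREATE INDEX emp_dept_job_hire_idx ON employees (dept_id, job_id, hire_date);

Queries that filter on dept_id, or on dept_id and job_id, match the leading edge of this index and can use it; a query that filters only on hire_date generally cannot (short of an index skip scan) and may need its own index.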

Experts are always welcome to leave their valuable comments or suggestions on the above post.


One day we had a requirement to increase the speed of insert statements. This activity was carried out at one of our bank clients: we had to insert about 500K records into the database as quickly as possible, but we were only achieving about 10 records/sec. I considered the following approaches to gain speed:

1. Use a large blocksize – By defining large (16K or 32K) blocksizes for the target tables, we can reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.

>DROP TABLESPACE sam_tbs INCLUDING CONTENTS AND DATAFILES;
>CREATE TABLESPACE sam_tbs DATAFILE '/mnt/extra/test1.dbf' SIZE 1024M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED BLOCKSIZE 16K;

2. Increase the size of the UNDO tablespace – A large bulk insert generates a lot of undo, so a bigger undo tablespace avoids running out of undo space mid-load.

>CREATE UNDO TABLESPACE UNDOTBS DATAFILE '/mnt/extra/test_undo1.dbf' SIZE 200M BLOCKSIZE 16K;
>ALTER SYSTEM SET undo_tablespace='UNDOTBS' SCOPE=BOTH;
>DROP TABLESPACE UNDOTBSP1 INCLUDING CONTENTS AND DATAFILES;

3. APPEND into tables – The APPEND hint makes Oracle insert above the high-water mark of the table, always grabbing fresh data blocks instead of searching the freelist.

>insert /*+ append */ into customer values ('hello','there');

4. Put the table into NOLOGGING mode – NOLOGGING allows Oracle to avoid almost all redo logging for direct-path operations.

>SELECT logging FROM user_tables WHERE table_name = 'LOGIN';
>ALTER TABLE login NOLOGGING;

To re-enable logging afterwards –
>ALTER TABLE login LOGGING;

5. Disable/drop indexes – It's far faster to rebuild indexes after the data load, all at once. Indexes also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
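
A minimal sketch (index name hypothetical) of taking an index offline for the load and rebuilding it afterwards:

>ALTER INDEX login_idx UNUSABLE;
>ALTER SESSION SET skip_unusable_indexes = TRUE;
-- ... run the bulk insert here ...
>ALTER INDEX login_idx REBUILD NOLOGGING;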

6. Parallelize the load – We can invoke parallel DML (using the PARALLEL and APPEND hints together) to run multiple concurrent inserts into the same table. For this INSERT optimization, make sure to define multiple freelists.
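
A minimal sketch, reusing the customer table from above with an arbitrarily chosen degree of 4 (customer_staging is a hypothetical source table):

>ALTER SESSION ENABLE PARALLEL DML;
>INSERT /*+ APPEND PARALLEL(customer, 4) */ INTO customer SELECT * FROM customer_staging;
>COMMIT;

After a direct-path insert, the session must commit (or roll back) before it can query the target table again, so the COMMIT is part of the recipe.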

Read Full Post »

Automatic statistics gathering should be sufficient for most database objects that are modified at a moderate rate. However, there are cases where it is not adequate: because the automatic job runs during an overnight batch window, statistics on tables that are significantly modified during the day may become stale.

There are two common scenarios:

• Volatile tables that are deleted or truncated and rebuilt during the course of the day.
• Objects that are the target of large bulk loads adding 10% or more to the object's total size.

So you may wish to gather statistics on those objects manually so that the optimizer can choose the best execution plan. There are two ways to gather statistics:

1) Using the DBMS_STATS package.
2) Using the ANALYZE command.

A) Using the DBMS_STATS package
-------------------------------
The DBMS_STATS package has several procedures that help generate statistics:

1) GATHER_DATABASE_STATS: gathers statistics for all objects in the database.

2) GATHER_DICTIONARY_STATS: gathers statistics for the dictionary schemas SYS and SYSTEM and the schemas of RDBMS components.

3) GATHER_FIXED_OBJECTS_STATS: gathers statistics for fixed objects.

4) GATHER_INDEX_STATS: gathers index statistics.

5) GATHER_SCHEMA_STATS: gathers statistics for all objects in a schema.

6) GATHER_SYSTEM_STATS: gathers system statistics.

7) GATHER_TABLE_STATS: gathers table and column (and index) statistics.

8) GENERATE_STATS: generates object statistics from previously collected statistics of related objects; currently only B-tree and bitmap indexes are supported.

Example:
To gather statistics for all objects in the A schema, use:
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('A');
PL/SQL procedure successfully completed.

To gather statistics for the TEST table in the SAM schema, use:
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SAM','TEST');
PL/SQL procedure successfully completed.
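
GATHER_TABLE_STATS also takes optional parameters; a minimal sketch that lets Oracle choose the sample size and cascades to the table's indexes (ESTIMATE_PERCENT and CASCADE are standard DBMS_STATS parameters):

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SAM','TEST', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE);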

B) Using ANALYZE
----------------
Oracle strongly recommends against using the ANALYZE command to collect optimizer statistics; it is supported only for backward compatibility. To generate statistics for the TEST table, use ANALYZE with the COMPUTE STATISTICS (or ESTIMATE STATISTICS) keywords.

In the following example I verify that the NUM_ROWS column of USER_TABLES reflects the new row only after analyzing.

SQL> SELECT NUM_ROWS FROM USER_TABLES WHERE TABLE_NAME='TEST';

  NUM_ROWS
----------
         2

SQL> INSERT INTO TEST VALUES('before');

1 row created.

SQL> SELECT NUM_ROWS FROM USER_TABLES WHERE TABLE_NAME='TEST';

  NUM_ROWS
----------
         2

SQL> ANALYZE TABLE TEST COMPUTE STATISTICS;

Table analyzed.

SQL> SELECT NUM_ROWS FROM USER_TABLES WHERE TABLE_NAME='TEST';

  NUM_ROWS
----------
         3

After gathering statistics, NUM_ROWS contains accurate information.


Before importing, create a user:

SQL> create user visynapse identified by visynapse*******;

User created.

Grant privileges to the user

SQL> grant resource,connect,create view to visynapse;

Grant succeeded.

Then start the import process

SQL> $imp visynapse@test file=E:/uploaddb_19_10_09.dmp log=E:/implogvisyn.log;

Import: Release 10.1.0.2.0 – Production on Mon Oct 19 12:50:41 2009

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Password:*****

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.01.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing VISYNAPSE's objects into VISYNAPSE
. . importing table "CALCULATION_SECURITY" 199 rows imported
. . importing table "CALCULATION_TABLE" 207 rows imported
. . importing table "COLOUR" 1 rows imported
. . importing table "CUBE" 8 rows imported
. . importing table "DASHBOARD" 2 rows imported
. . importing table "DASHBOARD_METRIC" 8 rows imported
. . importing table "DASH_TABLE" 3 rows imported
. . importing table "DATA_DICTIONARY" 48 rows imported
. . importing table "DIMENSION_26" 295 rows imported
. . importing table "DIMENSION_27" 432 rows imported
. . importing table "DIMENSION_28" 295 rows imported
. . importing table "DIMENSION_SECURITY" 19 rows imported
. . importing table "DOMAIN" 1 rows imported
. . importing table "FILTERS" 0 rows imported
. . importing table "GRAINS" 6148 rows imported
. . importing table "HEADER_FOOTER" 0 rows imported
. . importing table "KEY_PERFORMANCE_INDICATOR" 0 rows imported
. . importing table "KEY_PERFORMANCE_INDICATOR_CS" 0 rows imported
. . importing table "KPI_DIMENSION" 0 rows imported
. . importing table "KPI_DIMENSION_CS" 0 rows imported
. . importing table "LINKMETRIC" 0 rows imported
. . importing table "METRIC" 8 rows imported
. . importing table "METRIC_CALCULATION_SET" 207 rows imported
. . importing table "METRIC_CUBE_SET" 8 rows imported
. . importing table "REPORT_CHANNEL" 0 rows imported
. . importing table "REPORT_CONFIG" 23 rows imported
. . importing table "REPORT_HEADER" 0 rows imported
. . importing table "SCORECARD" 0 rows imported
. . importing table "SELECTION_DIMENSION" 0 rows imported
. . importing table "SELECTION_FILTERCRITERIA" 0 rows imported
. . importing table "SELECTION_FORMULA" 0 rows imported
. . importing table "SELECTION_SHAREDREPORT" 0 rows imported
. . importing table "SELECTION_TABLE" 0 rows imported
. . importing table "STATIC_REPORT_TABLE" 8 rows imported
. . importing table "SUMMARY_CONFIG" 0 rows imported
. . importing table "TIME_FREQUENCY" 6 rows imported
. . importing table "USER_DASHBOARD_SECURITY" 2 rows imported
. . importing table "USER_GROUP" 1 rows imported
. . importing table "USER_PROFILES" 1 rows imported
. . importing table "USER_TARGETS" 8 rows imported
. . importing table "VALUE_SECURITY" 0 rows imported
About to enable constraints…
Import terminated successfully without warnings.

SQL>

Experts are always welcome to leave their valuable comments or suggestions on the above post.


SQL> $exp visynapse@uploaddb file=E:/uploaddb_19_10_09.dmp log=E:/uploadlog.log;

Export: Release 10.1.0.2.0 – Production on Mon Oct 19 11:18:48 2009

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Password:******

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user VISYNAPSE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user VISYNAPSE
About to export VISYNAPSE’s objects …
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export VISYNAPSE’s tables via Conventional Path …
. . exporting table CALCULATION_SECURITY 199 rows exported
. . exporting table CALCULATION_TABLE 207 rows exported
. . exporting table COLOUR 1 rows exported
. . exporting table CUBE 8 rows exported
. . exporting table DASHBOARD 2 rows exported
. . exporting table DASHBOARD_METRIC 8 rows exported
. . exporting table DASH_TABLE 3 rows exported
. . exporting table DATA_DICTIONARY 48 rows exported
. . exporting table DEFINITIONS_21 21 rows exported
. . exporting table DEFINITIONS_22 12 rows exported
. . exporting table DEFINITIONS_23 17 rows exported
. . exporting table DEFINITIONS_24 30 rows exported
. . exporting table DEFINITIONS_25 22 rows exported
. . exporting table DEFINITIONS_26 25 rows exported
. . exporting table DEFINITIONS_27 23 rows exported
. . exporting table DEFINITIONS_28 28 rows exported
. . exporting table DIMENSION_SECURITY 19 rows exported
. . exporting table DOMAIN 1 rows exported
. . exporting table FILTERS 0 rows exported
. . exporting table GRAINS 6148 rows exported
. . exporting table HEADER_FOOTER 0 rows exported
. . exporting table KEY_PERFORMANCE_INDICATOR 0 rows exported
. . exporting table KEY_PERFORMANCE_INDICATOR_CS 0 rows exported
. . exporting table KPI_DIMENSION 0 rows exported
. . exporting table KPI_DIMENSION_CS 0 rows exported
. . exporting table LINKMETRIC 0 rows exported
. . exporting table METRIC 8 rows exported
. . exporting table METRIC_CALCULATION_SET 207 rows exported
. . exporting table METRIC_CUBE_SET 8 rows exported
. . exporting table REPORT_CHANNEL 0 rows exported
. . exporting table REPORT_CONFIG 23 rows exported
. . exporting table REPORT_HEADER 0 rows exported
. . exporting table SCORECARD 0 rows exported
. . exporting table SELECTION_DIMENSION 0 rows exported
. . exporting table SELECTION_FILTERCRITERIA 0 rows exported
. . exporting table SELECTION_FORMULA 0 rows exported
. . exporting table SELECTION_SHAREDREPORT 0 rows exported
. . exporting table SELECTION_TABLE 0 rows exported
. . exporting table STATIC_REPORT_TABLE 8 rows exported
. . exporting table SUMMARY_CONFIG 0 rows exported
. . exporting table TIME_FREQUENCY 6 rows exported
. . exporting table USER_DASHBOARD_SECURITY 2 rows exported
. . exporting table USER_GROUP 1 rows exported
. . exporting table USER_PROFILES 1 rows exported
. . exporting table USER_TARGETS 8 rows exported
. . exporting table VALUES_28_CONTROL 196 rows exported
. . exporting table VALUE_SECURITY 0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

SQL>

Experts are always welcome to leave their valuable comments or suggestions on the above post.
