
Posts Tagged ‘Programming’

•Before 10g, you enabled automatic collection of table-modification statistics by specifying the MONITORING keyword in the CREATE (or ALTER) TABLE statement, so that DBMS_STATS could gather statistics for the table when they went stale. It is good to remember that starting with Oracle Database 10g, the MONITORING and NOMONITORING keywords are deprecated and statistics are collected automatically. If you do specify these keywords, they are ignored.
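For reference, the pre-10g syntax looked like the following sketch (the table name emp is hypothetical); on 10g and later these keywords are accepted but ignored:

```sql
-- Pre-10g: enable table monitoring so DBMS_STATS (GATHER STALE)
-- could detect which tables had been modified.
ALTER TABLE emp MONITORING;

-- Disable monitoring again (likewise deprecated and ignored from 10g on).
ALTER TABLE emp NOMONITORING;
```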

•The job GATHER_STATS_JOB automatically gathers optimizer statistics.

•This job gathers statistics on all objects in the database that have either
-missing statistics, or
-stale statistics

•This job is created automatically at database creation time and is managed by the Scheduler, which runs it whenever the maintenance window opens. By default, the maintenance window opens every night from 10 P.M. to 6 A.M. and all day on weekends.

•The stop_on_window_close attribute controls whether GATHER_STATS_JOB continues running when the maintenance window closes. The default setting for the stop_on_window_close attribute is TRUE, causing the Scheduler to terminate GATHER_STATS_JOB when the maintenance window closes. The remaining objects are then processed in the next maintenance window.
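If you prefer the job to run to completion, the attribute can be changed with DBMS_SCHEDULER.SET_ATTRIBUTE; a sketch (run as a suitably privileged user, e.g. SYS):

```sql
-- Let GATHER_STATS_JOB keep running after the maintenance window closes.
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'GATHER_STATS_JOB',
    attribute => 'STOP_ON_WINDOW_CLOSE',
    value     => FALSE);
END;
/
```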

•The database automatically collects statistics on a database object when the object has no previously gathered statistics, or when the existing statistics have become stale because the underlying object has been modified significantly (more than 10% of its rows changed).

•If the parameter STATISTICS_LEVEL is set to TYPICAL or ALL, the database automatically gathers statistics for objects whose statistics are stale. If it is set to BASIC, the automatic statistics gathering job is not able to detect stale statistics.
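You can verify the current setting and, if needed, change it; for example:

```sql
-- Check the current setting (SQL*Plus).
SHOW PARAMETER statistics_level

-- TYPICAL (the default) is enough for automatic stale-statistics detection.
ALTER SYSTEM SET statistics_level = TYPICAL;
```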

•To know about the job GATHER_STATS_JOB issue the following query,
SQL> select JOB_TYPE, SCHEDULE_TYPE, START_DATE, REPEAT_INTERVAL, END_DATE, ENABLED, STATE, RUN_COUNT from DBA_SCHEDULER_JOBS where JOB_NAME = 'GATHER_STATS_JOB';

JOB_TYPE SCHEDULE_TYP START_DATE REPEA END_DATE ENABL STATE RUN_COUNT
—————- ———— ———- —– ———- —– ——— ———-
WINDOW_GROUP TRUE SCHEDULED 31

To find the database creation date, issue:
SQL> select created, sysdate from v$database;
CREATED SYSDATE
——— ———
06-MAY-08 12-JUN-08

So between May 6, 2008 and June 12, 2008 this job ran 31 times.

•In order to determine whether or not a given database object needs new statistics, Oracle provides a table monitoring facility. Monitoring is enabled when STATISTICS_LEVEL is set to TYPICAL or ALL. Monitoring tracks the approximate number of INSERTs, UPDATEs, and DELETEs for a table, as well as whether the table has been truncated, since the last time statistics were gathered. Information about these table changes can be viewed in the USER_TAB_MODIFICATIONS view.

For example, you can query:
select TABLE_NAME, INSERTS, UPDATES, DELETES from USER_TAB_MODIFICATIONS;

•Following a data modification, there may be a delay of a few minutes before Oracle propagates the information to this view. Use the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to immediately flush the outstanding monitored information kept in memory into the view.

Example:
—————
SQL> insert into test values('hi');
1 row created.

SQL> select TABLE_NAME,INSERTS,UPDATES,DELETES from USER_TAB_MODIFICATIONS where table_name='TEST';
no rows selected

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
PL/SQL procedure successfully completed.

SQL> select TABLE_NAME,INSERTS,UPDATES,DELETES from USER_TAB_MODIFICATIONS where table_name='TEST';
TABLE_NAME INSERTS UPDATES DELETES
—————————— ———- ———- ———-
TEST 1 0 0

After the table is analyzed, the rows disappear:
SQL> analyze table test estimate statistics;
Table analyzed.

SQL> select TABLE_NAME,INSERTS,UPDATES,DELETES from USER_TAB_MODIFICATIONS where table_name='TEST';
no rows selected

•If a monitored table has been modified by more than 10%, its statistics are considered stale and are gathered again.
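In 10g you can check which of your tables the database currently considers stale via the STALE_STATS column of USER_TAB_STATISTICS; a sketch:

```sql
-- Tables whose statistics are currently flagged as stale (YES/NO column).
SELECT table_name, last_analyzed, stale_stats
FROM   user_tab_statistics
WHERE  stale_stats = 'YES';
```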


Read Full Post »

* Create an index if you frequently want to retrieve less than 15% of the rows in a large table.
* To improve performance on joins of multiple tables, index columns used for joins.

* Small tables do not require indexes.
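As a sketch of the join-column guideline (table and column names hypothetical):

```sql
-- Index the foreign-key column used to join ORDERS to CUSTOMERS,
-- so the join does not have to scan ORDERS fully.
CREATE INDEX orders_cust_id_idx ON orders (customer_id);
```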

Some columns are strong candidates for indexing. Columns with one or more of the following characteristics are candidates for indexing:

* Values are relatively unique in the column.
* There is a wide range of values (good for regular indexes).
* There is a small range of values (good for bitmap indexes).
* The column contains many nulls, but queries often select all rows having a value. In this case, use the following phrase:

WHERE COL_X > -9.99 * power(10,125)

Using the preceding phrase is preferable to:

WHERE COL_X IS NOT NULL

This is because the first condition can use an index on COL_X (assuming that COL_X is a numeric column), whereas IS NOT NULL cannot use a normal B-tree index, since rows whose indexed columns are all null are not stored in the index.

Columns with the following characteristics are less suitable for indexing:

* There are many nulls in the column and you do not search on the not null values.

The size of a single index entry cannot exceed roughly one-half (minus some overhead) of the available space in the data block.

Other Considerations:

1. The order of columns in the CREATE INDEX statement can affect query performance. In general, specify the most frequently used columns first. If you create a single index across columns to speed up queries that access, for example, col1, col2, and col3, then queries that access just col1, or that access just col1 and col2, are also sped up. But a query that accesses just col2, just col3, or just col2 and col3 does not use the index.
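A sketch illustrating the leading-column rule (names hypothetical):

```sql
-- Composite index: usable by queries that filter on col1, on col1 and
-- col2, or on all three columns.
CREATE INDEX t1_c123_idx ON t1 (col1, col2, col3);

-- Can use the index (leading column col1 is present):
SELECT * FROM t1 WHERE col1 = :b1 AND col2 = :b2;

-- Does not use the index for a normal range scan (no leading column):
SELECT * FROM t1 WHERE col2 = :b2 AND col3 = :b3;
```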

2. There is a trade-off between the speed of retrieving data from a table and the speed of updating the table. For example, if a table is primarily read-only, having more indexes can be useful; but if a table is heavily updated, having fewer indexes could be preferable.

3. Drop indexes that are no longer required.

4. Using different tablespaces (on different disks) for a table and its index produces better performance than storing the table and index in the same tablespace, because disk contention is reduced.
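A sketch (tablespace names data_ts and index_ts hypothetical), assuming the two tablespaces sit on different disks:

```sql
CREATE TABLE sales (sale_id NUMBER, amount NUMBER)
  TABLESPACE data_ts;

-- Placing the index in a different tablespace reduces disk contention.
CREATE INDEX sales_id_idx ON sales (sale_id)
  TABLESPACE index_ts;
```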

Experts are always welcome to leave valuable comments or suggestions on the above post.

Read Full Post »

1)BFILE:
Maximum size: 4 GB-1 which is power(2,32)-1 bytes.
Maximum size of the file name: 255 characters
Maximum size of the directory name: 30 characters
Maximum number of open BFILEs: Limited by the value of the SESSION_MAX_OPEN_FILES initialization parameter, which itself is limited by the maximum number of open files the OS will allow.
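A sketch of a typical BFILE setup (directory path and all names hypothetical); note that the column stores only a locator, not the file contents:

```sql
-- A directory object maps a name to an OS directory.
CREATE DIRECTORY doc_dir AS '/u01/app/docs';

CREATE TABLE docs (id NUMBER, body BFILE);

-- BFILENAME builds a locator pointing at a file in that directory.
INSERT INTO docs VALUES (1, BFILENAME('DOC_DIR', 'manual.pdf'));
```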

2)BLOB:
Maximum size: (4 GB – 1) * (database block size, i.e. the DB_BLOCK_SIZE initialization parameter).

So, if db_block_size = 8K, then the maximum size = 32 TB:
SQL> select 4*1024*1024*1024*8*1024/1024/1024/1024/1024 from dual;
4*1024*1024*1024*8*1024/1024/1024/1024/1024
——————————————-
32

As the database block size varies from 2K to 32K, the BLOB size limit varies from 8 TB to 128 TB.

Number of LOB columns per table: limited only by the maximum number of columns per table, which is 1000.

3)CHAR:
Maximum size: 2000 bytes
Minimum and Default Size: 1 byte

4)CHAR VARYING
Maximum size: 4000 bytes

5)CLOB
Same as BLOB: in the range of 8 TB to 128 TB, depending on the database block size.

6)Literals (characters or numbers in SQL or PL/SQL)
Maximum size: 4000 characters

7)LONG
Maximum size: 2 GB – 1

8)NCHAR
Maximum size: 2000 bytes

9)NCHAR VARYING
Maximum size: 4000 bytes

10)NCLOB
Same as BLOB: in the range of 8 TB to 128 TB, depending on the database block size.

11)NUMBER
Maximum value: 9.99…9 (38 nines) * power(10,125)
Minimum value: -9.99…9 (38 nines) * power(10,125)
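A sketch showing how precision and scale bound NUMBER values (names and values illustrative):

```sql
-- NUMBER(p,s): at most p significant digits, s of them after the point.
CREATE TABLE amounts (val NUMBER(10,2));

INSERT INTO amounts VALUES (12345678.91);   -- fits: 10 digits, scale 2
INSERT INTO amounts VALUES (123456789.12);  -- fails with ORA-01438
```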

12)RAW
Maximum size: 2000 bytes

13)VARCHAR
Maximum size: 4000 bytes
Minimum size: 1 byte.

14)VARCHAR2
Maximum size: 4000 bytes
Minimum size: 1 byte.

Read Full Post »