
Archive for January, 2011

1)CURSOR_SHARING: •When bind variables are present, the optimizer generates the execution plan based on the bind variables, not on the actual literal values behind them.

•Depending on the setting of CURSOR_SHARING, Oracle converts literal values in SQL statements to bind variables, which can affect the execution plans of those statements.

•This parameter determines what kinds of SQL statements can share the same cursors and can take one of three values: EXACT, SIMILAR, or FORCE.

•If it is set to EXACT, only statements with identical text can share the same cursor. EXACT is the default.

•SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized.

•FORCE causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.
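
The effect is easy to see from V$SQL. A minimal sketch (the emp table and literal values are just placeholders; any table will do):

```sql
-- Ask Oracle to replace literals with system-generated bind variables.
ALTER SESSION SET cursor_sharing = FORCE;

SELECT * FROM emp WHERE empno = 7369;
SELECT * FROM emp WHERE empno = 7499;

-- Both statements should now map to a single shared cursor whose text
-- contains a generated bind such as :"SYS_B_0" in place of the literal.
SELECT sql_text, executions
FROM   v$sql
WHERE  sql_text LIKE '%empno%' AND sql_text LIKE '%SYS_B%';
```

With CURSOR_SHARING=EXACT the same two statements would produce two separate cursors.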

2)DB_FILE_MULTIBLOCK_READ_COUNT:•This parameter specifies the number of blocks that are read in a single I/O during a full table scan or index fast full scan.

•The optimizer uses the value of DB_FILE_MULTIBLOCK_READ_COUNT to cost full table scans and index fast full scans.

•Larger values result in a cheaper cost for full table scans and can result in the optimizer choosing a full table scan over an index scan.

•If this parameter is not set explicitly (or is set to 0), the optimizer uses a default value of 8 when costing full table scans and index fast full scans.
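
As a sketch (the table name is a placeholder), you can watch the full-scan cost change as the parameter grows:

```sql
-- Cost a full scan with the current multiblock read count.
EXPLAIN PLAN FOR SELECT /*+ FULL(t) */ * FROM big_table t;
SELECT * FROM TABLE(dbms_xplan.display);

-- Raise the value for this session and cost the same scan again;
-- the reported cost of the TABLE ACCESS FULL step should be lower.
ALTER SESSION SET db_file_multiblock_read_count = 32;
EXPLAIN PLAN FOR SELECT /*+ FULL(t) */ * FROM big_table t;
SELECT * FROM TABLE(dbms_xplan.display);
```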

3)OPTIMIZER_INDEX_CACHING:•This parameter controls the costing of an index probe in conjunction with a nested loop.

•The range of values 0 to 100 for OPTIMIZER_INDEX_CACHING indicates the percentage of index blocks assumed to be in the buffer cache, which modifies the optimizer's assumptions about index caching for nested loops and IN-list iterators.

•A value of 100 implies that 100% of the index blocks are likely to be found in the buffer cache, and the optimizer adjusts the cost of an index probe or nested loop accordingly.

•Use caution with this parameter, because execution plans can change in favor of index access paths.

4)OPTIMIZER_INDEX_COST_ADJ:•OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be more or less index friendly—that is, to make the optimizer more or less prone to selecting an index access path over a full table scan.

•The default for this parameter is 100 percent, at which the optimizer evaluates index access paths at the regular cost.

•Any other value makes the optimizer evaluate the access path at that percentage of the regular cost. For example, a setting of 50 makes the index access path look half as expensive as normal.
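
Both index parameters can be tried at the session level before touching the instance. A hedged sketch (the values and the query are illustrative, not recommendations):

```sql
-- Assume 90% of index blocks are cached, and cost index paths at half price.
ALTER SESSION SET optimizer_index_caching  = 90;
ALTER SESSION SET optimizer_index_cost_adj = 50;

-- Re-explain a query that previously used a full table scan and check
-- whether the plan now favors an index access path.
EXPLAIN PLAN FOR SELECT * FROM big_table WHERE object_id = 100;
SELECT * FROM TABLE(dbms_xplan.display);
```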

5)OPTIMIZER_MODE: This parameter is discussed in the post About Parameter OPTIMIZER_MODE.

6)PGA_AGGREGATE_TARGET: This parameter automatically controls the amount of memory allocated for sorts and hash joins. Larger amounts of memory allocated to sorts or hash joins reduce the optimizer's cost for these operations. It is discussed in the post About PGA_AGGREGATE_TARGET parameter.

7)STAR_TRANSFORMATION_ENABLED: •STAR_TRANSFORMATION_ENABLED determines whether a cost-based query transformation will be applied to star queries.

•If it is set to TRUE the optimizer will consider performing a cost-based query transformation on the star query.

•If it is set to FALSE the transformation will not be applied.

•If it is set to TEMP_DISABLE the optimizer will consider performing a cost-based query transformation on the star query but will not use temporary tables in the star transformation.
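
The parameter can be changed at either level; for example (a sketch):

```sql
-- Consider the star transformation, but without temporary tables.
ALTER SESSION SET star_transformation_enabled = TEMP_DISABLE;

-- Or enable it instance-wide.
ALTER SYSTEM SET star_transformation_enabled = TRUE;
```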

To understand the PGA_AGGREGATE_TARGET parameter, let's first have a look at the *_AREA_SIZE parameters.

SQL> SHOW PARAMETER _AREA_SIZE

NAME TYPE VALUE
———————————— ———– ——————————
bitmap_merge_area_size integer 1048576
create_bitmap_area_size integer 8388608
hash_area_size integer 131072
sort_area_size integer 65536
workarea_size_policy string AUTO

Here we see that the workarea_size_policy parameter is set to AUTO because pga_aggregate_target is set to a non-zero value.

SQL> SHOW PARAMETER PGA_AGGREGATE_TARGET

NAME TYPE VALUE
———————————— ———– ——————————
pga_aggregate_target big integer 525M

Now we try to set pga_aggregate_target to a zero value.

SQL> ALTER SYSTEM SET pga_aggregate_target=0;
ALTER SYSTEM SET pga_aggregate_target=0
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00093: pga_aggregate_target must be between 10M and 4096G-1

Even if we try with SCOPE=SPFILE it is not set, because 0 is not a valid value.
So, I set it to zero in a pfile instead.

SQL> CREATE PFILE='/export/home/oracle/pfile.ora' FROM SPFILE;
File created.

SQL> !vi /export/home/oracle/pfile.ora
*.pga_aggregate_target=0

Now I have started database with this pfile.

SQL> STARTUP FORCE PFILE='/export/home/oracle/pfile.ora';
ORACLE instance started.

Total System Global Area 1660944384 bytes
Fixed Size 2021216 bytes
Variable Size 218106016 bytes
Database Buffers 1426063360 bytes
Redo Buffers 14753792 bytes
Database mounted.
Database opened.

Now have a look at the values again. We will see that the workarea_size_policy parameter is now set to MANUAL.

SQL> SHOW PARAMETER _AREA_SIZE

NAME TYPE VALUE
———————————— ———– ——————————
bitmap_merge_area_size integer 1048576
create_bitmap_area_size integer 8388608
hash_area_size integer 131072
sort_area_size integer 65536
workarea_size_policy string MANUAL
SQL> SHOW PARAMETER PGA_AGGREGATE_TARGET

NAME TYPE VALUE
———————————— ———– ——————————
pga_aggregate_target big integer 0

Now let's discuss pga_aggregate_target itself.

• PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance.

• The default value for PGA_AGGREGATE_TARGET is non-zero: Oracle sets its value to 20% of the SGA size or 10 MB, whichever is greater.

• Setting PGA_AGGREGATE_TARGET to a nonzero value automatically sets the WORKAREA_SIZE_POLICY parameter to AUTO. This means that the SQL work areas used by memory-intensive operators such as sort, group-by, hash join, bitmap merge, and bitmap create are sized automatically, so we do not have to bother with the settings of sort_area_size, hash_area_size, and so on.

• If you set PGA_AGGREGATE_TARGET to 0, Oracle automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL work areas are sized using the *_AREA_SIZE parameters.

• Oracle attempts to keep the amount of private memory below the target specified by this parameter by adapting the size of the work areas to private memory.

• The memory allocated through PGA_AGGREGATE_TARGET has no relation to SGA_TARGET; the only similarity is that both are taken from the total memory of the system.

• The minimum value of this parameter is 10 MB and maximum is 4096 GB – 1.
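
Once a target is set, V$PGASTAT shows how Oracle is tracking it. A sketch (the target value is illustrative):

```sql
ALTER SYSTEM SET pga_aggregate_target = 525M;

-- Compare the target with what has actually been allocated.
SELECT name, round(value/1024/1024) AS mb
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated');
```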

Checking the Network Setup with CVU
To verify node connectivity among all nodes in your cluster, use the following syntax as the oracle user:
/mount_point/crs/Disk1/cluvfy/runcluvfy.sh comp nodecon -n node_list [-verbose]

If you have two nodes node1 and node2, and your mount point is /dev/cdrom, then enter the following command:
/dev/cdrom/crs/Disk1/cluvfy/runcluvfy.sh comp nodecon -n node1,node2 -verbose

Checking the Hardware and Operating System Setup with CVU
As the oracle user, use the following command syntax to start Cluster Verification Utility (CVU) stage verification to check the hardware and OS setup:

/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -post hwos -n node_list [-verbose]
If you have two nodes node1 and node2, and your mount point is /dev/cdrom, then enter the following command:
/dev/cdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -post hwos -n node1,node2

Checking the Operating System Requirements Setup with CVU
To check whether your system meets the operating system requirements for the pre-installation tasks, use the following syntax:
/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp sys -n node_list -p {crs|database}
-osdba osdba_group -orainv orainv_group -verbose

If you have two nodes node1 and node2, your mount point is /dev/cdrom, the OSDBA group is dba, and the Oracle Inventory group is oinstall, then enter the following command:
/dev/cdrom/crs/Disk1/cluvfy/runcluvfy.sh comp sys -n node1,node2 -p crs -osdba dba -orainv oinstall

Configuring Operating System Users and Groups in All Nodes
On Windows you do not need to create a separate user or separate groups to install Oracle; OUI does this automatically. On UNIX, however, the following operating system groups are required if you are installing Oracle RAC for the first time.

•The OSDBA group (typically, dba) -This is the OS group whose members are granted the SYSDBA privilege.

•The Oracle Inventory group (typically, oinstall) -This group owns all Oracle software installed on the system.

The following operating system users are required

•A user that owns the Oracle software (typically, oracle).

•An unprivileged user (for example, the nobody user on Linux systems).

Create the required groups and users
Perform these steps on all nodes of your cluster.
As the root user:

/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
useradd -u 200 -g oinstall -G dba -d /home/oracle -r oracle
Change the password by,
passwd oracle

Verify the user by,
id oracle

Note that in these examples you should replace node1 and node2 with your actual node names.

Configuring SSH on All Cluster Nodes
When installing Oracle RAC on UNIX and Linux platforms, the software is installed on one node, and OUI uses secure communication to copy the software binary files to the other cluster nodes. OUI uses the Secure Shell (SSH) for the communication.

So you must have configured SSH on all nodes.

•To configure SSH, you must first create Rivest-Shamir-Adleman (RSA) keys and Digital Signature Algorithm (DSA) keys on each cluster node. After you have created the private and public keys, you copy the keys from all cluster node members into an authorized keys file that is identical on each node.

Generating RSA and DSA Keys
1)Log on as the oracle user.

2)Check whether the .ssh directory exists. If it does not exist, create one.
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh

3)Create the RSA-type public and private encryption keys by,
/usr/bin/ssh-keygen -t rsa
This command creates the public key in the /home/oracle/.ssh/id_rsa.pub file and the private key in the /home/oracle/.ssh/id_rsa file.

4)Create the DSA type public and private keys.
/usr/bin/ssh-keygen -t dsa
This command creates the public key in the /home/oracle/.ssh/id_dsa.pub file and the private key in the /home/oracle/.ssh/id_dsa file.

5)Repeat steps 1 through 4 on all the nodes.

Adding the Keys to an Authorized Key File
1)Go to .ssh directory
$ cd ~/.ssh

2)Add the RSA and DSA keys to the authorized_keys files.
$ cat id_rsa.pub >>authorized_keys
$ cat id_dsa.pub >>authorized_keys

3)Using SCP, copy the authorized_keys file to the oracle user's .ssh directory on a remote node.
scp authorized_keys node2:/home/oracle/.ssh/

4)Using SSH, log in to the node where you copied the authorized_keys file, using the passphrase you created. Then change to the .ssh directory, and using the cat command, add the RSA and DSA keys for the second node to authorized_keys file.
ssh node2
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys

5)If you have more than 2 nodes in your cluster, repeat step 3 and step 4 for each node you intend to add to your cluster. Copy the most recently updated authorized_keys file to the next node, then add the public keys for that node to the authorized_keys file.

6)After updating the authorized_keys file on all nodes, use SCP to copy the complete authorized_keys file from the last node to be updated to all the other cluster nodes, overwriting the existing version on the other nodes. For example,
scp authorized_keys node1:/home/oracle/.ssh/

Configure SSH User Equivalency on Cluster Member Nodes
1)Log on as the oracle user.

2)Start the SSH agent and load the SSH keys into memory.
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

3)Complete the SSH configuration by using the ssh command to retrieve the date on each node in the cluster.
$ ssh node1 date
$ ssh node2 date

Now we are going to learn about the RAC environment.

In a single-instance database, when you query V$DATABASE for the name of the database and V$THREAD for the name of the instance, they return similar results. For example:

SQL> select name from v$database;
NAME
———
DBASE

SQL> select instance from v$thread;

INSTANCE
——————————————————————————–
dbase

This indicates that the database has only one instance; that is, the database has a one-to-one relationship between the datafiles and the instance. Oracle RAC environments, however, have a one-to-many relationship between datafiles and instances. In RAC environments multiple instances form a single database. The instances can reside on different servers, which are referred to as nodes in a RAC environment.

• In a RAC environment, the set of interconnected instances that allows the servers to work together is called a cluster. The physical structures that make up the database must reside on shared storage that is accessible from all servers that are part of the cluster. Each server in the cluster runs the Oracle RAC software.

• In a RAC database each instance has its own memory and background processes, but all instances share the same datafiles. So when data needs to be written or retrieved, the instances use the same datafiles, but each instance uses its own memory.

Now the question is: if one instance reads data and modifies it in its buffer cache, and another instance needs that data, how does Oracle manage it?

• In RAC Oracle uses Cache Fusion Technology to synchronize the data stored in the buffer cache of each cluster database instance. Cache Fusion moves current data blocks (which reside in memory) between database instances, rather than having one database instance write the data blocks to disk and requiring another database instance to reread the data blocks from disk.

• When a data block needed by one instance resides in the buffer cache of another instance, Cache Fusion transfers the data block directly between the instances using the interconnect, enabling the Oracle RAC database to access and modify data as if the data resided in a single buffer cache.

Oracle RAC provides high availability and scalability.

If one instance fails in a RAC environment, another instance can still do the work, which prevents any one server from being a single point of failure.

In a RAC environment, transactions are distributed over multiple instances, thus reducing the load on any single instance.

• Never confuse Oracle Clusterware with Oracle RAC. Oracle Clusterware is software that, when installed on servers, enables the servers to be bound together to operate as if they were one server. Before you install Oracle RAC you need to install Clusterware; without Clusterware you cannot install Oracle RAC.
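
The one-database/many-instances relationship described above is visible through the GV$ views, which return one row per instance (the instance and host names below are hypothetical):

```sql
-- GV$ views add an INST_ID column to their V$ counterparts.
SELECT inst_id, instance_name, host_name, status
FROM   gv$instance;

-- In a two-node cluster the result has the shape:
--   INST_ID  INSTANCE_NAME  HOST_NAME  STATUS
--         1  dbase1         node1      OPEN
--         2  dbase2         node2      OPEN
```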

When a lot of DML operations are applied to a table, the table becomes fragmented, because DML does not release free space below the high water mark (HWM).

The HWM is an indicator of USED BLOCKS in a segment. Blocks below the high water mark (used blocks) have at least once contained data; that data might since have been deleted. Since Oracle knows that blocks beyond the high water mark contain no data, it only reads blocks up to the high water mark when doing a full table scan.

Certain DDL statements, such as TRUNCATE and ALTER TABLE ... MOVE, reset the HWM.
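
The used-block count below the HWM can be read directly with DBMS_SPACE. A sketch, assuming the SCOTT.BIG1 table used in the example below:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks  NUMBER;  l_total_bytes  NUMBER;
  l_unused_blocks NUMBER;  l_unused_bytes NUMBER;
  l_file_id NUMBER; l_block_id NUMBER; l_last_block NUMBER;
BEGIN
  dbms_space.unused_space(
      segment_owner => 'SCOTT', segment_name => 'BIG1',
      segment_type  => 'TABLE',
      total_blocks  => l_total_blocks,  total_bytes  => l_total_bytes,
      unused_blocks => l_unused_blocks, unused_bytes => l_unused_bytes,
      last_used_extent_file_id  => l_file_id,
      last_used_extent_block_id => l_block_id,
      last_used_block           => l_last_block);
  -- Blocks below the HWM = allocated blocks minus never-used blocks.
  dbms_output.put_line('Blocks below HWM: '
      || (l_total_blocks - l_unused_blocks));
END;
/
```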

A small example of finding table fragmentation:

SQL> select count(*) from big1;

  COUNT(*)
----------
   1000000

SQL> delete from big1 where rownum <= 300000;

300000 rows deleted.

SQL> commit;

Commit complete.

SQL> update big1 set object_id = 0 where rownum <=350000;

342226 rows updated.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');

PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2) tablesize, round((num_rows*avg_row_len/1024),2) actualsize from dba_tables where table_name='BIG1';

TABLE_NAME   TABLESIZE  ACTUALSIZE
----------  ----------  ----------
BIG1             72952     30604.2

Note: 72952 - 30604 = 42348 KB of wasted space in the table (sizes are in KB, assuming an 8 KB block size).

The actual data occupies only about 42% of the allocated space. Even allowing for the default PCTFREE of 10%, roughly half of the table's space is wasted free space below the HWM.

How to reset the HWM / remove fragmentation?

For that we need to reorganize the fragmented table.

We have four options to reorganize fragmented tables:

1. alter table … move + rebuild indexes
2. create table as select (CTAS)
3. export / truncate / import
4. dbms_redefinition

Option: 1 “alter table … move + rebuild indexes”

SQL> alter table BIG1 move;

Table altered.

SQL> select status,index_name from user_indexes
2 where table_name = 'BIG1';

STATUS INDEX_NAME
——– ——————————
UNUSABLE BIGIDX

SQL> alter index bigidx rebuild;

Index altered.

SQL> select status,index_name from user_indexes
2 where table_name = 'BIG1';

STATUS INDEX_NAME
——– ——————————
VALID BIGIDX

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');

PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2) tablesize, round((num_rows*avg_row_len/1024),2) actualsize from dba_tables where table_name='BIG1';

TABLE_NAME   TABLESIZE  ACTUALSIZE
----------  ----------  ----------
BIG1             38224    30727.37

Option: 2 “Create table as select”

SQL> create table big2 as select * from big1;

Table created.

SQL> drop table big1 purge;

Table dropped.

SQL> rename big2 to big1;

Table renamed.

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');

PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2) tablesize, round((num_rows*avg_row_len/1024),2) actualsize from dba_tables where table_name='BIG1';

TABLE_NAME   TABLESIZE  ACTUALSIZE
----------  ----------  ----------
BIG1             85536    68986.97

SQL> select status from user_indexes
2 where table_name = 'BIG1';

no rows selected

SQL> -- Note: after CTAS we need to re-create all indexes.

Option: 3 “export / truncate / import”

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
—————————— ——————————————
BIG1 85536kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
—————————— ——————————————
BIG1 42535.54kb

SQL> select status from user_indexes where table_name = 'BIG1';

STATUS
——–
VALID

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options

C:\>exp scott/tiger@Orcl file=c:\big1.dmp tables=big1

Export: Release 10.1.0.5.0 – Production on…..

Export terminated successfully without warnings.

SQL> truncate table big1;

Table truncated.

imp scott/tiger@Orcl file=c:\big1.dmp ignore=y

Import: Release 10.1.0.5.0 – Production on….

Import terminated successfully without warnings.

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
—————————— ——————————————
BIG1 85536kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
—————————— ——————————————
BIG1 42535.54kb

Note that the sizes above still reflect stale statistics. Regather statistics to see the real effect of the reorganization:

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');

PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
—————————— ——————————————
BIG1 51840kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
—————————— ——————————————
BIG1 42542.27kb

SQL> select status from user_indexes where table_name = 'BIG1';

STATUS
——–
VALID

Option: 4 "dbms_redefinition" …which we will discuss afterwards. As a preview, you can first check whether the table qualifies for online redefinition:

SQL> exec dbms_redefinition.can_redef_table('SCOTT','BIG1',-
> dbms_redefinition.cons_use_pk);

PL/SQL procedure successfully completed.

Experts are always welcome to leave their valuable comments or suggestions on the above post.

In a previous post we discussed how to find tablespace fragmentation. Here we will discuss how to remove tablespace fragmentation in Oracle.

Once we identify tablespace fragmentation, what do we do about it? Honeycomb fragmentation is easy to fix: all that needs to be done is to combine adjacent free extents into one by issuing a coalesce statement:

alter tablespace USERS coalesce;
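
To see the honeycomb (adjacent) free extents that COALESCE merges, you can query DBA_FREE_SPACE before and after; a sketch for the USERS tablespace:

```sql
-- Rows where one extent ends exactly where the next begins
-- (block_id + blocks = next block_id) are coalesce candidates.
SELECT file_id, block_id, blocks
FROM   dba_free_space
WHERE  tablespace_name = 'USERS'
ORDER  BY file_id, block_id;
```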

Bubble fragmentation is more difficult to handle. Of course, the best course of action is to prevent it in the first place, and the best weapon for this is to use locally managed tablespaces. If you are using Oracle 8.1.6 or higher, you can convert any current dictionary-managed tablespace to a locally managed tablespace:

exec sys.dbms_space_admin.tablespace_migrate_to_local('USERS');

If you are worried about how long this procedure might take on large tablespaces, do not be; it actually runs very fast. If, for some reason, you would like to take a locally managed tablespace back to dictionary management, you can issue this command:

exec sys.dbms_space_admin.tablespace_migrate_from_local('USERS');

Experts are always welcome to leave their valuable comments or suggestions on the above post.
