Posts Tagged ‘Databases’

To achieve optimum performance for data-intensive queries, materialized views and indexes are essential when tuning SQL statements. The SQL Access Advisor helps you optimize the data access paths of SQL queries by recommending the proper set of materialized views, materialized view logs, and indexes for a given workload.

A materialized view provides access to table data by storing the results of a query in a separate schema object. A materialized view contains the rows resulting from a query against one or more base tables or views.

A materialized view log is a schema object that records changes to a master table’s data, so that a materialized view defined on the master table can be refreshed incrementally.

The SQL Access Advisor also recommends bitmap, function-based, and B-tree indexes. A bitmap index provides reduced response time for many types of ad hoc queries and reduced storage requirements compared to other indexing techniques. A function-based index derives the indexed value from the table data. For example, to find character data stored in mixed case, a function-based index can be used to look up the values as if they were all in uppercase.
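As a minimal sketch of the materialized view and materialized view log concepts above (the `sales` table and its columns are hypothetical, chosen only for illustration):

```sql
-- Materialized view log on the master table: records changes so a
-- dependent materialized view can be refreshed incrementally.
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount_sold)
  INCLUDING NEW VALUES;

-- Materialized view: stores the result of the aggregate query in a
-- separate schema object and fast-refreshes it via the log above.
CREATE MATERIALIZED VIEW sales_by_product
  REFRESH FAST ON COMMIT
AS
  SELECT prod_id,
         SUM(amount_sold)   AS total_sold,
         COUNT(amount_sold) AS cnt_sold,  -- required for SUM to be
         COUNT(*)           AS cnt        -- fast refreshable
  FROM   sales
  GROUP  BY prod_id;
```

Note the `COUNT(*)` and `COUNT(amount_sold)` columns: Oracle requires them for an aggregate materialized view with `SUM` to be eligible for fast refresh.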

Experts are always welcome to share their valuable comments or suggestions on the above post.


Checking the Network Setup with CVU
To verify node connectivity among all nodes in your cluster, use the following syntax as the oracle user:
/mount_point/crs/Disk1/cluvfy/runcluvfy.sh comp nodecon -n node_list [-verbose]

If you have two nodes, node1 and node2, and your mount point is /dev/cdrom, then enter the following command:
/dev/cdrom/crs/Disk1/cluvfy/runcluvfy.sh comp nodecon -n node1,node2 -verbose

Checking the Hardware and Operating System Setup with CVU
As the oracle user, use the following command syntax to start a Cluster Verification Utility (CVU) stage verification that checks the hardware and OS setup:

/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -post hwos -n node_list [-verbose]
If you have two nodes, node1 and node2, and your mount point is /dev/cdrom, then enter the following command:
/dev/cdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -post hwos -n node1,node2

Checking the Operating System Requirements Setup with CVU
To check whether your system meets the operating system requirements for the pre-installation tasks, use the following syntax:
/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp sys -n node_list -p {crs|database}
-osdba osdba_group -orainv orainv_group -verbose

If you have two nodes, node1 and node2, your mount point is /dev/cdrom, the OSDBA group is dba, and the Oracle inventory group is oinstall, then enter the following command:
/dev/cdrom/crs/Disk1/cluvfy/runcluvfy.sh comp sys -n node1,node2 -p crs -osdba dba -orainv oinstall


Objectives of Tuning the System:
The objectives of tuning a system are one or both of the following.

1) To reduce the response time for end users of the system. Pay special attention to this if your database response time is not optimal.
2) To reduce the resources used to process the same work. Pay special attention to this if you have limited hardware resources.

We can achieve these objectives in three ways.

A) Reduce the Workload: SQL tuning commonly involves finding more efficient ways to process the same workload. It is often possible to change the execution plan of a statement, without altering its functionality, so that it consumes fewer resources.
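As an illustrative sketch of reducing the workload (the `orders` table and index name are hypothetical): a selective index can turn a full table scan into an index range scan, producing the same result set with far less I/O.

```sql
-- Without an index on customer_id, this query full-scans ORDERS.
-- Creating the index lets the optimizer switch to an index range
-- scan, reducing resource consumption without changing the result.
CREATE INDEX orders_cust_ix ON orders (customer_id);

SELECT order_id, order_date
FROM   orders
WHERE  customer_id = 1001;
```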

B) Balance the Workload: Systems often have peak usage during the day, when real users are connected, and low usage at night. If noncritical reports and batch jobs can be scheduled to run at night, reducing their daytime concurrency, resources are freed for the more critical programs during the day.
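A minimal sketch of shifting a noncritical job to off-peak hours with DBMS_SCHEDULER (the job name and stored procedure are hypothetical):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_REPORT',       -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REPORT_PKG.RUN_ALL',   -- hypothetical procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2', -- run daily at 2 a.m.
    enabled         => TRUE);
END;
/
```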

C) Parallelize the Workload: Queries that access large amounts of data (typical data warehouse queries) can often be parallelized. This is extremely useful for reducing response time in a low-concurrency data warehouse. However, in OLTP environments, which tend to have high concurrency, parallelizing a query can adversely impact other users by increasing its overall resource usage.
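As a sketch of point C (the `sales` table is hypothetical), a statement-level hint can ask the optimizer to use parallel execution servers for a single query:

```sql
-- The PARALLEL hint requests up to 8 parallel execution servers for
-- the scan of SALES, trading higher resource usage for lower
-- response time -- appropriate in a low-concurrency warehouse.
SELECT /*+ PARALLEL(s, 8) */ prod_id, SUM(amount_sold)
FROM   sales s
GROUP  BY prod_id;
```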


Oracle strongly recommends using the DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION procedure, provided in the DBMS_AUDIT_MGMT package, to move the audit trail table out of the SYSTEM tablespace. However, this procedure is not available before 11g.
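On 11g, the supported call looks like the following sketch, run as SYS (it assumes a target tablespace named AUDIT_TBS already exists):

```sql
-- Move the standard audit trail table (SYS.AUD$) into AUDIT_TBS.
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TBS');
END;
/
```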

However, with a manual script you can still move the SYS.AUD$ table out of the SYSTEM tablespace. Keep in mind that moving AUD$ out of the SYSTEM tablespace this way is not a supported procedure: Oracle does not support changing the ownership of AUD$, or placing any triggers on it.

Below are the steps, which you run at your own risk.
Step 01: Connect to database as SYS user.
conn / as sysdba

Step 02: Create tablespace where audit file will reside.
create tablespace AUDIT_TBS
datafile '/oracle/TST/datafile/aud01.dbf' size 500M;

Step 03: Create the audit table inside AUDIT_TBS.
create table aud_aux tablespace AUDIT_TBS as select * from aud$ where 1 = 2;
Note that no rows are copied, because the predicate 1 = 2 is never true; the statement copies only the table structure.

SQL> select count(*) from aud$; 


Step 04: Rename the original audit table.
rename AUD$ to AUD$$;

SQL> select count(*) from aud$$; 

Step 05: Rename aud_aux to AUD$.
rename aud_aux to aud$;

SQL> select count(*) from aud$; 

Step 06: Create an index on the AUD$ table.
create index aud_i
on aud$(sessionid, ses$tid)
tablespace AUDIT_TBS;
