
Archive for March, 2011

This article will introduce you to some of the new background processes in 11g. As per the Oracle documentation, 56 new background processes were added in 11g Release 1.

Let's have a look at some of the important ones…

MMAN – this process is responsible for ASMM in 10g and AMM in 11g, which manage memory allocation to the SGA and PGA.

RCBG – this background process is responsible for processing data into the server result cache.

DIAG – in 11g we have a single location for all trace files, the alert log and other diagnostic files. DIAG is the process which performs diagnostic dumps and executes oradebug commands.

DIA0 – responsible for hang detection and deadlock resolution.

DBRM – the database resource manager is responsible for setting resource plans for users and for all other database resource management activities.

EMNC – the event monitor coordinator coordinates event management and notification activity.

FBDA – the flashback data archiver process archives historical rows of tracked tables into flashback data archives, supporting the new Flashback Data Archive feature in the 11g database.

GEN0 – General task execution process which performs required tasks

SMCo – the space management coordinator executes various space management tasks such as space reclamation and allocation. It uses slave processes (Wnnn) whenever required.

VKTM – virtual keeper of time is responsible for keeping track of the wall-clock time and is used as a reference-time counter.
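
To see which of these background processes are actually running on your instance, you can query v$bgprocess (a quick check; the exact list varies with version and configuration):

select name, description
from v$bgprocess
where paddr <> '00'
order by name;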

Read Full Post »

MMAN
======

SGA Background Process

The Automatic Shared Memory Management feature uses a new background process named Memory Manager (MMAN). MMAN serves as the SGA memory broker and coordinates the sizing of the memory components. The SGA memory broker keeps track of the component sizes and of pending resize operations.
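
The current sizes the SGA memory broker has arrived at can be seen in the dynamic component views; a simple sketch:

select component, current_size/1024/1024 as current_mb
from v$sga_dynamic_components;

Recent automatic grow and shrink operations are recorded in v$sga_resize_ops.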

RVWR
======

Flashback database
-----------------------------
– This is a new feature introduced in 10g.
– Flashing back a database means returning it to a previous state.
– The Flashback Database feature provides a way to quickly revert an entire Oracle database to the state it was in at a past point in time.
– This is different from traditional point-in-time recovery.
– A new background process, Recovery Writer (RVWR), was introduced; it is responsible for writing flashback logs, which store pre-images of data blocks.
– One can use Flashback Database to back out changes that:
– have resulted in logical data corruptions.
– are a result of user error.
– This feature is not applicable for recovering the database in case of media failure.
– The time required for flashing back a database to a specific time in the past is DIRECTLY PROPORTIONAL to the number of changes made, not to the size of the database.
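
A minimal sketch of enabling the feature (this assumes the database is in archivelog mode and a flash recovery area, i.e. db_recovery_file_dest, is already configured; the database must be mounted, not open):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;
SQL> select flashback_on from v$database;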
Jnnn
=====
These are job queue processes, spawned as needed by CJQ0 to complete scheduled jobs. This is not a new process.
CTWR
=====

Change Tracking Writer (CTWR) is a new process which works with the new block change tracking feature in 10g for fast RMAN incremental backups.
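
Block change tracking is off by default. A sketch of enabling it (the tracking file location is illustrative):

SQL> alter database enable block change tracking
  2  using file '/u01/oradata/TEST/bct.chg';

SQL> select status, filename from v$block_change_tracking;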

MMNL
=====

The Memory Monitor Light (MMNL) process is a new process in 10g which works with the new Automatic Workload Repository (AWR) features to write out full statistics buffers to disk as needed.

MMON
======

The Manageability Monitor (MMON) process was introduced in 10g and is associated with the new Automatic Workload Repository features used for automatic problem detection and self-tuning. MMON writes out the required statistics for AWR on a scheduled basis.
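
The snapshot schedule that MMON follows can be adjusted through the dbms_workload_repository package; for example, to take snapshots every 30 minutes and keep them for 7 days (values are in minutes and purely illustrative):

SQL> exec dbms_workload_repository.modify_snapshot_settings(retention => 10080, interval => 30);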

M000
======
These are the MMON background slave (m000) processes, spawned as needed by MMON to perform work on its behalf.

CJQn
=====
This is the job queue monitoring process, whose slave count is controlled by the job_queue_processes parameter. This is not new.
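
The parameter can be changed online; for example (the value is illustrative):

SQL> alter system set job_queue_processes = 10;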

RBAL
====
This is the ASM related process that performs rebalancing of disk resources controlled by ASM.

ARBx
====
These processes are managed by the RBAL process and do the actual rebalancing of ASM-controlled disk resources. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.
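
asm_power_limit sets the default rebalance power, and a power can also be given per operation; a sketch (run on the ASM instance, with an illustrative diskgroup name):

SQL> alter system set asm_power_limit = 4;
SQL> alter diskgroup data rebalance power 8;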

ASMB
=====
The ASMB process is used to provide information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.

Read Full Post »

Automatic Shared Memory Management (ASMM) was a new feature introduced in 10g. With the 10g release, Oracle came up with a new parameter called sga_target, which was used to automatically manage the memory inside the SGA.
The components managed by sga_target are db_cache_size, shared_pool_size, large_pool_size, java_pool_size and streams_pool_size.

With 11g, Oracle went a step further to manage both the SGA and the PGA automatically. Oracle Database 11g introduced two new parameters – memory_target and memory_max_target.

The memory_target Parameter

The memory_target parameter is, in effect, a combination of the sga_target and pga_aggregate_target parameter values, representing the total amount of memory that Oracle can distribute between the various SGA and PGA structures. The memory_target parameter is dynamic and can be changed up to and including the value of memory_max_target, which we discuss next.

The memory_max_target Parameter

The memory_max_target parameter sets the upper limit up to which memory_target can be changed dynamically. Thus you can adjust the total amount of memory available to the database as a whole at any time.

NOTE: The memory_target and memory_max_target parameters cannot be used when LOCK_SGA has been set. Also, memory_target and memory_max_target cannot be used in conjunction with huge pages on Linux.

The following figure shows the hierarchy of memory parameters and the components each one tunes. The figure is taken from Robert Freeman's book Oracle Database 11g: New Features for Administrators.

Another figure from the same book shows the effect of setting memory_target, memory_max_target and sga_target.

One has to be careful while setting these new parameters (memory_target and memory_max_target). General advice would be to set sga_target and pga_aggregate_target to minimum fixed values and then set memory_target; Oracle will automatically increase sga_target and pga_aggregate_target to the desired level.

If you are upgrading an existing 10g database to 11g and want to keep the current values of sga_target and pga_aggregate_target, then make sure you set memory_target >= (sga_target + pga_aggregate_target).
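
A minimal sketch of switching an instance over to AMM (the sizes are illustrative; memory_max_target is a static parameter, so the change goes through the spfile and a restart):

SQL> alter system set memory_max_target = 4G scope = spfile;
SQL> alter system set memory_target = 3G scope = spfile;
SQL> shutdown immediate
SQL> startup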

With the new hierarchy in memory management, Oracle has also introduced a new memory advisory. You can query the V$MEMORY_TARGET_ADVICE view to get advice on the optimal value of the memory_target parameter. This view shows advisory data only after you set the memory_target parameter.
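
For example:

SQL> select memory_size, memory_size_factor, estd_db_time
  2  from v$memory_target_advice
  3  order by memory_size_factor;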

Hope this helps !!

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Read Full Post »

Have you ever needed to find out what SQL was running in the database? Much of my time is spent on our data warehouse, where long, expensive queries may be running. When someone calls to ask why things are running slow, one place to look is the SQL they are running. The database may not be slow, but their SQL may well be.

When Oracle executes a query, it places it in memory. This allows Oracle to reuse the same SQL if it is needed later by the executing session or by another user issuing the same statement. Oracle assigns a unique SQL_HASH_VALUE and SQL_ADDRESS to each SQL statement, which gives us a way to determine who is executing what SQL: join the SQL_ADDRESS and SQL_HASH_VALUE columns of V$SESSION to the ADDRESS and HASH_VALUE columns of the V$SQLAREA view.

Here is a small script to determine what SQL is currently executing.

select sesion.sid,
       sesion.username,
       optimizer_mode,
       hash_value,
       address,
       cpu_time,
       elapsed_time,
       sql_text
  from v$sqlarea sqlarea, v$session sesion
 where sesion.sql_hash_value = sqlarea.hash_value
   and sesion.sql_address    = sqlarea.address
   and sesion.username is not null;

 

Often you may have an active session that shows a valid SQL statement through the V$SESSION and V$SQLAREA views yet seems to be taking very long. Users may complain that their query is "stuck" or not responsive. As a DBA, you can verify that the SQL they are executing is actually doing something in the database, and not "stuck", by simply querying the V$SESS_IO view. Granted, this does not mean there isn't a tuning opportunity, but you can at least show that the SQL is doing work.

Here you can see the I/O being done by an active SQL statement.

select sess_io.sid,
       sess_io.block_gets,
       sess_io.consistent_gets,
       sess_io.physical_reads,
       sess_io.block_changes,
       sess_io.consistent_changes
  from v$sess_io sess_io, v$session sesion
 where sesion.sid = sess_io.sid
   and sesion.username is not null;

 

If by chance the query shown earlier against the V$SQLAREA view did not show your full SQL text because it was longer than 1000 characters, the V$SQLTEXT view should be queried to extract the full SQL. It stores the statement in 64-character pieces, one row per piece, which need to be ordered by the PIECE column.

Here is the SQL to show the full SQL executing for active sessions.

select sesion.sid,
       sql_text
  from v$sqltext sqltext, v$session sesion
 where sesion.sql_hash_value = sqltext.hash_value
   and sesion.sql_address    = sqltext.address
   and sesion.username is not null
 order by sqltext.piece;

 

Read Full Post »

At times, we want to recover some space from the database, either to allocate it to some other tablespace or to return it to the OS disk. This situation arises quite often, and many times we hit the error "ORA-03297: file contains used data beyond requested RESIZE value".
The concept is simple and many of you must already know it, but I am putting it in simpler words.

Let's take the example of one of the datafiles in a database.

Let's see the total free space in datafile 194.

SQL> select sum(bytes)/1024/1024 from dba_free_space
  2  where tablespace_name = 'APPS_TS_TX_DATA'
  3  and file_id = 194;

SUM(BYTES)/1024/1024
--------------------
            1844.125

Now let's see how that free space is distributed within the file.

SQL> select file_id, block_id, blocks, bytes, 'Free' from dba_free_space
  2  where tablespace_name = 'APPS_TS_TX_DATA'
  3  and file_id = 194
  4  and rownum < 7
  5  order by block_id desc;

   FILE_ID   BLOCK_ID     BLOCKS      BYTES 'FRE
---------- ---------- ---------- ---------- ----
       194      35001     220992 1810366464 Free
       194      13433         16     131072 Free
       194      13417         16     131072 Free
       194      13401         16     131072 Free
       194      13385         16     131072 Free
       194      13369         16     131072 Free

We can see that there are many free chunks. The free space is divided into chunks of 16 blocks, and each chunk is identified by its starting block ID. Notice that this pattern continues up to block 35001, after which there is no further division: beyond block ID 35001 the whole space is available as one single large chunk. This is because of the high water mark. When an object is created, it is created physically in the datafile and is allocated blocks. The number of blocks allocated depends on the "INITIAL EXTENT" storage parameter, which can be given at the time of creating the object. If we don't give this parameter, a default is used, which in this tablespace comes to 16 blocks. So 16 blocks get assigned when you create an object, for example a table.

You might be wondering: after block 35001 we have all free space, and we also have free space at blocks 13401, 13417, 13433 and so on. But where are the blocks between 13433 and 35001?

The answer can be found in dba_extents. All the blocks between 13433 and 35001 are occupied by objects and are not free. That's why you are not able to see them in the dba_free_space view, but you can find them in dba_extents. So in the case of file 194, objects were being created up to block 35001 (which is why the chunking is visible up to that block), and at a later point in time some of those objects were dropped. The space was freed, but the chunk boundaries remained intact (for example, the block IDs which got freed are 13401, 13417, 13433 and so on). This is what we call the high water mark of a datafile.
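
One way to see the high water mark of a file directly is to look for the highest used block in dba_extents; a sketch for this example's file:

SQL> select max(block_id + blocks - 1) as highest_used_block
  2  from dba_extents
  3  where file_id = 194;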

As we saw earlier, we have around 1844.125 MB of free space. Can we recover all of it? I wish… but no, we cannot. The reason is that some of the blocks are free "in between", so the file is fragmented. To make the point clear, let's try to reduce the file size.

Let's try that !!

The file's original size is 2000 MB.

SQL> select bytes/1024/1024 from dba_data_files where file_id = 194;


BYTES/1024/1024
---------------
           2000

And as per dba_free_space it has 1844.125 MB free. Let's try to reduce the file size by 1800 MB; this will definitely give an error.

SQL> alter database datafile 194 resize 200M;
alter database datafile 194 resize 200M
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

What happened??

The reason you are not able to free the space is that certain objects were created in between, and there is no contiguous free chunk of 1800 MB at the end of the file. The structure of this file is as given below.

X -> Occupied Block
0 -> Free Block

So here blocks 35001, 13433 and so on are free, but the in-between blocks are occupied. When you try to reduce the size of the datafile by 1800 MB, you are asking Oracle to cut the file below block ID 35001, where objects still exist. In between those objects there may be free chunks, but that doesn't matter: you can at most shrink the datafile back to block ID 35001, and not beyond that.

So we can see that there are around 220992 blocks free after block ID 35001. That makes a space of around 220992 x 8192 = 1810366464 bytes = 1726.5 MB free (assuming an 8 KB block size); that is exactly what the BYTES column shows.
So we can reduce the size of this datafile by a maximum of 1810366464 bytes = 1726.5 MB. If you try to reduce the size by more than that, you will hit the error ORA-03297: file contains used data beyond requested RESIZE value.

Now try reducing by 1700 MB and it will succeed !!!

SQL> alter database datafile 194 resize 300M;

Database altered.

What if you still want to reduce the size by the full 1800 MB, i.e. make the file 200 MB?

To do that, you need to do the following steps.

– Find the block ID up to which objects must be cleared.

We know that the last 1810366464 bytes are free.
1800 MB = 1887436800 bytes
Bytes still to be freed = 1887436800 – 1810366464 = 77070336 bytes = 9408 blocks
Block ID up to which we want to clear the objects: 35001 – 9408 = 25593

– Drop (or move out) all the objects which lie beyond this block. That will make 1800 MB free at the end of this datafile, and you can then reduce its size by 1800 MB.
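
To identify which segments those are, dba_extents can be queried for extents lying beyond the computed block ID (the numbers are from this example):

SQL> select distinct owner, segment_name, segment_type
  2  from dba_extents
  3  where file_id = 194
  4  and block_id + blocks - 1 > 25593;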

The moral of the story is: don't get into all these calculations if you can avoid it. It is good to know the logic, and it helps in a crisis situation, but dropping objects, resizing the file and recreating the objects again is not a practice to make a habit of.

Read Full Post »

Recently I had to move a schema of size 70 GB to another database [Oracle 10.2.0.1]. It was a read-only schema and never had any DML on any of its tables. In the new database I compressed the tables, and the size was brought down from 70 GB to 22 GB.

1] Divided the tables based on their size.
2] Created the tablespaces.
3] Pre-created the tables in the respective tablespaces.
4] Imported the tables.
5] Compressed and moved the big tables from APP_BIG_TABLES_TEMP to APP_BIG_TABLES.
6] Rebuilt the primary key indexes of the moved tables.
7] Dropped APP_BIG_TABLES_TEMP.
8] Created the remaining indexes.

The tables were created with PCTFREE 0 to take maximum advantage of the space in each extent, as I was sure that an UPDATE would never happen on any of these tables.
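
A pre-created table along those lines might look like this (the table and column definitions are illustrative, not the actual schema; compression is applied later, during the MOVE):

CREATE TABLE sales_information (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER(12,2)
)
PCTFREE 0
TABLESPACE app_big_tables_temp;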

Compress statement with "move"
======================
ALTER TABLE <table_name> MOVE TABLESPACE <new_tablespace> COMPRESS NOLOGGING PARALLEL (degree N);
e.g. ALTER TABLE SALES_INFORMATION MOVE TABLESPACE APP_BIG_TABLES COMPRESS NOLOGGING PARALLEL (degree 8);

COMPRESS alone only sets the compression attribute for future loads; to compress the existing rows and reclaim the space, use MOVE in the statement.
My server had 32 CPUs, but I used just 8, leaving the rest for other applications.

To move tables with LOB segments:
ALTER TABLE <table_name> MOVE TABLESPACE <new_tablespace> LOB (<lob_column_1>) STORE AS (TABLESPACE <new_tablespace>) LOB (<lob_column_2>) STORE AS (TABLESPACE <new_tablespace>);
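
A MOVE changes the rowids, so the indexes of the moved tables are left unusable and have to be rebuilt (step 6 above); for example, with an illustrative index name:

ALTER INDEX sales_information_pk REBUILD TABLESPACE APP_BIG_TABLES NOLOGGING PARALLEL 8;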

Why did I not compress and move the tables within the same tablespace?
COMPRESS and MOVE of the tables within the same tablespace would not bring down the datafiles' high water mark, and I would not have been able to resize the datafiles to a smaller size. So I found it easier to drop the very big APP_BIG_TABLES_TEMP once the tables were compressed and moved to APP_BIG_TABLES.

BEFORE COMPRESS
================
OBJECT_TYPE      COUNT   SIZE (GB)
------------  --------  ----------
INDEX              110  7.73681641
TABLE               55  62.6555786
                        ----------
sum                      70.392395

AFTER COMPRESS
===============
OBJECT_TYPE      COUNT   SIZE (GB)
------------  --------  ----------
INDEX              110       2.375
TABLE               55  19.8398438
                        ----------
sum                     22.2148438

Experts are always welcome to leave their valuable comments or suggestions on the above post.

 

Read Full Post »

The listener.log file keeps growing, and it is not possible to simply move the file while the listener is up: the tnslsnr process always holds this log file open. This can be confirmed using the fuser command.

Test $> cd /oracle/product/10.2.0/network/log
Test $> ls -lrt
total 1522
-rw-r-----   1 oracle   dba       763075 Mar  7 22:40 listener_old.log
-rw-r-----   1 oracle   dba          259 Mar  7 23:08 listener.log

Test $> fuser listener.log
listener.log:    13389o
Test $> ps -ef |grep 13389
  oracle 19417 18886   0 23:15:30 pts/1       0:00 grep 13389
  oracle 13389     1   0   Nov 19 ?           2:03 /oracle/product/10.2.0/bin/tnslsnr LISTENER -inherit

1. Turn listener logging off, rotate the log file, then turn logging back on:

Test $> lsnrctl

LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 07-MAR-2011 22:46:03

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set Log_status off
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "log_status" set to OFF
The command completed successfully
LSNRCTL> exit

Test $> mv listener.log  listener_old.log
Test $> ls -lrt
total 1520
-rw-r-----   1 oracle   dba       763075 Mar  7 22:40 listener_old.log
Test $>  touch listener.log
Test $>
Test $> ls -lrt
total 1520
-rw-r-----   1 oracle   dba       763075 Mar  7 22:40 listener_old.log
-rw-r--r--   1 oracle   dba            0 Mar  7 22:47 listener.log
Test $> lsnrctl

LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 07-MAR-2011 22:47:13

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set Log_Status on
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
LISTENER parameter "log_status" set to ON
The command completed successfully
LSNRCTL> exit

2. The following will empty your listener.log file. The ">" is part of the command; it simply redirects null into the file:

Test $> > listener.log

3. Set LOGGING_LISTENER = OFF in the listener.ora file to stop listener logging altogether.
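
In listener.ora this is a one-line parameter (the parameter name is LOGGING_<listener_name>, so for a listener named LISTENER it reads as below; reload or restart the listener for it to take effect):

LOGGING_LISTENER = OFF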

Experts are always welcome to leave their valuable comments or suggestions on the above post.

Read Full Post »