
Archive for the ‘Administration’ Category

Hi All,

Recently, one of my office colleagues came across the error below while upgrading a database from 11.2.0.2 to 11.2.0.4 (64-bit).

“The Upgrade Assistant fails in bringing up the database XXX. Oracle Home /oracle/XXX/11204 obtained from file /etc/oratab was used to connect to the database. Either the database is not running from Oracle Home /oracle/XXX/11204 or the correct Initialization Parameter file (pfile) was not found”  

DBUA then asks us to locate the pfile.

Actually, the database is up and running from ORACLE_HOME /oracle/XXX/112_64.

The 11204 home is not present in /etc/oratab, which still points to 112_64.

Also, dbua.sap.sh -q runs without any error.

I checked the environment as well and confirmed it was the correct one.

 

Solution:

After a lot of analysis we realized we had to check the DBUA log directories and make sure they were cleared out.

Look under $ORACLE_BASE/cfgtoollogs/dbua/logs/: if the Welcome_<SID>.txt file already exists there, DBUA treats the run as a rerun and behaves differently when it is executed.

The presence of this file tells DBUA that the database has already been opened with the new Oracle Home binaries.

So here is the issue: once you remove the Welcome_<SID>.txt file from the log directory, everything works fine and the upgrade finishes smoothly.
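
A minimal cleanup sketch, assuming the default DBUA log location under $ORACLE_BASE (adjust the path and <SID> for your environment):

$ ls -l $ORACLE_BASE/cfgtoollogs/dbua/logs/

$ rm $ORACLE_BASE/cfgtoollogs/dbua/logs/Welcome_<SID>.txt

After removing the file, rerun DBUA and it treats the run as a fresh upgrade attempt.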

Hope this article helped you. I am expecting your suggestions/feedback.

It will help to motivate me to write more articles….!!!!

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success, always fight even knowing your defeat is certain….!!!!”

 

 

 

 


Hi guys, recently we had an issue while starting the database of our Hyperion application. I just wanted to share it with you, as it may help you.

Background:

For the last two weeks, stopping the database had been taking more than 40 minutes, and this time it again took more than 40 minutes. When I checked for pending Oracle processes, only one was showing, so I killed it.

$ ps -ef | grep -i hyq

  orasid  3342396        1   0   Sep 16      –  0:15 /oracle/SID/112_64/bin/tnslsnr LISTENER -inherit

  orasid 36372574 51970254   1 06:14:07      –  0:07 oracleSID (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

$ kill -9 51970254

 

Problem:

Now while starting the DB instance.

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 25 07:23:45 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected.

SQL> startup

ORA-01012: not logged on

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup

ORA-01012: not logged on

This issue may occur whenever we forcefully shut down the database:

sysresv shows a shared memory segment for a non-existent instance.

 

Solution:

At the OS level, identify the orphaned shared memory segments with the sysresv utility and remove them with ipcrm:

$ sysresv

IPC Resources for ORACLE_SID “SID” :

Shared Memory:

ID              KEY

1048579         0xffffffff

4               0xffffffff

5               0x6767020c

Oracle Instance not alive for sid “SID”

$ ipcrm -m 1048579

$ ipcrm -m 4

$ ipcrm -m 5
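
Before restarting, it is worth double-checking that the orphaned segments are really gone; a quick verification sketch using standard OS utilities (the orasid OS user is taken from the ps output above):

$ ipcs -m | grep -i orasid      # no shared memory segments should remain for the Oracle OS user

$ sysresv                       # should no longer list the removed segments for this SID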

 

Now try starting the database again:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 25 07:29:17 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup

ORACLE instance started.

Total System Global Area 7482626048 bytes

Fixed Size                  2236048 bytes

Variable Size            3271557488 bytes

Database Buffers         4194304000 bytes

Redo Buffers               14528512 bytes

Database mounted.

Database opened.

SQL> exit

It is good to see that the instance now stops and starts immediately, without any delay.

Hope this article helped you. I am expecting your suggestions/feedback.

It will help to motivate me to write more articles….!!!!

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success, always fight even knowing your defeat is certain….!!!!”


Oracle 12c New Features…!!!!

As Oracle 12c was recently certified for SAP, I decided to write something on Oracle 12c new features, since many of my followers have asked for a post on 12c.

  • Pluggable database:

This is the most popular feature of Oracle 12c, so I am writing a little more on it.

With 12c, Oracle is trying to address the problem of multitenancy. There is a radical, major change in the core database architecture with the introduction of Container Databases (CDB) and Pluggable Databases (PDB).

The memory and processes are owned by the Container Database. The container holds the metadata, while the PDBs hold the user data.

We can create up to 253 PDBs, including the seed PDB.

In a large setup, it is common to see 20 or 30 different instances running in a production environment. With that many instances it is a maintenance nightmare, as all of them have to be separately upgraded, patched, monitored, tuned, RAC-enabled, backed up and Data Guarded.

With the Pluggable Databases feature, we have to do all of this for only ONE instance. Without this feature, prior to 12c, we would have to create separate schemas, and there is always a threat to security however much isolation we build into it. There are problems with namespace conflicts: there can only ever be one public synonym of a given name. With PDBs we can have a separate HR or SCOTT schema for each PDB, separate EMP and DEPT tables and separate public synonyms. Additionally, two PDBs can talk to each other through the regular DB link feature. There is no high startup cost of creating a database any more; the shift is from one instance per database to one instance for many databases. The developer community can be oblivious to all this and continue to use a PDB as if it were a traditional database, but for DBAs the world looks like it has changed a lot.

Another cool feature is that we can allocate a CPU percentage to each PDB.

The UNDO tablespace is common to all PDBs, but each PDB has its own SYSTEM, SYSAUX, USERS and TEMP tablespaces.

The container (root) database has a full set of its own tablespaces as well.

There is one data dictionary in the container database and one in each PDB, so objects such as obj$, tab$ and source$ exist in both.

In total we can create up to 252 user-created pluggable databases.

A container database initially contains two containers: the root (CDB$ROOT, with its own dictionary views) and the seed template (PDB$SEED); both exist from the moment the CDB is created.

Using the PDB$SEED template we can create new PDBs.

If we run DBCA, in the creation mode we have to mention both the global database name and the pluggable database name.
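
A quick way to see the containers inside a CDB (a simple sketch; run it as a privileged common user in the root):

SQL> SELECT con_id, name, open_mode FROM v$pdbs;

SQL> SHOW CON_NAME

The first query lists the seed and any user-created PDBs; SHOW CON_NAME confirms which container the current session is in.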

Command to create a PDB:

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1_admin IDENTIFIED BY oracle ROLES=(CONNECT) FILE_NAME_CONVERT=('/oracle/SID/data1/pdbseed','/oracle/SID/data1/pdb1');

FILE_NAME_CONVERT tells Oracle where the datafiles go; the PDB is created by copying all files from the seed to pdb1.

This creates the SYSTEM, SYSAUX and TEMP tablespaces of the new PDB; after that you can create your own user tablespaces.

How to unplug pdb1:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/stage/pdb1.xml'; (the unplug will not work while the PDB is still open)

SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES; (the datafiles remain at their location)

Now plug the unplugged PDB back into the CDB:

SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/stage/pdb1.xml' NOCOPY;

SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN READ WRITE;

Now clone pdb2 from pdb1:

SQL>ALTER PLUGGABLE DATABASE pdb1 CLOSE;

SQL>ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

SQL>ALTER SYSTEM SET db_create_file_dest='/oracle/SID/data1/pdb2';

SQL>CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;

SQL>ALTER PLUGGABLE DATABASE pdb2 OPEN;

What are Common and Local Users?

Common users are created in the root container and have the same identity across all containers.

Local users are created in, and exist only in, individual PDBs; they cannot be created in the root.

How to create a common user (connected to the root: connect / as sysdba). Note that common user names must be prefixed with C## by default:

SQL>CREATE USER c##sam IDENTIFIED BY sam CONTAINER=ALL; (CONTAINER=ALL is the default when connected to the root)

Now in pdb1:

Connect to pdb1 as the common user (connect c##sam/sam@pdb1) and create a local user, here created by a common user:

SQL>CREATE USER mgr IDENTIFIED BY mgr CONTAINER=CURRENT;

Connect as the local user (connect mgr/mgr@pdb1) and create another local user, here created by a local user:

SQL>CREATE USER emp IDENTIFIED BY emp;

  • Increased column data type size:

In 12c the VARCHAR2 limit increases up to 32K, the same limit we already had in PL/SQL.

I remember that in Oracle 7 we used to have a 253-character limit.

In Oracle 8 it went up to a 4000-character limit.

The extended character size will reduce the need to go for LOB data types whenever possible. In order to enable the extended character size, we have to set the MAX_STRING_SIZE initialization parameter to EXTENDED.

The following procedure needs to be run to use the extended data types (a command-level sketch follows the list):

  1. Shut down the database
  2. Restart the database in UPGRADE mode
  3. Modify the parameter: ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
  4. Execute utl32k.sql as sysdba: SQL> @?/rdbms/admin/utl32k.sql
  5. Shut down the database
  6. Restart the database in READ WRITE mode

Once modified, we cannot change the setting back to STANDARD.
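
Putting the steps above together, a command-level sketch for a non-CDB database (nothing here is environment-specific):

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP UPGRADE

SQL> ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;

SQL> @?/rdbms/admin/utl32k.sql

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP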

  • Row Limiting Clause:

When running a SQL query we can now ask for the top N rows, or for rows 100 to 120; in other words, we can select a particular range of records.

e.g.

SQL>SELECT Eno

FROM   emp

ORDER BY Eno DESC

FETCH FIRST 5 ROWS ONLY;

The following example also fetches rows that tie with the Nth row. For example, if the 10th row has a salary of 5000 and other employees have the same salary, they will also be fetched when the WITH TIES clause is used.

SQL> SELECT eno,ename,sal FROM emp ORDER BY SAL DESC FETCH FIRST 10 ROWS ONLY WITH TIES;

The following example offsets the first 5 rows and will display the next 5 rows from the table:

SQL> SELECT eno,ename,sal FROM emp ORDER BY SAL DESC  OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;

  • Invisible Columns

I remember when I was in a payment-domain IT company, where we had to hide some important data from being displayed, we used to create views for this.

In Oracle Database 12c, table columns can be defined as invisible, either during creation with the CREATE TABLE command or by modifying existing table columns via the ALTER TABLE statement. By default, table columns are always visible. Once a column has been set to invisible, it can be reverted back to visible using the ALTER TABLE statement.

SQL> CREATE TABLE sam (col1 NUMBER INVISIBLE, col2 NUMBER);

SQL> ALTER TABLE sam MODIFY col1 VISIBLE;

SQL> ALTER TABLE sam MODIFY col1 INVISIBLE;

We must explicitly refer to the invisible column by name in the INSERT statement to insert data into invisible columns. A virtual column or partition column can be defined as invisible too. However, temporary tables, external tables and cluster tables do not support invisible columns.
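
For example, continuing with the sam table above (a small sketch, assuming col1 is currently invisible and col2 visible):

SQL> INSERT INTO sam VALUES (2);                  -- only the visible column col2 receives the value

SQL> INSERT INTO sam (col1, col2) VALUES (1, 2);  -- col1 must be named explicitly to get a value

SQL> SELECT col1, col2 FROM sam;                  -- invisible columns must also be named explicitly; SELECT * will not show col1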

  • Identity Columns

In Oracle 12c, when we create a table we can populate a column automatically via a system-generated sequence by using the GENERATED [ALWAYS | BY DEFAULT [ON NULL]] AS IDENTITY clause in the CREATE TABLE statement.

We can use GENERATED AS IDENTITY with the ALWAYS, BY DEFAULT or BY DEFAULT ON NULL keywords, and that affects how and when the identity column value is populated.

By default the GENERATED AS IDENTITY clause implicitly includes the ALWAYS keyword, i.e. GENERATED ALWAYS AS IDENTITY.

When the ALWAYS keyword is specified, it is not possible to explicitly include values for the identity column in INSERT or UPDATE SQL statements.
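
A small sketch of an identity column (the table and column names here are just examples):

SQL> CREATE TABLE dept (dept_id NUMBER GENERATED ALWAYS AS IDENTITY, dept_name VARCHAR2(30));

SQL> INSERT INTO dept (dept_name) VALUES ('HR');              -- dept_id is generated automatically

SQL> INSERT INTO dept (dept_id, dept_name) VALUES (10, 'IT'); -- fails, because ALWAYS forbids explicit values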

  • Temporary UNDO

An Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO and TEMP, and each is used for a different purpose within the database. Prior to Oracle 12c, undo records generated by temporary tables were stored in the undo tablespace, much like the undo records of ordinary tables. With the temporary undo feature in 12c, the undo for temporary tables can now be stored in the temporary tablespace instead of the undo tablespace. The prime benefits of temporary undo include reduced undo tablespace usage and less redo generation, as the information is not logged in the redo logs. We have the flexibility to enable the temporary undo option either at the session level or at the database level.

Enabling temporary UNDO

For Session Level:

ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

ALTER SESSION SET TEMP_UNDO_ENABLED = FALSE;

For System Level:

CONN sys@pdb1 AS SYSDBA

 

ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;

ALTER SYSTEM SET TEMP_UNDO_ENABLED = FALSE;

The above functionality is only available if COMPATIBLE is set to 12.0.0 or higher.

  • Multiple indexes on the same column

This is also a popular feature from an optimization point of view.

Prior to 12c, we could not create an index if the same column list was already indexed; the attempt failed with ORA-01408: such column list already indexed.

So, if we wanted to change an index, say from unique to non-unique, from a B-tree index to a bitmap index, or from non-partitioned to partitioned, we first had to drop the original index and re-create it as required.

This means that for the period in which the index was being re-created (which could be a considerable time for a large index), the column list was not covered by an index at all, which could be a problem from a database performance point of view.

From 12c we can create multiple indexes on the same column list, provided they differ in some characteristic (uniqueness, index type, partitioning) and only one of them is usable / visible at a given time.

This means we can now, for example, replace the index policing the PK constraint much faster (or convert a B-tree index to a bitmap index, or a non-partitioned index to a partitioned one, etc.), as we no longer have to wait for the new index to be created after dropping the old one:

SQL> alter table emp drop primary key;

SQL> drop index emp_id_i1;

SQL> alter index emp_id_i2 visible;

SQL> alter table emp add constraint emp_pk primary key(Emp_id);
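
Note that this assumes the replacement index emp_id_i2 was already created on the same column list as an invisible index before the switch, for example (a sketch; the new index must differ from the old one in some characteristic such as uniqueness, index type or partitioning):

SQL> CREATE UNIQUE INDEX emp_id_i2 ON emp(emp_id) INVISIBLE;

Because only one index on the column list can be visible at a time, the new index is built invisible first and made visible only when we are ready to swap.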

  • Moving and renaming datafiles is now ONLINE

Prior to 12c, moving a datafile was always an offline task.

While the datafile is being moved, end users can continue to perform queries, DML and DDL. Additionally, datafiles can be migrated between storage types, e.g. from non-ASM to ASM and vice versa.

Syntax:

ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )

 [ TO ( 'filename' | 'ASM_filename' ) ]

 [ REUSE ] [ KEEP ]

The REUSE keyword indicates that the new file should be created even if it already exists.

The KEEP keyword indicates that the original copy of the datafile should be retained.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/sam/system01.dbf' TO '/tmp/system01.dbf';

 

  • DDL LOGGING

In 12cR1 we can log DDL actions into XML and log files. This is very useful for knowing when a drop or create command was executed and by whom.

The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature. The parameter can be set at the database or session level.

When this parameter is enabled, all DDL commands are logged in an XML file and a log file under $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl.

The XML file contains information such as the DDL command, IP address, timestamp, etc.

This helps to identify when a user or table was dropped, or when any DDL statement was executed.

SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;

  • Backup user privileges
  1. SYSBACKUP – with a matching SYSBACKUP user – the ability to perform RMAN backup and recovery commands, both from SQL and from the RMAN command line
  2. SYSDG – with a matching SYSDG user – the ability to perform Data Guard operations with Data Guard Broker or the DGMGRL command line
  3. SYSKM – with a matching SYSKM user – the ability to manage the encryption keys for Transparent Data Encryption
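
A small sketch of using the new SYSBACKUP privilege (the user name bkpadm is just an example):

SQL> CREATE USER bkpadm IDENTIFIED BY oracle;

SQL> GRANT SYSBACKUP TO bkpadm;

RMAN> connect target "bkpadm/oracle as SYSBACKUP";

This lets a backup operator run RMAN without being granted the much more powerful SYSDBA privilege.
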
  • Table Recovery in RMAN

This is also a popular feature of 12c.

I had been waiting for this feature since my college days, wondering what would happen if someone mistakenly dropped a table.

From 12c onwards we can recover a particular table to a point in time or SCN from RMAN backups in the event of a table DROP or TRUNCATE.

The following actions are performed whenever a table recovery is initiated:

  1. The required backup sets are identified to recover the table/partition
  2. An auxiliary database is configured to a point in time temporarily in the process of recovering the table/partition
  3. The required table/partitions are then exported to a dump file using Data Pump
  4. Optionally, we can import the table/partitions into the source database
  5. Optionally, the table can be renamed during recovery

Some of the limitations of table recovery:

  1. SYS user tables can't be recovered
  2. Tables stored in the SYSAUX and SYSTEM tablespaces can't be recovered
  3. Recovery of a table is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints

Example

RMAN> connect target "username/password as SYSBACKUP";

RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'

                                AUXILIARY DESTINATION '/sapdata1/tablerecovery'

                                DATAPUMP DESTINATION '/sapdata1/dpump'

                                DUMP FILE 'tab.dmp'

                                NOTABLEIMPORT    -- this option avoids importing the table automatically

REMAP TABLE 'username.tablename':'username.new_table_name';    -- the table can be renamed with this option

  • Restricting PGA size

Prior to 12c Database there was no hard limit for the PGA.

Although we set a certain size for the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase or decrease the size of the PGA dynamically based on the workload and requirements.

In 12c Oracle has introduced a new parameter, PGA_AGGREGATE_LIMIT, for controlling the maximum amount of PGA memory. The default limit is set to the greatest of:

  1. 2 GB of memory
  2. 200% of PGA_AGGREGATE_TARGET
  3. 3 MB times the PROCESSES parameter

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; -- disables the hard limit

So what happens when the limit is reached? Oracle aborts the calls of the sessions holding the most untunable PGA; if the PGA is still over the limit, those sessions are terminated.

So we need to be careful not to set this parameter too low (or too high), because it does not look good if an important batch job gets killed.

  • Database upgrade improvements

After reading all the above features, one question always comes to mind: how do we upgrade to 12c so we can use them?

Oracle released the major version upgrade, Database 12c, in July 2013, which also started the end-of-life support cycle for the 11g version. The 11g support clock is ticking, with the first step of Extended Support starting in January 2015 (in order to get Extended Support we have to be on Oracle Database 11.2.0.4, the terminal patch set).

  1. Preupgrade Utility

The pre-upgrade utility "utlu121s.sql" is replaced with "preupgrd.sql". The new utility provides the fixup scripts "preupgrade_fixups.sql" and "postupgrade_fixups.sql" to address issues that might be present both before and after the upgrade. These fixup scripts can be executed interactively.
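
A sketch of how it is typically run on the source database before the upgrade (the paths are placeholders; preupgrd.sql ships in the 12c Oracle Home together with its companion utluppkg.sql):

SQL> @<12c_oracle_home>/rdbms/admin/preupgrd.sql

SQL> @preupgrade_fixups.sql     -- generated fixup script, run before the upgrade

SQL> @postupgrade_fixups.sql    -- generated fixup script, run after the upgrade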

  2. Upgrade Utility

The catupgrd.sql upgrade utility is replaced with the catctl.pl utility. The new utility allows us to take advantage of multiple CPUs by running the upgrade process in parallel, thereby reducing the upgrade time. The new script is used more like a wrapper around the existing one. Also, by default "catuppst.sql" is run when "catctl.pl" is executed. The new script has a lot of options; a few of them are explained below.

Option -n specifies the degree of parallelism; the default is 4 and the maximum is 8.

Option -p supports rerunning the upgrade by skipping successful steps.

Example: $ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -p $ORACLE_HOME/rdbms/admin/catupgrd.sql

  • Other Miscellaneous Features
  1. Execute SQL statements in RMAN

We can now execute SQL and PL/SQL commands in RMAN without needing the SQL prefix:

RMAN> SELECT username,machine FROM v$session;

RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;

  2. GATHERING STATISTICS CONCURRENTLY ON MULTIPLE TABLES

Prior to 12c, whenever we executed a DBMS_STATS procedure to gather table, index, schema or database level statistics, Oracle collected stats on one table at a time. If the table was big enough, increasing the parallelism was recommended. With 12c we can now collect stats on multiple tables, partitions and sub-partitions concurrently. Before we start using it, we must set the following at the database level to enable the feature:

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAIN';

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');

 

  3. TRUNCATE TABLE CASCADE

Prior to 12c there was no direct option to truncate a master table while child tables referring to it still contained records. The TRUNCATE TABLE with CASCADE option in 12c truncates the records in the master table and automatically initiates a recursive truncate on the child tables too, provided the foreign keys are defined as ON DELETE CASCADE. There is no cap on the number of recursion levels; it applies to all child, grandchild and great-grandchild tables.

This enhancement removes the prerequisite of truncating all child records before truncating a master table. The new CASCADE clause can also be applied to table partitions and sub-partitions.

SQL> TRUNCATE TABLE <table_name> CASCADE;

SQL> TRUNCATE TABLE <table_name> PARTITION <partition_name> CASCADE;

 

  4. The RESOURCE role no longer includes UNLIMITED TABLESPACE.
  5. No need to shut down the DB for enabling / disabling archive log mode.
  6. Some new views / packages in Oracle 12c:

dba_pdbs

v$pdbs

cdb_data_files

dbms_pdb

dbms_qopatch

UTL_CALLSTACK

dbms_redact

 

Once again, sorry for the long gap between posts.

For more than 5 years now I have also been working with SAP, and I recently went through SAP HANA training. Working in an SAP partner company, I keep hearing about SAP's innovations.

So I have decided that my next post will be on the latest hot topic:

SAP HANA v/s Oracle Exadata v/s IBM DB2 BLU acceleration

Hope this article helped you. I am expecting your suggestions/feedback.

It will help to motivate me to write more articles….!!!!

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success, always fight even knowing your defeat is certain….!!!!”


The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 54,000 times in 2011. If it were a concert at Sydney Opera House, it would take about 20 sold-out performances for that many people to see it.



Note: One of our visitors, Lokesh Suryawanshi, asked this question by posting a comment. Thanks, Lokesh, for your contribution. As promised, I did the patch upgrade yesterday and here are the details. Keep visiting/commenting!

The required software and installables are available in the "*/*/Patch" directory.

$HOME = /oracle/SAM

$ORACLE_HOME = /oracle/SAM/102_64

1. 10.2.0.4 patch set (the patch set file is already extracted)

2. OPatch version 10.2.0.5.0

3. MOPatch version 2.1.7

Note: All the required software is kept in the $HOME/Patch directory; extract the files in the following order:

1) Mopatch-2-1-7.zip into the $ORACLE_HOME directory. Note: unzip it after the 10.2.0.4 patch set installation.

2) OPatch10205_Generic_v0.zip into the $ORACLE_HOME directory. Note: unzip it after the 10.2.0.4 patch set installation.

3) Patchset_10204.zip into the current directory, i.e. the Patch path.

Step 1: Check upgrade prerequisites

1.1 Stop SAP. (Not needed while applying on the BCV copy.)

1.2 Check the oratab and oraInventory locations under /var/opt/oracle and the entry "SAM:/oracle/SAM/102_64:N" in the oratab file. If the entry is missing, add it.

1.3 Check invalid objects and tablespace prerequisites:

SQL> @?/rdbms/admin/utlu102i.sql (the ? sign indicates the ORACLE_HOME path)

Important note: After running the prerequisites script you will see that some objects are invalid in the SYS schema and other schemas (SAPSR3). Here we are only concerned about the SYS objects, which are known and need to be compiled. The next step is to run the utlrp script to get the invalid objects compiled.

1.4 Run utlrp.sql to recompile invalid objects.

Purpose: To ensure no invalid objects are left in the SYS schema; otherwise the patch upgrade will probably fail. Run the script below:

SQL> @?/rdbms/admin/utlrp.sql

Result: In most cases the result should report 0 invalid objects in the SYS schema. IF NOT, ABORT OR STOP THE INSTALLATION PROCESS.

Run the prerequisites script again:

SQL> @?/rdbms/admin/utlu102i.sql

Note: Once no objects are invalid in the SYS and SAPSR3 schemas, the DB is ready for the upgrade. The next step is to shut down the database and related components.

Step 2: Shut down the database, listener, OEM and all components related to Oracle

2.1 Gracefully stop the database instance and the listener process (don't abort or kill Oracle processes).

2.2 OEM is not relevant when testing on the BCV server, as it is not configured there, but in production you have to stop it gracefully as well.

SQL> shutdown immediate;

Log off from the SQL prompt.

Orap22> lsnrctl stop

Check the Oracle processes by executing the command ps -ef | grep -i ora_

Check the listener process by executing the command ps -ef | grep -i LIS

Step 3: Run OPatch

3.1 Run opatch lsinventory to verify the oraInventory location:

Orap22> /oracle/SAM/102_64/OPatch/opatch lsinventory

Step 4: Start the 10.2.0.4 patch binary upgrade process

Go to the Disk1 directory under the Patch directory.

4.1 Start the X server (XBrowser).

4.2 Set the DISPLAY environment variable: export DISPLAY=<your desktop/laptop IP>:0.0

4.3 Start the installer: /oracle/SAM/Patch/Disk1/runInstaller

4.4 Verify the ORACLE_HOME.

Purpose: To validate the correct ORACLE_HOME where the 10.2.0.4 binaries are going to be installed.

Workaround: If the correct ORACLE_HOME is not shown, abort and verify the oraInventory location.

4.5 Review the installer summary.

4.6 Run /oracle/SAM/102_64/root.sh from another session as the root user:

a) Go to ORACLE_HOME and run the command from the $ prompt: sh root.sh

Step 5: Run MOPatch for the CPU patches (SBP)

5.1) Unzip Mopatch-2-1-7.zip into the $ORACLE_HOME directory from the patch location:

Orap22> unzip /oracle/SAM/Patch/Mopatch-2-1-7.zip -d /oracle/SAM/102_64

5.2) Unzip the OPatch10205_Generic_v0.zip file into the $ORACLE_HOME directory from the patch location:

Orap22> unzip /oracle/SAM/Patch/OPatch10205_Generic_v0.zip -d /oracle/SAM/102_64

Note: Run opatch version to confirm OPatch version 10.2.0.5.0, which is required to run the SAP-supplied bundle patch (SBP).

CAUTION: Before running MOPatch, ensure the OPatch version is 10.2.0.5.0.

Otherwise MOPatch will run, but any patch that requires OUI version 10.2.0.5.0 will fail.

Verify with opatch lsinventory.

Purpose: To confirm the inventory of the patches we applied recently. The next step is to upgrade the database.

Orap22> sh /oracle/SAM/102_64/MOPatch/mopatch.sh -v -s SAP_102048_201105_SOL64.zip

Step 6: Upgrade the database

6.1 Start the database in upgrade mode:

SQL> startup upgrade

Check that the Oracle version shows 10.2.0.4 after logging in at the SQL prompt.

6.2 Run catupgrd.sql.

Execute the following from the SQL prompt:

SQL> spool catupgrd.log

SQL> @?/rdbms/admin/catupgrd.sql

Step 7: Shut down the database and restart it in normal mode.

SQL> shutdown immediate;

SQL> startup

Step 8: Run utlrp.sql to recompile any objects invalidated during the upgrade.

SQL> @?/rdbms/admin/utlrp.sql

Step 9: Shut down and restart the database so that the new parameters take effect.

Step 10: Start the listener and OEM, test, and release to the users.

The coming post will be on Oracle's most popular wait events and their solutions; till then, enjoy the Oracle journey….. 🙂

I hope this article helped you. I am expecting your suggestions/feedback.
It will help to motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Dear Friends,

          Back to work after a long time; I was busy trekking. Today we did a database migration from 9i to 10g using Export/Import.

To upgrade a database using the Export/Import utilities, kindly follow the steps mentioned below.

  •  Keep the database in restricted mode and export the data from the current database:
    exp FILE=exp_20092011.dmp FULL=y GRANTS=y BUFFER=4096 ROWS=y CONSISTENT=y

 

  • Once the export is done, prerequisites for the import:

1) Please verify that the SYSAUX tablespace is available and that the 10g parameters are properly set.

2) Please create the required users and tablespaces as per the requirement.

Query to check users with their respective privileges:

select
  lpad(' ', 2*level) || granted_role "User, his roles and privileges"
from
  (
  /* THE USERS */
    select
      null     grantee,
      username granted_role
    from
      dba_users
    where
      username like upper('%&enter_username%')
  /* THE ROLES TO ROLES RELATIONS */
  union
    select
      grantee,
      granted_role
    from
      dba_role_privs
  /* THE ROLES TO PRIVILEGE RELATIONS */
  union
    select
      grantee,
      privilege
    from
      dba_sys_privs
  )
start with grantee is null
connect by grantee = prior granted_role;

 
Query to check tablespaces and their sizes:

set line 120
col host_name for a20;
select w.name, Q.host_name, a.tablespace_name, round(b.total_mb/1024/1024,2) total_mb,
       round(x.maxbytes_mb/1024/1024,2) maxbytes_mb,
       round(nvl(c.used_mb/1024/1024,0),2) used_mb,
       round(nvl(d.free_mb/1024/1024,0),2) free_mb,
       round(((nvl(used_mb,1)/decode(maxbytes_mb,NULL,total_mb,0,total_mb,maxbytes_mb))*100),2) used_percent
from   dba_tablespaces a, (select tablespace_name,bytes total_mb from sys.sm$ts_avail) b,
       (select tablespace_name,bytes used_mb from sys.sm$ts_used) c,
       (select tablespace_name,bytes free_mb from sys.sm$ts_free) d, v$database w, v$instance Q,
       (select tablespace_name,sum(maxbytes) maxbytes_mb from dba_data_files group by tablespace_name) x
where  a.tablespace_name=b.tablespace_name(+)
and    a.tablespace_name=c.tablespace_name(+)
and    a.tablespace_name=x.tablespace_name(+)
and    a.tablespace_name=d.tablespace_name(+) order by 8 desc;

 

You can get the DDL of the tablespaces using the script below:

set pagesize 0
set escape on
spool tablespace.sql
select 'create tablespace ' || df.tablespace_name || chr(10)
|| ' datafile ''' || df.file_name || ''' size ' || df.bytes
|| decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize ' || maxbytes)
|| chr(10)
|| 'default storage ( initial ' || initial_extent
|| decode (next_extent, null, null, ' next ' || next_extent )
|| ' minextents ' || min_extents
|| ' maxextents ' || decode(max_extents,'2147483645','unlimited',max_extents)
|| ') ;'
from dba_data_files df, dba_tablespaces t
where df.tablespace_name=t.tablespace_name;
set pagesize 100
set escape off
spool off

 

You can get the DDL of the users using the script below:

set pagesize 0
set escape on
spool users.sql
select 'create user ' || U.username || ' identified ' ||
DECODE(password,
      NULL, 'EXTERNALLY',
      ' by values ' || '''' || password || ''''
      )
|| chr(10) ||
'default tablespace ' || default_tablespace || chr(10) ||
'temporary tablespace ' || temporary_tablespace || chr(10) ||
' profile ' || profile || chr(10) ||
'quota ' ||
decode ( Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
' on ' || default_tablespace ||
decode (account_status,'LOCKED', ' account lock',
        'EXPIRED', ' password expire',
        'EXPIRED \& LOCKED', ' account lock password expire',
        null)
||
';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM',
'SCOTT','DBSNMP','OUTLN','WKPROXY','WMSYS','ORDSYS','ORDPLUGINS','MDSYS',
'CTXSYS','XDB','ANONYMOUS','OWNER','WKSYS','ODM_MTR','ODM','OLAPSYS',
'HR','OE','PM','SH','QS_ADM','QS','QS_WS','QS_ES','QS_OS','QS_CBADM',
'QS_CB','QS_CS','PERFSTAT')
and U.username=Q.username(+) and U.default_tablespace=Q.tablespace_name(+)
;
set pagesize 100
set escape off
spool off

 

Now run tablespace.sql and users.sql to create the tablespaces and users in 10g.
[Note: Actually, we need not create the users and tablespaces beforehand while importing the full dump into 10g.]
3) Make sure that the 10g DB is in NOARCHIVELOG mode.

 imp FILE=exp_20092011.dmp LOG=imp_20092011.log FULL=Y GRANTS=Y BUFFER=4096 ROWS=Y

  • Once the import is done, please follow the steps below:

 1) Once the import is done, check the import logs for errors.

2) Make sure the users are pointing to the proper tablespaces:

 select USERNAME,ACCOUNT_STATUS,DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users;

3) Please check the total object count and the per-schema object counts, and compare them with the original database:

 select OBJECT_TYPE,count(*) from dba_objects where owner='CROSSLNK' group by OBJECT_TYPE;

 4) Please check the roles and privileges for the respective users and make the required changes if needed:

select
  lpad(' ', 2*level) || granted_role "User, his roles and privileges"
from
  (
  /* THE USERS */
    select
      null     grantee,
      username granted_role
    from
      dba_users
    where
      username like upper('%&enter_username%')
  /* THE ROLES TO ROLES RELATIONS */
  union
    select
      grantee,
      granted_role
    from
      dba_role_privs
  /* THE ROLES TO PRIVILEGE RELATIONS */
  union
    select
      grantee,
      privilege
    from
      dba_sys_privs
  )
start with grantee is null
connect by grantee = prior granted_role;

 
 
5) Please check the invalid object count and recompile invalid objects (run utlrp.sql):

   select count(*) from dba_objects where STATUS='INVALID';
 
6) Gather statistics for the entire database.

At the database level:

begin
  dbms_stats.gather_database_stats(options => 'GATHER AUTO');
end;
/

7) Put the 10g DB back into ARCHIVELOG mode if required.

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup mount

ORACLE instance started.

 

Total System Global Area  272629760 bytes

Fixed Size                   788472 bytes

Variable Size             103806984 bytes

Database Buffers          167772160 bytes

Redo Buffers                 262144 bytes

Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

8) Verify the backup is configured properly.

9) Monitor the alert log and dump directories for possible issues.

I hope this article helped you. I am expecting your suggestions/feedback.
It will help to motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Dear Friends,

          Today we faced a new issue related to the listener, so I wanted to share it with you. We imported the dump into a fresh, blank copy of the database and just changed the IP of the server so that the application configuration would not be impacted. But after we swapped the IPs of the two servers, we got the following error while starting the listener.

Problem:

LSNRCTL for Linux: Version 10.2.0.1.0 – Production on 21-SEP-2011 12:54:12

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Welcome to LSNRCTL, type “help” for information.

LSNRCTL> start
Starting /u01/app/oracle/oracle/product/10.2.0/db_1//bin/tnslsnr: please wait…

TNSLSNR for Linux: Version 10.2.0.2.0 – Production
System parameter file is /u01/app/oracle/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/oracle/product/10.2.0/db_1/network/log/listener.log
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
TNS-12555: TNS:permission denied
TNS-12560: TNS:protocol adapter error
TNS-00525: Insufficient privilege for operation
Linux Error: 1: Operation not permitted

Listener failed to start. See the error message(s) above…

LSNRCTL>

Cause:

1) Ensure that the /tmp/.oracle or /var/tmp/.oracle directory exists.

2) Confirm that the DBA user who is trying to start the listener has adequate read and write permissions on the directory specified above. The permissions should be 777.

3) If the /tmp directory has reached full capacity, the listener will fail to write its socket files.
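
A quick way to check these three points before changing anything (standard OS commands, nothing Oracle-specific):

$ ls -ld /tmp/.oracle /var/tmp/.oracle    # the directory should exist with permissions drwxrwxrwt (01777)

$ df -k /tmp                              # confirm /tmp is not full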

Solution

To implement the solution, please use the following example:

1. cd /var/tmp

2. Check whether the .oracle directory exists:

cd .oracle

3. If the directory does not exist, request the System Administrator to create the directory and set the ownership to root:root with the permissions set to 01777:

mkdir /var/tmp/.oracle
chmod 01777 /var/tmp/.oracle
chown root /var/tmp/.oracle
chgrp root /var/tmp/.oracle

4. Next, try starting the TNS listener using the 'lsnrctl start <listener_name>' command.

I hope this article helped you. I am expecting your suggestions/feedback.
It will help to motivate me to write more articles….!!!!

Thanks & Regards,
Samadhan
https://samadhandba.wordpress.com/
“Key for success, always fight even knowing your defeat is certain….!!!!”


Dear friends, back to my Oracle journey after a long time. This was an interview question asked of my friend during an interview last week. There were lots of questions related to flashback and the 10g / 11g new features.

      As requested by him, I decided to put up a new post on Flashback Database.

Enable Flash Recovery Area / Flashback database

Flashback functionality provides fast and flexible data recovery. Types of flashback recovery:
  • Flashback Database (we can revert the database to a past point in time)
  • Flashback Drop (reverses the effects of a DROP TABLE statement)
  • Flashback Table (reverses a table to its state at a previous point in time)
  • Flashback Query (we can specify a target time and then run queries, viewing results as of that time and recovering from an unwanted change)
  • Flashback Transaction Query (we can view changes made by a transaction during a period of time)

Requirements for Flashback:

  • The database must be in archive log mode
  • The flash recovery area must be enabled

Why must the database be in archive log mode? Because archive logs are used in the flashback operation.

Why is the flash recovery area required? Because flashback logs can only be stored in this area.

Important: In a RAC database, the flash recovery area must be stored on a clustered file system or in ASM.

How to Enable Flash Recovery Area?

Set two parameters, db_recovery_file_dest_size and db_recovery_file_dest (the size must be set before the destination):

SQL> alter system set db_recovery_file_dest_size=2048m;

SQL> alter system set db_recovery_file_dest='<flash_recovery_area_path>';

How to Enable Flashback database?

Mount the database (not open), set parameter DB_FLASHBACK_RETENTION_TARGET and issue the ALTER DATABASE FLASHBACK ON command.

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;

SQL> ALTER DATABASE FLASHBACK ON;

Note:

  • The default retention target is 1440 minutes (one day).
  • If we want to retain flashback logs to perform a 48-hour flashback, set the retention target to 2880 minutes (2 days x 24 hours/day x 60 minutes/hour).
  • By default, flashback logs are generated for all permanent tablespaces.
  • We can reduce the overhead by disabling flashback logging for specific tablespaces:

SQL> ALTER TABLESPACE users FLASHBACK OFF;

We can re-enable flashback logging for a tablespace later with this command:

SQL> ALTER TABLESPACE users FLASHBACK ON;

Important:

  • If we disable Flashback Database for a tablespace, then we must take its datafiles offline before running FLASHBACK DATABASE.
  • We can enable Flashback Database not only on a primary database, but also on a standby database.
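
Once flashback logging is enabled, the whole database can be rewound within the retention target. A minimal sketch (the one-hour offset is just an example; the database must be mounted, not open):

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP MOUNT

SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);

SQL> ALTER DATABASE OPEN RESETLOGS;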

Prerequisites for Flashback Table recovery:

  • You must have the FLASHBACK ANY TABLE system privilege or the FLASHBACK object privilege on the table.
  • You must have SELECT, INSERT, DELETE, and ALTER privileges on the table.
  • Row movement must be enabled on the table:
    SQL> ALTER TABLE <table_name> ENABLE ROW MOVEMENT;

Flashback Drop (reverses the effects of a DROP TABLE statement):

Reverse the drop of a table ->
SQL> FLASHBACK TABLE EMP TO BEFORE DROP;

Assign a new name to the restored table ->
SQL> FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO <new_table_name>;

  • We can drop it immediately and permanently with purge option (DROP TABLE emp PURGE)
  • We can view the dropped objects in the recycle bin from two dictionary views:
    USER_RECYCLEBIN — list all dropped user objects.
    DBA_RECYCLEBIN — list all dropped system-wide objects

Viewing the original name of dropped objects?
SQL> SELECT object_name as recycle_name, original_name, object_type FROM recyclebin;
How to remove table from recycle bin?
SQL> purge table “BIN$0+ktoVChEmXgNAAADiUEHQ==$0”;
How to purge recycle bin?
SQL> purge recyclebin;
How to purge all objects from the recycle bin?
SQL> purge dba_recyclebin;
How to purge all objects of a tablespace from the recycle bin?
SQL> purge tablespace users;
How to monitor logging in the Flashback Database logs?
SQL> select begin_time, flashback_data, db_data, redo_data, estimated_flashback_size from v$flashback_database_stat;
How to monitor the Flashback Database retention target?
SQL> select * from v$flashback_database_log;
Hope you guys enjoy this post. Very soon I will come up with a post on parsing & execution as well. Experts are always welcome to leave their valuable comments or suggestions on the above post.

