
Hi All,

Recently one of my office colleagues, while upgrading from 11.2.0.2 to 11.2.0.4 (64-bit), came across the error below.

“The Upgrade Assistant fails in bringing up the database XXX. Oracle Home /oracle/XXX/11204 obtained from file /etc/oratab was used to connect to the database. Either the database is not running from Oracle Home /oracle/XXX/11204 or the correct Initialization Parameter file (pfile) was not found”  

DBUA then asks us to locate the pfile.

In fact, the database was up and running from ORACLE_HOME /oracle/XXX/112_64.

The 11204 home is not in /etc/oratab at all; the entry still points to 112_64.

Also, dbua.sap.sh -q runs without any error.

I checked the environment as well and confirmed it was the correct one.

 

Solution:

After a lot of analysis we realized we had to check DBUA's log directories and make sure they are cleared out.

Look under $ORACLE_BASE/cfgtoollogs/dbua/logs/. If a marker file already exists there, DBUA treats the execution as a rerun and behaves differently.

The presence of this file tells DBUA that the database has already been opened with the new Oracle Home binaries.

So that was the issue: once you remove the Welcome_<SID>.txt file from the log directory, everything works fine and the upgrade finishes smoothly.
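To illustrate the fix, here is a minimal shell sketch of the check-and-remove logic. It uses a throwaway directory in place of the real ORACLE_BASE, and the SID value is a placeholder, so it only simulates the cleanup rather than touching a real system:

```shell
# Stand-ins for the real environment (placeholders, not from the actual system).
ORACLE_BASE=$(mktemp -d)
SID=XXX
LOGDIR="$ORACLE_BASE/cfgtoollogs/dbua/logs"

# Simulate the leftover marker from an earlier DBUA run.
mkdir -p "$LOGDIR"
touch "$LOGDIR/Welcome_${SID}.txt"

# If the marker exists, DBUA would treat the next run as a rerun: remove it first.
if [ -f "$LOGDIR/Welcome_${SID}.txt" ]; then
    rm -f "$LOGDIR/Welcome_${SID}.txt"
    echo "removed stale Welcome_${SID}.txt"
fi
```

On a real host you would point LOGDIR at the actual $ORACLE_BASE/cfgtoollogs/dbua/logs directory instead of a temp dir.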

Hope this article helped you. I look forward to your suggestions/feedback.

It will help to motivate me to write more articles….!!!!

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success: always fight, even knowing your defeat is certain….!!!!”

 

 

 

 

Hi guys, recently we had an issue while starting the database of our Hyperion application. I just wanted to share it with you, as it may help you.

Background:

For the past two weeks, stopping the database had been taking more than 40 minutes, and this time it again took more than 40 minutes. When I checked for pending Oracle processes, only one was showing, so I killed it.

$ ps -ef | grep -i hyq

  orasid  3342396        1   0   Sep 16      –  0:15 /oracle/SID/112_64/bin/tnslsnr LISTENER -inherit

  orasid 36372574 51970254   1 06:14:07      –  0:07 oracleSID (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

$ kill -9 51970254

 

Problem:

Now, while trying to start the DB instance:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 25 07:23:45 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected.

SQL> startup

ORA-01012: not logged on

SQL> conn / as sysdba

Connected to an idle instance.

SQL> startup

ORA-01012: not logged on

This issue may occur whenever we forcefully shut down the database:

sysresv shows a shared memory segment for a non-existent instance.

 

Solution:

At the OS level, identify the orphaned shared memory segments with the sysresv utility and remove them with ipcrm:

$ sysresv

IPC Resources for ORACLE_SID “SID” :

Shared Memory:

ID              KEY

1048579         0xffffffff

4               0xffffffff

5               0x6767020c

Oracle Instance not alive for sid “SID”

$ ipcrm -m 1048579

$ ipcrm -m 4

$ ipcrm -m 5
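For illustration, here is a small shell sketch that derives the ipcrm commands from sysresv-style output. The sysresv output below is simulated (copied from the listing above); on a real host you would pipe sysresv itself, and you should review the list carefully before removing anything, since removing a segment belonging to a live instance will crash it:

```shell
# Simulated sysresv "Shared Memory" listing (on a real host: sysresv | awk ...).
sysresv_output='Shared Memory:
ID              KEY
1048579         0xffffffff
4               0xffffffff
5               0x6767020c'

# Keep only rows whose first field is numeric (the segment IDs),
# and turn each into an "ipcrm -m <id>" command.
cmds=$(printf '%s\n' "$sysresv_output" | awk '$1 ~ /^[0-9]+$/ {print "ipcrm -m " $1}')
printf '%s\n' "$cmds"
```

The generated commands are printed rather than executed, so you can inspect them before running.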

 

We then tried to start the DB again:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 25 07:29:17 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup

ORACLE instance started.

Total System Global Area 7482626048 bytes

Fixed Size                  2236048 bytes

Variable Size            3271557488 bytes

Database Buffers         4194304000 bytes

Redo Buffers               14528512 bytes

Database mounted.

Database opened.

SQL> exit

Good to see the instance now stops and starts immediately, without any delay.


In Memory Database Concept….!!!!

As we are discussing Oracle 12c new features, In-Memory is the most popular feature of Oracle 12c.

Memory optimization has been my favorite topic since I started working on Oracle. Optimization here means working around the SGA and tuning it to gain maximum DB performance.

And here, tuning means the most frequently used data should be read from memory instead of from disk.

In a well-tuned database, almost 95% of data requests find the required data already in MEMORY.

Now let's talk about data storage in row vs. column format.

Generally there are two types of databases: OLTP and DSS.

OLTP: operates on few rows but many columns, and works best with the row format.

DSS: accesses few columns of many rows, and works best with the column format.

How would it be if we got both the OLTP and DSS techniques in a single product?

So here it is: Oracle 12c In-Memory, based on a dual-format data store.

Generally, data is stored on disk in row format only, and whenever data is requested for reads/writes it is loaded into the traditional row store (the SGA buffer cache). Whenever data is requested for read-only operations it is also populated into the new In-Memory column store, and this population includes a transformation from row to columnar format.

This means that whenever a transaction commits INSERTs, UPDATEs or DELETEs, the new data immediately and simultaneously appears in both the row store and the In-Memory column store, so both store formats stay transactionally consistent.

And most importantly, this approach does not require significantly more memory.

Advantages of this approach:

  • There is no need to modify or change applications. All existing applications run unchanged in the new architecture.
  • There is no need to modify the database. The Oracle 12c In-Memory option can be implemented without database migration or table reorganization.
  • There are no limits on database or table sizes. The Oracle Database 12c In-Memory option can be used with databases and systems of any size.
  • Therefore there is no need to change the infrastructure. The new In-Memory feature can be implemented on existing hardware.

The In-Memory option is compatible with other features such as table compression, table encryption and table partitioning, as well as with RAC and Data Guard.

Now how to use this feature ….!!!!

This is very easy to use

Assign a value to the new initialization parameter INMEMORY_SIZE to define the size of the In-Memory column store.

SQL> ALTER SYSTEM SET inmemory_size= 10G scope=spfile;

And now mark the table that you want to be available in the In-Memory column store:

SQL> ALTER TABLE T1 INMEMORY;

And it’s done …. Ready to move!!!!

Note: since this is a static pool, any change to INMEMORY_SIZE will not take effect until the DB instance is restarted.

The In-Memory column store has a minimum size of 100 MB.

The INMEMORY attribute can be specified on a tablespace, a table, a (sub)partition, or a materialized view.

If it is enabled at the tablespace level, then all tables and materialized views in that tablespace are enabled for the In-Memory column store by default.
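As a hedged sketch of the tablespace-level case (the tablespace and table names below are examples, not from this article):

```sql
-- Make INMEMORY the default attribute for segments in this tablespace.
ALTER TABLESPACE users DEFAULT INMEMORY;

-- Individual tables can still opt out of the column store.
ALTER TABLE t1 NO INMEMORY;
```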

The statement above does not itself change or populate table data into the In-Memory column store.

It just tells the database that you want the table data to be available in the In-Memory column store at a certain point in time.

That point in time is either on demand or during DB startup.

On demand means the table is populated into the In-Memory column store whenever it is referenced by a query.

Alternatively, the DBA can specify that this population should happen during DB startup:

SQL> ALTER TABLE TAB1 INMEMORY PRIORITY CRITICAL;

So let us discuss the priority criteria!!!!

CRITICAL: the object is populated immediately after the DB is opened.

HIGH: populated next, if space remains available in the In-Memory column store.

MEDIUM: these objects are populated after CRITICAL and HIGH.

LOW: after CRITICAL, HIGH and MEDIUM.

NONE: objects are populated only after they are scanned for the first time.

The following objects cannot be populated into the In-Memory column store:

  • Any objects owned by the SYS user and stored in the SYSTEM or SYSAUX tablespace.
  • Index-organized tables (IOTs).
  • Clustered tables.
  • LONG data types are also not supported.
  • LOBs are also not supported.
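To verify what actually made it into the column store, the V$IM_SEGMENTS view can be queried; a minimal sketch:

```sql
-- POPULATE_STATUS shows COMPLETED once a segment is fully populated;
-- BYTES_NOT_POPULATED shows how much of the segment is still on disk only.
SELECT segment_name,
       populate_status,
       inmemory_size,
       bytes_not_populated
FROM   v$im_segments;
```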

IN MEMORY Compression….!!!!

Compression is considered only as a space-saving mechanism. However, data populated into the IM column store is compressed using a new set of compression algorithms that not only help to save space but also improve query performance. The new Oracle In-Memory compression format allows queries to execute directly against the compressed columns. This means all scanning and filtering operations will execute on a much smaller amount of data. Data is only decompressed when it is required for the result set.

Working at an SAP partner company, I always hear about SAP's research on this or that, but I thought we should discuss what Oracle's research team is doing considering SAP HANA's market. And since I also work on SAP HANA, let us discuss the differences between Oracle Exadata and SAP HANA in the next article.


Oracle 12c New Features…!!!!

As Oracle 12c was recently certified for SAP, I decided to write something on the Oracle 12c new features, since many followers have asked for a post on 12c.

  • Pluggable database:

This is the most popular feature of Oracle 12c, so I am writing at a little more length on it.

With 12c, Oracle is trying to address the problem of multitenancy. There is a radical change in the core database architecture through the introduction of Container Databases (CDBs) and Pluggable Databases (PDBs).

The memory and process is owned by the Container Database. The container holds the metadata where the PDBs hold the user data.

We can create up to 253 PDBs including the seed PDB.

In a large setup, it is common to see 20 or 30 different instances running in a production environment. With that many instances, maintenance is a nightmare, as all of them have to be separately upgraded, patched, monitored, tuned, RAC-enabled, backed up and Data Guarded.

With the Pluggable Database feature, we just have to do all this for ONE single instance. Prior to 12c, without this feature, we would have to create separate schemas, and there is always a security threat however much isolation we build into them. There are also namespace conflicts: only one public synonym of a given name can exist. With PDBs we can have a separate HR or SCOTT schema for each PDB, separate EMP and DEPT tables, and separate public synonyms. Additionally, two PDBs can talk to each other through the regular DB link feature. There is no longer a high startup cost to creating a database; the shift is from one instance per database to one instance serving many databases. The developer community can stay oblivious to all this and continue to use PDBs as if they were traditional databases, but for DBAs the world has changed a lot.

Another cool feature is that we can allocate a CPU percentage to each PDB.

The undo tablespace is common to all PDBs, but each PDB has its own SYSTEM, SYSAUX, USERS and TEMP tablespaces.

The container DB has all of these of its own as well.

There is one data dictionary in the container DB and one in each PDB, so the dictionary base tables (obj$, tab$, source$) exist in both.

In total we can create up to 252 user-created pluggable DBs.

The container DB initially contains two containers: the root (with its own data dictionary views) and the PDB seed.

Using the PDB seed as a basic template, we can create a new PDB.

If we run DBCA, in creation mode we have to provide the global DB name and the pluggable DB name.

Command to create a PDB:

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1_admin IDENTIFIED BY oracle ROLES=(CONNECT) FILE_NAME_CONVERT=('/oracle/SID/data1/pdbseed','/oracle/SID/data1/pdb1');

FILE_NAME_CONVERT says where the datafiles go: this creates the PDB by copying all files from the seed to pdb1, creating its SYSTEM, SYSAUX and TEMP tablespaces, after which you can create user tablespaces.

How to unplug pdb1:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO 'pdb1.xml'; (the PDB must be closed first; unplugging won't work while it is open)

SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES; (the datafiles remain at their location)

Now plug the unplugged PDB back into a CDB:

SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/stage/pdb1.xml' NOCOPY;

SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN READ WRITE;

Now let us clone pdb2 from pdb1:

SQL>ALTER PLUGGABLE DATABASE pdb1 CLOSE;

SQL>ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

SQL>ALTER SYSTEM SET db_create_file_dest='/oracle/SID/data1/pdb2';

SQL>CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;

SQL>ALTER PLUGGABLE DATABASE pdb2 OPEN;

What are common and local users?

Common users are created in the root container and have the same identity across all containers.

Local users are created in, and exist only in, individual PDBs; they cannot be created in the root.

How to create a common user (connect / as sysdba in the root; common user names require the C## prefix, and CONTAINER=ALL is the default there):

SQL> CREATE USER c##sam IDENTIFIED BY sam CONTAINER=ALL;

Now connect to pdb1 as the common user (connect c##sam/sam@pdb1) and create a local user:

SQL> CREATE USER mgr IDENTIFIED BY mgr CONTAINER=CURRENT;

Then connect as that local user (connect mgr/mgr@pdb1) and let a local user create another local user:

SQL> CREATE USER emp IDENTIFIED BY emp;

  • Data type size increase for columns:

In 12c the VARCHAR2 limit increases up to 32K, the same limit PL/SQL has always had.

I remember in Oracle 7 we used to have a 253-character limit.

In Oracle 8 it went up to a 4000-character limit.

The extended character size reduces the need to resort to LOB data types wherever possible. In order to enable it, we have to set the MAX_STRING_SIZE initialization parameter to EXTENDED.

The following procedure needs to be run to use the extended data types:

  1. Shutdown the database
  2. Restart the database in UPGRADE mode
  3. Modify the parameter: ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
  4. Execute utl32k.sql as sysdba : SQL> @?/rdbms/admin/utl32k.sql
  5. Shutdown the database
  6. Restart the database in READ WRITE mode

Once modified, the setting cannot be changed back to STANDARD.
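The six steps above can be sketched as one SQL*Plus session (run as SYSDBA; a sketch of the sequence, not a tested upgrade script):

```sql
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE;
STARTUP;

-- Afterwards, extended declarations become legal:
CREATE TABLE notes (body VARCHAR2(32767));
```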

  • Row Limiting Clause:

Running a SQL query, we can now ask for the top N rows, or for rows 100 to 120; that is, we can select a specific window of records.

e.g.

SQL>SELECT Eno

FROM   emp

ORDER BY Eno DESC

FETCH FIRST 5 ROWS ONLY;

The following example also fetches all records that tie with the Nth row. For example, if the 10th row has a salary of 5000 and other employees' salaries match that value, they will also be fetched when the WITH TIES clause is specified.

SQL> SELECT eno,ename,sal FROM emp ORDER BY sal DESC FETCH FIRST 10 ROWS WITH TIES;

The following example offsets the first 5 rows and will display the next 5 rows from the table:

SQL> SELECT eno,ename,sal FROM emp ORDER BY SAL DESC  OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;

  • Column Invisible

I remember when I was at a payments-domain IT company, where we had to hide some important data from being displayed, we used to create views for that.

In Oracle Database 12c, table columns can be defined as invisible either during its creation with the CREATE TABLE command or by modifying existing table columns via the ALTER TABLE statement. By default, table columns are always visible. Once a column has been set to invisible, it can be reverted back to visible using the ALTER TABLE statement.

SQL> create table sam (col1 data-type invisible);

SQL> alter table sam modify col1 invisible;

SQL> alter table sam modify col1 visible;

We must explicitly refer to the invisible column name in the INSERT statement to insert data into invisible columns. A virtual column or partition column can be defined as invisible too. However, temporary tables, external tables and cluster tables do not support invisible columns.

  • Identity Columns

In Oracle 12c, when we create a table we can populate a column automatically via a system-generated sequence by using the GENERATED … AS IDENTITY clause in the CREATE TABLE statement.

We can use GENERATED AS IDENTITY with the ALWAYS, DEFAULT or DEFAULT ON NULL keywords and that will affect the way or when the identity column value is populated.

By default the GENERATED AS IDENTITY clause implicitly includes the ALWAYS keyword i.e GENERATED ALWAYS AS IDENTITY.

When the ALWAYS keyword is specified it is not possible to explicitly include values for the identity column in INSERT OR UPDATE SQL statements.
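A minimal sketch of the two variants (the table and column names here are made up for illustration):

```sql
-- ALWAYS: the value is always system-generated; explicit values are rejected.
CREATE TABLE t_ident (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,
  name VARCHAR2(30)
);
INSERT INTO t_ident (name) VALUES ('first');          -- id assigned automatically
-- INSERT INTO t_ident (id, name) VALUES (99, 'x');   -- would raise an error

-- BY DEFAULT: the sequence is used only when no explicit value is supplied.
CREATE TABLE t_ident2 (
  id   NUMBER GENERATED BY DEFAULT AS IDENTITY,
  name VARCHAR2(30)
);
INSERT INTO t_ident2 (id, name) VALUES (99, 'explicit id');  -- permitted
```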

  • Temporary UNDO

An Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO and TEMP, each used for a different purpose within the database. Prior to Oracle 12c, undo records generated by temporary tables were stored in the undo tablespace, much like ordinary table undo records. With the temporary undo feature in 12c, temporary undo records can now be stored in the temporary tablespace instead of the undo tablespace. The prime benefits of temporary undo include reduced undo tablespace usage and less redo generation, as the information is not logged in the redo logs. We have the flexibility to enable the temporary undo option either at the session level or at the database level.

Enabling temporary UNDO

For Session Level:

ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

ALTER SESSION SET TEMP_UNDO_ENABLED = FALSE;

For System Level:

CONN sys@pdb1 AS SYSDBA

 

ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;

ALTER SYSTEM SET TEMP_UNDO_ENABLED = FALSE;

This functionality is only available when COMPATIBLE is set to 12.0.0 or higher.

  • Multiple indexes on the same column

This is also a popular feature from an optimization point of view.

Prior to 12c, we could not create an index if the same column list was already indexed; the attempt would fail with ORA-01408: such column list already indexed.

So, if we wanted to change an index from, say, unique to non-unique, from B-tree to bitmap, or from non-partitioned to partitioned, we first had to drop the original index and re-create it as required.

This means for the period in which the index is being re-created (which could be a considerable period for a larger index), the column list is not covered by an index, which might prove to be problematic from DB performance point of view.

However, only one type of index is usable / visible at a given time.

This means we can now for e.g. replace the index policing the PK constraint quicker (or convert a B-Tree to a Bitmap index or convert a Non-Partitioned index to a Partitioned index, etc.) as we don’t now have to wait for the new index to be created first:

SQL> alter table emp drop primary key;

SQL> drop index emp_id_i1;

SQL> alter index emp_id_i2 visible;

SQL> alter table emp add constraint emp_pk primary key(Emp_id);
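For completeness, a hedged sketch of how the replacement index in the sequence above might be created up front (the index and column names are assumed from the example): since 12c, the second index on the same column list must be created INVISIBLE while the first still exists.

```sql
-- Build the replacement index invisibly while emp_id_i1 is still in service;
-- only one index on a given column list may be visible at a time.
CREATE UNIQUE INDEX emp_id_i2 ON emp(emp_id) INVISIBLE;
```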

  • Moving and Renaming datafile is now ONLINE

Prior to 12c moving datafile is always offline task.

While the data file is being transferred, the end user can perform queries, DML and DDL tasks. Additionally, data files can be migrated between storages e.g. from non-ASM to ASM and vice versa.

Syntax:

ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )

 [ TO ( 'filename' | 'ASM_filename' ) ]

 [ REUSE ] [ KEEP ]

The REUSE keyword indicates that the new file may be created even if a file of that name already exists (it is overwritten).

The KEEP keyword indicates that the original copy of the datafile should be retained.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/sam/system01.dbf' TO '/tmp/system01.dbf';

 

  • DDL LOGGING

In 12cR1, we can now log DDL actions into XML and log files. This is very useful for knowing when a drop or create command was executed, and by whom.

The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature. The parameter can be set at the database or session level.

When this parameter is enabled, all DDL commands are logged in an xml and a log file under the $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location.

An xml file contains information, such as DDL command, IP address, timestamp etc.

This helps identify when a user or table was dropped, or when a DDL statement was triggered.

SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;

  • Backup user privileges
  1. SYSBACKUP – username SYSBACKUP – the ability to perform RMAN backup and recovery commands both from SQL and RMAN command line
  2. SYSDG – username SYSDG – the ability to perform Data Guard operations with Data Guard Broker or the DGMGRL command line
  3. SYSKM – username SYSKM – the ability to manage the encryption keys for Transparent Data Encryption
  • Table Recovery in RMAN

This is also a popular feature of 12c.

I had been waiting for this feature since college days, wondering what would happen if anyone mistakenly dropped a table.

From 12c onwards we can recover a particular table to a point in time or SCN from RMAN backups, in the event of a table DROP or TRUNCATE.

The following actions are performed whenever table recovery is initiated:

  1. Required backup sets are identified to recover the table/partition
  2. An auxiliary database will be configured to a point-in-time temporarily in the process of recovering the table/partition
  3. Required table/partitions will be then exported to a dumpfile using the data pumps
  4. Optionally, we can import the table/partitions in the source database
  5. Optionally, the table can be renamed during recovery

Some of the limitations of table recovery:

  1. SYS user tables can't be recovered
  2. Tables stored in the SYSAUX and SYSTEM tablespaces can't be recovered
  3. Recovery of a table is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints

Example

RMAN> connect target "username/password as SYSBACKUP";

RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'

                                AUXILIARY DESTINATION '/sapdata1/tablerecovery'

                                DATAPUMP DESTINATION '/sapdata1/dpump'

                                DUMP FILE 'tab.dmp'

                                NOTABLEIMPORT    -- this option avoids importing the table automatically

REMAP TABLE 'username.tablename':'username.new_table_name';    -- the table can be renamed with this option

  • Restricting PGA size

Prior to 12c Database there was no hard limit for the PGA.

Although we set a certain size with the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase or reduce the size of the PGA dynamically based on the workload and requirements.

In 12c Oracle has introduced a new parameter, PGA_AGGREGATE_LIMIT, for controlling the maximum amount of PGA. The default limit is the greatest of these values:

  1. 2 GB Memory
  2. 200% of PGA_AGGREGATE_TARGET
  3. 3 MB times the PROCESSES parameter

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; –disables the hard limit

So what happens when the limit is reached? Simple: Oracle aborts the session holding the most untunable PGA; if the PGA is still over the limit, such sessions are terminated.

So we need to be careful not to set this parameter too low (or too high), because it does not look good if an important batch job gets killed.

  • Database upgrade improvements

After reading about all the features above, one question always comes to mind: how do we upgrade to 12c in order to use them?

Oracle released the major version upgrade of Database 12c in July 2013, which also triggered the 11g version end-of-life support cycle.  The 11g support clock is ticking, with the first step of Extended Support starting Jan 2015 (in order to get Extended Support we will have to be on Oracle Database 11.2.0.4, the terminal patch set).

  1. Preupgrade Utility

The preupgrade utility "utlu121s.sql" is replaced with "preupgrd.sql". The new utility provides the fixup scripts "preupgrade_fixups.sql" and "postupgrade_fixups.sql" to address issues that might be present both before and after the upgrade. These fixup scripts can be executed interactively.

  2. Upgrade Utility

The catupgrd.sql upgrade utility is replaced with the catctl.pl utility. The new utility allows us to take advantage of multiple CPUs by running the upgrade process in parallel, thereby reducing the upgrade time. The new script is used more like a wrapper on the existing one. Also, by default "catuppst.sql" is run when "catctl.pl" is executed. The new script has a lot of options; a few of them are explained below.

Option “-n” specifies the degree of parallelism, the default is 4 and maximum is 8.

Option “-p” supports rerun of the upgrade by skipping successful steps.

Example: $ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -p $ORACLE_HOME/rdbms/admin/catupgrd.sql

  • Other Miscellaneous Features.
  1. Execute SQL statement in RMAN

We can now execute SQL and PL/SQL commands in RMAN without the need for a SQL prefix:

RMAN> SELECT username,machine FROM v$session;

                RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;

  2. GATHERING STATISTICS CONCURRENTLY ON MULTIPLE TABLES

Prior to 12c, whenever we executed a DBMS_STATS procedure to gather table, index, schema or database level statistics, Oracle collected stats one table at a time. If a table was big enough, increasing the parallelism was recommended. With 12c we can now collect stats on multiple tables, partitions and sub-partitions concurrently. Before we start using it, we must set the following at the database level to enable the feature:

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAIN';

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');

 

  3. TRUNCATE TABLE CASCADE

Prior to 12c there wasn't a direct option to truncate a master table while it was referred to by child tables containing child records. The TRUNCATE TABLE … CASCADE option in 12c truncates the records in the master table and automatically initiates a recursive truncate on the child tables too, provided the foreign keys are defined as ON DELETE CASCADE. There is no cap on the number of recursive levels; it applies to all child, grandchild, great-grandchild (and so on) tables.

This enhancement gets rid of the prerequisite to truncate all child records before truncating a master table. The new CASCADE clause can also be applied on table partitions and sub-partitions etc.

SQL> TRUNCATE TABLE <table_name> CASCADE;

SQL> TRUNCATE TABLE <table_name> PARTITION <partition_name> CASCADE;

 

  4. The RESOURCE role doesn't include UNLIMITED TABLESPACE anymore.
  5. No need to shut down the DB for enabling / disabling archive log mode.
  6. Some new views / packages in Oracle 12c:

dba_pdbs

v$pdbs

cdb_data_files

dbms_pdb

dbms_qopatch

UTL_CALLSTACK

dbms_redact

 

Once again, sorry for the long gap between posts.

For more than 5 years now I have also been familiar with SAP, and I recently went through SAP HANA training. Working in an SAP partner company, I keep hearing about SAP's innovations.

So I decided to write something on the latest hot topic:

SAP HANA vs. Oracle Exadata vs. IBM DB2 BLU Acceleration


Hi Friends,

Writing a post after a long time.

Today I was facing an issue with 11.2.0.3.0 Standard Edition, which does not have the AWR feature by default.

Data was missing from the AWR report, as Standard Edition does not include a license for AWR.

 

So the solution for this is as below.

 

The problem is caused by the fact that AWR is part of the Diagnostics and Tuning packs of Oracle 11g, which as of 11g are no longer a standard feature of the database but rather an optional extra that must be licensed in addition to our default Oracle Standard Edition package. In terms of licensing, we just need to license it as an optional extra so that we will actually be using it legally, and then we can go ahead and enable it.

 

While it may be an additional licensed item, it is actually installed on our DB by default, but is just not enabled. So, once we have cleared the licensing issue (With Oracle Support), we will be free to enable diagnostics by setting the new database parameter, CONTROL_MANAGEMENT_PACK_ACCESS. We can do this by running the following as a dba:

alter system set control_management_pack_access="DIAGNOSTIC+TUNING" scope=both;

 

After running this command, our database will start accumulating diagnostic information.

 

Note that we will need to wait until new snapshots are created in which the new diagnostic information will be available.

I found that although most of the information became available after setting CONTROL_MANAGEMENT_PACK_ACCESS, it was only once the database was restarted that all of the information became available.

Old snapshots will obviously continue to give the errors because they did not have the diagnostic information available at the time.

 

Meanwhile, we can use a Statspack report, which gives much of the same data in text format and is available in all editions without any licensing issue.

How we perform this Statspack setup is as below.

1. Create PERFSTAT as the default tablespace and PERFSTAT_TMP as the temporary tablespace for the PERFSTAT user.
2. Create the user PERFSTAT with password PERFSTAT.
3. Run ?/rdbms/admin/spcreate.sql to create the Statspack schema.
4. Run ?/rdbms/admin/spauto.sql to gather stats with an automatic job.
5. Now restart the DB to make sure we have the latest snaps from here onwards.
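The five steps can be sketched in SQL*Plus as below (the tablespace sizes and datafile paths are placeholders for your environment; spcreate.sql itself prompts for the PERFSTAT password and the default/temporary tablespaces):

```sql
CREATE TABLESPACE perfstat
  DATAFILE '/oracle/SID/sapdata1/perfstat01.dbf' SIZE 500M;
CREATE TEMPORARY TABLESPACE perfstat_tmp
  TEMPFILE '/oracle/SID/sapdata1/perfstat_tmp01.dbf' SIZE 200M;

-- Installs the Statspack schema (creates the PERFSTAT user along the way).
@?/rdbms/admin/spcreate.sql

-- Schedules an automatic snapshot job.
@?/rdbms/admin/spauto.sql

-- Take a manual snapshot now; later, report between two snapshot IDs.
EXEC PERFSTAT.STATSPACK.SNAP;
@?/rdbms/admin/spreport.sql
```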

Thanks!!!!

Hi Friends,

Hope you are all doing well. Here I have one more live scenario (mainly for those who are new to the DBA world and want to know what kind of issues come up in production).

Yesterday I was doing a database refresh activity and came across an issue where, while recovering the database, it asked for an archive log file that was missing.

Generally in this case, if you don't have the archive file and still want to open the database (i.e., incomplete recovery), please follow the steps below.

While doing incomplete recovery, we first find which online redo log file is CURRENT in the DB.

Then, when recovery asks for a specific archive file, we try to apply the online log files instead.

In this scenario I first tried to apply the log file whose status was CURRENT, but with no luck.

Then I tried the other log files one by one, and to my surprise it accepted one of them and the recovery went through successfully.

Luckily I saved those logs for all of you guys; please find them below.

 

SQL> @control_CRP.sql.orig_noz
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 1.3095E+10 bytes
Fixed Size                  2231848 bytes
Variable Size            4362076632 bytes
Database Buffers         8724152320 bytes
Redo Buffers                6447104 bytes

Control file created.

SQL> recover database using backup controlfile;
ORA-00279: change 1242045930 generated at 05/09/2013 06:42:28 needed for thread 1
ORA-00289: suggestion : /oracle/CRQ/oraarch/CRQarch1_145140_727276125.dbf
ORA-00280: change 1242045930 for thread 1 is in sequence #145140

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/CRQ/origlogB/log_g18m1.dbf
ORA-00310: archived log contains sequence 145139; sequence 145140 required
ORA-00334: archived log: '/oracle/CRQ/origlogB/log_g18m1.dbf'

SQL> recover database using backup controlfile;
ORA-00279: change 1242045930 generated at 05/09/2013 06:42:28 needed for thread 1
ORA-00289: suggestion : /oracle/CRQ/oraarch/CRQarch1_145140_727276125.dbf
ORA-00280: change 1242045930 for thread 1 is in sequence #145140

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/CRQ/origlogA/log_g17m1.dbf
ORA-00310: archived log contains sequence 145138; sequence 145140 required
ORA-00334: archived log: '/oracle/CRQ/origlogA/log_g17m1.dbf'

SQL> recover database using backup controlfile;
ORA-00279: change 1242045930 generated at 05/09/2013 06:42:28 needed for thread 1
ORA-00289: suggestion : /oracle/CRQ/oraarch/CRQarch1_145140_727276125.dbf
ORA-00280: change 1242045930 for thread 1 is in sequence #145140

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/CRQ/origlogB/log_g16m1.dbf
ORA-00310: archived log contains sequence 145137; sequence 145140 required
ORA-00334: archived log: '/oracle/CRQ/origlogB/log_g16m1.dbf'

SQL> recover database using backup controlfile;
ORA-00279: change 1242045930 generated at 05/09/2013 06:42:28 needed for thread 1
ORA-00289: suggestion : /oracle/CRQ/oraarch/CRQarch1_145140_727276125.dbf
ORA-00280: change 1242045930 for thread 1 is in sequence #145140

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/CRQ/origlogA/log_g15m1.dbf
ORA-00310: archived log contains sequence 145136; sequence 145140 required
ORA-00334: archived log: '/oracle/CRQ/origlogA/log_g15m1.dbf'

SQL> recover database using backup controlfile;
ORA-00279: change 1242045930 generated at 05/09/2013 06:42:28 needed for thread 1
ORA-00289: suggestion : /oracle/CRQ/oraarch/CRQarch1_145140_727276125.dbf
ORA-00280: change 1242045930 for thread 1 is in sequence #145140

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/CRQ/origlogB/log_g14m1.dbf
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.

SQL>
===============================================================================

Please find the redo log status during this recovery, captured from another session. FYI :)

SQL> select GROUP#,BYTES/1024/1024,MEMBERS,STATUS from v$log;

    GROUP# BYTES/1024/1024    MEMBERS STATUS
---------- --------------- ---------- ----------------
         1              50          2 INACTIVE
         2              50          2 INACTIVE
         3              50          2 INACTIVE
         4              50          2 INACTIVE
         5              50          2 INACTIVE
         6              50          2 INACTIVE
         7              50          2 INACTIVE
         8              50          2 CURRENT

8 rows selected.

SQL> col MEMBER for a40
SQL> col TYPE for a12
SQL> select GROUP#,MEMBER,TYPE from v$logfile;

    GROUP# MEMBER                                   TYPE
---------- ---------------------------------------- ------------
         8 /oracle/CRQ/origlogB/log_g18m1.dbf       ONLINE
         8 /oracle/CRQ/mirrlogB/log_g18m2.dbf       ONLINE
         7 /oracle/CRQ/origlogA/log_g17m1.dbf       ONLINE
         7 /oracle/CRQ/mirrlogA/log_g17m2.dbf       ONLINE
         6 /oracle/CRQ/origlogB/log_g16m1.dbf       ONLINE
         6 /oracle/CRQ/mirrlogB/log_g16m2.dbf       ONLINE
         5 /oracle/CRQ/origlogA/log_g15m1.dbf       ONLINE
         5 /oracle/CRQ/mirrlogA/log_g15m2.dbf       ONLINE
         4 /oracle/CRQ/origlogB/log_g14m1.dbf       ONLINE
         4 /oracle/CRQ/mirrlogB/log_g14m2.dbf       ONLINE
         3 /oracle/CRQ/origlogA/log_g13m1.dbf       ONLINE
         3 /oracle/CRQ/mirrlogA/log_g13m2.dbf       ONLINE
         2 /oracle/CRQ/origlogB/log_g12m1.dbf       ONLINE
         2 /oracle/CRQ/mirrlogB/log_g12m2.dbf       ONLINE
         1 /oracle/CRQ/origlogA/log_g11m1.dbf       ONLINE
         1 /oracle/CRQ/mirrlogA/log_g11m2.dbf       ONLINE

16 rows selected.

SQL>

 

Hope this article helped you. Your suggestions and feedback are most welcome.

They motivate me to write more articles!

 

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success: always fight, even knowing your defeat is certain!”

Friends, after a long time I am back to the blog. Due to lots of ups and downs in life I was away, but I promise that from here on I will stay in touch and always reply to your queries and suggestions.

Many junior DBAs ask how to enable or disable archive logging, so here is a basic but very useful post for them.

Also please note that whenever we enable archive log mode, as DBAs we need to monitor the archive destination space as well.

In a large database environment the archived logs must be backed up regularly, so DBAs generally schedule these backups to keep free space in the archive location.

Otherwise the database may hang if the archive location fills up.
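To size and monitor the archive destination, it helps to know how much redo the database archives per day. As a minimal sketch (using the standard `v$archived_log` view; the figures are per local destination and will vary):

```sql
-- Approximate MB of archived logs generated per day
SELECT TRUNC(completion_time)                         AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024)  AS mb_generated
  FROM v$archived_log
 GROUP BY TRUNC(completion_time)
 ORDER BY day;
```

Comparing this rate against the free space in the archive location tells you how long the destination can run unattended before it fills.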

In some organisations the archive files are deleted manually by the DBA, or a job is scheduled to delete them once they have been backed up.
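Such a cleanup job is typically done with RMAN rather than OS-level deletes, so the repository stays consistent. A sketch (the 7-day window and DISK device are just example choices; adjust to your backup policy):

```
RMAN> DELETE NOPROMPT ARCHIVELOG ALL
        BACKED UP 1 TIMES TO DEVICE TYPE DISK
        COMPLETED BEFORE 'SYSDATE-7';
```

The `BACKED UP 1 TIMES` clause ensures only archived logs that already exist in a backup are removed.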

This is how to enable archiving:
 

SQL*Plus: Release 10.2.0.4.0 – Production on Tue Feb 26 09:57:43 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /oracle/SID/oraarch
Oldest online log sequence     2442
Current log sequence           2444

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 289406976 bytes
Fixed Size                  1248576 bytes
Variable Size              96469696 bytes
Database Buffers          184549376 bytes
Redo Buffers                7139328 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/SID/oraarch
Oldest online log sequence     26
Next log sequence to archive   28
Current log sequence           28
SQL>
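Besides `archive log list`, the mode can also be cross-checked from the data dictionary (a minimal sketch using the standard `v$database` view):

```sql
-- Shows ARCHIVELOG or NOARCHIVELOG
SELECT log_mode FROM v$database;
```

This is handy in scripts, where parsing the `archive log list` output would be awkward.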
 
=================================================
 
This is how to disable archiving:
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 289406976 bytes
Fixed Size                  1248576 bytes
Variable Size             100664000 bytes
Database Buffers          180355072 bytes
Redo Buffers                7139328 bytes
Database mounted.
SQL> alter database noarchivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /oracle/SID/oraarch
Oldest online log sequence     26
Current log sequence           28
SQL>

 

I hope this article helped you. Your suggestions and feedback are most welcome.

They motivate me to write more articles!

 

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success: always fight, even knowing your defeat is certain!”