
In Memory Database Concept….!!!!

As we discuss the Oracle 12c new features, In-Memory is the most popular feature of Oracle 12c.

Memory optimization has been my favorite topic ever since I started working on Oracle. Optimization here means the whole idea is to wander around the SGA and tune it to gain maximum DB performance.

And here tuning means that the most-used data should be read from memory instead of from the hard disk.

In a well-tuned database, almost 95% of data requests find the required data already in MEMORY.

Now let’s talk about data storage in row / column format.

Generally there are two types of database workloads: OLTP & DSS.

OLTP: Operates on a few rows but many columns, and works best with the row format.

DSS: Accesses a few columns of many rows, and works best with the column format.

How would it be if we got both the OLTP and DSS techniques in a single product?

So here it is: Oracle 12c In-Memory, based on a dual-format data store.

Generally data is stored on disk in row format only, and whenever data is requested for reads / writes it is loaded into the traditional row store (SGA buffer cache). Whenever data is requested for read-only operations it is also populated into the new In-Memory column store. This population includes a transformation from row to columnar format.

It means that whenever a transaction includes INSERTs, UPDATEs or DELETEs with a commit, the new data will immediately and simultaneously appear in both the row store and the In-Memory column store. So both store formats are transactionally consistent.

And most importantly, this approach doesn’t require a large amount of extra memory.

Advantages of this approach:

  • There is no need to modify / change the application. All existing applications run unchanged with the new architecture.
  • There is no need to modify the database. The Oracle 12c In-Memory option can be implemented without database migration or table reorganization.
  • There are no limits on database or table sizes. The Oracle Database 12c In-Memory option can be used with databases and systems of any size.
  • There is no need to change the infrastructure either. The new In-Memory feature can be implemented on existing hardware.

This In-Memory option is compatible with other features such as table compression, table encryption, and table partitioning, and also with RAC & Data Guard.

Now how to use this feature ….!!!!

It is very easy to use.

Assign a value to the new initialization parameter inmemory_size to define the size of the In-Memory column store:

SQL> ALTER SYSTEM SET inmemory_size=10G SCOPE=SPFILE;

And now mark the table that you want to be available in the In-Memory column store:

SQL> ALTER TABLE T1 INMEMORY;

And it’s done …. Ready to move!!!!

Note: As it is a static pool, any change to INMEMORY_SIZE will not take effect until the DB instance is restarted.

The In-Memory column store must have a minimum size of 100MB.

The INMEMORY attribute can be specified on a tablespace, table, (sub)partition, or materialized view.

If enabled at tablespace level, then all tables and materialized views of that tablespace will be enabled for the In-Memory column store by default.
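
For example (the users tablespace name here is just for illustration), enabling it at tablespace level looks like:

SQL> ALTER TABLESPACE users DEFAULT INMEMORY;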

The ALTER TABLE … INMEMORY statement above doesn’t change or populate the table data into the In-Memory column store.

It just tells the DB that you want the table data to be available in the In-Memory column store at a certain point in time.

That point in time means either on demand OR during DB startup.

Here on demand means the table is populated into the In-Memory column store whenever it is referenced by any query.

OR the DBA can define that this job should execute during DB startup:

SQL> ALTER TABLE TAB1 INMEMORY PRIORITY CRITICAL;

So let us discuss this priority criteria!!!!

CRITICAL: The object is populated immediately after the DB is opened.

HIGH: Populated next, after the CRITICAL objects, as long as space remains available in the In-Memory column store.

MEDIUM: These objects will be populated after CRITICAL & HIGH.

LOW: After CRITICAL, HIGH & MEDIUM.

NONE: Objects will be populated after they are scanned for the first time.
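
To verify which objects have actually been populated and their current status, we can query the new v$im_segments view; a quick sketch:

SQL> SELECT segment_name, populate_status, bytes_not_populated FROM v$im_segments;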

The following objects cannot be populated into the In-Memory column store:

  • Any objects owned by the SYS user and stored in the SYSTEM or SYSAUX tablespace.
  • Index-organized tables (IOTs).
  • Clustered tables.
  • LONG data types are not supported.
  • LOBs are not supported.

IN MEMORY Compression….!!!!

Compression is usually considered only as a space-saving mechanism. However, data populated into the IM column store is compressed using a new set of compression algorithms that not only help to save space but also improve query performance. The new Oracle In-Memory compression format allows queries to execute directly against the compressed columns. This means all scanning and filtering operations will execute on a much smaller amount of data. Data is only decompressed when it is required for the result set.
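
The compression level can be chosen per object with the MEMCOMPRESS clause (levels range from FOR DML up to FOR CAPACITY HIGH); for example, on the T1 table from above:

SQL> ALTER TABLE T1 INMEMORY MEMCOMPRESS FOR QUERY HIGH;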

As I am in an SAP partner company I always hear about SAP researching this or that, but I thought: let us discuss what the Oracle research team is doing, considering SAP HANA’s market. And as I am working on SAP HANA, let us discuss the differences between Oracle Exadata & SAP HANA in the next article.

Hope this article helped you. I am expecting your suggestions/feedback.

It will help to motivate me to write more articles….!!!!

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success, always fight even knowing your defeat is certain….!!!!”


Oracle 12c New Features…!!!!

As Oracle 12c was recently certified for SAP, I decided to write something on Oracle 12c new features, since most followers demanded a post on 12c.

  • Pluggable database:

This is the most popular feature of Oracle 12c, so I am writing a little longer on this feature.

With 12c, Oracle is trying to address the problem of multitenancy. There is a radical change in the core database architecture through the introduction of Container Databases, also called CDBs, and Pluggable Databases (PDBs).

The memory and processes are owned by the container database. The container holds the metadata, while the PDBs hold the user data.

We can create up to 253 PDBs including the seed PDB.

In a large setup it is common to see 20 or 30 different instances running in a production environment. With that many instances it is a maintenance nightmare, as all of them have to be separately upgraded, patched, monitored, tuned, RAC-enabled, backed up and Data Guarded.

With the Pluggable Databases feature, we just have to do all this for ONE single instance. Without this feature, prior to 12c, we would have to create separate schemas, and there is always a threat to security however much isolation we build into it. There are problems with namespace conflicts: there is always going to be only one public synonym of a given name that we can create. With PDBs we can have a separate HR or Scott schema for each PDB, separate Emp and Dept tables, and separate public synonyms. Additionally, two PDBs can talk to each other through the regular DB link feature. There is no high startup cost of creating a database any more; the shift is from one instance per database to one instance for many databases. The developer community can be oblivious to all this and still continue to use a PDB as if it were a traditional database, but for the DBAs the world has changed a lot.

Another cool feature is that we can allocate a CPU percentage for each PDB.

The undo tablespace is common to all PDBs, but the SYSTEM, SYSAUX, USERS & TEMP tablespaces are individual to each PDB.

The container DB has all of its own.

There is one data dictionary in the container DB & one in each PDB, so dictionary tables (obj$, tab$, source$) exist in both.

In total we can create 252 user PDBs.

A new container DB contains 2 containers (the root, having its own data dictionary views, & the PDB seed).

Using the PDB seed as a basic template we can create a new PDB.

The root & the main seed template exist initially.

If we run the DBCA command, then in creation mode we have to mention the global DB name & the pluggable DB name.

Command to create a PDB:

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1_admin IDENTIFIED BY oracle ROLES=(CONNECT) FILE_NAME_CONVERT=('/oracle/SID/data1/pdbseed','/oracle/SID/data1/pdb1');

This specifies where all the datafiles will be placed; using this we create the PDB, copying all files from the seed to pdb1.

This creates the SYSTEM, SYSAUX and TEMP tablespaces, and then you can create user tablespaces.
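
After creation we can verify the new PDB and open it; a quick sketch:

SQL> SELECT name, open_mode FROM v$pdbs;

SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;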

How to Unplug PDB1

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/stage/pdb1.xml'; (unplugging won't work while the PDB is open)

SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES; (the datafiles will remain at their location)

Now plug the unplugged PDB back into the CDB:

SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/stage/pdb1.xml' NOCOPY;

SQL>ALTER PLUGGABLE DATABASE pdb1 OPEN READ WRITE;

Now clone pdb2 from pdb1:

SQL>ALTER PLUGGABLE DATABASE pdb1 CLOSE;

SQL>ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

SQL> ALTER SYSTEM SET db_create_file_dest='/oracle/SID/data1/pdb2';

SQL>CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;

SQL>ALTER PLUGGABLE DATABASE pdb2 OPEN;

What are Common & Local Users?

Common users are created in the root container and have the same identity across all containers; their names must start with the C## prefix.

Local users are created & exist only in PDBs; they can’t be created in the ROOT.

How to create a common user — connect / as sysdba, then:

SQL> CREATE USER c##sam IDENTIFIED BY sam CONTAINER=ALL; (in the root, CONTAINER=ALL is the default)

Now in pdb1, connect as the common user — connect c##sam/sam@pdb1 — and create a local user (a local user created by a common user):

SQL> CREATE USER mgr IDENTIFIED BY mgr CONTAINER=CURRENT;

Connect as mgr — connect mgr/mgr@pdb1 — and create another local user (a local user created by a local user):

SQL>CREATE USER emp IDENTIFIED BY emp;
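
To verify which users are common and which are local, we can query cdb_users from the root (the user names here are the ones from the examples above):

SQL> SELECT username, common, con_id FROM cdb_users WHERE username IN ('C##SAM','MGR','EMP');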

  • Increased data type size for columns:

In 12c the VARCHAR2 limit increases up to 32K, the same as what we could already have inside a PL/SQL block.

I remember in Oracle 7 we used to have a 253-character limit.

In Oracle 8 the limit went up to 4000 characters.

The extended character size will reduce the need to go for LOB data types, whenever possible. In order to enable the extended character size, we have to set the MAX_STRING_SIZE initialization parameter to EXTENDED.

The following procedure needs to be run to use the extended data types (a consolidated sketch follows below):

  1. Shut down the database
  2. Restart the database in UPGRADE mode
  3. Modify the parameter: ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
  4. Execute utl32k.sql as sysdba: SQL> @?/rdbms/admin/utl32k.sql
  5. Shut down the database
  6. Restart the database in READ WRITE mode
Once modified, we can’t change the setting back to STANDARD.
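
Put together, the whole sequence looks like this (a sketch for a non-CDB; the demo table name is just for illustration):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> CREATE TABLE big_text_demo (note VARCHAR2(32767)); -- now allowed in plain SQL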

  • Row Limiting Clause:

We can run a SQL query and ask for the top N rows, or for rows 100 to 120; it means we can select a particular number of records.

e.g.

SQL> SELECT Eno
     FROM emp
     ORDER BY Eno DESC
     FETCH FIRST 5 ROWS ONLY;

The following example fetches all records that tie with the Nth row. For example, if the 10th row has a salary of 5000 and there are other employees whose salary matches that value, they will also be fetched when the WITH TIES clause is mentioned.

SQL> SELECT eno, ename, sal FROM emp ORDER BY sal DESC FETCH FIRST 10 ROWS WITH TIES;

The following example offsets the first 5 rows and will display the next 5 rows from the table:

SQL> SELECT eno, ename, sal FROM emp ORDER BY sal DESC OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;

  • Column Invisible

I remember when I was in a payments-domain IT company and we had to hide some important data from being displayed, we used to create views for that.

In Oracle Database 12c, table columns can be defined as invisible either during its creation with the CREATE TABLE command or by modifying existing table columns via the ALTER TABLE statement. By default, table columns are always visible. Once a column has been set to invisible, it can be reverted back to visible using the ALTER TABLE statement.

SQL> create table sam (col1 data-type invisible);

SQL> alter table sam modify col1 invisible;

SQL> alter table sam modify col1 visible;

We must explicitly refer to the invisible column name in the INSERT statement to insert data into invisible columns. A virtual column or partition column can be defined as invisible too. However, temporary tables, external tables and cluster tables don’t support invisible columns.
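
A quick sketch of the behavior (table and column names are just for illustration):

SQL> CREATE TABLE sam2 (id NUMBER, secret VARCHAR2(10) INVISIBLE);
SQL> INSERT INTO sam2 VALUES (1);                    -- works: the invisible column is not expected
SQL> INSERT INTO sam2 (id, secret) VALUES (2, 'x');  -- the invisible column must be named explicitly
SQL> SELECT * FROM sam2;                             -- secret is not displayed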

  • Identity Columns

In Oracle 12c, when we create a table we can populate a column automatically via a system-generated sequence by using the GENERATED … AS IDENTITY clause in the CREATE TABLE statement.

We can use GENERATED AS IDENTITY with the ALWAYS, BY DEFAULT or BY DEFAULT ON NULL keywords, which affect how and when the identity column value is populated.

By default the GENERATED AS IDENTITY clause implicitly includes the ALWAYS keyword, i.e. GENERATED ALWAYS AS IDENTITY.

When the ALWAYS keyword is specified, it is not possible to explicitly include values for the identity column in INSERT or UPDATE SQL statements.
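
A small sketch of that behavior (the table name is just for illustration):

SQL> CREATE TABLE emp_id_demo (id NUMBER GENERATED ALWAYS AS IDENTITY, name VARCHAR2(30));
SQL> INSERT INTO emp_id_demo (name) VALUES ('SCOTT');   -- id is populated automatically
SQL> INSERT INTO emp_id_demo (id, name) VALUES (99, 'KING');
-- fails with ORA-32795, since ALWAYS forbids explicit values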

  • Temporary UNDO

An Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO & TEMP, and each is used for a different purpose within the Oracle database. Prior to Oracle 12c, undo records generated by temporary tables used to be stored in the undo tablespace, much like general table undo records. However, with the temporary undo feature in 12c, the temporary undo records can now be stored in the temporary tablespace instead of the undo tablespace. The prime benefits of temporary undo include: a reduction in undo tablespace usage and also less redo generation, as the information won’t be logged in the redo logs. We have the flexibility to enable the temporary undo option either at session level or database level.

Enabling temporary UNDO

For Session Level:

ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

ALTER SESSION SET TEMP_UNDO_ENABLED = FALSE;

For System Level:

CONN sys@pdb1 AS SYSDBA

 

ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;

ALTER SYSTEM SET TEMP_UNDO_ENABLED = FALSE;

The above functionality is only available if COMPATIBLE is set to 12.0.0 or higher.
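
A small session-level sketch (the GTT name is just for illustration; the v$tempundostat view tracks temporary undo activity):

SQL> ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
SQL> CREATE GLOBAL TEMPORARY TABLE gtt_demo (id NUMBER) ON COMMIT PRESERVE ROWS;
SQL> INSERT INTO gtt_demo VALUES (1);   -- its undo goes to TEMP, with no redo logged for that undo
SQL> SELECT txncount, undoblkcnt FROM v$tempundostat;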

  • Multiple indexes on the same column

This is also a popular feature from an optimization point of view.

Prior to 12c, we could not create an index if the same column list was already indexed; it would generate the error ORA-01408: such column list already indexed.

So if we wanted to change an index, say from unique to non-unique, or from a B-tree index to a bitmap index, or from non-partitioned to partitioned, etc., then we had to first drop the original index and re-create it as required.

This means that for the period in which the index is being re-created (which could be considerable for a larger index), the column list is not covered by an index, which might prove problematic from a DB performance point of view.

In 12c we can now create multiple indexes on the same column list, provided they differ in some characteristic (e.g. unique vs non-unique, B-tree vs bitmap); however, only one of them is usable / visible at a given time.

This means we can now, for example, replace the index policing the PK constraint more quickly (or convert a B-tree to a bitmap index, or a non-partitioned index to a partitioned index, etc.), as we no longer have to wait for the new index to be created first:

SQL> alter table emp drop primary key;

SQL> drop index emp_id_i1;

SQL> alter index emp_id_i2 visible;

SQL> alter table emp add constraint emp_pk primary key(Emp_id);
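
For completeness, the replacement index emp_id_i2 used in this example would have been created invisible beforehand, something like the following (assuming the original emp_id_i1 was unique, so a non-unique twin on the same column is allowed to coexist):

SQL> CREATE INDEX emp_id_i2 ON emp(emp_id) INVISIBLE;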

  • Moving and Renaming datafile is now ONLINE

Prior to 12c, moving a datafile was always an offline task.

While the data file is being transferred, the end user can perform queries, DML and DDL tasks. Additionally, data files can be migrated between storages e.g. from non-ASM to ASM and vice versa.

Syntax:

ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )

  [ TO ( 'filename' | 'ASM_filename' ) ]

  [ REUSE ] [ KEEP ]

The REUSE keyword indicates the new file should be created even if it already exists (the existing file is overwritten).

The KEEP keyword indicates the original copy of the datafile should be retained.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/sam/system01.dbf' TO '/tmp/system01.dbf';
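
Moving into ASM works the same way; for example (the +DATA diskgroup name here is just an assumption):

SQL> ALTER DATABASE MOVE DATAFILE '/tmp/system01.dbf' TO '+DATA' KEEP;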

 

  • DDL LOGGING

In 12cR1 we can now log DDL actions into xml and log files. This is very useful for knowing when a drop or create command was executed and by whom.

The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature. The parameter can be set at the database or session level.

When this parameter is enabled, all DDL commands are logged in an xml and a log file under the $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location.

The xml file contains information such as the DDL command, IP address, timestamp etc.

This helps to identify when a user or table was dropped, or when a DDL statement was triggered.

SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;

  • Backup user privileges
  1. SYSBACKUP – username SYSBACKUP – the ability to perform RMAN backup and recovery commands both from SQL and RMAN command line
  2. SYSDG – username SYSDG – the ability to perform Data Guard operations with Data Guard Broker or the DGMGRL command line
  3. SYSKM – username SYSKM – the ability to manage the encryption keys for Transparent Data Encryption
  • Table Recovery in RMAN

This is also one popular feature of 12c.

I had been waiting for this feature since college days, wondering what would happen if anyone mistakenly dropped a table.

From 12c onwards we can recover a particular table to a point in time or SCN from RMAN backups in the event of a table DROP or TRUNCATE.

The following actions are performed whenever a table recovery is initiated:

  1. The required backup sets are identified to recover the table/partition
  2. An auxiliary database is temporarily configured to a point in time in the process of recovering the table/partition
  3. The required table/partitions are then exported to a dump file using Data Pump
  4. Optionally, the table/partitions can be imported into the source database
  5. Optionally, the table can be renamed (remapped) during recovery

Some of the limitations of table recovery:

  1. SYS user tables can’t be recovered
  2. Tables stored in the SYSTEM and SYSAUX tablespaces can’t be recovered
  3. Recovery of a table is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints

Example

RMAN> connect target "username/password as SYSBACKUP";

RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'
        AUXILIARY DESTINATION '/sapdata1/tablerecovery'
        DATAPUMP DESTINATION '/sapdata1/dpump'
        DUMP FILE 'tab.dmp'
        NOTABLEIMPORT    -- this option avoids importing the table automatically
        REMAP TABLE 'username.tablename':'username.new_table_name';    -- can rename the table with this option (use either NOTABLEIMPORT or REMAP TABLE, not both)

  • Restricting PGA size

Prior to 12c there was no hard limit for the PGA.

Although we set a certain size in the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase / reduce the size of the PGA dynamically based on the workload and requirements.

In 12c Oracle has introduced a new parameter, PGA_AGGREGATE_LIMIT, for controlling the maximum amount of PGA. The default limit of this parameter is set to the greatest of these values:

  1. 2 GB of memory
  2. 200% of PGA_AGGREGATE_TARGET
  3. 3MB times the value of the PROCESSES parameter

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; -- disables the hard limit

So what happens when the limit is reached? Simple: Oracle aborts the calls of the session that has the most untunable PGA; if the PGA is still over the limit, such sessions are terminated.

So we need to be careful not to set this parameter too low (or too high), because if an important batch job gets killed, it doesn’t look good.
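
To see where the PGA currently stands before picking a limit, v$pgastat helps; a quick sketch:

SQL> SELECT name, ROUND(value/1024/1024) AS mb FROM v$pgastat WHERE name IN ('total PGA allocated', 'maximum PGA allocated');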

  • Database upgrade improvements

Reading all the above features, one question always comes to mind: to utilize those features, how do we upgrade to 12c?

Oracle released the major version upgrade of Database 12c in July 2013, which also triggered the 11g end-of-life support cycle. The 11g support clock is ticking, with the first step of Extended Support starting Jan 2015 (in order to get Extended Support we will have to be on Oracle Database 11.2.0.4, the terminal patch set).

  1. Preupgrade Utility

The preupgrade utility “utlu121s.sql” is replaced with “preupgrd.sql”. The new utility provides fixup scripts “preupgrade_fixups.sql” and “postupgrade_fixups.sql” to address issues that might be present both before and after the upgrade. These fixup scripts can be executed interactively.

  2. Upgrade Utility

The catupgrd.sql upgrade utility is replaced with the catctl.pl utility. The new utility allows us to take advantage of multiple CPUs by running the upgrade process in parallel, thereby reducing the upgrade time. The new script is used more like a wrapper on the existing one. Also, by default “catuppst.sql” is run when “catctl.pl” is executed. The new script has a lot of options, a few of which are explained below.

Option “-n” specifies the degree of parallelism; the default is 4 and the maximum is 8.

Option “-p” supports rerun of the upgrade by skipping successful steps.

Example: $ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -p $ORACLE_HOME/rdbms/admin/catupgrd.sql

  • Other Miscellaneous Features.
  1. Execute SQL statement in RMAN

We can now execute SQL and PL/SQL commands in RMAN without the need for a SQL prefix:

RMAN> SELECT username, machine FROM v$session;

RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;

  2. GATHERING STATISTICS CONCURRENTLY ON MULTIPLE TABLES

Prior to 12c, whenever we executed a DBMS_STATS procedure to gather table, index, schema or database level statistics, Oracle used to collect stats one table at a time. If the table was big enough, increasing the parallelism was recommended. With 12c we can now collect stats on multiple tables, partitions and sub-partitions concurrently. Before we start using it, we must set the following at the database level to enable the feature:

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAIN';

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
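
We can verify that the preference took effect with a quick check:

SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM dual;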

 

  3. TRUNCATE TABLE CASCADE

Prior to 12c there wasn’t a direct option to truncate a master table while it is referred to by child tables and child records exist. The TRUNCATE TABLE … CASCADE option in 12c truncates the records in the master table and automatically initiates a recursive truncate on the child tables too, subject to the foreign keys being defined with the ON DELETE CASCADE option. There is no cap on the number of recursive levels, as it applies to all child, grandchild and great-grandchild tables.

This enhancement removes the prerequisite of truncating all child records before truncating a master table. The new CASCADE clause can also be applied to table partitions and sub-partitions etc.

SQL> TRUNCATE TABLE <table_name> CASCADE;

SQL> TRUNCATE TABLE <table_name> PARTITION <partition_name> CASCADE;
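
A small sketch of the behavior (table names are just for illustration; note the FK must be declared ON DELETE CASCADE):

SQL> CREATE TABLE dept_demo (deptno NUMBER PRIMARY KEY);
SQL> CREATE TABLE emp_demo (empno NUMBER PRIMARY KEY, deptno NUMBER REFERENCES dept_demo ON DELETE CASCADE);
SQL> TRUNCATE TABLE dept_demo CASCADE;   -- rows in emp_demo are truncated too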

 

  4. The RESOURCE role doesn’t include UNLIMITED TABLESPACE anymore.
  5. No need to shut down the DB for enabling / disabling archive log mode.
  6. Some new views / packages in Oracle 12c:

dba_pdbs, v$pdbs, cdb_data_files, dbms_pdb, dbms_qopatch, UTL_CALLSTACK, dbms_redact

 

Once again, sorry for the long gap before this new post.

For more than 5 years now I have also been familiar with SAP, and I recently went through SAP HANA training. As I work in an SAP partner company, I keep hearing about SAP’s innovations.

So I decided to write something on the latest hot topic:

SAP HANA v/s Oracle Exadata v/s IBM DB2 BLU acceleration

Hope this article helped you. I am expecting your suggestions/feedback.

It will help to motivate me to write more articles….!!!!

Thanks & Regards,

Samadhan

https://samadhandba.wordpress.com/

“Key for success, always fight even knowing your defeat is certain….!!!!”
