“A new multitenant architecture makes it easy to deploy and manage database clouds. Innovations such as Oracle Multitenant, for consolidating multiple databases, and Automatic Data Optimization, for compressing and tiering data at a higher density, maximize resource efficiency and flexibility. These unique advancements, combined with major enhancements in availability, security, and big data support, make Oracle Database 12c an ideal platform for private and public cloud deployments.”
Customer Pain Points:
- Database session outages (planned and unplanned) have a significant impact on the user experience.
- Doubtful outcome: users are left not knowing what happened to their funds transfers, orders, payments, bookings ...
- Usability: users see an error, lose screens of uncommitted data, and need to log in again and re-enter or resubmit, sometimes leading to logical corruption.
- Disruption: DBAs sometimes need to reboot mid-tiers.
Developer Pain Points:
- The current approach to outages places the onus on developers to write exception handling in every possible place.
- Every code module needs exception code to know whether transactions committed.
- Exception handling must work for all transaction sources.
- Rebuilding non-transactional state is near impossible for an application that is modifying state at runtime.
Transaction Guard is a reliable protocol and API that applications use to obtain a reliable commit outcome. The API is embedded in error handling and should be called following recoverable errors. The outcome indicates whether or not the last transaction was committed and completed. Once the commit outcome is returned to the application, it persists: if Transaction Guard returns committed or uncommitted, the status stays that way. This enables the application or user to make a stable decision about the next step.

Why use Transaction Guard?
The application uses Transaction Guard to return the known outcome, committed or uncommitted. The user or application can then decide the next action to take: for example, to resubmit when the last transaction on the session has not committed, or to continue when the last transaction has committed and the last call has completed. Transaction Guard is used by Application Continuity and is automatically enabled by it, but it can also be enabled independently. Transaction Guard prevents the transaction being replayed by Application Continuity from being applied more than once. If the application has implemented its own application-level replay, it should be integrated with Transaction Guard to provide idempotence.

Understanding Transaction Guard
In the standard commit case, the database commits a transaction and returns a success message to the client. In the illustration shown in the slide, the client submits a commit statement and receives a message stating that communication failed. This type of failure can occur for several reasons, including a database instance failure or a network outage. In this scenario, the client does not know the state of the transaction. Oracle Database solves the communication failure by using a globally unique identifier called a logical transaction ID. When the application is running, both the database and the client hold the logical transaction ID.
The database gives the client a logical transaction ID at authentication and at each round trip from the client driver that executes one or more commit operations. The logical transaction ID uniquely identifies the last database transaction submitted on the session that failed. For each round trip from the client in which one or more transactions are committed, the database persists a logical transaction ID. This ID provides transaction idempotence for interactions between the application and the database for each round trip that commits data.
When a recoverable outage occurs, the exception handling is modified to get the logical transaction ID and call a new PL/SQL interface, DBMS_APP_CONT.GET_LTXID_OUTCOME, that returns the reliable commit outcome. (See the development and deployment section.)

Preserving Commit Outcome
- Client receives a logical transaction ID (LTXID)
- Client is guaranteed the outcome of the last submission
- Global protocol that blocks out-of-order flows
- Safe for applications to return success or resubmit
- Used by Application Continuity

Typical Usage
========
Database crashes: (i) FAN aborts the dead session; (ii) the application gets an error; (iii) the connection pool removes the orphan connection from the pool.
If the error is recoverable:
- Get the last LTXID of the dead session using getLogicalTransactionId, or from your callback
- Obtain a new session
- Call DBMS_APP_CONT.GET_LTXID_OUTCOME
  - If committed, return the result; the application may continue
  - Else return "uncommitted"; the application cleans up and resubmits the request
  - If uncommitted, the protocol prevents the transaction from eventually committing

Transaction Guard solution coverage
======================
Clients - JDBC-thin, OCI, OCCI, ODP.NET
Database - uses the logical transaction ID (LTXID)
Commit models
- Local transactions
- Auto-commit, commit on success
- Commit embedded in PL/SQL
- DDL, DCL, parallel DDL
- Remote, distributed
Excludes XA in 12.1
============
Transaction Guard supports all listed transaction types.
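The error-handling flow above can be sketched in PL/SQL. This is a simplified sketch, not the full recommended handler: it assumes the client captured the LTXID from the dead session (for example via getLogicalTransactionId in JDBC) and passes it in as a bind variable; the bind name is illustrative, while DBMS_APP_CONT.GET_LTXID_OUTCOME and its parameters are from the 12c package.

```sql
-- Simplified sketch: decide the next action after a recoverable error.
-- :client_ltxid is the LTXID captured from the dead session (illustrative bind).
DECLARE
  l_committed      BOOLEAN;
  l_call_completed BOOLEAN;
BEGIN
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :client_ltxid,
    committed           => l_committed,
    user_call_completed => l_call_completed);

  IF l_committed AND l_call_completed THEN
    NULL;  -- committed and completed: return the result, continue
  ELSIF l_committed THEN
    NULL;  -- committed but the call did not complete (e.g. out binds lost)
  ELSE
    NULL;  -- uncommitted, and now blocked from ever committing: safe to resubmit
  END IF;
END;
/
```

Calling GET_LTXID_OUTCOME on the uncommitted path is what blocks an in-flight commit from landing later, which is why it must be called before resubmitting.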
The primary exclusions in 12c are XA and read/write database links from Active Data Guard.

To configure Transaction Guard, set the service attribute COMMIT_OUTCOME:
- Values: TRUE and FALSE
- Default: FALSE
- Applies to new sessions
Optionally, change the service attribute RETENTION_TIMEOUT:
- Units: seconds
- Default: 24 hours (86400)
- Maximum value: 30 days (2592000)
Oracle Database 12c provides the Transaction Guard interface and APIs for JDBC thin, OCI, OCCI, and ODP.NET.
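The service attributes above are set with srvctl on a RAC database. A sketch, assuming a database named orcl and a service named sales (both names illustrative):

```shell
# Enable Transaction Guard on the service and keep commit outcomes
# for 1 day (86400 seconds). Database and service names are illustrative.
srvctl modify service -db orcl -service sales \
  -commit_outcome TRUE -retention 86400
srvctl start service -db orcl -service sales
```

On a non-RAC database the same attributes can be set with DBMS_SERVICE; new sessions connecting to the service then receive LTXIDs.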
When replay is successful, Application Continuity masks many recoverable database outages from applications and users. It achieves this masking by restoring the full database session (including session states, cursors, and variables) and the last in-flight transaction, if there is one.
Without Application Continuity, database recovery does not mask outages caused by network outages, instance failures, hardware failures, repairs, configuration changes, patches, and so on. If the database session becomes unavailable due to a recoverable error, Application Continuity attempts to rebuild the session and any open transactions to the correct states. If the transaction was successful and does not need to be re-executed, the successful status is returned to the application. If the replay is successful, the request continues safely without duplication. If the replay is not successful, the database rejects the replay and the application receives the original error. To be successful, the replay must return to the client exactly the same data that the client received previously in the request, because the application may have made decisions based on that data.

How Application Continuity works:
1. The client application sends a database request that is received by the JDBC replay driver.
2. The replay driver sends the calls that make up the request to the database, receiving directions for each call from the database.
3. The replay driver receives a Fast Application Notification (FAN) event or a recoverable error.
4. The replay driver performs the following actions:
- It checks that the request has replay enabled and checks timeouts.
Assuming all is good:
- It obtains a new database session and, if a callback is registered, runs this callback to initialize the session.
- It checks with the database to determine whether replay can progress, for example, whether the last transaction was committed or rolled back.
If replay is required, the JDBC replay driver resubmits the calls, receiving directions for each call from the database. Each call must establish the same client-visible state. Once replay is complete, the replay driver ends the replay and returns to normal runtime mode.

Solution Coverage
===========
Client - JDBC-Thin driver, UCP, WebLogic Server, third-party Java apps
Database
- SQL, PL/SQL, JDBC RPC: SELECT, ALTER SESSION, DML, DDL, COMMIT/ROLLBACK/SAVEPOINT
- Transaction models: local, parallel, remote, distributed
- Mutable function support
- Hardware acceleration on current Intel and SPARC chips
==========
Application Continuity is supported for JDBC thin, Universal Connection Pool, and WebLogic Server. It is included with Oracle Real Application Clusters (Oracle RAC), RAC One Node, and Oracle Active Data Guard. Application Continuity recovers the database request, including any in-flight transaction and the database session states. Requests may include most SQL and PL/SQL, RPCs, and local JDBC calls.
Application Continuity uses Transaction Guard. Transaction Guard tags each database session with a logical transaction ID (LTXID), so that the database recognizes whether a request committed the transaction before the outage. Application Continuity also offers the ability to keep the original values for some Oracle functions, such as Seq.NEXTVAL, that change their values each time they are called. This improves the likelihood that replay will succeed. On current SPARC and Intel-based chips, the validation that Application Continuity uses is supported by firmware at the database server.

BENEFIT
=====
Transaction Guard & Application Continuity: Intelligent, application-transparent fault tolerance
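Like Transaction Guard, Application Continuity is switched on at the service level. A sketch with illustrative database and service names; FAILOVER_TYPE=TRANSACTION is the attribute that enables replay (and enables Transaction Guard with it):

```shell
# Illustrative sketch: enable Application Continuity on a service.
# REPLAY_INIT_TIME bounds how many seconds after an outage replay may
# begin; NOTIFICATION enables FAN events to the clients.
srvctl modify service -db orcl -service sales \
  -failovertype TRANSACTION -replay_init_time 300 \
  -notification TRUE -retention 86400
```

The application side then uses the replay data source (oracle.jdbc.replay) with UCP or WebLogic rather than the plain JDBC data source.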
Challenges
- Database utilization hampered by geographic fragmentation
- Load balancing and fault tolerance hard to automate globally
- Resource allocation and management dictated by geography
Resulting in
- Sub-optimal resource utilization
- Hampered or no enterprise-wide data integration
- Unclear strategy for consolidation & distribution candidates
GDS tested with WLS 12.1.2.
- In an active/active configuration, a global service can be available on all the GoldenGate replicas
- Different services can also be started on each replica – useful for conflict avoidance
- Client connections and requests are transparently routed to the closest/best database
- Runtime load balancing metrics give the client real-time information on which database to issue the next request to
- Supports all Oracle connection pools (UCP, WLS, OCI, ODP.NET)
- If a database fails, its global services are restarted on another replica
- The update service runs on the primary; the reporting service runs on the primary or an Active Data Guard standby
- A global service may be started in another database based on policies (e.g., singleton service, minimum of 3 instances, …)
The Orders Capture application runs on the primary. The Order History application is offloaded to the Active Data Guard standby.
Without GDS, to bring the History app online:
Steps
- Change the properties (role definition) of the History service via srvctl
- Manually start the History service on the primary database
- Restart the apps
- Connect to the History service on the primary database
Drawbacks
- Unplanned application downtime
- Manual, time-consuming, error-prone
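With GDS, the same outcome is policy-driven instead of manual. A GDSCTL sketch; the pool and service names are illustrative, and -role/-failover_primary are the role-based attributes this scenario relies on:

```shell
# Illustrative sketch: a role-based global service that normally runs
# on the physical standby and moves to the primary if the standby fails.
gdsctl add service -service history_srvc -gdspool sales \
  -preferred_all -role PHYSICAL_STANDBY -failover_primary
gdsctl start service -service history_srvc -gdspool sales
```

Clients connect to history_srvc; when the standby goes down, GDS restarts the service on the primary and FAN redirects the connections, with no srvctl reconfiguration or application restart.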
- At least 2-3 GSMs per region are recommended
- One GSM per region is designated as the master
- The master GSM is responsible for publishing FAN events to the clients via the ONS server
- If the master GSM dies, another GSM in the region takes over
- If all GSMs in the region die, the master GSM from another region takes over
- If all the GSMs in all the regions fail, the clients can still connect to the local listeners
- The GDS catalog database can be replicated for HA/DR
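A deployment along these lines can be sketched with GDSCTL. This is a sketch only: host names, listener port, regions, and the catalog connect string are all illustrative, and in practice each GSM is added from its own host.

```shell
# Illustrative sketch: create the GDS catalog, then one GSM per region
# (a real deployment would add 2-3 GSMs per region for HA).
gdsctl create gdscatalog -database gdscat_host:1521:gdscat \
  -user mygdsadmin -region east,west
gdsctl add gsm -gsm gsm_east -listener 1571 \
  -catalog gdscat_host:1521:gdscat -region east
gdsctl add gsm -gsm gsm_west -listener 1571 \
  -catalog gdscat_host:1521:gdscat -region west
gdsctl start gsm -gsm gsm_east
```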
Requirements
- Ability to load balance across data centers
- Optimal resource utilization
- Global scalability and availability
- Capability to centrally manage global resources
Solution: Global Data Services (GDS)
Global Data Services (GDS) provides:
- Load balancing of application workloads across regions
  - Extends RAC-like connect-time and run-time load balancing globally
  - Addresses inter-region resource fragmentation, so that underutilized resources in one region can be used to satisfy another region's workload, enabling optimal resource utilization
- Global scalability and availability
  - Easy to elastically add/remove databases from the GDS infrastructure
  - Supports seamless service failover
- Centralized management of global resources
  - Easier management of globally distributed multi-database configurations
Example: Reserve Bank of India report – "Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds", http://www.rbi.org.in/scripts/PublicationReportDetails.aspx?UrlPage=&ID=609
Chapter 7 of this report, "Business Continuity Planning", has specific guidelines with respect to RPO and RTO, and this particular guideline has sparked a lot of interest within the banking IT community:
"Given the need for drastically minimizing the data loss during exigencies and enable quick recovery and continuity of critical business operations, banks may need to consider near site DR architecture. Major banks with significant customer delivery channel usage and significant participation in financial markets/payment and settlement systems may need to have a plan of action for creating a near site DR architecture over the medium term (say, within three years)."
Set via the NOAFFIRM attribute (SYNC NOAFFIRM) on the redo transport destination
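A sketch of setting this on a transport destination; the destination number, service name, and DB_UNIQUE_NAME are illustrative. With SYNC NOAFFIRM, the primary waits for the standby to receive the redo but not for the standby's disk write acknowledgement:

```sql
-- Illustrative: synchronous redo transport without waiting for the
-- standby disk-write acknowledgement (NOAFFIRM).
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=boston SYNC NOAFFIRM DB_UNIQUE_NAME=boston
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
```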
1) DDL to create temporary tables must be issued on the primary database
- Enables more reporting apps to leverage Active Data Guard
- New init.ora parameter: TEMP_UNDO_ENABLED
=============
Global Sequences
- Sequences created using the default CACHE and NOORDER options can be accessed from an Active Data Guard standby database
- The primary allocates a unique range of sequence numbers to each standby
- Enables more flexible reporting choices for Active Data Guard
Session Sequences
- Unique range of sequence numbers only within a session
- Suitable for reporting apps leveraging global temporary tables
===============
In an Active Data Guard environment, sequences created by the primary database with the default CACHE and NOORDER options can be accessed from standby databases as well. When a standby database accesses such a sequence for the first time, it requests that the primary database allocate a range of sequence numbers. The range is based on the cache size and other sequence properties specified when the sequence was created. The primary database then allocates those sequence numbers to the requesting standby database by adjusting the corresponding sequence entry in the data dictionary. When the standby has used all the numbers in the range, it requests another range.
The primary database ensures that each range request from a standby database gets a range of sequence numbers that does not overlap with the ranges previously allocated for the primary and standby databases. This generates a unique stream of sequence numbers across the entire Data Guard configuration. Because the standby's requests for a range of sequences involve a round trip to the primary, be sure to specify a large enough value for the CACHE keyword when you create a sequence that will be used on an Active Data Guard standby.
Otherwise, performance could suffer.
Restrictions: Sequences created with the ORDER or NOCACHE options cannot be accessed on an Active Data Guard standby.
===============
Supported types in 11.2
- BINARY_DOUBLE, BINARY_FLOAT, NUMBER
- BLOB, CLOB and NCLOB, LONG, LONG RAW, RAW
- CHAR, NCHAR, VARCHAR2 and VARCHAR, NVARCHAR2
- DATE, TIMESTAMP, TIMESTAMP WITH LOCAL TIMEZONE, TIMESTAMP WITH TIMEZONE
- INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND
- XMLType stored as CLOB
- LOBs stored as SecureFiles
Additional data types supported in Oracle Database 12c
- XMLType stored as object-relational and binary XML
- XDB repository operations and other commonly used XDB operations
- ADTs with attributes of simple types and varrays, with inheritance and type evolution
- 32K VARCHAR2
- Commonly used AQ operations
- ANYDATA with non-opaque types
- Spatial, Image, Oracle Text, DICOM
- Complete SecureFiles support
- DBFS
- Scheduler job definitions
Still unsupported in 12.1
- BFILE
- Collections (nested tables)
- ROWID, UROWID
- User-defined types: ADTs with attributes of nested tables, refs and bfiles; top-level nested tables, varrays, refs and bfiles
- Primary keys involving ADT columns
- SecureFiles FRAGMENT_OPERATION. This was intended only for internal consumption, but it got documented. It is supported via EDS.
=====================
DGMGRL command: VALIDATE DATABASE
- Validates each database's current status
- Verifies there are no archive log gaps
- Performs a log switch on the primary to verify the log is applied on all standbys
- Shows any databases or RAC instances that are not discovered
- Detects inconsistencies between database properties and values stored in the database
- Ensures online redo log files have been cleared in advance of a role transition
- Checks for previously disabled redo threads
- Ensures the primary and all standbys are on the same redo branch
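Running the check is a single broker command; the connect string and database name below are illustrative:

```shell
# Illustrative sketch: validate a standby's readiness before a role
# transition from the Data Guard broker command line.
dgmgrl sys@primary_tns
DGMGRL> VALIDATE DATABASE 'boston';
```

The output flags any of the listed problems (gaps, unclear online logs, disabled threads) before a switchover is attempted.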
- RMAN automatically creates an auxiliary instance on the target database host, where the relevant backups are restored and recovered
- Recovered table(s) in the auxiliary instance are:
  - Imported directly into the target database, or
  - Exported to a Data Pump dump file
- Useful in scenarios where Flashback cannot be used:
  - Flashback Drop: the table has been purged out of the recycle bin
  - Flashback Table: the point in time needed is older than UNDO_RETENTION
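A sketch of the 12c RECOVER TABLE command; the schema, table, SCN, auxiliary destination, and remapped name are all illustrative:

```shell
# Illustrative RMAN sketch: recover one table to an earlier SCN via an
# automatic auxiliary instance, importing it back under a new name so
# the current table is not overwritten.
RMAN> RECOVER TABLE hr.employees
        UNTIL SCN 1853289
        AUXILIARY DESTINATION '/tmp/aux'
        REMAP TABLE hr.employees:employees_recovered;
```

Using DATAPUMP DESTINATION with NOTABLEIMPORT instead would leave only the dump file, for the export-only variant described above.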
To create the backup set containing data that must be transported to the destination database, use the BACKUP command on the source database. To indicate that you are creating a cross-platform backup, the BACKUP command must contain either the FOR TRANSPORT or TO PLATFORM clause.
================
You can transport an entire database from a source platform to a different destination platform. While creating the cross-platform backup to transport a database, you can convert the database either on the source database or the destination database.
Back up the source database using the FOR TRANSPORT or TO PLATFORM clause in the BACKUP command. Using either of these clauses creates a cross-platform backup that uses backup sets.
Example 28–5 creates a cross-platform backup of the entire database. This backup can be restored on any supported platform. Because the FOR TRANSPORT clause is used, the conversion is performed on the destination database. The source platform is Sun Solaris and the cross-platform database backup is stored in db_trans.bck in the /tmp/xplat_backups directory on the source host.
Example 28–5 Creating a Cross-Platform Database Backup for Restore on Any Supported Platform
BACKUP
FOR TRANSPORT
FORMAT '/tmp/xplat_backups/db_trans.bck'
DATABASE;
Example 28–6 creates a cross-platform backup of the entire database that can be restored on the Linux x86 64-bit platform. Because the TO PLATFORM clause is used, conversion is performed on the source database. The backup is stored in the backup set db_trans_lin.bck in the /tmp/xplat_backups directory on the source host.
Example 28–6 Creating a Cross-Platform Database Backup for Restore on a Specific Platform
BACKUP
TO PLATFORM='Linux x86 64-bit'
FORMAT '/tmp/xplat_backups/db_trans_lin.bck'
DATABASE;
Restore the backup sets that were transferred from the source by using the RESTORE command with the FOREIGN DATABASE clause.
Example 28–7 restores the cross-platform database backup created in Example 28–5.
The FROM PLATFORM clause specifies the name of the platform on which the backup was created. This clause is required to convert backups on the destination. The backup set containing the cross-platform database backup is stored in the /tmp/xplat_restores directory on the destination host. The TO NEW option specifies that the restored foreign data files must use new OMF-generated names in the destination database. Ensure that the DB_CREATE_FILE_DEST parameter is set.
Example 28–7 Restoring a Cross-Platform Database Backup on the Destination Database
RESTORE
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FOREIGN DATABASE TO NEW
FROM BACKUPSET '/tmp/xplat_restores/db_trans.bck';
Example 28–8 restores the cross-platform database backup that was created in Example 28–6. The destination database is on the Linux x86 64-bit platform. The backup set containing the cross-platform backup that needs to be restored is stored in /tmp/xplat_restores/db_trans_lin.bck. The restored foreign data files are stored in the /oradata/datafiles directory using names that begin with df_.
Example 28–8 Restoring a Cross-Platform Database Backup that was Created for a Specific Platform
RESTORE
ALL FOREIGN DATAFILES
FORMAT '/oradata/datafiles/df_%U'
FROM BACKUPSET '/tmp/xplat_restores/db_trans_lin.bck';
===================
In 11.2.0.3, this was available only for Exadata targets; see MOS note 1389592.1, "Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups".
- Minimize read-only impact with multiple incremental backups
- Successive incrementals are converted and applied to the restored data files
- The final incremental is taken while the tablespace is in read-only mode, with a separate Data Pump metadata export and import
Create a cross-platform level 0 inconsistent backup of the tablespace my_tbs while the tablespace is in read/write mode.
This backup is stored in a backup set named my_tbs_incon.bck in the directory /tmp/xplat_backups.
BACKUP
FOR TRANSPORT
ALLOW INCONSISTENT
INCREMENTAL LEVEL 0
TABLESPACE my_tbs
FORMAT '/tmp/xplat_backups/my_tbs_incon.bck';
Create a cross-platform level 1 incremental backup of the tablespace my_tbs that contains the changes made after the backup in Step 2 was created. The tablespace is still in read/write mode. This incremental backup is stored in my_tbs_incon1.bck in the directory /tmp/xplat_backups.
BACKUP
FOR TRANSPORT
ALLOW INCONSISTENT
INCREMENTAL LEVEL 1
TABLESPACE my_tbs
FORMAT '/tmp/xplat_backups/my_tbs_incon1.bck';
ALTER TABLESPACE my_tbs READ ONLY;
Create the final cross-platform level 1 incremental backup of the tablespace my_tbs. This backup contains changes made to the database after the backup that was created in Step 3. It must include the export dump file that contains the tablespace metadata.
BACKUP
FOR TRANSPORT
INCREMENTAL LEVEL 1
TABLESPACE my_tbs
FORMAT '/tmp/xplat_backups/my_tbs_incr.bck'
DATAPUMP FORMAT '/tmp/xplat_backups/my_tbs_incr_dp.bck'
DESTINATION '/tmp';
Move the backup sets and the export dump file generated in Steps 2, 3, and 5 from the source host to the desired directories on the destination host.
Restore the cross-platform level 0 inconsistent backup created in Step 2. Use the FOREIGN DATAFILE clause to specify the data files that must be restored. The FROM PLATFORM clause specifies the name of the platform on which the backup was created.
This clause is required to convert backups on the destination database.
RESTORE
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FOREIGN DATAFILE
6 FORMAT '/tmp/aux/mytbs_6.df',
7 FORMAT '/tmp/aux/mytbs_7.df',
20 FORMAT '/tmp/aux/mytbs_20.df',
10 FORMAT '/tmp/aux/mytbs_10.df'
FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incon.bck';
Recover the foreign data files obtained in Step 8 by applying the first cross-platform level 1 incremental backup that was created in Step 3.
RECOVER
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FOREIGN DATAFILECOPY
'/tmp/aux/mytbs_6.df',
'/tmp/aux/mytbs_7.df',
'/tmp/aux/mytbs_20.df',
'/tmp/aux/mytbs_10.df'
FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incon1.bck';
Recover the foreign data files obtained in Step 8 by applying the final cross-platform level 1 incremental backup that was created in Step 5. This backup was created with the tablespace in read-only mode.
RECOVER
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FOREIGN DATAFILECOPY
'/tmp/aux/mytbs_6.df',
'/tmp/aux/mytbs_7.df',
'/tmp/aux/mytbs_20.df',
'/tmp/aux/mytbs_10.df'
FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incr.bck';
Restore the backup set containing the export dump file. This dump file contains the tablespace metadata required to plug the tablespace into the destination database.
RESTORE
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
DUMP FILE 'my_tbs_restore_md.dmp'
DATAPUMP DESTINATION '/tmp/dump'
FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incr_dp.bck';
RMAN DUPLICATE leverages the restore process (from backup or directly from the source database) to create a new clone or standby database.
Clone the entire CDB, or the root plus selected PDBs:
RMAN> DUPLICATE TARGET DATABASE TO <CDB1>;
RMAN> DUPLICATE TARGET DATABASE TO <CDB1> PLUGGABLE DATABASE <PDB1>, <PDB2>;
For in-place cloning of a PDB, or creating a new PDB within a CDB, use SQL:
SQL> CREATE PLUGGABLE DATABASE ... FROM ...;
SQL> CREATE PLUGGABLE DATABASE ...;
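A sketch of the SQL-level clone; the PDB names and the file-name mapping are illustrative:

```sql
-- Illustrative: clone pdb1 to pdb2 inside the same CDB, remapping the
-- data file paths, then open the new PDB.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1
  FILE_NAME_CONVERT = ('/oradata/pdb1/', '/oradata/pdb2/');
ALTER PLUGGABLE DATABASE pdb2 OPEN;
```

With Oracle Managed Files or PDB_FILE_NAME_CONVERT set, the FILE_NAME_CONVERT clause can be omitted.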
Previously, multisection backups were possible only for full backup sets. Now:
BACKUP INCREMENTAL LEVEL 1
SECTION SIZE 100M
DATAFILE '/oradata/datafiles/users_df.dbf';
BACKUP AS COPY
SECTION SIZE 500M
DATABASE;
==========================================================
Rolling Forward a Physical Standby Database and Synchronizing It with the Primary Database
In this example, the DB_UNIQUE_NAME of the primary database is MAIN and that of the physical standby database is STANDBY. You want to refresh the physical standby database with the latest changes made to the primary database. You can use the RECOVER command with the FROM SERVICE clause to fetch an incremental backup from the primary database and then apply this backup to the physical standby database. The service name of the primary database is main_tns and the compression algorithm used is BASIC.
When the RECOVER command is executed, the incremental backup is created on the primary database and then transferred, over the network, to the physical standby database. RMAN uses the SCN from the standby data file headers and creates the incremental backup starting from this SCN on the primary database. If block change tracking is enabled for the primary database, it is used while creating the incremental backup.
To refresh a physical standby database with changes made to the primary database, use the following steps:
1. Connect to the physical standby database as a user with the SYSBACKUP privilege.
% rman
RMAN> CONNECT TARGET "sbu@standby AS SYSBACKUP";
Enter the password for the sbu user when prompted.
2. Specify that the compression algorithm used is BASIC.
RMAN> SET COMPRESSION ALGORITHM 'basic';
3. Ensure that the tnsnames.ora file on the standby host contains an entry for the primary database service (main_tns). Also ensure that the password files on the source and physical standby database are the same.
4. Recover the data files on the physical standby database by using an incremental backup of the primary database.
The following command creates a compressed, multisection incremental backup on the primary database to recover the standby database:
RECOVER DATABASE
FROM SERVICE main_tns
SECTION SIZE 120M
USING COMPRESSED BACKUPSET;
===================
RMAN can transfer the files required for active database duplication as image copies or backup sets.
When active database duplication is performed using image copies, after RMAN establishes a connection with the source database, the source database transfers the required database files to the auxiliary database. Using image copies may require additional resources on the source database. This method is referred to as the push-based method of active database duplication.
When RMAN performs active database duplication using backup sets, a connection is established with both the source database and the auxiliary database. The auxiliary database then connects to the source database through Oracle Net Services and retrieves the required database files from the source database. This method of active database duplication is referred to as the pull-based method.
Using backup sets for active database duplication provides certain advantages. RMAN can employ unused block compression while creating backups, thus reducing the size of the backups that are transported over the network. Backup sets can be created in parallel on the source database by using multisection backups. You can also encrypt backup sets created on the source database.
Factors That Determine Whether Backup Sets or Image Copies Are Used for Active Database Duplication
RMAN uses image copies to perform active database duplication only when no auxiliary channels are allocated or when the number of auxiliary channels allocated is less than the number of target channels.
RMAN uses backup sets to perform active database duplication when the connection to the target database is established using a net service name and any one of the following conditions is satisfied:
- The DUPLICATE ...
FROM ACTIVE DATABASE command contains either the USING BACKUPSET, USING COMPRESSED BACKUPSET, or SECTION SIZE clause.
- The number of auxiliary channels allocated is equal to or greater than the number of target channels allocated.
Note: Oracle recommends that you use backup sets to perform active database duplication.
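The pull-based method can be sketched as one RMAN session; the connect strings, duplicate database name, and section size are illustrative:

```shell
# Illustrative RMAN sketch: pull-based active duplication over the
# network, using compressed multisection backup sets.
RMAN> CONNECT TARGET sys@prod_tns
RMAN> CONNECT AUXILIARY sys@dup_tns
RMAN> DUPLICATE TARGET DATABASE TO dupdb
        FROM ACTIVE DATABASE
        USING COMPRESSED BACKUPSET
        SECTION SIZE 400M;
```

Because USING COMPRESSED BACKUPSET and SECTION SIZE appear, RMAN chooses the pull-based method and the auxiliary instance fetches the files itself.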
From the user's view, ASM exposes a small number of disk groups. These disk groups consist of ASM disks, and files are striped across all the disks in a disk group. The disk groups are global in nature, and database instances running individually or in clusters have shared access to the disk groups and the files within them. This is illustrated in this picture: the green database has files in Disk Group A that are striped across all its disks, and Disk Group A is shared by both the green database and the purple database.
Notice the ASM instance on every server in the cluster. The ASM instances communicate among themselves and form an ASM cluster.
These simple ideas delivered a powerful solution that eliminates many headaches DBAs and storage administrators once had with managing storage in an Oracle environment.
Oracle Flex ASM enables an Oracle ASM instance to run on a separate physical server from the database servers. With this deployment, larger clusters of Oracle ASM instances can support more ASM clients (database instances) while reducing the Oracle ASM footprint for the overall system.
With Oracle Flex ASM, as with standard ASM, you can consolidate all the storage requirements into a single set of disk groups. However, these disk groups are managed by a small set of Oracle Flex ASM instances running in the cluster. If a host running an ASM instance fails, the ASM clients using that instance fail over to a surviving ASM instance on a different host. You can specify the number of Oracle ASM instances with a cardinality setting; the default is three instances.
The configurations of Oracle ASM in Oracle Database 12c are:
- Standard ASM: In this mode (a standard Oracle ASM cluster), Oracle ASM instances continue to support the existing architecture, in which database clients run with an Oracle ASM instance on the same host computer.
- Oracle Flex ASM: In this mode, database clients running on nodes in a cluster can access Oracle Flex ASM instances remotely for metadata, but perform block I/O operations directly to the Oracle ASM disks. All the nodes within the cluster must have direct access to the ASM disks.
You can choose the Oracle ASM deployment model during the installation of Oracle Grid Infrastructure, and you can use Oracle ASM Configuration Assistant (ASMCA) to enable Oracle Flex ASM after an installation or upgrade. This functionality is only available in an Oracle Grid Infrastructure configuration, not in an Oracle Restart configuration. Oracle Flex ASM is managed by ASMCA, CRSCTL, SQL*Plus, and SRVCTL. To determine whether Oracle Flex ASM has been enabled, use the ASMCMD showclustermode command:
$ asmcmd showclustermode
ASM cluster : Flex mode enabled
You can also use SRVCTL to determine whether Oracle Flex ASM is enabled.
If enabled, then srvctl config asm displays the number of Oracle ASM instances that have been specified for use with the Oracle Flex ASM configuration. For example:
$ srvctl config asm
ASM instance count: 3
Clients are automatically relocated to another instance if an Oracle ASM instance fails. If necessary, clients can be manually relocated using the ALTER SYSTEM RELOCATE CLIENT command. For example:
SQL> ALTER SYSTEM RELOCATE CLIENT 'client-id';
When you issue this statement, the connection to the client is terminated and the client fails over to the least loaded instance. Every database user must have a wallet with credentials to connect to Oracle ASM. CRSCTL commands can be used by the database user to manage this wallet. All Oracle ASM user names and passwords are system generated.
When consolidating pre-12c and Oracle 12c databases on the same system using a cluster with Oracle Flex ASM enabled, the administrator must ensure that a local ASM instance is running on each node in the cluster. This is achieved by issuing a post-installation SRVCTL command at the Oracle Clusterware level, increasing the number of Oracle Flex ASM instances to the number of servers in the cluster (srvctl modify asm -count ALL). This setup preserves the Oracle Database 12c failure protection on local ASM instance failure and enables database consolidation across versions, maintaining the pre-12c behavior for pre-12c databases.
================
Scrubbing Disk Groups
Oracle ASM disk scrubbing improves availability and reliability by searching for data that may be less likely to be read. Disk scrubbing checks for logical data corruption and repairs it automatically in normal and high redundancy disk groups. The scrubbing process repairs logical corruptions using the mirror disks. Disk scrubbing can be combined with disk group rebalancing to reduce I/O resources. The disk scrubbing process has minimal impact on regular I/O in production systems.
You can perform scrubbing on a disk group, a specified disk, or a specified file of a disk group with the ALTER DISKGROUP SQL statement. For example, the following SQL statements show various options used when running the ALTER DISKGROUP disk_group SCRUB statement:
SQL> ALTER DISKGROUP data SCRUB POWER LOW;
SQL> ALTER DISKGROUP data SCRUB FILE 'EXAMPLE.265.767873199' REPAIR POWER HIGH FORCE;
When using ALTER DISKGROUP with the SCRUB option, the following items apply:
- The optional REPAIR option automatically repairs disk corruptions. If the REPAIR option is not specified, then the SCRUB option only checks and reports logical corruptions of the specified target.
- The optional POWER value can be set to AUTO, LOW, HIGH, or MAX.
If the POWER option is not specified, the power value defaults to AUTO, and the power adjusts to the optimum level for the system.
- If the optional WAIT option is specified, the command returns after the scrubbing operation has completed. If the WAIT option is not specified, the scrubbing operation is added to the scrubbing queue and the command returns immediately.
- If the optional FORCE option is specified, the command is processed even if the system I/O load is high or scrubbing has been disabled internally at the system level.
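The check-and-repair idea behind a scrub pass can be sketched as follows. This is a conceptual illustration under simplifying assumptions (per-block CRC checksums, a single mirror copy); it is not how ASM is implemented internally, and all names are made up for the example.

```python
# Conceptual sketch of a scrub pass in a redundant disk group: compare each
# block against its expected checksum and, when REPAIR is requested, restore
# a corrupt block from its mirror copy.
import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

def scrub(primary, mirror, checksums, repair=False):
    """primary/mirror: lists of block payloads; checksums: expected CRCs.
    Returns the indexes of blocks found corrupt on the primary."""
    corrupt = []
    for i, block in enumerate(primary):
        if checksum(block) != checksums[i]:          # logical corruption found
            corrupt.append(i)
            if repair and checksum(mirror[i]) == checksums[i]:
                primary[i] = mirror[i]               # repair from the mirror
    return corrupt

blocks = [b"block-0", b"block-1", b"block-2"]
mirror = list(blocks)                                # healthy mirror copies
sums = [checksum(b) for b in blocks]
blocks[1] = b"garbage"                               # inject a corruption

found = scrub(blocks, mirror, sums, repair=True)
```

Running the sketch, the scrub reports block 1 as corrupt and restores it from the mirror, so a second pass finds nothing, mirroring the REPAIR versus check-only distinction described above.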
Online Redefinition
===========
- Improved sync_interim_table performance with optimized materialized view log processing
- Ability to redefine tables with VPD policies, via a new copy_vpd_opt parameter in start_redef_table
- Improved resilience of finish_redef_table with better lock management
- Better handling of multi-partition redefinition:
  - Multiple partitions can be specified together in a single redefinition session
  - Better availability for partition redefinition, with only partition-level locks
  - Improved performance by logging changes for only the specified partitions
Additional Details
==========
11.2 log handling enhancements:
- Commit-SCN-based MV log
- Deferred MV log purge
- MV log setup and purge removed from the refresh process
For a single MV refresh depending solely on the MV log, log handling could consume up to two-thirds of the total refresh execution time. Removing this overhead can make the refresh up to 3x faster.
===================================
Before: there was no easy way to redefine multiple partitions; a separate START_REDEF_TABLE and FINISH_REDEF_TABLE had to be launched for each partition:
- It took a long time to get all partitions redefined
- It was difficult to redefine all needed partitions in a single maintenance window
Goals/Benefits:
- Pay the non-recurring overheads of START_REDEF_TABLE and FINISH_REDEF_TABLE once (creation of the MV log, metadata modification, and so on)
- Easily move a large number of partitions to new tablespace(s) in an online manner
- Enables the atomic redefinition of more than one partition
- Supports partial completion with the new continue_after_errors parameter
Step 1: Start redefinition with multiple partitions:
DBMS_REDEFINITION.START_REDEF_TABLE('GROCERY', 'SALES',
  int_table=>'tbl1,tbl2,tbl3', part_name=>'sales_p1,sales_p2,sales_p3',
  continue_after_errors=>TRUE);
Step 2: Synchronize the interim tables for multiple partitions:
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('GROCERY', 'SALES',
  int_table=>'tbl1,tbl2,tbl3', part_name=>'sales_p1,sales_p2,sales_p3',
  continue_after_errors=>TRUE);
Step 3: Finish redefinition with multiple partitions:
DBMS_REDEFINITION.FINISH_REDEF_TABLE('GROCERY', 'SALES',
  int_table=>'tbl1,tbl2,tbl3', part_name=>'sales_p1,sales_p2,sales_p3',
  continue_after_errors=>TRUE);
If a partition fails and continue_after_errors=>TRUE, the error is recorded and the next partition is processed.
If a partition fails and continue_after_errors=>FALSE, the operation rolls back by exchanging back all successfully exchanged partitions.
The existing part_name parameter is reused; previously it could not take a list.
===================================
VPD policies are supported with the new copy_vpd_opt parameter in start_redef_table:
- Option 1: Not copied (default). No VPD policies; an error is raised when VPD policies exist on the original table.
- Option 2: Copy VPD policies automatically. Column names and types are unchanged.
- Option 3: Copy VPD policies manually. Applicable when column names or types are changed, or when users want to modify the VPD policies.
===================================
Existing issues (in finish_redef_table):
- The execution could be unpredictably long and sometimes never finish, forcing an interrupt and abort
- It was hard to get the DML lock before moving into the final operation
- The final refresh after acquiring locks could take very long, creating a wider "blackout" window blocking DMLs
Enhancements:
- A timeout allows a graceful exit from finish_redef_table
- The DML-lock-wait timeout window is utilized to refresh the interim table
- The DML lock is acquired in wait mode for a better chance of getting the lock
===================================
Before: redefining partition P locked table T and logged changes to all partitions:
- DMLs could not occur on other partitions
- Unnecessary change logging
12c (only the partition is locked and logged):
- DMLs are allowed on other partitions
- Refresh uses only the needed changes
======================================================
The privileges explicitly granted to SYSDG are (see admin/catadmprvs.sql):
< SYSTEM PRIVILEGES >
alter database
alter session
alter system
select any dictionary
< OBJECT PRIVILEGES >
execute on sys.dbms_drs
select on sys.dba_capture
select on sys.dba_logstdby_events
select on sys.dba_logstdby_log
select on sys.dba_logstdby_history
select on appqossys.wlm_classifier_plan
delete on appqossys.wlm_classifier_plan
Also, SYSDG is implicitly allowed to perform the following operations:
- STARTUP
- SHUTDOWN
- CREATE RESTORE POINT
- DROP RESTORE POINT
- FLASHBACK DATABASE
- SELECT from fixed tables/views (for example, X$ tables, GV$ and V$ views)
================================================
Additional Online Operations
---------------------
- Drop index online (create/rebuild index online in 10g and 11g)
- Alter index unusable online
- Alter index visible/invisible
- Drop constraint online (create constraint online in 11g)
- Set unused column online (add column online in 11g)
- Add column with default is fast (a metadata-only operation) and online (only not-null in 11g)
- Online move partition
- Edition-based redefinition simplification
=======================================
ONLINE MOVE PARTITION: you can move a partition while DMLs are ongoing on the partition being moved. Previously, DDL operations took subtle exclusive locks here and there, leading to a pile-up of DMLs in systems like SAP. One or two of these cases were fixed for SAP in the 11.2 time frame, and the following DDLs were added in 12.1: CREATE/DROP INDEX, ADD/DROP CONSTRAINT, ADD/SET UNUSED COLUMN (all Beta 1).
Making Edition-Based Redefinition easier to use:
- A database can be editions-enabled even with tables that depend on UDTs (such as AQ payloads), without schema reorganization
- Supports MVs, indexes, and virtual columns (based on PL/SQL or views) on editioned objects
- Greatly reduces the need to separate application objects into different schemas
-------------------------
Moving a Table to a New Segment or Tablespace
The ALTER TABLE...MOVE statement enables you to relocate the data of a nonpartitioned table, or of a partition of a partitioned table, into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those that cannot be modified using ALTER TABLE.
You can also use the ALTER TABLE...MOVE statement with a COMPRESS clause to store the new segment using table compression. Tables are usually moved either to enable compression or to perform data maintenance. For example, you can move a table from one tablespace to another.
Most ALTER TABLE...MOVE statements do not permit DML against the table while the statement is executing. The exceptions are the following statements:
- ALTER TABLE ... MOVE PARTITION ... ONLINE
- ALTER TABLE ... MOVE SUBPARTITION ... ONLINE
These two statements support the ONLINE keyword, which enables DML operations to run uninterrupted on the partition or subpartition that is being moved. For operations that do not move a partition or subpartition, you can use online redefinition to leave the table available for DML while moving it.
---------------------------------------------------------
Moving a Table Partition or Subpartition Online
Use the ALTER TABLE...MOVE PARTITION statement or the ALTER TABLE...MOVE SUBPARTITION statement to move a table partition or subpartition, respectively. When you use the ONLINE keyword with either of these statements, DML operations can continue to run uninterrupted on the partition or subpartition that is being moved. If you do not include the ONLINE keyword, then DML operations are not permitted on the data in the partition or subpartition until the move operation is complete.
When you include the UPDATE INDEXES clause, these statements maintain both local and global indexes during the move. Therefore, using the ONLINE keyword with these statements eliminates the time it takes to regain partition performance after the move, because global indexes are maintained and no manual index rebuild is required.
To move a table partition or subpartition online:
1. In SQL*Plus, connect as a user with the necessary privileges to alter the table and move the partition or subpartition.
2. Run the ALTER TABLE ... MOVE PARTITION or ALTER TABLE ... MOVE SUBPARTITION statement.
Example 20-9 Moving a Table Partition to a New Segment
The following statement moves the sales_q4_2003 partition of the sh.sales table to a new segment with advanced row compression and index maintenance included:
ALTER TABLE sales MOVE PARTITION sales_q4_2003
  ROW STORE COMPRESS ADVANCED UPDATE INDEXES ONLINE;
Oracle GoldenGate enables zero-downtime upgrade, migration, or consolidation by synchronizing the Oracle Database 12c system with the existing Oracle or non-Oracle databases in real time. During the synchronization, the production systems can continue to support transaction processing. As soon as the new system on Oracle Database 12c is in sync with the legacy systems, users can switch over immediately, experiencing minimal to zero downtime. While the target Oracle Database 12c system is instantiated (via database tools or ODI) with a bulk data transfer, GoldenGate captures the change data committed on the production systems and stores it in its queue. Once the target system is ready, GoldenGate delivers the change data to the consolidated system to make sure the two are in sync. After that point, the target can be tested with production load, and Oracle GoldenGate Veridata can verify that there is no data discrepancy. When the new system is ready, users can be switched over without any database downtime. With its bidirectional replication capabilities, GoldenGate can capture new transactions happening in the new consolidated environment and deliver them to the legacy systems, keeping those systems up to date as a fail-back option. The other option is to run both the legacy and the new environment concurrently, with GoldenGate performing bidirectional synchronization in real time. This allows phased migration of users and a completely seamless transition into the new system with minimized risk. In addition to removing downtime and minimizing risk, GoldenGate allows the IT team to test the new environment without time pressure.
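The capture-while-instantiating pattern described above can be sketched with a toy change queue. This is a hedged illustration of the general technique, not GoldenGate's implementation; the table and key names are invented for the example.

```python
# Sketch of bulk instantiation plus change-data capture: changes committed on
# the source during the bulk copy are queued, then applied to the target so
# the two systems converge before switchover.
from copy import deepcopy

source = {"acct1": 100, "acct2": 200}
trail = []                                  # queue of captured change records

def commit(table, key, value):
    table[key] = value
    trail.append((key, value))              # capture each committed change

# 1. Bulk-instantiate the target from a point-in-time copy of the source
target = deepcopy(source)

# 2. Production keeps committing while the target is being loaded
commit(source, "acct1", 150)
commit(source, "acct3", 300)

# 3. Deliver the queued changes to bring the target in sync
for key, value in trail:
    target[key] = value

# source == target -> the systems have converged, ready for switchover
```

The verification step (comparing source and target after applying the trail) is the role Veridata plays in the real workflow.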
Active-active database replication is a key use case for GoldenGate when it comes to achieving high availability. GoldenGate's bidirectional real-time data replication works across heterogeneous systems. Multi-master database replication with GoldenGate helps eliminate downtime, planned or unplanned, because the remaining databases continue to operate if one database fails. It also increases system performance by allowing transaction load distribution between completely parallel systems. Data can be filtered to move only certain tables or rows, and there are no distance limitations. GoldenGate offers out-of-the-box conflict management to handle the data collisions that can come with multi-master replication.
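One common conflict-management policy for multi-master replication can be sketched as "latest change wins". This is only a simplified illustration of one resolution strategy; GoldenGate supports configurable resolution routines, and the row names and timestamps here are hypothetical.

```python
# Sketch of timestamp-based conflict resolution for active-active replication:
# when the same row is changed on two sites, keep the newer version.

def resolve(local, incoming):
    """Each version is (value, commit_timestamp); keep the newer one."""
    return incoming if incoming[1] > local[1] else local

site_a = {"row1": ("A-update", 1001)}
# The same row was changed on site B at a later timestamp and replicated here
incoming = ("B-update", 1005)
site_a["row1"] = resolve(site_a["row1"], incoming)   # B's later change wins
```

Both sites apply the same deterministic rule, so they converge on the same value regardless of the order in which changes arrive.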
Plan decision deferred until runtime:
- The final decision is based on statistics collected during execution
- Alternate sub-plans are pre-computed and stored in the cursor
- Statistics collectors are inserted at key points in the plan
- Each sub-plan has a valid range for the statistics collected
- If the statistics prove to be out of range, sub-plans can be swapped
- Requires buffering near the swap point to avoid returning rows to the user prematurely
- Only join methods and the distribution method can change
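The buffering-and-swap mechanism listed above can be sketched in a few lines. This is an illustrative model, not the optimizer's code: the inflection-point value and the two candidate join implementations are assumptions made for the example.

```python
# Sketch of an adaptive join: a statistics collector buffers rows from the
# driving side; if the row count crosses the precomputed inflection point,
# the alternate sub-plan (hash join) is chosen over the default (nested loops).

def adaptive_join(left_rows, right_rows, inflection_point=10):
    buffered = []
    for row in left_rows:                  # statistics collector buffers rows
        buffered.append(row)
        if len(buffered) > inflection_point:
            break                          # stats out of range: stop counting
    if len(buffered) > inflection_point:
        chosen = "HASH JOIN"               # better for many driving rows
        build = {}
        for k, v in right_rows:            # build a hash table on the inner side
            build.setdefault(k, []).append(v)
        result = [(k, lv, rv) for k, lv in left_rows for rv in build.get(k, [])]
    else:
        chosen = "NESTED LOOPS"            # better for few driving rows
        result = [(k, lv, rv) for k, lv in buffered
                  for rk, rv in right_rows if k == rk]
    return chosen, result

small = [(1, "a"), (2, "b")]
big = [(i, "x") for i in range(100)]
lookup = [(1, "L1"), (2, "L2")]

plan_small, _ = adaptive_join(small, lookup)   # few rows -> NESTED LOOPS
plan_big, _ = adaptive_join(big, lookup)       # many rows -> HASH JOIN
```

Note how no result row is returned until the plan choice is final; that is why buffering near the swap point is required.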
These are some of the key differentiating features of Oracle NoSQL Database.
Flexibility
A key differentiator is that the combination of the simple key-value data model and configurable ACID transactions maximizes the flexibility and configurability of NoSQL DB. It allows more applications to use a common, enterprise-grade distributed storage technology. Applications do not have to conform to an out-of-the-box transaction model, which is often limited or non-existent; instead, they can specify the transaction semantics on a per-operation basis. Applications do not have to conform to a pre-defined document, columnar, or graph data model: they can use the NoSQL DB key-value pair model in the way that best suits the application. Key-value pairs are the simplest and most flexible data model. Keys are simple structures or strings that encapsulate a record hierarchy. Values can be simple byte arrays, complex application structures, or JSON objects. Key-value pair records provide very simple and very fast (1-2 milliseconds per operation) data access. Key-value pairs can be used to model document storage (like Berkeley DB XML), columnar storage (like time-series vectors), and graphs (like RDF data). Oracle's key-value pairs utilize a flexible key definition that allows the application developer to leverage it for both data distribution and data clustering. Transactions and transactional consistency are a key element of every application. NoSQL DB supports ACID transactions in the storage layer (within a data partition) and allows the application to configure the transactional behavior.
Easy to use
Smart topology is about: a) distributed topology awareness, b) automated configuration and load balancing, and c) automated failure detection and failover handling. This is a key differentiator because it makes NoSQL DB much easier to configure and manage.
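The hierarchical key-value model described above can be sketched with a plain dictionary. This is a simplified illustration, not the Oracle NoSQL Database API: the major/minor split mirrors the idea that one part of the key drives distribution and clustering, and all record names are invented.

```python
# Sketch of a hierarchical key-value store: a key has a major path (used for
# distribution and clustering) and a minor path; the value is opaque to the
# store (bytes, a structure, or a JSON-like object).
store = {}

def make_key(major, minor=()):
    # Major components cluster related records together
    return (tuple(major), tuple(minor))

def put(major, minor, value):
    store[make_key(major, minor)] = value

def get(major, minor=()):
    return store.get(make_key(major, minor))

def multi_get(major):
    # All records sharing a major path are clustered on the same partition,
    # so they can be read together with transactional semantics
    prefix = tuple(major)
    return {k: v for k, v in store.items() if k[0] == prefix}

put(["user", "alice"], ["profile"], {"email": "alice@example.com"})
put(["user", "alice"], ["login", "2013-06-01"], b"session-token")
profile = get(["user", "alice"], ["profile"])
records = multi_get(["user", "alice"])   # both records share the major path
```

The same major path ("user/alice") groups the profile and login records, which is what lets the store offer ACID semantics within a partition.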
In a nutshell, smart topology helps customers because it:
- Automatically allocates resources
- Guarantees even distribution of master nodes
- Guarantees HA distribution of replicas
- Minimizes the impact of storage node failures
- Avoids outages due to admin mistakes
- Simplifies administration through automation
The NoSQL DB driver (linked into each client application) and the storage nodes both maintain a map of the current topology and its state. This allows NoSQL DB to optimize query operations and minimize the impact of storage node failures. NoSQL DB does NOT require complex topology planning and management: you simply tell NoSQL DB how many storage nodes are available, along with a couple of simple configuration parameters (replication factor and storage node capacity), and the system will optimally configure itself, ensuring proper load balancing and resilience to failure. If topology changes cause the system to become unbalanced (for example, if certain storage nodes become overloaded), the system can automatically rebalance itself while remaining online the entire time. The storage nodes automatically detect and respond to storage node failure. If the master fails, a new master is elected. If a replica fails, its status is updated to "offline" and queries are served by the remaining replicas.
NoSQL Database comes integrated with:
- Oracle Database, via external tables
- Hadoop MapReduce, via the KVInputFormat
- Oracle Event Processing: NoSQL DB can serve as a data source for data lookups
- Oracle Coherence: NoSQL DB can serve as the backing store for Coherence, faulting in objects that are not in the Coherence cache grid and writing out objects that have been modified
- RDF/Jena: NoSQL DB can store RDF graph data and perform SPARQL queries
This is a key differentiator because other NoSQL providers tend to have a single product or product silos. It is especially important when you consider that NoSQL applications function within an IT infrastructure ecosystem. Interaction and interoperability with the RDBMS, the data warehouse, application in-memory caches, business rules engines, and so on, is a crucial characteristic in leveraging the value of the NoSQL data within an overall enterprise data management solution.
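The smart-topology behavior described earlier (keys hashed to partitions, replication groups with one master, and master election on failure) can be sketched as follows. This is illustrative only; the replication factor, node names, and two-group layout are assumptions made for the example.

```python
# Sketch of topology-aware placement and failover: keys hash to partitions,
# each partition maps to a replication group with one master and N-1
# replicas, and a surviving replica is promoted when the master fails.
import hashlib

REPLICATION_FACTOR = 3
groups = {0: ["sn1", "sn2", "sn3"],      # member listed first is the master
          1: ["sn4", "sn5", "sn6"]}

def partition_for(major_key: str) -> int:
    # Hash the major key so records distribute evenly across the groups
    digest = hashlib.md5(major_key.encode()).hexdigest()
    return int(digest, 16) % len(groups)

def master_of(group_id: int) -> str:
    return groups[group_id][0]

def fail_node(node: str):
    # Drop the failed node; if it was the master, the next member is promoted
    for members in groups.values():
        if node in members:
            members.remove(node)

gid = partition_for("user/alice")
old_master = master_of(gid)
fail_node(old_master)                 # master failure triggers election
new_master = master_of(gid)           # a former replica now serves writes
```

Because both the driver and the storage nodes hold the same topology map, reads can continue against the remaining replicas while the election happens.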
Emphasize the long history of analytical functionality in SQL and the database.
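A small runnable example of the analytic SQL referred to above. SQLite is used here only so the example is self-contained (its window-function support requires SQLite 3.25 or later); the RANK() and SUM() OVER syntax shown is the same ANSI analytic-function style that Oracle SQL has offered since analytic functions were introduced in Oracle 8i. The table and data are invented for the demo.

```python
# Demo of SQL analytic (window) functions: rank rows within a group and
# compute a group total without collapsing the rows, all inside the database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('EMEA', 100), ('EMEA', 300), ('APAC', 200), ('APAC', 50);
""")

rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales
    ORDER BY region, rnk
""").fetchall()
# Each row keeps its detail values while also carrying its in-region rank
# and the region total, e.g. ('APAC', 200, 1, 250)
```

The same query, unchanged apart from the connection, runs against Oracle Database, which is the point: the analytics live in SQL, next to the data.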