T3 is an optimized protocol used to transport data between WebLogic Server and
other Java programs, including clients and other WebLogic Servers. WebLogic
Server keeps track of every Java Virtual Machine (JVM) with which it connects,
and creates a single T3 connection to carry all traffic for a JVM.
For example, if a Java client accesses an enterprise bean and a JDBC connection
pool on WebLogic Server, a single network connection is established between
the WebLogic Server JVM and the client JVM.
Oracle Support NOTE 1465038.1, "Calculating Usable Space in Exadata Cell"
[grid@dodpdb04 ~]$ asmcmd -p (the -p flag makes the prompt show the present working directory)
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB   Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N      512     4096   4194304  36384768  15924908  3307706          6308601         0              N             DATA_DODP/
MOUNTED  NORMAL  N      512     4096   4194304  2087680   303648    257400           23124           0              Y             DBFS_DG/
MOUNTED  NORMAL  N      512     4096   4194304  9090816   6850084   826437           3011823         0              N             RECO_DODP/
ASMCMD [+] > du DATA_DODP/
Used_MB Mirror_used_MB
10228996 20458476
So the disk group redundancy is NORMAL in DATA_DODP.
[grid@dodpdb04 ~]$ echo "36384768-20458476" | bc
15926292
Total_MB minus Mirror_used_MB, which closely matches Free_MB.
Divide the free space by 2, since NORMAL redundancy keeps two copies of every extent:
[grid@dodpdb04 ~]$ echo "15926292/2" | bc
7963146
which is close to Usable_file_MB, the space available for new files. (The exact
figure also subtracts Req_mir_free_MB first, per the NOTE above.)
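The exact figure from the NOTE subtracts Req_mir_free_MB before halving; a minimal shell sketch using the DATA_DODP values reported by lsdg above:

```shell
# Usable_file_MB = (Free_MB - Req_mir_free_MB) / 2 for NORMAL redundancy
free_mb=15924908
req_mir_free_mb=3307706
usable_file_mb=$(( (free_mb - req_mir_free_mb) / 2 ))
echo "$usable_file_mb"    # 6308601, exactly the Usable_file_MB lsdg reports
```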
[grid@dodpdb04 ~]$ asmcmd du DATA_DODP/ (asmcmd commands can also be run in non-interactive mode)
Used_MB Mirror_used_MB
10228996 20458476
ASMCMD is very slow. How can I speed it up?
The asmcmd utility appears to be very slow. This slowness is a result of queries
against the V$ASM_DISKGROUP view. To solve this problem, edit the
$ORACLE_HOME/bin/asmcmdcore script and change all v$asm_diskgroup references to
v$asm_diskgroup_stat.
V$ASM_DISKGROUP and V$ASM_DISKGROUP_STAT provide exactly the same information,
but the %_STAT view operates from cache, while V$ASM_DISKGROUP rescans all disk
headers. This method is also used by Oracle in their Enterprise Manager product.
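For example, a quick sketch of querying the cached view directly (column list trimmed for brevity), which avoids the disk header rescan:

```sql
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup_stat;
```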
What is the SYSASM role?
Starting with Oracle 11g, the SYSASM role can be used to administer ASM
instances. You can continue using the SYSDBA role to connect to ASM, but it
will generate the following warning message at startup/shutdown, CREATE
DISKGROUP, ADD DISK, and so on.
Alert log entry:
WARNING: Deprecated privilege SYSDBA for command 'STARTUP'
How can we copy files from/to ASM?
You can use RMAN or the DBMS_FILE_TRANSFER.COPY_FILE procedure to copy files
between ASM and the filesystem, in either direction.
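A sketch of the DBMS_FILE_TRANSFER approach; the directory paths, directory object names, and file name here are illustrative assumptions:

```sql
-- Directory objects (paths and names are assumptions for this example)
CREATE DIRECTORY src_dir AS '/u01/app/oracle/oradata/orcl';
CREATE DIRECTORY dst_dir AS '+DATA/orcl/datafile';

BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users01.dbf');
END;
/
```

The same procedure works in the opposite direction by swapping the directory objects.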
Using the same disk size for failure groups in NORMAL/HIGH redundancy will
prevent issues like ORA-15041, because the file extents need to be mirrored
across the disks.
I have created an Oracle database using DBCA and have a different home for ASM
and the Oracle Database. I see that the listener is running from ASM_HOME. Is
that correct?
This is fine. When using a different home for ASM, you need to run the listener
from the ASM_HOME instead of the ORACLE_HOME.
How does one create a database directly on ASM?
The trick is to create an SPFILE and restart the instance before issuing the
CREATE DATABASE statement:
Code:
STARTUP NOMOUNT PFILE=initorcl_0.ora
CREATE SPFILE FROM pfile='initorcl_0.ora';
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
Point all OMF files into ASM:
Code:
ALTER SYSTEM SET db_create_file_dest = '+DATA';
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 134G;
ALTER SYSTEM SET db_recovery_file_dest = '+RECOVER';
Issue the create database command:
Code:
CREATE DATABASE orcl
UNDO TABLESPACE undots
DEFAULT TEMPORARY TABLESPACE temp
character set "WE8ISO8859P1"
national character set "AL16UTF16";
Why is the ASM instance not recognized by DBCA?
I had the same error, and the way I fixed it was by changing the TNS_ADMIN
value in .bash_profile.
Old value:
TNS_ADMIN=$ORACLE_HOME/network/admin
New value:
TNS_ADMIN=$ASM_HOME/network/admin
My ORACLE_HOME=/d01/app/oracle/product/10.2.0/db_1
My ASM_HOME=/d01/app/oracle/product/10.2.0/asm
Once this was done, I re-executed my .bash_profile, started dbca, and the error
went away.
Also, after you create your database, you might have to set the TNS_ADMIN
environment variable in srvctl to start the database. A wrong TNS_ADMIN setting
might cause the following errors when starting the database with srvctl (but
not with SQL*Plus):
$ srvctl start database -d ProdDb
PRKP-1001 : Error starting instance ProdDb on node node1
CRS-0215: Could not start resource 'ora.ProdDb.ProdDb1.inst'.
PRKP-1001 : Error starting instance ProdDb2 on node node2
CRS-0215: Could not start resource 'ora.ProdDb.ProdDb2.inst'.
For example:
srvctl setenv database -d ProdDb -t TNS_ADMIN='/d01/app/oracle/product/10.2.0/asm/network/admin'
This shows that all you need is an +ASM entry in the tnsnames.ora file
under /d01/app/oracle/product/10.2.0/db_1/network/admin.
Bigfile vs standard db files
The performance of database opens, checkpoints, and DBWR processes should
improve if data is stored in bigfile tablespaces instead of traditional
tablespaces. However, increasing the datafile size might increase the time to
restore a corrupted file or create a new datafile.
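As a sketch, with db_create_file_dest pointing at an ASM disk group, a bigfile tablespace is a single large OMF datafile (the tablespace name and size below are illustrative assumptions):

```sql
CREATE BIGFILE TABLESPACE big_data DATAFILE SIZE 10G;
```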
In Oracle 11g Release 2, instances register with SCAN listeners only as remote
listeners. In your case it should be:
REMOTE_LISTENER=<scan-name>:<port>
That is the purpose of REMOTE_LISTENER.
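A sketch of the corresponding setting (the SCAN name and port here are assumptions):

```sql
ALTER SYSTEM SET remote_listener='myscan.example.com:1521' SCOPE=BOTH SID='*';
```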
Explanation of each parameter in Data Guard
1.LOG_ARCHIVE_CONFIG =
{
[ SEND | NOSEND ]
[ RECEIVE | NORECEIVE ]
[ DG_CONFIG=(remote_db_unique_name1 [, ... remote_db_unique_name9]) |
NODG_CONFIG ]
}
SEND enables sending redo to remote destinations; RECEIVE enables receiving
redo from remote databases; DG_CONFIG lists the DB_UNIQUE_NAME of every
database that is part of the Data Guard configuration.
2.Do not use the default value, VALID_FOR=(ALL_LOGFILES, ALL_ROLES), for
logical standby databases.
log_archive_dest_state_2=DEFER/ENABLE;
3.fal_server and fal_client are used for archive log gap resolution (FAL =
Fetch Archive Log), which keeps role transitions smooth.
4.A standby configuration should have one more standby redo log file group than
the number of online redo log file groups on the primary database. This is
because logical standby databases may require more standby redo log files (or
additional ARCn processes) depending on the workload: logical standby databases
also write to online redo log files, which take precedence over standby redo
log files. Thus, the standby redo log files may not be archived as quickly as
the online redo log files.
v$standby_log
select member from v$logfile where type='STANDBY';
RMAN>backup current controlfile for standby;
5.cp orapwprimary orapwstandby
Copy the primary's password file to the standby and rename it according to the
standby SID.
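Pulling the items above together, a primary-side sketch (the db_unique_names prim/stby, the log group number, and the size are assumptions for illustration):

```sql
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,stby)';
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;
ALTER SYSTEM SET fal_server='stby';
-- one more standby redo log group than online groups, sized like the online logs
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 512M;
```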
chown -R oracle:oinstall /u01/app is never equivalent to chown -R
oracle:oinstall /u01.
scope=both is not valid when the database is in NOMOUNT.
Error: ORA-12528: TNS:listener: all appropriate instances are blocking new
connections
Reason: the DB is currently starting up or is not available.
We should have all the archive log files from the backup onwards.
The DORECOVER clause in DUPLICATE DATABASE means: we have taken a backup at the
primary with BACKUP DATABASE PLUS ARCHIVELOG, so Oracle will restore the
datafiles from the backup set and recover them using the archived logs that
were backed up.
(In any case, once MRP starts, it performs recovery.)
Backing Up Logs with BACKUP ... PLUS ARCHIVELOG
You can add archived redo logs to a backup of other files by using the
BACKUP ... PLUS ARCHIVELOG clause. Adding BACKUP ... PLUS ARCHIVELOG causes RMAN
to do the following:
Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
Runs BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, then
RMAN skips logs that it has already backed up to the specified device.
Backs up the rest of the files specified in the BACKUP command (if the object
is the database, then the complete database).
Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command again.
Backs up any remaining archived logs generated during the backup.
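The whole sequence above is driven by a single command, for example:

```sql
BACKUP DATABASE PLUS ARCHIVELOG;
```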
Why do we need a standby control file backup?
A standby controlfile sets a flag by which the database identifies itself as a
standby rather than a normal database.
The controlfile must be created after the backup of the datafiles:
restore standby controlfile from 'C:ORACLECONTROL08N3GEA6_1_1.BKP';
Even if the above backup piece was created from the current controlfile,
restoring it as a standby controlfile sets that flag.
If you have ILOM, you can view the progress of the restart through its
interface: ssh to the ILOM address, then run:
start /SP/console
What is the ASMDeactivationOutcome attribute?
This attribute shows whether the grid disk can be deactivated without loss of
data. A value of "Yes" indicates you can deactivate this grid disk without data
loss.
CellCLI> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
Is crsctl status resource -t the same as crs_stat -t?
In a hash join (small table as the build side, large table as the probe side),
Oracle can create a Bloom filter from the small table to pre-filter rows from
the large table; on Exadata this filter can be offloaded to the storage cells.
Exadata is a kind of "different Oracle"; for example, it does not require as
much indexing on tables.
An Exadata DBA needs experience in database, storage, and OS administration.
A good understanding of SmartScan and Hybrid Columnar Compression. (ctrieb)
If you can answer the question "Why with Exadata do I probably not need [some or
all of my] indexes?" you are on the right road. (Dan Morgan).
Learn all you can about direct path reads, as they are critical to enabling
smart scan. Serial direct path reads are done more often in 11gR2, probably
because of Exadata's influence.
Learn about parallel query and the mechanisms available to throttle it (queuing
is now available in 11gR2).
Knowing something about Infiniband would probably be a good idea as well since
you'll have to figure out how to connect to external devices (tape drives for
example).
1. Look at the cssd.log files on both nodes; usually we will get more
information on the second node if the first node is evicted. Also take a look
at the crsd.log file.
2. The evicted node will have a core dump file generated and system reboot
info.
3. Find out if there was a node reboot, and whether it was because of CRS or
something else; check the system reboot time.
4. If you see "Polling" keywords with decreasing percentage values in the
cssd.log file, the eviction is probably due to the network. If you see
"Diskpingout" or something else related to the disk, the eviction is because of
a disk timeout.
Storage access and private interconnect use different connectivity
diagcollection.pl collects diagnostic information about CRS from $CRS_HOME.
crsctl debug trace {css|crs|evm} enables tracing for the given process.
crs_start
crs_start is used to start resources, either one at a time or for the entire
cluster.
To start all resources across a cluster, the crs_start command can be used with
the -all option:
$ crs_start -all
To stop the clusterware stack itself, use crsctl stop crs on each node (or, in
11gR2, crsctl stop cluster -all).