Thursday, December 23, 2010

Where to find MAXxxxxxx control file parameters in Data Dictionary

PURPOSE
How to find information about the following control file parameters:
- MAXLOGFILES
- MAXDATAFILES
- MAXLOGHISTORY
- MAXLOGMEMBERS
- MAXINSTANCES

Explanation
  • The values of MAXLOGFILES, MAXLOGMEMBERS, MAXDATAFILES, MAXINSTANCES and MAXLOGHISTORY are set either during CREATE DATABASE or during CREATE CONTROLFILE
  • In all Oracle versions, the CREATE CONTROLFILE syntax can be regenerated from the data dictionary using 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE;' and checked in the resulting trace file
  • All of these parameters except MAXLOGMEMBERS can be seen in v$controlfile_record_section:
    select decode(TYPE,
      'REDO LOG',    'MAXLOGFILES',
      'DATAFILE',     'MAXDATAFILES',
      'CKPT PROGRESS','MAXINSTANCES',
      'REDO THREAD','MAXINSTANCES',
      'LOG HISTORY','MAXLOGHISTORY') TYPE,RECORDS_TOTAL
    from v$controlfile_record_section
    where type in
    ('REDO LOG','DATAFILE','CKPT PROGRESS','REDO THREAD','LOG HISTORY');


    TYPE          RECORDS_TOTAL
    ------------- -------------
    MAXINSTANCES             11
    MAXINSTANCES              8
    MAXLOGFILES             128
    MAXDATAFILES          10000
    MAXLOGHISTORY         10225
  • MAXLOGMEMBERS is available via "x$kccdi.dimlm".
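
For example (a minimal sketch; X$ fixed tables can only be queried as SYS, and their layout is undocumented and version-specific):

    select dimlm "MAXLOGMEMBERS" from x$kccdi;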

Friday, December 17, 2010

Database Crashes With ORA-00494

Symptoms

The database may crash with the following errors in the alert log:

ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 5488'

Incident details in: d:\oracle\admin\ecore\diag\rdbms\ecore\ecore\incident\incdir_12529\ecore_lgwr_5484_i12529.trc
Killing enqueue blocker (pid=5488) on resource CF-00000000-00000000 by (pid=5484)
by killing session 3.1
Killing enqueue blocker (pid=5488) on resource CF-00000000-00000000 by (pid=5484)
by terminating the process
LGWR (ospid: 5484): terminating the instance due to error 2103
Fri Dec 17 03:05:09 2010
Instance terminated by LGWR, pid = 5484

Cause


LGWR has killed the CKPT process, causing the instance to crash.
From the alert.log we can see that the database waited too long for the CF (control file) enqueue, so the following error was reported:
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 5488'

LGWR then killed the enqueue blocker, in this case the CKPT process, which brought the instance down.
Checking the alert.log further, we can see that the redo log switch frequency was very high (almost one switch per minute).
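
To check the log switch frequency, a query like the following against v$log_history can be used (a minimal sketch):

    select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) switches
    from v$log_history
    group by to_char(first_time, 'YYYY-MM-DD HH24')
    order by 1;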

Solution


1. Size the online redo logs so that log switches occur roughly every 20-30 minutes; this reduces contention on the control files.

You can use the OPTIMAL_LOGFILE_SIZE column of the V$INSTANCE_RECOVERY view to determine the size of your online redo logs. This column shows the redo log file size in megabytes that is considered optimal based on the current setting of FAST_START_MTTR_TARGET. If it consistently shows a value greater than the size of your smallest online log, you should configure all your online logs to be at least this size.
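
For example:

    select optimal_logfile_size from v$instance_recovery;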

References

BUG:7448854 - ORA-00494 CAUSE THE INSTANCE TO CRASH
Metalink Note: 753290.1

Friday, December 10, 2010

Oracle enqueue wait

The Oracle 10g enqueue waits are listed below. Aggregated statistics for each enqueue type are displayed by the view V$ENQUEUE_STAT.
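
For example, to see which enqueue types an instance has waited on most (a minimal sketch):

    select eq_type, total_req#, total_wait#, cum_wait_time
    from v$enqueue_stat
    order by cum_wait_time desc;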


enq: AD - allocate AU:Synchronizes accesses to a specific OSM (Oracle Storage Manager, the pre-release name of ASM) disk AU


enq: AD - deallocate AU:Synchronizes accesses to a specific OSM disk AU

enq: AF - task serialization:Serializes access to an advisor task

enq: AG - contention:Synchronizes generation use of a particular workspace

enq: AO - contention:Synchronizes access to objects and scalar variables

enq: AS - contention:Synchronizes new service activation

enq: AT - contention:Serializes alter tablespace operations

enq: AW - AW$ table lock:Allows global access synchronization to the AW$ table (analytical workplace tables used in OLAP option)

enq: AW - AW generation lock:Gives in-use generation state for a particular workspace

enq: AW - user access for AW:Synchronizes user accesses to a particular workspace

enq: AW - AW state lock:Row lock synchronization for the AW$ table

enq: BR - file shrink:Lock held to prevent file from decreasing in physical size during RMAN backup

enq: BR - proxy-copy:Lock held to allow cleanup from backup mode during an RMAN proxy-copy backup

enq: CF - contention:The CF enqueue is a Control File enqueue   and happens during parallel access to the control files.  The CF enqueue can be seen during any action that requires reading the control file, such as redo log archiving, redo log switches and begin backup commands

enq: CI - contention:The CI enqueue is the Cross Instance enqueue  and happens when a session executes a cross instance call such as a query over a database link

enq: CL - drop label:Synchronizes accesses to label cache when dropping a label

enq: CL - compare labels:Synchronizes accesses to label cache for label comparison

enq: CM - gate:Serializes access to instance enqueue

enq: CM - instance:Indicates OSM disk group is mounted

enq: CT - global space management:Lock held during change tracking space management operations that affect the entire change tracking file

enq: CT - state:Lock held while enabling or disabling change tracking to ensure that it is enabled or disabled by only one user at a time

enq: CT - state change gate 2:Lock held while enabling or disabling change tracking in RAC

enq: CT - reading:Lock held to ensure that change tracking data remains in existence until a reader is done with it

enq: CT - CTWR process start/stop:Lock held to ensure that only one CTWR (Change Tracking Writer, which tracks block changes and is initiated by the alter database enable block change tracking command) process is started in a single instance

enq: CT - state change gate 1:Lock held while enabling or disabling change tracking in RAC

enq: CT - change stream ownership:Lock held by one instance while change tracking is enabled to guarantee access to thread-specific resources

enq: CT - local space management:Lock held during change tracking space management operations that affect just the data for one thread

enq: CU - contention:Recovers cursors in case of death while compiling

enq: DB - contention:Synchronizes modification of database wide supplemental logging attributes

enq: DD - contention:Synchronizes local accesses to ASM (Automatic Storage Management) disk groups

enq: DF - contention:Enqueue held by foreground or DBWR when a datafile is brought online in RAC

enq: DG - contention:Synchronizes accesses to ASM disk groups

enq: DL - contention:Lock to prevent index DDL during direct load

enq: DM - contention:Enqueue held by foreground or DBWR to synchronize database mount/open with other operations

enq: DN - contention:Serializes group number generations

enq: DP - contention:Synchronizes access to LDAP parameters

enq: DR - contention:Serializes the active distributed recovery operation

enq: DS - contention:Prevents a database suspend during LMON reconfiguration

enq: DT - contention:Serializes changing the default temporary table space and user creation

enq: DV - contention:Synchronizes access to lower-version Diana (PL/SQL intermediate representation)

enq: DX - contention:Serializes tightly coupled distributed transaction branches

enq: FA - access file:Synchronizes accesses to open ASM files

enq: FB - contention:This is the Format Block enqueue, used only when data blocks are using ASSM (Automatic Segment Space Management, i.e. bitmapped freelists). As we might expect, common FB enqueue waits relate to buffer busy conditions, especially since ASSM tends to cause performance problems under heavy DML loads

enq: FC - open an ACD thread:LGWR opens an ACD thread

enq: FC - recover an ACD thread:SMON recovers an ACD thread

enq: FD - Marker generation:Synchronization

enq: FD - Flashback coordinator:Synchronization

enq: FD - Tablespace flashback on/off:Synchronization

enq: FD - Flashback on/off:Synchronization

enq: FG - serialize ACD relocate:Only 1 process in the cluster may do ACD relocation in a disk group

enq: FG - LGWR redo generation enq race:Resolves race condition to acquire Disk Group Redo Generation Enqueue

enq: FG - FG redo generation enq race:Resolves race condition to acquire Disk Group Redo Generation Enqueue

enq: FL - Flashback database log:Synchronizes access to Flashback database log

enq: FL - Flashback db command:Synchronizes Flashback Database and deletion of flashback logs

enq: FM - contention:Synchronizes access to global file mapping state

enq: FR - contention:Begins recovery of disk group

enq: FS - contention:Synchronizes recovery and file operations or synchronizes dictionary check

enq: FT - allow LGWR writes:Allows LGWR to generate redo in this thread

enq: FT - disable LGWR writes:Prevents LGWR from generating redo in this thread

enq: FU - contention:Serializes the capture of the DB feature, usage, and high watermark statistics

enq: HD - contention:Serializes accesses to ASM SGA data structures

enq: HP - contention:Synchronizes accesses to queue pages

enq: HQ - contention:Synchronizes the creation of new queue IDs

enq: HV - contention:The HV enqueue is similar to the HW enqueue but for parallel direct path INSERTs

enq: HW - contention:The HW (High Water) enqueue occurs when competing processes are inserting into the same table and are trying to increase the high water mark of the table simultaneously. The HW enqueue can sometimes be removed by adding freelists or moving the segment to ASSM

enq: IA - contention:Information not available

enq: ID - contention:Lock held to prevent other processes from performing controlfile transaction while NID is running

enq: IL - contention:Synchronizes accesses to internal label data structures

enq: IM - contention for blr:Serializes block recovery for IMU txn

enq: IR - contention:Synchronizes instance recovery

enq: IR - contention2:Synchronizes parallel instance recovery and shutdown immediate

enq: IS - contention:Synchronizes instance state changes

enq: IT - contention:Synchronizes accesses to a temp object’s metadata

enq: JD - contention:Synchronizes dates between job queue coordinator and slave processes

enq: JI - contention:Lock held during materialized view operations (such as refresh, alter) to prevent concurrent operations on the same materialized view

enq: JQ - contention:Lock to prevent multiple instances from running a single job

enq: JS - contention:Synchronizes accesses to the job cache

enq: JS - coord post lock:Lock for coordinator posting

enq: JS - global wdw lock:Lock acquired when doing wdw ddl

enq: JS - job chain evaluate lock:Lock when job chain evaluated for steps to create

enq: JS - q mem clnup lck:Lock obtained when cleaning up q memory

enq: JS - slave enq get lock2:Gets run info locks before slv objget

enq: JS - slave enq get lock1:Slave locks exec pre to sess strt

enq: JS - running job cnt lock3:Lock to set running job count epost

enq: JS - running job cnt lock2:Lock to set running job count epre

enq: JS - running job cnt lock:Lock to get running job count

enq: JS - coord rcv lock:Lock when coord receives msg

enq: JS - queue lock:Lock on internal scheduler queue

enq: JS - job run lock - synchronize:Lock to prevent job from running elsewhere

enq: JS - job recov lock:Lock to recover jobs running on crashed RAC inst

enq: KK - context:Lock held by open redo thread, used by other instances to force a log switch

enq: KM - contention:Synchronizes various Resource Manager operations

enq: KO - fast object checkpoint:The KO enqueue (a.k.a. enq: KO - fast object checkpoint) is seen in Oracle star transformations, and high waits on it can indicate a sub-optimal DBWR background process


enq: KP - contention:Synchronizes kupp process startup

enq: KT - contention:Synchronizes accesses to the current Resource Manager plan

enq: MD - contention:Lock held during materialized view log DDL statements

enq: MH - contention:Lock used for recovery when setting Mail Host for AQ e-mail notifications

enq: ML - contention:Lock used for recovery when setting Mail Port for AQ e-mail notifications

enq: MN - contention:Synchronizes updates to the LogMiner dictionary and prevents multiple instances from preparing the same LogMiner session

enq: MR - contention:Lock used to coordinate media recovery with other uses of datafiles

enq: MS - contention:Lock held during materialized view refresh to set up MV log

enq: MW - contention:Serializes the calibration of the manageability schedules with the Maintenance Window

enq: OC - contention:Synchronizes write accesses to the outline cache

enq: OL - contention:Synchronizes accesses to a particular outline name

enq: OQ - xsoqhiAlloc:Synchronizes access to olapi history allocation

enq: OQ - xsoqhiClose:Synchronizes access to olapi history closing

enq: OQ - xsoqhistrecb:Synchronizes access to olapi history globals

enq: OQ - xsoqhiFlush:Synchronizes access to olapi history flushing

enq: OQ - xsoq*histrecb:Synchronizes access to olapi history parameter CB

enq: PD - contention:Prevents others from updating the same property

enq: PE - contention:The PE enqueue is the Parameter Enqueue, which happens after “alter system” or “alter session” statements

enq: PF - contention:Synchronizes accesses to the password file

enq: PG - contention:Synchronizes global system parameter updates

enq: PH - contention:Lock used for recovery when setting proxy for AQ HTTP notifications

enq: PI - contention:Communicates remote Parallel Execution Server Process creation status

enq: PL - contention:Coordinates plug-in operation of transportable tablespaces

enq: PR - contention:Synchronizes process startup

enq: PS - contention:The PS enqueue is the Parallel Slave synchronization enqueue, which is only seen with Oracle parallel query. The PS enqueue happens when pre-processing problems occur while allocating the slave processes for OPQ

enq: PT - contention:Synchronizes access to ASM PST metadata

enq: PV - syncstart:Synchronizes slave start_shutdown

enq: PV - syncshut:Synchronizes instance shutdown_slvstart

enq: PW - prewarm status in dbw0:DBWR0 holds this enqueue indicating pre-warmed buffers present in cache

enq: PW - flush prewarm buffers:Direct Load needs to flush prewarmed buffers if DBWR0 holds this enqueue

enq: RB - contention:Serializes OSM rollback recovery operations

enq: RF - synch: per-SGA Broker metadata:Ensures r/w atomicity of DG configuration metadata per unique SGA

enq: RF - synchronization: critical ai:Synchronizes critical apply instance among primary instances

enq: RF - new AI:Synchronizes selection of the new apply instance

enq: RF - synchronization: chief:Anoints 1 instance's DMON (Data Guard Broker Monitor) as chief to other instance’s DMONs

enq: RF - synchronization: HC master:Anoints 1 instance's DMON as health check master

enq: RF - synchronization: aifo master:Synchronizes critical apply instance failure detection and failover operation

enq: RF - atomicity:Ensures atomicity of log transport setup

enq: RN - contention:Coordinates nab computations of online logs during recovery

enq: RO - contention:Coordinates flushing of multiple objects. The RO enqueue is the Reuse Object enqueue, a cross-instance enqueue related to truncate table and drop table DDL operations

enq: RO - fast object reuse:Coordinates fast object reuse

enq: RP - contention:Enqueue held when resilvering is needed or when data block is repaired from mirror

enq: RS - file delete:Lock held to prevent file from accessing during space reclamation

enq: RS - persist alert level:Lock held to make alert level persistent

enq: RS - write alert level:Lock held to write alert level

enq: RS - read alert level:Lock held to read alert level

enq: RS - prevent aging list update:Lock held to prevent aging list update

enq: RS - record reuse:Lock held to prevent file from accessing while reusing circular record

enq: RS - prevent file delete:Lock held to prevent deleting file to reclaim space

enq: RT - contention:Thread locks held by LGWR, DBW0, and RVWR (Recovery Writer, used in Flashback Database operations) to indicate mounted or open status

enq: SB - contention:Synchronizes logical standby metadata operations

enq: SF - contention:Lock held for recovery when setting sender for AQ e-mail notifications

enq: SH - contention:Enqueue always acquired in no-wait mode; should seldom see this contention

enq: SI - contention:Prevents multiple streams table instantiations

enq: SK - contention:Serialize shrink of a segment

enq: SQ - contention:The SQ enqueue is the Sequence Cache enqueue, used to serialize access to Oracle sequences

enq: SR - contention:Coordinates replication / streams operations

enq: SS - contention:Ensures that sort segments created during parallel DML operations aren't prematurely cleaned up.
enq: ST - contention:Synchronizes space management activities in dictionary-managed tablespaces

enq: SU - contention:Serializes access to SaveUndo Segment

enq: SW - contention:Coordinates the ‘alter system suspend’ operation

enq: TA - contention:Serializes operations on undo segments and undo tablespaces

enq: TB - SQL Tuning Base Cache Update:Synchronizes writes to the SQL Tuning Base Existence Cache

enq: TB - SQL Tuning Base Cache Load:Synchronizes writes to the SQL Tuning Base Existence Cache

enq: TC - contention:The TC enqueue is related to the DBWR background process and occurs when “alter tablespace” commands are issued. You will also see the TC enqueue when doing parallel full-table scans where rows are accessed directly, without being loaded into the data buffer cache

enq: TC - contention2:Lock during setup of a unique tablespace checkpoint in null mode

enq: TD - KTF dump entries:KTF dumping time/scn mappings in SMON_SCN_TIME table

enq: TE - KTF broadcast:KTF broadcasting

enq: TF - contention:Serializes dropping of a temporary file

enq: TL - contention:Serializes threshold log table read and update

enq: TM - contention:The TM enqueue is related to Transaction Management and can be seen when tables are explicitly locked, or during reorganization activities that require locking of a table

enq: TO - contention:Synchronizes DDL and DML operations on a temp object

enq: TQ - TM contention:TM access to the queue table

enq: TQ - DDL contention:DDL access to the queue table

enq: TQ - INI contention:TM access to the queue table

enq: TS - contention:Serializes accesses to temp segments. These enqueues happen during disk sort operations

enq: TT - contention:The TT enqueue  is used to avoid deadlocks in parallel tablespace operations.  The TT enqueue can be seen with parallel create tablespace and parallel point in time recovery (PITR)

enq: TW - contention:Lock held by one instance to wait for transactions on all instances to finish

enq: TX - contention:Lock held by a transaction to allow other transactions to wait for it

enq: TX - row lock contention:Lock held on a particular row by a transaction to prevent other transactions from modifying it

enq: TX - allocate ITL entry:Allocating an ITL entry in order to begin a transaction

enq: TX - index contention:Lock held on an index during a split to prevent other operations on it

enq: UL - contention:The UL enqueue is a User Lock enqueue and happens when a lock is requested via dbms_lock.request. The UL enqueue can be seen in Oracle Data Pump

enq: US - contention:The US enqueue happens with Oracle Automatic Undo Management when undo segments are moved online and offline

enq: WA - contention:Lock used for recovery when setting watermark for memory usage in AQ notifications

enq: WF - contention:Enqueue used to serialize the flushing of snapshots

enq: WL - contention:Coordinates access to redo log files and archive logs

enq: WP - contention:Enqueue to handle concurrency between purging and baselines

enq: XH - contention:Lock used for recovery when setting No Proxy Domains for AQ HTTP notifications

enq: XR - quiesce database:Lock held during database quiesce

enq: XR - database force logging:Lock held during database force logging mode

enq: XY - contention:Lock used by Oracle Corporation for internal testing

 
See also:
Reference: http://oracle-dox.net/McGraw.Hill-Oracle.Wait.Interf/8174final/LiB0063.html

Thursday, December 9, 2010

enq: TX - index contention

We may see high index leaf block contention on indexes belonging to tables that have high concurrency from the application. This usually happens when the application performs a lot of INSERTs and DELETEs.

The reason is index block splits while inserting a new row into the index. Transactions have to wait for the TX lock in mode 4 until the session doing the block split completes the operation.

Causes
  • Indexes on the tables which are being accessed heavily from the application.
  • Indexes on table columns whose values are populated by a monotonically increasing sequence.
  • Heavily deleted tables
Detecting index leaf block contention
There are many ways to find hot indexes
  • Check for high "enq: TX - index contention" system waits in the AWR report
  • At the same time you will see high split counts in the Instance Activity Stats section of the AWR report
  • If your system is RAC, you will also see the following wait events in the AWR report:
    gc buffer busy waits on Index Branch Blocks
    gc buffer busy waits on Index Leaf Blocks
    gc current block busy on Remote Undo Headers
    gc current split
    gcs ast xid
    gcs refuse xid
  • We can find the sql_id's from v$active_session_history:

    select sql_id,count(*) from v$active_session_history a

    where a.sample_time between sysdate - 5/24 and sysdate
    and trim(a.event) like 'enq: TX - index contention%'
    group by sql_id
    order by 2;
  • Or we can query the segments from V$SEGMENT_STATISTICS, or check the 'Segments by Row Lock Waits' section of the AWR report.

    select * from v$segment_statistics
    where statistic_name ='row lock waits'

    and value>0 order by value desc;
  • We can also query dba_hist_enqueue_stat, stats$enqueuestat, or v$enqueue_stat:
    select * from v$enqueue_stat order by cum_wait_time;
Solutions 
  • Reverse key indexes:Rebuild the indexes listed in the 'Segments by Row Lock Waits' section of the AWR report as reverse key indexes, or hash partition them (see the sketch after this list). Reverse key indexes are excellent for insert performance, but the downside is that they may hurt the performance of index range scans
  • Hash-partitioned global indexes:When an index is monotonically growing because of a sequence or date key, global hash-partitioned indexes improve performance by spreading out the contention. Thus, hash-partitioned global indexes can improve the performance of indexes in which a small number of leaf blocks have high contention in multi-user OLTP environments

  • CACHE size of the sequences:When we use monotonically increasing sequences to populate column values, the leaf block holding the highest sequence key changes with every insert, which makes it a hot block and a potential candidate for a block split. With a larger CACHE size (and preferably the NOORDER option), each instance starts using sequence keys from a different range, which reduces the number of index keys being inserted into the same set of leaf blocks
  • Index block size:Because the blocksize affects the number of keys within each index block, it follows that the blocksize has an effect on the structure of the index tree. All else being equal, large 32k blocksizes hold more keys per block, resulting in a flatter index than the same index created in a 2k tablespace. Adjusting the index block size may produce only a small effect; it should never be your first choice for reducing index block contention, and should only be tried after the other approaches have been examined.
  • Rebuild indexes:If many rows have been deleted, or the index is skewed, rebuilding will help for a while
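A minimal sketch of the first two solutions (the index and table names are hypothetical):

    -- rebuild an existing hot index as a reverse key index
    alter index orders_pk rebuild reverse;

    -- or create a hash-partitioned global index to spread out the hot leaf blocks
    create index orders_seq_ix on orders (order_id)
      global partition by hash (order_id) partitions 8;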
Index leaf block contention is very common in busy databases, and it’s especially common on tables that have monotonically increasing key values.

See Also:
All Oracle enqueue waits
Solving Waits on "enq: TM - contention"

Wednesday, December 1, 2010

Oracle data loading performance

Oracle offers several choices for data loading:
  • SQL INSERT and MERGE statements
  • PL/SQL bulk loads using the FORALL operator
  • SQL*Loader
  • Oracle 10g Data Pump
  • The Oracle import utility
TIPS FOR DATA LOADING
  1. Use a large blocksize:Data loads into a large blocksize (e.g. 32k) run faster because more rows fit into each empty block
  2. Use a small db_cache_size:If loading with DML, a small data cache minimizes DBWR work during async buffer cleanouts. You can reduce it temporarily with ALTER SYSTEM
  3. Size your log_buffer properly:If you see waits associated with the log buffer size (such as "log buffer space"), try increasing it to 10 MB
  4. Watch your commit frequency:At each commit, Oracle releases locks and undo segments. Benchmarks suggest that you should commit as infrequently as possible
  5. Use large undo segments:In order to avoid an ORA-1555 (snapshot too old) error, use large undo segments
  6. Use the append hint:If you must use SQL inserts, use the append hint to ensure that Oracle always grabs "fresh" data blocks by raising the high-water mark of the table. If you are doing parallel insert DML, append mode is the default and you don't need to specify the hint. Note: prior to 11gR2, the append hint supports only the subquery syntax of the INSERT statement, not the VALUES clause, for which you can use the append_values hint

    insert /*+ append */ into customer (select  ....);
  7. Use NOLOGGING if possible:NOLOGGING mode (at the SQL, table, or database level) allows Oracle to avoid almost all redo logging, but the operations will be unrecoverable
  8. Disable archiving if possible
  9. Use PL/SQL bulking:PL/SQL often out-performs standard SQL inserts because of the array processing and bulking in the "forall" statement (see also: PL/SQL forall operator speeds for table inserts)
  10. Partition:Load the data in a separate partition, using transportable tablespaces.
  11. Use multiple freelists or freelist groups for target tables:Avoid bitmap freelists (ASSM, automatic segment space management) for super high-volume loads.
  12. Preallocate space for target tables:Preallocate space for tables and indexes with the "allocate extent" clause to avoid the overhead of extent allocation during the load.
  13. Pre-sort the data in index key order:This will make subsequent SQL run far faster for index range scans.
  14. Use parallel DML:Parallelize the data loads according to the number of processors and the disk layout; try to saturate your processors with parallel processes (see the combined sketch after this list).
  15. Disable constraints:Disable constraints during the load and re-enable them in parallel afterwards
  16. Disable/drop indexes:It's far faster to rebuild indexes after the data load, all at once. Indexes also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size. If you choose to keep the indexes during inserts, consider creating a reverse key index to minimize insert contention.
  17. Disable/drop triggers if possible
  18. Use RAM disk:Place the undo tablespace and online redo logs on solid-state disk (RAM SAN).
  19. Use SSD RAM disk:Especially for the insert partition, undo, and redo; you can move the partition to standard disk later.
  20. Use SAME RAID:Avoid RAID 5 and use Oracle's Stripe And Mirror Everything approach (RAID 1+0 / RAID 10).
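
A minimal sketch combining several of these tips (direct-path parallel insert with logging disabled; the table names big_target and stage_data are hypothetical):

    alter session enable parallel dml;
    alter table big_target nologging;

    insert /*+ append parallel(big_target, 8) */ into big_target
    select /*+ parallel(stage_data, 8) */ * from stage_data;

    commit;
    alter table big_target logging;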


     

PL/SQL forall operator speeds for table inserts

Loading an Oracle table from a PL/SQL array one row at a time involves expensive context switches between the PL/SQL and SQL engines; the PL/SQL FORALL operator avoids them, and the speedup is dramatic. Here is an example:


create table t_dba_objects as select * from dba_objects where 1=2;

DECLARE

  TYPE prod_tab IS TABLE OF dba_objects%ROWTYPE;
  dba_objects_tab prod_tab := prod_tab();
  start_time number; end_time number;
BEGIN
  SELECT * BULK COLLECT INTO dba_objects_tab FROM dba_objects;
  EXECUTE IMMEDIATE 'TRUNCATE TABLE t_dba_objects';
  Start_time := DBMS_UTILITY.get_time;
  FOR i in dba_objects_tab.first .. dba_objects_tab.last LOOP
    INSERT INTO t_dba_objects VALUES (
        dba_objects_tab(i).OWNER ,dba_objects_tab(i).OBJECT_NAME,
        dba_objects_tab(i).SUBOBJECT_NAME,dba_objects_tab(i).OBJECT_ID,
        dba_objects_tab(i).DATA_OBJECT_ID,dba_objects_tab(i).OBJECT_TYPE ,
       dba_objects_tab(i).CREATED ,dba_objects_tab(i).LAST_DDL_TIME,
       dba_objects_tab(i).TIMESTAMP ,dba_objects_tab(i).STATUS ,
       dba_objects_tab(i).TEMPORARY,dba_objects_tab(i).GENERATED ,
      dba_objects_tab(i).SECONDARY  );
  END LOOP;
  end_time := DBMS_UTILITY.get_time;
  DBMS_OUTPUT.PUT_LINE('Conventional Insert: '||to_char(end_time-start_time));
  EXECUTE IMMEDIATE 'TRUNCATE TABLE t_dba_objects';
  Start_time := DBMS_UTILITY.get_time;
  FORALL i in dba_objects_tab.first .. dba_objects_tab.last
   INSERT INTO t_dba_objects VALUES dba_objects_tab(i);
  end_time := DBMS_UTILITY.get_time;
  DBMS_OUTPUT.PUT_LINE('Bulk Insert: ' ||to_char(end_time-start_time));
  COMMIT;
END;
/
Conventional Insert: 23689

Bulk Insert: 128

Note: FORALL is much faster than the conventional FOR loop, but it uses much more UGA memory, depending on the row and column counts (e.g. for a 100-million-row table I observed it using about 5 GB of UGA memory).
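
If the memory use is a concern, a common mitigation is to process the rows in chunks with BULK COLLECT ... LIMIT instead of loading the whole table into the collection at once (a minimal sketch, reusing the t_dba_objects table from the example above):

DECLARE
  TYPE prod_tab IS TABLE OF dba_objects%ROWTYPE;
  buf prod_tab;
  CURSOR c IS SELECT * FROM dba_objects;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO buf LIMIT 1000;  -- at most 1000 rows in memory at a time
    EXIT WHEN buf.COUNT = 0;
    FORALL i IN 1 .. buf.COUNT
      INSERT INTO t_dba_objects VALUES buf(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/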

Thursday, November 25, 2010

Oracle Cross-Platform Migration Using RMAN (Convert Database)

  1. Check the Oracle version on both sides. If the patch level of the new platform differs from the old one, consider creating a new database and using the Transportable Tablespace feature of Oracle instead (in that solution, be careful with SYS-owned objects and the like). In my example I will migrate from 10.2.0.4 Solaris 64-bit to AIX 64-bit.
  2. Check the endian format of the source and destination platforms. If the endian formats are the same, converting only the SYSTEM and undo tablespaces (the datafiles that contain undo) is enough:

    select platform_name, endian_format from V$TRANSPORTABLE_PLATFORM;
  3. Open the database read only:

    startup mount;
    alter database open read only;
  4. Use DBMS_TDB.CHECK_DB to check whether the database can be transported to a desired destination platform

    set serveroutput on

    declare
      db_ready boolean;
    begin
      db_ready := sys.dbms_tdb.check_db('AIX-Based Systems (64-bit)');
      if db_ready then
        dbms_output.put_line('Database is ready for transport.');
      else
        dbms_output.put_line('Database is not ready for transport; check the warnings above.');
      end if;
    end;
    /

     
  5. Use DBMS_TDB.CHECK_EXTERNAL to identify any external tables, directories or BFILEs. RMAN cannot automate the transport of such files as mentioned above.


    set serveroutput on

    declare
    external boolean;
    begin
      external := dbms_tdb.check_external;
    end;
    /
  6. Identify the datafiles that contain undo data by running the following query. Because my endian formats are the same, I will only convert the SYSTEM and undo tablespaces:

    select distinct(file_name)

    from dba_data_files a, dba_rollback_segs b
    where a.tablespace_name=b.tablespace_name;
  7. Run the RMAN command to create the conversion scripts and the init.ora file:
    FORMAT defines the location of the generated init.ora file
    DB_FILE_NAME_CONVERT defines the final datafile location on the destination server.

    rman
    connect target /

    CONVERT DATABASE ON TARGET PLATFORM
    CONVERT SCRIPT '/switchthome/switcht/tts/convertscript.rman'
    TRANSPORT SCRIPT '/switchthome/switcht/tts/transportscript.sql'
    new database 'SWITCHT'
    FORMAT '/switchthome/switcht/tts/%U'
    DB_FILE_NAME_CONVERT =('/switchtdata01/data','/wftesthome/sw/data');
  8. Copy the files:
    1. transportscript.sql
    2. convertscript.rman
    3. The pfile generated by the convert database command.
    4. Copy the SYSTEM and undo datafiles to the stage area.
    5. Copy the rest of the datafiles to their final location on the destination server (if the endian formats are different, we must copy all the datafiles to the stage area and convert them).
  9. Check the environment:
    1. ORACLE_SID, ORACLE_HOME, PATH, LD_LIBRARY_PATH, LIBPATH, etc.
    2. Copy and edit the init.ora.
    3. Check the directories, such as the dump destinations.
  10. Create a dummy controlfile in order to run the conversion:
    sqlplus '/ as sysdba'
    STARTUP NOMOUNT ;

    create controlfile reuse set database "SWITCHT" RESETLOGS ARCHIVELOG

     MAXLOGFILES 128
     MAXLOGMEMBERS 4
     MAXDATAFILES 10000
     MAXINSTANCES 8
     MAXLOGHISTORY 10210
    LOGFILE

     '/wftesthome/sw/log/redo01.log' size 10m,
     '/wftesthome/sw/log/redo02.log' size 10m,
     '/wftesthome/sw/log/redo03.log' size 10m,
     '/wftesthome/sw/log/redo04.log' size 10m,
     '/wftesthome/sw/log/redo05.log' size 10m,
     '/wftesthome/sw/log/redo06.log' size 10m
    DATAFILE

     '/wftesthome/sw/stage/system01.dbf',
     '/wftesthome/sw/stage/undorbs01.dbf',
     '/wftesthome/sw/data/sysaux01.dbf',
     '/wftesthome/sw/data/tools01.dbf',
     '/wftesthome/sw/data/users01.dbf',
     '/wftesthome/sw/data/oasis_index01.dbf',
     '/wftesthome/sw/data/oasisdata_100m01.dbf',
     '/wftesthome/sw/data/oasisindex_10m01.dbf',
     '/wftesthome/sw/data/oasisindex_1m01.dbf',
     '/wftesthome/sw/data/index_1m_ts01.dbf',
     '/wftesthome/sw/data/oasisindex_100m01.dbf',
     '/wftesthome/sw/data/data_128k_ts01.dbf',
     '/wftesthome/sw/data/data_1m_ts01.dbf',
     '/wftesthome/sw/data/data_10m_ts01.dbf',
     '/wftesthome/sw/data/data_100m_ts01.dbf',
     '/wftesthome/sw/data/index_128k_ts01.dbf',
     '/wftesthome/sw/data/index_10m_ts01.dbf',
     '/wftesthome/sw/data/index_100m_ts02.dbf',
     '/wftesthome/sw/data/index_100m_ts01.dbf',
     '/wftesthome/sw/data/oasisindex_64k01.dbf',
     '/wftesthome/sw/data/oasisdata_10m01.dbf',
     '/wftesthome/sw/data/oasisdata_1m01.dbf',
     '/wftesthome/sw/data/oasisdata_64k01.dbf',
     '/wftesthome/sw/data/oasis_01.dbf'
    CHARACTER SET WE8ISO8859P9;
  11. Edit convertscript.rman to convert only the SYSTEM and undo tablespaces (if the endian formats are different, all the datafiles must be copied to the stage area and converted):

    rman target / nocatalog @convert.rman



    ---convert.rman
    RUN {
     CONVERT DATAFILE '/wftesthome/sw/stage/undorbs01.dbf'
     FROM PLATFORM 'Solaris[tm] OE (64-bit)'
     FORMAT '/wftesthome/sw/data/undorbs01.dbf';

     CONVERT DATAFILE '/wftesthome/sw/stage/system01.dbf'
     FROM PLATFORM 'Solaris[tm] OE (64-bit)'
     FORMAT '/wftesthome/sw/data/system01.dbf';
    }
  12. Shut down the database and delete the dummy controlfile.
  13. Edit the transport SQL script to reflect the new paths for the datafiles and redo log files in the CREATE CONTROLFILE section of the script:

    sqlplus '/ as sysdba'

    STARTUP NOMOUNT ;



    create controlfile reuse set database "SWITCHT" RESETLOGS ARCHIVELOG

     MAXLOGFILES 128
     MAXLOGMEMBERS 4
     MAXDATAFILES 10000
     MAXINSTANCES 8
     MAXLOGHISTORY 10210
    LOGFILE
     '/wftesthome/sw/log/redo01.log' size 10m,
     '/wftesthome/sw/log/redo02.log' size 10m,
     '/wftesthome/sw/log/redo03.log' size 10m,
     '/wftesthome/sw/log/redo04.log' size 10m,
     '/wftesthome/sw/log/redo05.log' size 10m,
     '/wftesthome/sw/log/redo06.log' size 10m
    DATAFILE
     '/wftesthome/sw/data/system01.dbf',
     '/wftesthome/sw/data/undorbs01.dbf',
     '/wftesthome/sw/data/sysaux01.dbf',
     '/wftesthome/sw/data/tools01.dbf',
     '/wftesthome/sw/data/users01.dbf',
     '/wftesthome/sw/data/oasis_index01.dbf',
     '/wftesthome/sw/data/oasisdata_100m01.dbf',
     '/wftesthome/sw/data/oasisindex_10m01.dbf',
     '/wftesthome/sw/data/oasisindex_1m01.dbf',
     '/wftesthome/sw/data/index_1m_ts01.dbf',
     '/wftesthome/sw/data/oasisindex_100m01.dbf',
     '/wftesthome/sw/data/data_128k_ts01.dbf',
     '/wftesthome/sw/data/data_1m_ts01.dbf',
     '/wftesthome/sw/data/data_10m_ts01.dbf',
     '/wftesthome/sw/data/data_100m_ts01.dbf',
     '/wftesthome/sw/data/index_128k_ts01.dbf',
     '/wftesthome/sw/data/index_10m_ts01.dbf',
     '/wftesthome/sw/data/index_100m_ts02.dbf',
     '/wftesthome/sw/data/index_100m_ts01.dbf',
     '/wftesthome/sw/data/oasisindex_64k01.dbf',
     '/wftesthome/sw/data/oasisdata_10m01.dbf',
     '/wftesthome/sw/data/oasisdata_1m01.dbf',
     '/wftesthome/sw/data/oasisdata_64k01.dbf',
     '/wftesthome/sw/data/oasis_01.dbf'
    CHARACTER SET WE8ISO8859P9;

    alter database open resetlogs;
    alter tablespace temp add tempfile '/wftesthome/sw/data/temp01.dbf'  SIZE 100m  autoextend off;
    shutdown immediate;

    startup upgrade;
    @?/rdbms/admin/utlirp.sql
    shutdown immediate;

    startup;
    @@?/rdbms/admin/utlrp.sql
    set feedback 6;
See also:
Transportable Tablespaces (TTS)
References:
Metalink 415884.1 Cross Platform Database Conversion with same Endian
Metalink 414878.1 Cross-Platform Migration on Destination Host Using Rman Convert Database
Metalink 732053.1 Avoid Datafile Conversion during Transportable Database
Metalink 417455.1 Datafiles are not converted in parallel for transportable database
Metalink 100693.1 Getting Started with Transportable Tablespaces
Metalink 413586.1 How To Use RMAN CONVERT DATABASE for Cross Platform Migration:
Metalink 371556.1 How move tablespaces across platforms using Transportable Tablespaces with RMAN
Metalink 579136.1 IMPDP TRANSPORTABLE TABLESPACE FAILS for SPATIAL INDEX)
Metalink 77523.1 Transportable Tablespaces -- An Example to setup and use
Metalink 243304.1 10g : Transportable Tablespaces Across Different Platforms
Metalink 733824.1 HowTo Recreate a database using TTS (TransportableTableSpace)

Wednesday, November 24, 2010

ORA-08102: index key not found

I got the following error while updating a table:
ORA-08102: index key not found, obj# 1456599, file 1133, block 63620 (2)

This can be an inconsistency in the index caused by a bug.
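
To identify the segment that the obj# in the message refers to, a query like this can help (depending on the error, the number may correspond to OBJECT_ID or DATA_OBJECT_ID, so check both):

select owner, object_name, object_type
from dba_objects
where object_id = 1456599 or data_object_id = 1456599;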

Example: Bug 7329252 - ORA-8102/ORA-1499/OERI[kdsgrp1] index corruption after rebuild index ONLINE

Fix: Minimize updates during the online index rebuild, and rebuild the index.

Wednesday, November 3, 2010

Transportable Tablespaces (TTS)

You can use the Transportable Tablespaces feature to copy a set of tablespaces from one Oracle Database to another
  1. The evolution of TTS
    • Oracle8i:Oracle introduced transportable tablespace (TTS) technology, which moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on the same OS platform and use the same database block size
    • Oracle9i:TTS technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes
    • Oracle10g:TTS technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms that have the same endian format. If the endian formats are different, you have to use RMAN to convert the datafiles.

       You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine each platform's endian format (byte ordering).
       
      SQL> COLUMN PLATFORM_NAME FORMAT A32
      SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

      PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
      ----------- -------------------------------- -------------
                1 Solaris[tm] OE (32-bit)          Big
                2 Solaris[tm] OE (64-bit)          Big
                7 Microsoft Windows IA (32-bit)    Little
               10 Linux IA (32-bit)                Little
                6 AIX-Based Systems (64-bit)       Big
                3 HP-UX (64-bit)                   Big
                5 HP Tru64 UNIX                    Little
                4 HP-UX IA (64-bit)                Big
               11 Linux IA (64-bit)                Little
               15 HP Open VMS                      Little
                8 Microsoft Windows IA (64-bit)    Little
                9 IBM zSeries Based Linux          Big
               13 Linux 64-bit for AMD             Little
               16 Apple Mac OS                     Big
               12 Microsoft Windows 64-bit for AMD Little
               17 Solaris Operating System (x86)   Little
  2. Usage of TTS
    • Exporting and importing partitions in data warehousing tables
    • Publishing structured data on CDs
    • Copying multiple read-only versions of a tablespace on multiple databases
    • Archiving historical data
    • Performing tablespace point-in-time-recovery (TSPITR) 
  3. Limitations on Transportable Tablespace Use
    • Character set:The source and target database must use the same character set and national character set.
    • Tablespace_name:You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
    • Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set
    • Encrypted tablespaces:Encrypted tablespaces have the following limitations:
      • Before transporting an encrypted tablespace, you must copy the Oracle wallet manually to the destination database, unless the master encryption key is stored in a Hardware Security Module (HSM) device instead of an Oracle wallet. When copying the wallet, the wallet password remains the same in the destination database. However, it is recommended that you change the password on the destination database so that each database has its own wallet password
      • You cannot transport an encrypted tablespace to a database that already has an Oracle wallet for transparent data encryption. In this case, you must use Oracle Data Pump to export the tablespace's schema objects and then import them to the destination database. You can optionally take advantage of Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and imported
      • You cannot transport an encrypted tablespace to a platform with different endianness
    • Encrypted columns:Tablespaces that do not use block encryption but that contain tables with encrypted columns cannot be transported. You must use Oracle Data Pump to export and import the tablespace's schema objects. You can take advantage of Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and imported
    • XML Types:Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes

      The following query returns a list of tablespaces that contain XMLTypes

      select distinct p.tablespace_name from dba_tablespaces p,
        dba_xml_tables x, dba_users u, all_all_tables t where
        t.table_name=x.table_name and t.tablespace_name=p.tablespace_name
        and x.owner=u.username;
      Transporting tablespaces with XMLTypes has the following limitations:
      • The target database must have XML DB installed.
      • Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
      • Schemas referenced by XMLType tables cannot have cyclic dependencies.
      • XMLType tables with row level security are not supported, because they cannot be exported or imported.
      • If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database, an error is returned unless the ignore=y option is set
      • If an XMLType table uses a schema that is dependent on another schema, the schema that is depended on is not exported. The import succeeds only if that schema is already in the target database.
    • Advanced Queues:Transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients.
    • SYSTEM Tablespace Objects:You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences
    • Opaque Types:Types whose interpretation is application-specific and opaque to the database (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform. Types and objects that use these opaque types, either directly or indirectly, are also subject to this limitation
    • Floating-Point Numbers:BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump
  4.  Compatibility Considerations for Transportable Tablespaces:
    When you create a transportable tablespace set, Oracle Database computes the lowest compatibility level at which the target database must run. This is referred to as the compatibility level of the transportable set. Beginning with Oracle Database 11g, a tablespace can always be transported to a database with the same or higher compatibility setting, whether the target database is on the same or a different platform. The database signals an error if the compatibility level of the transportable set is higher than the compatibility level of the target database
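
    For example, to check a database's compatibility setting (a minimal sketch):

      select value from v$parameter where name = 'compatible';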
  5. Example
    1. Determine if Platforms are Supported and Determine Endianness on both source and target

      SELECT d.PLATFORM_NAME, ENDIAN_FORMAT

      FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
      WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
    2. Pick a Self-Contained Set of Tablespaces
      TTS requires that all tablespaces being moved be self-contained. This means that segments within the migration tablespace set cannot have dependencies on segments in tablespaces outside the transportable tablespace set. This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure, e.g. for tablespaces sales_1 and sales_2:


      SQL >EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
      SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

      If the set is not self-contained, you should either remove the dependencies by dropping or moving the offending objects, or include in the TTS set the tablespaces containing the segments on which the migration set depends
    3. Generate a Transportable Tablespace Set
      SQL> ALTER TABLESPACE sales_1 READ ONLY;
      SQL> ALTER TABLESPACE sales_2 READ ONLY;

      expdp   system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = sales_1,sales_2

      or we can export it with exp (the older utility):
      exp system/password  file=expdat.dmp  transport_tablespace=y tablespaces=sales_1,sales_2

      If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter
      expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir     TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=Y

      If sales_1 and sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the sales_1 and sales_2 tablespaces

      RMAN TARGET /

      RMAN> CONVERT TABLESPACE sales_1,sales_2

      2> TO PLATFORM 'Microsoft Windows NT'
      3> FORMAT '/temp/%U';

    4. Transport the tablespace
      • If both the source and destination are files systems, you can use:
        Any facility for copying flat files (for example, an operating system copy utility or ftp)
        The DBMS_FILE_TRANSFER package
        RMAN
        Any facility for publishing on CDs
      • If either the source or destination is an Automatic Storage Management (ASM) disk group, you can use:
        ftp to or from the /sys/asm virtual folder in the XML DB repository

        The DBMS_FILE_TRANSFER package
        RMAN
      • If you are transporting the tablespace set to a platform with endianness that is different from the source platform, and you have not yet converted the tablespace set, you must do so now. This example assumes that you have completed the following steps before the transport:
        1. Set the source tablespaces to be transported to be read-only
        2. Use the export utility to create an export file (in our example, expdat.dmp).
        3. Datafiles that are to be converted on the target platform can be moved to a temporary location on the target platform. However, all datafiles, whether already converted or not, must be moved to a designated location on the target database. Now use RMAN to convert the necessary transported datafiles to the endian format of the destination host and deposit the results in /orahome/dbs, as shown in this hypothetical example:

          RMAN> CONVERT DATAFILE
          2> '/hq/finance/work/tru/tbs_31.f',
          3> '/hq/finance/work/tru/tbs_32.f',
          4> '/hq/finance/work/tru/tbs_41.f'
          5> TO PLATFORM="Solaris[tm] OE (32-bit)"
          6> FROM PLATFORM="HP TRu64 UNIX"
          7> DB_FILE_NAME_CONVERT=
          8> "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
          9> PARALLELISM=5; 
    5. Import the Tablespace Set

      IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
      TRANSPORT_DATAFILES=
      /salesdb/sales_101.dbf,
      /salesdb/sales_201.dbf
      REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)

      If required, put the tablespaces into read/write mode:
      ALTER TABLESPACE sales_1 READ WRITE;
      ALTER TABLESPACE sales_2 READ WRITE;

Friday, October 1, 2010

Oracle Background processes

To maximize performance and accommodate many users, a multiprocess Oracle system uses some additional Oracle processes called background processes

You can see the Oracle background processes with the following queries:
select * from V$BGPROCESS;
or
select * from v$session where type = 'BACKGROUND';

An Oracle instance can have many background processes; not all are always present

SMON - System Monitor process recovers after instance failure and monitors temporary segments and extents. SMON in a non-failed instance can also perform failed instance recovery for another failed RAC instance.


DBWR - Database Writer or Dirty Buffer Writer process is responsible for writing dirty buffers from the database block cache to the database data files. Generally, DBWR writes blocks back to the data files at checkpoints, or when the cache is full and space has to be made for more blocks. The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing is accomplished.

ARCH - Archive process writes filled redo logs to the archive log location(s). In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other instances in the RAC setup should they be needed for recovery.

CJQ - Job Queue Process (CJQ) - Used for the job scheduler. The job scheduler includes a main program (the coordinator) and slave programs that the coordinator executes. The parameter job_queue_processes controls how many parallel job scheduler jobs can be executed at one time.
CKPT - Checkpoint process writes checkpoint information to control files and data file headers.

CQJ0 - Job queue controller process wakes up periodically and checks the job log. If a job is due, it spawns Jnnnn processes to handle jobs.

FMON - The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping data files to physical devices on a storage subsystem, then the FMON process is spawned.

LGWR - Log Writer process is responsible for writing the log buffers out to the redo logs. In RAC, each RAC instance has its own LGWR process that maintains that instance’s thread of redo logs.

LMON - Lock Manager process

MMON - The Manageability Monitor (MMON) process was introduced in 10g and is associated with the Automatic Workload Repository new features used for automatic problem detection and self-tuning. MMON writes out the required statistics for AWR on a scheduled basis

MMNL - The Memory Monitor Light  process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation.

MMAN - The Automatic Shared Memory Management feature uses a new background process named Memory Manager (MMAN). MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components. The SGA Memory Broker keeps track of the sizes of the components and pending resize  operations

PMON - Process Monitor process recovers failed process resources. If MTS (also called Shared Server Architecture) is being utilized, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON’s role as service registration agent is particularly important.

Pnnn - (Optional) Parallel Query Slaves are started and stopped as needed to participate in parallel query operations.

RBAL - This is the ASM related process that performs rebalancing of disk resources controlled by ASM. 

ARBx - These processes are managed by the RBAL process and are used to do the actual rebalancing of ASM controlled disk resources. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter

ASMB - The ASMB process is used to provide information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.

CTWR - Change Tracking Writer, which works with the block change tracking feature for fast RMAN incremental backups.

WMON - The "wakeup" monitor process

RVWR - Recovery Writer, responsible for writing flashback logs, which store pre-images of data blocks.

Data Guard/Streams/replication Background processes

DMON - The Data Guard Broker process.


SNP - The snapshot process.

MRP - Managed recovery process - For Data Guard, the background process that applies archived redo log to the standby database.

ORBn - performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these at a time, called ORB0, ORB1, and so forth.

OSMB - is present in a database instance using an Automatic Storage Management disk group. It communicates with the Automatic Storage Management instance.

RFS - Remote File Server process - In Data Guard, the remote file server process on the standby database receives archived redo logs from the primary database.

QMN - Queue Monitor Process (QMNn) - Used to manage Oracle Streams Advanced Queuing.


Oracle Real Application Clusters (RAC) Background Processes


DIAG: Diagnosability Daemon – Monitors the health of the instance and captures the data for instance process failures.

LCKx - This process manages the global enqueue requests and the cross-instance broadcast. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).

LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage the global enqueues and the resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as cluster group services (CGS)

LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests. Remote resource requests are the requests originating from another instance.

LMSx - The Global Cache Service Processes (LMSx) are the processes that handle remote Global Cache Service (GCS) messages. Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSx varies depending on the amount of messaging traffic among nodes in the cluster.
The LMSx handles the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx will create a consistent read version of the block and send it to the requesting instance. The LMSx also controls the flow of messages to remote instances.



The LMSn processes handle the blocking interrupts from the remote instance for the Global Cache Service resources by:
  • Managing the resource requests and cross-instance call operations for the shared resources.
  • Building a list of invalid lock elements and validating the lock elements during recovery.
  • Handling global lock deadlock detection and monitoring for lock conversion timeouts.
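
To see which of these background processes are actually alive in a given instance, you can query v$bgprocess (a started process has a non-zero paddr); a minimal sketch:

    SQL> select name, description
           from v$bgprocess
          where paddr <> hextoraw('00')
          order by name;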

Wednesday, September 1, 2010

How to Find Sessions Generating Lots of Redo or Archive logs

There are two methods to find sessions generating lots of redo:

  1. Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.

    SQL> SELECT s.sid, s.serial#, s.username, s.program,i.block_changes
               FROM v$session s, v$sess_io i
               WHERE s.sid = i.sid
               ORDER BY 5 desc, 1, 2, 3, 4;
  2. Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records accessed by the transaction (as found in the USED_UBLK and USED_UREC columns).

    SQL> SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
                FROM v$session s, v$transaction t
                WHERE s.taddr = t.addr
                ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;

    Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session.
Note: Use the first query when you need to check for programs generating lots of redo, since such programs may span more than one transaction. The second query can be used to find out which particular transactions are generating redo.
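
As a complementary check, the session statistic 'redo size' in v$sesstat reports the redo bytes generated by each session directly; a minimal sketch:

    SQL> select s.sid, s.serial#, s.username, st.value redo_size
           from v$session s, v$sesstat st, v$statname sn
          where st.sid = s.sid
            and st.statistic# = sn.statistic#
            and sn.name = 'redo size'
          order by st.value desc;

As with V$TRANSACTION, run it more than once and compare the per-session deltas.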

Index Skip Scan

If you have a composite index (an index consisting of more than one column) and you use only a non-leading column in your SQL, the index may still be used depending on the cost (the CBO will calculate the cost of using the index and compare it with a full table scan).

An index skip scan works differently from a normal index (range) scan. A normal range scan works from top to bottom first and then moves horizontally, whereas a skip scan consists of several range scans: since the query lacks the leading column, the optimizer logically rewrites it into smaller queries, each doing a range scan.

Example

SQL>create table SERDAR.skiptest (a number, b number, c number);
SQL>create index SERDAR.ix1 on SERDAR.skiptest (a, b);
SQL>begin
      for i in 1 .. 100000 loop
        insert into SERDAR.skiptest values (mod(i, 5), i, 100);
      end loop;
      commit;
    end;
    /

SQL>exec dbms_stats.gather_table_stats(ownname => 'SERDAR', tabname => 'skiptest', cascade => true);

SQL>set autotrace traceonly
SQL>select * from skiptest where b=88888;

Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=ALL_ROWS (Cost=22 Card=1 Bytes=10)
1  0    TABLE ACCESS (BY INDEX ROWID) OF 'SKIPTEST' (TABLE) (Cost=22 Card=1 Bytes=10)
2  1      INDEX (SKIP SCAN) OF 'IX1' (INDEX) (Cost=21 Card=1)


In this case the composite index is split logically into smaller subindexes. The number of logical subindexes depends on the number of distinct values in the leading column. In the above example, "select * from skiptest where b=88888" was broken down into several small range scans; it was effectively equivalent to the following:

Select * from skiptest where a=0 and b=88888
Union
Select * from skiptest where a=1 and b=88888
Union
Select * from skiptest where a=2 and b=88888
Union
Select * from skiptest where a=3 and b=88888
Union
Select * from skiptest where a=4 and b=88888;

Conclusion
A skip scan is not as efficient as a normal single range scan, but it can still be much cheaper than a full table scan when the leading column has few distinct values.
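
If the optimizer picks a full table scan and you want to test the skip scan path explicitly, the INDEX_SS hint can force it; a sketch using the table above:

    SQL> select /*+ index_ss(skiptest ix1) */ *
           from skiptest
          where b = 88888;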

Thursday, August 12, 2010

How to use “/ as sysdba” to export and import

  1. For Windows platforms

    exp userid="""/ as sysdba""" file=export.dmp ...
    imp userid="""/ as sysdba""" file=export.dmp ...
  2. For Unix platforms

    exp userid=\'/ as sysdba\' file=export.dmp ...
    imp userid=\'/ as sysdba\' file=export.dmp ...

     

Wednesday, August 11, 2010

PLS-00923: native compilation failed

Symptoms:
Attempting to implement native compilation on a 10g database fails with the error: PLS-00923: native compilation failed: C compilation or linking failed

The plsql_code_type parameter was set to NATIVE and plsql_native_library_dir was set.

Compiling the packages gives the error: PLS-00923: native compilation failed: C compilation or linking failed:spdzj.c:Could not invoke program /opt/SunProd/SUNWspro8/bin/cc:No such file or directory

Cause:
The wrong/invalid compiler is being referenced within the spnc_commands file.
Take the compiler path referenced in the error message and run 'ls' against it at the OS prompt.
The error "No such file or directory" is returned, confirming why it cannot be invoked:

ls  -lrt   /opt/SunProd/SUNWspro8/bin/cc

Solution:
  1. Find a compiler
    which cc
    /usr/bin/cc
  2. Modify $ORACLE_HOME/plsql/spnc_commands to use the new compiler. For this example, replace /opt/SunProd/SUNWspro8/bin/cc with /usr/bin/cc in the file.
  3. Go back to the session and recompile, as in the sketch after this list (it may be necessary to restart the database).
  4. Check the directory referenced by the plsql_native_library_dir parameter. There should be some compiled C files in it.
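
A minimal sketch of step 3, assuming a hypothetical package named MY_PKG:

    SQL> alter session set plsql_code_type = 'NATIVE';
    SQL> alter package my_pkg compile;
    SQL> -- verify how the unit was compiled
    SQL> select name, type, plsql_code_type
           from user_plsql_object_settings
          where name = 'MY_PKG';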

Monday, August 9, 2010

what is TWO_TASK

TWO_TASK is an environment variable that supplies a default connect string, so connections are made to a remote database (via SQL*Net) without the user specifying a service name (tnsnames.ora entry) on the command line. It is equivalent to the LOCAL registry entry on Windows platforms.

Unix/Linux:
export TWO_TASK=DATABASE1


Windows:
set TWO_TASK=DATABASE1

From a command prompt, sqlplus username/passwd will then connect to the remote DATABASE1 defined in $ORACLE_HOME/network/admin/tnsnames.ora, as in the quick check below.
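
A quick sanity check that you landed on the intended instance (DATABASE1 is the alias from the example above):

    $ export TWO_TASK=DATABASE1
    $ sqlplus username/passwd
    SQL> select instance_name, host_name from v$instance;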


How to recover from Extract ERROR 180 encountered commit SCN that is not greater than the highest SCN already processed

This is a summary of Metalink Doc 987379.1.

This error occurs in an Oracle RAC environment when a transaction is written on the idle node but does not yet appear in the redo log at the time the current transaction is processed. Such a transaction will have a higher SCN than the previous transaction.
To avoid this problem in the first place, the steps in the article (Encountered SCN That Is Not Greater Than The Highest SCN Already Processed) must be followed.

But if you have already encountered the problem, here is the solution:
  1. Do an ETROLLOVER on Extract, and take note of the new sequence number of the trail file.
    ALTER EXTRACT [name], ETROLLOVER
  2. Start extract
    START EXTRACT [name]
  3. Send PUMP, LOGEND, to see if it's at the end of the previous trail.
    SEND EXTRACT [pump_name], LOGEND
  4. Once it is at the end of the trail file, you must stop the pump and do an ETROLLOVER for it too. Take note of the new trail file sequence number created by this step.
    STOP EXTRACT [pump_name]
    ALTER EXTRACT [pump_name], ETROLLOVER
  5. Alter the pump's SEQNO to the new trail file created in step #1.
    ALTER EXTRACT [pump_name], EXTSEQNO ##### EXTRBA 0
  6. Restart pump
    START EXTRACT [pump_name]
  7. Send Replicat, LOGEND to make sure it has processed all the remaining data, and stop Replicat.
    SEND REPLICAT [name], LOGEND
    STOP REPLICAT [name]
  8. If Replicat is not at the end of the trail, generate a report and force-stop Replicat:
    SEND REPLICAT [name], REPORT
    STOP REPLICAT [name]!
  9. Add the following parameters to the Replicat parameter file so that Replicat processes each trail record as a single transaction (or set them to 1 if these parameters are already present):
    GROUPTRANSOPS 1
    MAXTRANSOPS 1
  10. Restart replicat
    START REPLICAT [name]
  11. Once replicat has completely processed the trail, stop the replicat
    STOP REPLICAT [name]
  12. Edit the replicat parameter file:
    Add parameter HANDLECOLLISIONS to Replicat parameter file
    Remove or comment out GROUPTRANSOPS and MAXTRANSOPS or revert them back to their original values.
  13. ALTER REPLICAT, SEQNO to the new trail file created in step #4.
    ALTER REPLICAT [name], EXTSEQNO ###### EXTRBA 0
  14. Start Replicat
    START REPLICAT [name]
  15. Once Replicat has processed the out of order SCN operations, disable HANDLECOLLISIONS. You could also look for the CSN and wait for Replicat to checkpoint past it.
    SEND REPLICAT [name], NOHANDLECOLLISIONS
  16. Edit the Replicat parameter file and comment out the HANDLECOLLISIONS parameter. You do not need to stop/restart Replicat; this ensures that the parameter stays disabled on any subsequent Replicat restarts.
See also:
Encountered SCN That Is Not Greater Than The Highest SCN Already Processed

Gathering System Statistics

  • System statistics can be collected and displayed so that the CBO can account for CPU and system I/O information. For each candidate plan, the optimizer computes estimates for I/O and CPU costs.
  • Invoke the dbms_stats.gather_system_stats procedure as an elapsed time capture, making sure to collect the statistics during a representative heavy workload
  • The easiest way to collect these statistics is:

    execute dbms_stats.gather_system_stats('Start');
    -- one hour delay during high workload
    execute dbms_stats.gather_system_stats('Stop');

  • Here are the data items collected by dbms_stats.gather_system_stats; we can query them from the sys.aux_stats$ view:

    select * from sys.aux_stats$;
    No-workload (NW) stats:
    CPUSPEEDNW - CPU speed (millions of cycles per second)
    IOSEEKTIM - I/O seek time in milliseconds
    IOTFRSPEED - I/O transfer speed in bytes per millisecond
    Workload-related stats:
    SREADTIM - Single-block read time in milliseconds
    MREADTIM - Multiblock read time in milliseconds
    CPUSPEED - CPU speed (millions of cycles per second)
    MBRC - Average blocks read per multiblock read
    MAXTHR - Maximum I/O throughput (bytes per second)
    SLAVETHR - Average parallel execution slave I/O throughput (bytes per second)
  • If the hardware or the workload has changed, these statistics must be refreshed.
  • If the workload differs between day and night (e.g. OLTP during the day and DSS at night), then the statistics can be exported to a table, and jobs can be scheduled to import the matching set at the beginning of the night and of the day, as in the sketch below.
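
A minimal sketch of the export/import approach, with hypothetical names (stats table DAYNIGHT_STATS, stat IDs OLTP and DSS, owner SYSTEM):

    SQL> exec dbms_stats.create_stat_table('SYSTEM', 'DAYNIGHT_STATS');
    SQL> -- after gathering during the day workload, save the current system stats:
    SQL> exec dbms_stats.export_system_stats(stattab => 'DAYNIGHT_STATS', statid => 'OLTP', statown => 'SYSTEM');
    SQL> -- a scheduled job then restores the matching set, e.g. at nightfall:
    SQL> exec dbms_stats.import_system_stats(stattab => 'DAYNIGHT_STATS', statid => 'DSS', statown => 'SYSTEM');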

Gather Optimizer Statistics For Sys

Gathering statistics for sys objects
  • It is recommended to gather these statistics regularly, specifically if you are using Oracle APPS, and also after upgrades or after running catalog scripts.
  • To gather the dictionary stats, run one of the following:
    SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS;
    SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');
    SQL> exec DBMS_STATS.GATHER_DATABASE_STATS (gather_sys=>TRUE);
Gathering statistics for X$ tables (v$ views )
  • GATHER_FIXED_OBJECTS_STATS gathers statistics for the dynamic tables, e.g. the X$ tables that are loaded into the SGA during startup. Gathering statistics for fixed objects normally helps when querying the dynamic views (e.g. the V$ views) performs poorly.
  • Fixed objects record current database activity; statistics gathering should be done while the database has representative activity.
  • To gather the fixed objects stats
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
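
To check whether and when these statistics were last gathered, a quick sketch:

    SQL> select owner, count(*) tables, max(last_analyzed)
           from dba_tables
          where owner in ('SYS', 'SYSTEM')
          group by owner;
    SQL> select count(*)
           from dba_tab_statistics
          where object_type = 'FIXED TABLE'
            and last_analyzed is not null;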
See also :
Gathering System Statistics

Thursday, July 22, 2010

Examples and Solutions for Operations on Indexed Columns

In certain cases, operations and functions on indexed columns prevent index usage. Here are some examples, with solutions.

  1. The most common error is querying a date column that includes a time component, e.g. selecting yesterday's records. Here the column is insert_date and it has an index on it:

    ....where  trunc(insert_date)=trunc(sysdate-1)  

    The above example prevents index usage; instead, try this:

    ... where  insert_date >=trunc(sysdate-1)    and   insert_date  <trunc(sysdate)


    Let's arrange this example for a specific date (e.g. '22/09/2010'):

    ... where insert_date >=to_date('22/09/2010','dd/mm/yyyy') and insert_date <to_date('23/09/2010','dd/mm/yyyy');
  2. Adding, subtracting, etc. an expression to an indexed column (e.g. there is an index on the salary column which we expect to be used):

    select  count(*) from employee  where  salary+100 >1000;

    The above example prevents index usage; instead, try this:


    select count(*) from employee where salary >1000-100;

    This can also happen inside a join operation, like:

    select  * from  table1 a,table2 b  where  a.start_date > sysdate-10  and b.salary+100=b.min_value;

    The above example prevents index usage; instead, try this:

    select * from table1 a,table2 b where a.start_date > sysdate-10  and  b.salary=b.min_value-100;
  3. Concatenating indexed columns (e.g. we have an index on first_name, last_name):

    select  * from employee  where  first_name||' '||last_name='SERDAR TURGUT';

    Instead, try this:

    select * from employee where first_name='SERDAR'  and  last_name='TURGUT';
  4. Substring operations

    ...where  substr(id,1,2) = 'AA';

    Suppose the first two letters of the column indicate something, so we query on the leading characters; then try this:

    ...where id like   'AA%';

    If we query not on the first n characters but on characters starting from the n-th position, like:

    ...where substr(id,3,2) = 'AA';

    Then I can advise two solutions. The first is to ask your DBA to create a function-based index on substr(id,3,2); see the sketch after this list. The second is to review the data model and the usage of this column: the mistake is that, instead of defining separate columns, different properties were concatenated into one column (e.g. the first two characters are one property and the next three are another). In that case the column can be split into separate columns.
  5. Inequalities prevent index usage (e.g. we have an index on the status integer column):
    ...where status <> 0;

    Suppose all the values are >= 0, so we want to select all rows except 0:

    ...where  status  >0;
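
A minimal sketch of the function-based index solution from item 4, assuming a hypothetical table t with an id column:

    SQL> create index t_id_fbi on t (substr(id, 3, 2));
    SQL> -- the predicate must match the indexed expression exactly:
    SQL> select * from t where substr(id, 3, 2) = 'AA';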

Note: This article is intended for SQL developers doing SQL tuning.

Related Topics: Index not used because of type conversion

Tuesday, July 13, 2010

Index not used because of type conversion

If the data type of a variable is not the same as the column's data type, an implicit data type conversion occurs: one side of the equality is converted to the other side. For number-to-char comparisons the char side is converted to a number (because all numbers are valid character strings, but not all character strings are valid numbers). Similar conversions are made for other combinations of data types.

Let's look at some examples.

Example 1:
  1. Create the table and indexes, insert some rows, and analyze the table:
    create table employee (
    emp_id        varchar2(10),emp_name  varchar2(100),
    emp_entry_date date ,
    emp_dept_no number,
    emp_born_date date);

    alter table employee add constraint emp_id_pk primary key (emp_id) using index;
  2. Select from the table: emp_id is a character column but the value (100) is a number.

    SQL>set autotrace  traceonly
    SQL>select * from employee where emp_id=100;
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=41)
    1 0 TABLE ACCESS (FULL) OF 'EMPLOYEE' (TABLE) (Cost=2 Card=1 Bytes=41)  
  3. Try it with the literal '100': no index suppression.

    SQL>set autotrace traceonly
    SQL>select * from employee where emp_id='100';
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=41)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMPLOYEE' (TABLE) (Cost=1 Card=1 Bytes=41)
    2 1 INDEX (UNIQUE SCAN) OF 'EMP_ID_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
Example 2:
Wrong type definition in Java or .NET code.
  • Here is an example of Java code that suppresses the index (the table is the employee table from example 1). Notice that emp_id is varchar2 in the table, while empID is defined as a number (int):
StringBuffer stringBuffer = new StringBuffer();
String employeeAd = "";
int empID=100;
stringBuffer.append(" select emp_name ");
stringBuffer.append(" from employee where emp_id=? ");
...
..
statement.setInt(1, empID);
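// binding the numeric empID against the varchar2 emp_id column forces the
// implicit conversion that suppresses the index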


  • Here is an example of .NET code that suppresses the index in the same way (again using the employee table from example 1):
    string SQLCHK = @"select emp_name from employee where emp_id=:empID";
    int empID = 100;
    string empAd = "";
    ....
    empAdCMD.AddInParameter("empID", DbType.Int16, empID);
  • And the execution plan of the above SQLs:

SQL_ID 12hrzrx0n0wqx, child number 0
-------------------------------------
select emp_name from employee where emp_id=:1
Plan hash value: 1970680878

------------------------------------------------------------------------------
| Id | Operation         | Name     | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |          |      |       |     2 (100) |          |
|  1 | TABLE ACCESS FULL | EMPLOYEE |    1 |    23 |       2 (0) | 00:00:01 |
------------------------------------------------------------------------------
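
Besides fixing the bind type in the application code (setString instead of setInt), the conversion can also be moved onto the bind side in the SQL itself, leaving the indexed column untouched; a sketch:

    select emp_name from employee where emp_id = to_char(:empID);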




Example 3:
Conversions may also occur when columns of different types are used in a join operation.
  1. Create the sales table, insert some rows, and analyze the table:

    create table SALES (
    emp_id         number,
    sale_date     date,
    sale_amount number );

    create index SALES_emp_ix on SALES(emp_id);
    create index SALES_date_ix on SALES(SALE_DATE);


  2. Try to find the sales (and the employees who made them) from 5 days ago:

    SQL>set autotrace traceonly
    SQL> select e.emp_id,e.emp_name,s.sale_amount
                     from employee e,sales s
                     where s.sale_date =TRUNC(SYSDATE)-5
                       and e.emp_id=s.emp_id;

    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=37)
    1 0 HASH JOIN (Cost=4 Card=1 Bytes=37)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'SALES' (TABLE) (Cost=1 Card=1 Bytes=14)
    3 2 INDEX (RANGE SCAN) OF 'SALES_DATE_IX' (INDEX) (Cost=1 Card=1)
    4 1 TABLE ACCESS (FULL) OF 'EMPLOYEE' (TABLE) (Cost=2 Card=100 Bytes=2300)


Since sales.emp_id is a number and employee.emp_id is varchar2, "e.emp_id=s.emp_id" is internally converted to "to_number(e.emp_id)=s.emp_id", and the index on employee (emp_id_pk) will not be used; see the sketch below for a workaround.
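
A sketch of the workaround at the SQL level: push the conversion onto sales.emp_id (whose index we are not relying on here), so employee.emp_id stays bare and emp_id_pk remains usable. The proper fix, of course, is to give both columns the same data type in the data model.

    select e.emp_id, e.emp_name, s.sale_amount
      from employee e, sales s
     where s.sale_date = trunc(sysdate) - 5
       and e.emp_id = to_char(s.emp_id);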

Note: This article is intended for SQL developers doing SQL tuning.

See also:
Examples and Solutions for Operations on Indexed Columns

Thursday, July 1, 2010

Diagnosing Oracle Clusterware Node evictions (Diagwait)

Oracle Clusterware evicts the node from the cluster when
  • Node is not pinging via the network heartbeat
  • Node is not pinging the Voting disk
  • Node is hung/busy and is unable to perform either of the earlier tasks
In most cases when a node is evicted, there is information written to the logs to analyze the cause of the eviction. However, in certain cases this information may be missing; the steps documented in this note are for those cases where there is not enough information (or none at all) to diagnose the cause of the eviction, and they apply to Clusterware versions below 11gR2 (11.2.0.1).

How to setup diagwait
  1. Execute as root, and run on all nodes:
    #crsctl stop crs
    #$CRS_HOME/bin/oprocd stop
  2. Ensure that the Clusterware stack is down on all nodes by executing the command below. Do not continue if any Clusterware processes are still running: if you proceed to the next step, you will corrupt your OCR on both nodes.

    #ps -ef |egrep "crsd.bin|ocssd.bin|evmd.bin|oprocd"
  3. From one node of the cluster, change the value of the diagwait parameter to 13 seconds by issuing the command below as root. Changing the Clusterware parameter diagwait to 13 is the Oracle-supported technique to change the oprocd margin to 10 seconds. Note that 13 is the only allowed value for diagwait; any value other than 13 (or unset) is not allowed and not supported.

    #crsctl set css diagwait 13 -force
  4. Check that diagwait was set successfully by executing the following command. It should return 13; if diagwait is not set, the message "Configuration parameter diagwait is not defined" is returned.

    #crsctl get css diagwait
  5. Restart the Oracle Clusterware on all the nodes by executing:

    #crsctl start crs
  6. Check that the nodes are running:

    #crsctl check crs
Unsetting/Removing diagwait
Diagwait should not be unset without fixing the underlying OS scheduling issues, as that can lead to node evictions via reboot; diagwait delays the node eviction (and reconfiguration) by its value (13 seconds). If there is a need to remove diagwait, follow the steps above, except that step 3 is replaced by the following command:
#crsctl unset css diagwait

Tuesday, June 8, 2010

Script to Check the Status of the JVM within the Database

The following script can be used to determine whether the Oracle JVM is installed completely and is valid. Also check: How to Check if the Oracle JVM is Installed Correctly in the Database.
sqlplus '/ as sysdba'

set serveroutput on
set echo on
set pagesize 500
set linesize 100
column comp_name format a40

select comp_name, version, status from dba_registry;

select owner, status, count(*) from dba_objects
where object_type like '%JAVA%' group by owner, status;

select owner, object_type, count(*) from dba_objects
where object_type like '%JAVA%' and status <> 'VALID' group by owner, object_type;

select owner, status, object_type, object_name from dba_objects
where object_name like '%DBMS_JAVA%';

select owner, status, object_type, object_name from dba_objects
where object_name like '%INITJVMAUX%';

select * from v$sgastat where POOL = 'java pool' or NAME = 'free memory';

show parameter pool_size
show parameter sga

select owner, object_type, status, dbms_java.longname(object_name) from dba_objects
where object_type like '%JAVA%' and status <> 'VALID';


Related topics:
How to Check if the Oracle JVM is Installed Correctly in the Database
Is the Oracle JVM actively used in the database