Thursday, September 27, 2007

Oracle Database Listener Concepts

The Listener consists of two binaries: (1) tnslsnr, which is the Listener itself, and (2) the Listener Control Utility (lsnrctl), which is used to administer the Listener locally on the server or remotely.

The relevant files for the Listener are as follows:

$ORACLE_HOME/bin/lsnrctl : Listener control program
$ORACLE_HOME/network/admin/listener.ora : Configuration file for the Listener
$ORACLE_HOME/network/admin/sqlnet.ora : Oracle Net configuration file, also read by the Listener
$ORACLE_HOME/bin/tnslsnr : Server Listener process

Listener Modes: The Listener can be configured in one of three modes (as configured in listener.ora) –
· Database: Provides network access to an Oracle database instance
· PLSExtProc: Method for PL/SQL packages to access operating system executables
· Executable: Provides network access to operating system executables

LISTENER REMOTE MANAGEMENT

Many DBAs are not aware that the Listener in Oracle 8i/9i can be remotely managed using lsnrctl or a similar program from a remote machine. The Oracle 10g Listener, by default, cannot be remotely managed unless Local OS Authentication is disabled.

1- The simplest method to remotely issue commands to a Listener is to use lsnrctl with command-line parameters, as such –
– lsnrctl < command > < host >[:< port >]
– lsnrctl status 192.168.1.100
– lsnrctl stop 192.168.1.100:1522
2- To set up a computer to remotely administer a Listener –
- Configure the local listener.ora to resolve to the remote Listener
< alias > = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(Host = < remote host >)(Port = < remote port >)))
- Start lsnrctl from the command line and specify the Listener name
lsnrctl
LSNRCTL> set current_listener < alias >
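
For illustration, a complete listener.ora entry that resolves a remote Listener might look like the following; the alias name, host, and port are hypothetical:

```
LISTENER_REMOTE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(Host = 192.168.1.100)(Port = 1521))
  )
```

With this entry in place, lsnrctl can issue "set current_listener LISTENER_REMOTE" and manage the remote Listener as if it were local (subject to any password and, in 10g, Local OS Authentication).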

The following are some examples of possible attacks against an Oracle 8i/9i Listener which has a default configuration and is not properly secured.

Execute SQL as DBA: It is possible to overwrite $ORACLE_HOME/sqlplus/admin/glogin.sql by changing the location of the Listener log file to that path and then sending SQL statements in Listener commands, which are written to the file. When SQL*Plus is next executed locally on the server (usually by a DBA), the SQL statements are executed during SQL*Plus startup.
Allow Login via rlogin: The Listener log can be used to overwrite an .rlogin file with additional host information, thus allowing an attacker to access the server using rlogin.
Denial of Service (DoS): An attacker is able to –
· Stop the Listener
· Set a Listener password so that the Listener cannot be started without a password, although the DBA simply has to edit the listener.ora file and remove the password line
Denial of Service (DoS): Undermine the stability of the server and database by overwriting arbitrary files, by changing the directory and filename of the log and trace files to any location accessible by the operating system account that owns the database (usually "oracle").
Denial of Service (DoS): Setting the Listener trace level to "support" may cause performance degradation on a heavily accessed database server.
Information Disclosure: Obtain detailed information on the Listener configuration and database installation, such as –
· Database Service Names (e.g., SIDs)
· Database and Listener versions
· Log and trace settings including directory and file names
· Security settings
· Database server operating system
· Oracle environment variables (ORACLE_HOME, etc.)


LISTENER EXPLOITS

1- If a password is not set on the Listener, anyone who knows just a hostname and port number (the default port is 1521) has full control over the Listener.
2- Oracle Security Alerts:
Check the Oracle security alerts for published Listener vulnerabilities and apply the relevant patches.
3- Brute Forcing Listener Password:
The Listener password can easily be brute forced, since there is no automatic lockout facility and no requirement for strong passwords. Repeated set password commands can be sent to the Listener by a hacking program. If logging is enabled (set log_status on), invalid password attempts will appear with an error code of TNS-01169.
4- Passwords Transmitted in Clear Text:
Using the set password command remotely will transmit the password across the network in clear text with every command. If encryption is set up for the Listener using the Advanced Security Option (ASO), then the passwords will be sent encrypted across the network. The change_password command does encrypt the password when the lsnrctl program is used.

ORACLE LISTENER PASSWORD

The password for the Listener is stored in the listener.ora file. If the PASSWORDS_< listener_name > parameter is set manually, then the password remains in plain text. If it is set using lsnrctl and the change_password command, then the password is stored encrypted as an 8-byte string. Unlike database passwords, the Listener password is case-sensitive.
Prior to Oracle 10g, the encrypted password string could be substituted for the actual password when issuing the set password command. This is useful in executing scripts to stop the Listener. If a password is set for the Oracle 10g Listener, scripts must use the actual password rather than the encrypted string.

If the Listener password is set to "mypassword", then the listener.ora file will contain the encrypted string. The following lsnrctl commands, using either the plain-text password or the encrypted string, will both work prior to Oracle 10g.

listener.ora:
PASSWORDS_LISTENER = F4BAA4A006C26134

LSNRCTL> set password
Password: mypassword

LSNRCTL> set password
Password: F4BAA4A006C26134

ORACLE 10G LOCAL OS AUTHENTICATION
A major change to Listener security in Oracle 10g (10.1 and 10.2) was the introduction of Local OS Authentication. By default, the Listener cannot be remotely managed and can only be managed locally by the owner of the tnslsnr process (usually oracle).

If another operating system user attempts to manage the Listener, the following message will be displayed in the Listener log file –
TNS-01190: The user is not authorized to execute the requested listener command

If someone attempts to manage the Listener remotely, the following message will be displayed in the Listener log file –

TNS-01189: The listener could not authenticate the user

Local OS Authentication can be disabled by setting the LOCAL_OS_AUTHENTICATION_< listener_name > parameter in the listener.ora file as such –

LOCAL_OS_AUTHENTICATION_< listener_name > = OFF

When Local OS Authentication is disabled, the Listener behaves exactly as in Oracle 8i/9i. Thus, it should have a password set and ADMIN_RESTRICTIONS_< listener_name > set to ON.

LOGGING

By default, logging is not enabled (LOG_STATUS = OFF). When logging is enabled, the default directory is $ORACLE_HOME/network/log and the default log file name is < listener_name >.log. The log file contains a history of Listener commands issued both locally and remotely: a timestamp, the command issued, and the result code. If an Oracle error is returned, the error message is included. The log file does not contain passwords or other sensitive information. It does NOT record the IP address, client name, or other identifying information for remote connections. It may show the client's current OS user name, but this can easily be spoofed or simply not provided.
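
As a sketch, logging can be enabled either interactively through lsnrctl or statically in listener.ora; the listener name and directory below are hypothetical:

```
# Interactively:
#   LSNRCTL> set current_listener LISTENER
#   LSNRCTL> set log_status on
#   LSNRCTL> save_config
#
# Or statically in listener.ora:
LOGGING_LISTENER = ON
LOG_DIRECTORY_LISTENER = /u01/app/oracle/network/log
LOG_FILE_LISTENER = listener.log
```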

The following are TNS errors that may signify an attack or inappropriate activity

TNS-01169: An attempt was made to issue a command without supplying the Listener password while a password is set
TNS-01189: Oracle 10g – Local OS Authentication is enabled and an attempt was made to manage the Listener remotely
TNS-01190: Oracle 10g – Local OS Authentication is enabled and an attempt was made to manage the Listener locally by another user
TNS-12508: This error occurs when an invalid command is issued (e.g., statusx instead of status) or when a set command is issued while ADMIN_RESTRICTIONS is set to ON.
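
As a simple illustration, these error codes can be watched for by scanning the Listener log text. This is only a sketch, not a supported tool, and the sample log lines below are fabricated for illustration:

```python
# Sketch: flag listener.log lines containing TNS errors that may signify
# an attack. The sample text below is made up for illustration only.

SUSPICIOUS = ("TNS-01169", "TNS-01189", "TNS-01190", "TNS-12508")

def suspicious_lines(log_text):
    """Return the log lines that contain any of the suspicious TNS codes."""
    return [line for line in log_text.splitlines()
            if any(code in line for code in SUSPICIOUS)]

sample_log = """\
21-SEP-2007 10:01:02 * status * 0
TNS-01169: The listener has not recognized the password
TNS-01189: The listener could not authenticate the user
21-SEP-2007 10:05:40 * stop * 0
"""

for line in suspicious_lines(sample_log):
    print(line)   # prints only the two TNS-* lines
```

In practice this would be pointed at the real listener.log from cron and the matches mailed to the DBA.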

Friday, September 21, 2007

"Idle transactions" or "Open transactions" waiting for a long time (Long Running transactions)

"idle in transaction" means that someone issued a "begin" but has not yet issued a "commit" or "rollback". It is often a sign of bad application design, and you should contact the application developers. Since open transactions may hold locks on tables, the whole application may stop unexpectedly if transactions are left open.


Another possibility is that you've just got a huge workload, e.g. lots of concurrent access to the application so that it has to perform a lot of work; but then you should also see SELECT/INSERT/UPDATE/etc., not only "idle in transaction".


Idle transactions may also cause "ORA-01555 snapshot too old" errors, so we must find the cause of long-running transactions.


· We can find idle transactions with the following SQL:
SELECT
s.username,s.sid,s.serial#,t.start_time,
trunc((sysdate-to_date(T.start_time,'MM/DD/YY HH24:MI:SS'))*24*60) idle_time_in_min, s.status
FROM V$session s, V$transaction t, V$rollstat r
WHERE s.saddr=t.ses_addr
and t.xidusn=r.usn
-- and (s.username like 'DS%' or s.username like 'TF%' or s.username ='DBQUERY')
and trunc((sysdate-to_date(T.start_time,'MM/DD/YY HH24:MI:SS'))*24*60) >10
order by t.start_time;



· We can also write a script that detects idle transactions (status <> ACTIVE) and kills them. Since killing a transaction is a dangerous operation, you must define the users and their specific idle-time limits accurately. The script can be run from crontab.




#!/bin/ksh

hosttype=`uname`
if [ "$hosttype" = "SunOS" ]
then
  userid=`/usr/xpg4/bin/id -u -n`
else
  userid=`/usr/bin/id -u -n`
fi
. ~$userid/.profile


MAX_DS_IDLE_TIME=60
MAX_EBANK_IDLE_TIME=500
MAX_DB_IDLE_TIME=500
MAX_BS_IDLE_TIME=30
OUT_FILE=$HOME/mntdir/kill_discoverer_transaction.out
OUT_FILE2=$HOME/mntdir/kill_discoverer_transaction.out2
LOG_FILE=$HOME/logdir/kill_discoverer_transaction.log

# Select idle transactions that exceed the per-user idle-time limits
sqlplus -s / > $OUT_FILE2 <<EOF
set arraysize 1
set linesize 1500
set pagesize 0
select 'MAKEGREP', s.sid||'#'||s.serial#||'#'||s.username||'#'||TRUNC((SYSDATE-TO_DATE(T.START_TIME,'MM/DD/YY HH24:MI:SS'))*24*60)
FROM V\$session s, V\$transaction t, V\$rollstat r
WHERE s.saddr=t.ses_addr AND t.xidusn=r.usn and
(((s.username like 'DS%' or s.username like 'TF%' ) AND TRUNC((SYSDATE-TO_DATE(T.START_TIME,'MM/DD/YY HH24:MI:SS'))*24*60)>$MAX_DS_IDLE_TIME) or
(s.username='EBANK_N' and TRUNC((SYSDATE-TO_DATE(T.START_TIME,'MM/DD/YY HH24:MI:SS'))*24*60)>$MAX_EBANK_IDLE_TIME) or
(s.username like 'DB%' and TRUNC((SYSDATE-TO_DATE(T.START_TIME,'MM/DD/YY HH24:MI:SS'))*24*60)>$MAX_DB_IDLE_TIME ) or
((s.username='DBQUERY' or s.username like 'BS%') and TRUNC((SYSDATE-TO_DATE(T.START_TIME,'MM/DD/YY HH24:MI:SS'))*24*60)>$MAX_BS_IDLE_TIME ))
and s.status in ('INACTIVE','SNIPED');
exit;
EOF

# Keep only the marked rows; field 2 is sid#serial#username#idle_time
cat $OUT_FILE2 | grep MAKEGREP | awk '{print $2}' > $OUT_FILE

date >> $LOG_FILE
for l in `cat $OUT_FILE`
do
  SID=`echo $l | cut -d# -f1`
  SERIAL=`echo $l | cut -d# -f2`
  USERNAME=`echo $l | cut -d# -f3`
  IDLE_TIME=`echo $l | cut -d# -f4`

  echo "$USERNAME ($SID,$SERIAL) is idle for $IDLE_TIME minutes so it will be killed" | tee -a $LOG_FILE
  sqlplus -s / >> $LOG_FILE <<EOF
ALTER SYSTEM KILL SESSION '$SID,$SERIAL';
exit;
EOF
done

Wednesday, September 19, 2007

Installing and checking the Automatic Undo Management (AUM)

DBAs have the choice to manage rollback segments as they used to do under versions Oracle7, Oracle8, and Oracle8i, or to let the RDBMS do it.

1-How to enable AUM

a)Create undo tablespace

create undo tablespace UNDORBS datafile '/data01/undorbs.dbf' size 2048m;

(@undo1.sql)
select TABLESPACE_NAME, CONTENTS,
EXTENT_MANAGEMENT, ALLOCATION_TYPE,
SEGMENT_SPACE_MANAGEMENT
from dba_tablespaces where contents='UNDO';


b) Change init.ora (or spfile )
-Set these parameters

undo_management = AUTO
undo_retention = 900 #15 minutes
undo_tablespace = UNDORBS

-Unset rollback_segments

c)Restart database


2-Determine the size of the undo tablespace and the tuned undo retention

· Required undo space for the current UNDO_RETENTION:
(UR) UNDO_RETENTION in seconds
(UPS) Number of undo data blocks generated per second
(DBS) Database block size (db_block_size); the final term is overhead that varies based on extent and file size

UndoSpace = [UR * (UPS * DBS)] + (DBS * 24)
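
As a quick sanity check of this arithmetic, the formula can be evaluated directly. This is just a sketch; the retention, undo rate, and block size below are hypothetical example values, not measurements:

```python
# Evaluate UndoSpace = [UR * (UPS * DBS)] + (DBS * 24) with example inputs.
# On a real system UR comes from v$parameter, UPS from v$undostat and
# DBS from dba_tablespaces; the numbers here are made up.

def undo_space_bytes(ur_seconds, undo_blocks_per_sec, block_size_bytes):
    return ur_seconds * (undo_blocks_per_sec * block_size_bytes) \
           + block_size_bytes * 24

# Example: 900 s retention, 100 undo blocks/s, 8 KB block size
size = undo_space_bytes(900, 100, 8192)
print(size)                    # 737476608 bytes
print(round(size / 1024**2))   # 703 (~ megabytes)
```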

The following query calculates the number of bytes needed: (@undo3)

SELECT (UR * (UPS * DBS)) + (DBS * 24) AS "Bytes"
FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
(SELECT (SUM(undoblks)/SUM(((end_time - begin_time)*86400))) AS UPS FROM v$undostat),
(select block_size as DBS from dba_tablespaces where tablespace_name=
(select value from v$parameter where name = 'undo_tablespace'));


· Tuned undo retention and max query length (@undo2):
select
to_char(begin_time,'yyyy-mm-dd hh24:mi:ss') starttime,
to_char(end_time,'yyyy-mm-dd hh24:mi:ss') endtime,
undoblks, maxquerylen maxqrylen,maxqueryid,
tuned_undoretention from v$undostat
order by begin_time;

· To guarantee UNDO_RETENTION (e.g. 1440 sec), how much undo tablespace is needed, in KB (@undo6):
SELECT dbms_undo_adv.required_undo_size(1440) FROM dual;

3-Undo advisor

· Current Undo Info (@undo5)
set serveroutput on
DECLARE
tsn VARCHAR2(40);
tss NUMBER(10);
aex BOOLEAN;
unr NUMBER(5);
rgt BOOLEAN;
retval BOOLEAN;
BEGIN
retval := dbms_undo_adv.undo_info(tsn, tss, aex, unr, rgt);
dbms_output.put_line('UNDO Tablespace is: ' || tsn);
dbms_output.put_line('UNDO Tablespace size is: ' || TO_CHAR(tss));

IF aex THEN
dbms_output.put_line('Undo Autoextend is set to: TRUE');
ELSE
dbms_output.put_line('Undo Autoextend is set to: FALSE');
END IF;

dbms_output.put_line('Undo Retention is: ' || TO_CHAR(unr));

IF rgt THEN
dbms_output.put_line('Undo Guarantee is set to: TRUE');
ELSE
dbms_output.put_line('Undo Guarantee is set to: FALSE');
END IF;
END;
/

· To guarantee UNDO_RETENTION (e.g. 1440 sec), how much undo tablespace is needed, in KB, after running with the current undo (@undo6):
SELECT dbms_undo_adv.required_undo_size(1440) FROM dual;
· Best possible retention with the current size of the undo tablespace (@undo7):
SELECT dbms_undo_adv.best_possible_retention FROM dual;

4-Useful tips for AUM

· You cannot use UNDO tablespaces for purposes other than undo segments, and you cannot perform any operation on system-generated undo segments
· Only one UNDO tablespace can be used at the instance level
alter system set undo_tablespace=undo_rbs1;

· If you choose to use AUM, you have no way to manage any undo or rollback segments manually, even in a non-UNDO tablespace.

· For Real Application Clusters environments:
a) All instances within Real Application Cluster environments must run in the same undo mode.
b) Set the global parameter UNDO_MANAGEMENT to AUTO in your server parameter file.
c) Set the UNDO_TABLESPACE parameter to assign the appropriate undo tablespace to each respective instance. Each instance requires its own undo tablespace. If you do not set the UNDO_TABLESPACE parameter, each instance uses the first available undo tablespace.
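
For a two-instance cluster, the corresponding server parameter file entries might look like the following sketch; the instance prefixes and tablespace names are hypothetical:

```
*.undo_management=AUTO
prod1.undo_tablespace='UNDOTBS1'
prod2.undo_tablespace='UNDOTBS2'
```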


· The undo segments in AUM (@undo4):
select USN,RSSIZE,HWMSIZE,OPTSIZE,SHRINKS,segment_name from v$rollstat,dba_rollback_segs where usn=segment_id;

· Automatic tuning will help to avoid ORA-01555, but if your UNDO tablespace has autoextend off, you might get into a situation where active DML needs more space and starts reusing unexpired undo segments. The database will be under space pressure, and Oracle will give higher priority to finishing the DML, not to queries. In that case, queries might get ORA-01555 errors, and in this special scenario you will see the following entry in the alert log

“system is under space pressure, now=XXXXXXX”

Tuesday, September 18, 2007

Oracle Supplemental Logging

Supplemental logging enhancements are aimed at improving Streams and other data-sharing facilities (e.g. LogMiner). It includes additional information in the redo stream.

A-Database Supplemental Logging

Minimal supplemental logging can be enabled using:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Minimal supplemental logging can be disabled using:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;


Database supplemental logging can be enabled:

1-For all columns :This option specifies that when a row is updated, all the columns of that row (except for columns of type LOB, LONG, LONG RAW, and user-defined types) are placed in the redo log file.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

All columns are included with the exception of:LONG,LOB,LONG RAW,Abstract Data Types,Collections

2-For primary key columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

3-For unique columns: This option causes the Oracle database to place all columns of a row's composite unique key or bitmap index in the redo log file if any column belonging to the composite unique key or bitmap index is modified.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

4-For foreign key columns: This option causes the Oracle database to place all columns of a row's foreign key in the redo log file if any column belonging to the foreign key is modified.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;


When you enable identification key logging at the database level, minimal supplemental logging is enabled implicitly.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;


Database supplemental logging can be disabled:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

To monitor database level supplemental logging:
SELECT
supplemental_log_data_min,
supplemental_log_data_pk,
supplemental_log_data_ui,
supplemental_log_data_fk,
supplemental_log_data_all
FROM v$database;


B-Log Groups:Table supplemental logging specifies, at the table level, which columns are to be supplementally logged. You can use identification key logging or user-defined conditional and unconditional supplemental log groups to log supplemental information.


Log groups are implemented as constraints. If no name is specified for a log group, a system-generated constraint name is allocated, e.g. SYS_C005223.
Log groups can be:

1-Unconditional Supplemental Log Groups : The before-images of specified columns are logged any time a row is updated, regardless of whether the update affected any of the specified columns. This can be referred to as an ALWAYS log group

To specify an unconditional supplemental log group for primary key column(s):
ALTER TABLE t1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
To specify an unconditional supplemental log group that includes all table columns:
ALTER TABLE t1 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
To specify an unconditional supplemental log group that includes selected columns:
ALTER TABLE t1 ADD SUPPLEMENTAL LOG GROUP t1_g1 (c1,c3) ALWAYS;




2-Conditional Supplemental Log Groups - The before-images of all specified columns are logged only if at least one of the columns in the log group is updated

To specify a conditional supplemental log group for unique key column(s) and/or bitmap index column(s):
ALTER TABLE t1 ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
To specify a conditional supplemental log group that includes all foreign key columns:
ALTER TABLE t1 ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
To specify a conditional supplemental log group that includes selected columns:
ALTER TABLE t1 ADD SUPPLEMENTAL LOG GROUP t1_g1 (c1,c3);



In Oracle 10.2, minimal supplemental logging must be enabled at database level before supplemental logging can be enabled at table level




To drop a supplemental log group:
ALTER TABLE t1 DROP SUPPLEMENTAL LOG GROUP t1_g1;
To drop supplemental logging of data use:
ALTER TABLE t1 DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
ALTER TABLE t1 DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE t1 DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER TABLE t1 DROP SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

Supplemental Logging views: DBA_LOG_GROUPS, DBA_LOG_GROUP_COLUMNS
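
A quick way to see which supplemental log groups exist is a simple query against the dictionary views named above (a sketch):

```sql
SELECT owner, table_name, log_group_name, always
FROM   dba_log_groups
ORDER  BY owner, table_name;
```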

Monday, September 17, 2007

How to Deinstall and Reinstall XML Database (XDB)

A-REMOVAL STEPS

1. Shutdown and restart the database
2. Connect as sysdba and run the catnoqm.sql script
SQL> set echo on
SQL> spool xdb_removal.log
SQL> @?/rdbms/admin/catnoqm.sql
3. Ensure the following minimums:
shared_pool_size = 150 MB
java_pool_size = 150 MB
and the XDB tablespace must have 150 MB free.
4. Shutdown the database (shutdown immediate) and start it up normally.

B-INSTALL STEPS

1. Connect as sysdba and run the catqm.sql script

The catqm.sql script requires the following parameters be passed to it when
run:
A. XDB user password
B. XDB user default tablespace
C. XDB user temporary tablespace
Therefore the syntax to run catqm.sql will be:
SQL> @catqm.sql A B C
SQL> set echo on
SQL> spool xdb_install.log
SQL>@?/rdbms/admin/catqm.sql XDB XDB TEMP


The following step is for Release 9.2.x ONLY; skip to step 3 if running 10.1.x or above.

2.Reconnect to SYS again and run the following to load the XDB java library.

SQL>@?/rdbms/admin/catxdbj.sql

3. Add the following lines to the database system parameters (init.ora/spfile) if they are not already part of them.
NOTE: Replace instanceid1, instanceid2, etc. with your actual instance names.

a. Non-RAC
dispatchers="(PROTOCOL=TCP) (SERVICE=XDB)"
b. RAC
instanceid1.dispatchers="(PROTOCOL=TCP) (SERVICE=XDB)"
instanceid2.dispatchers="(PROTOCOL=TCP) (SERVICE=XDB)"
etc ...
c.If you are not using the default Listener ensure you have set LOCAL_LISTENER in the (init.ora/spfile)
as prescribed for RAC/NON-RAC instances or the end points will not register.

4. Check for any invalid XDB owned objects:
SQL> select count(*) from dba_objects where owner='XDB' and status='INVALID';

5. Check DBA_REGISTRY for XDB status:
SQL> select comp_name, status, version from DBA_REGISTRY where comp_name= 'Oracle XML Database';

6. Restart the database and the listener to enable Oracle XML DB protocol access.

How to disable use of Flash Recovery Area for Archivelogs and Backups

1-Archive to another file system location in addition to the Flash Recovery Area

SQL>create pfile='init.ora' from spfile;

add the following line to the init.ora:

log_archive_dest_n='' e.g.

log_archive_dest_1='LOCATION=D:\oracle\product\10.2.0\oradata\V102\Arch'

Restart the instance using the amended pfile and recreate the spfile:

SQL>startup pfile='init.ora';

SQL>create spfile from pfile;

2-Archive to another file system location instead of the Flash Recovery Area

Create a parameter file (as above)

Add the following line to the init.ora:

log_archive_dest_n (as above)

Remove the following parameter in the init.ora:

log_archive_dest_10

Restart the instance using the amended pfile and recreate the spfile

3-Avoid use of the Flash Recovery Area altogether (not recommended)
Create a parameter file (as above)

Add the following line to the init.ora:

log_archive_dest_n (as above) or

log_archive_dest=''

Remove the following parameter in the init.ora:

log_archive_dest_10

db_recovery_file_dest
db_recovery_file_dest_size

Restart the instance using the amended pfile and recreate the spfile.


Ref:Metalink Note:297397.1

Thursday, September 13, 2007

Tune Checkpoint

Checkpoint

A Checkpoint is a database event which synchronizes the modified data blocks in memory with the datafiles on disk.

Oracle writes the dirty buffers to disk only under certain conditions:

-A shadow process must scan more than one-quarter of the db_block_buffers parameter.
-Every three seconds.
-When a checkpoint is produced.

A checkpoint is realized when:
-A redo log switch occurs
-(LOG_CHECKPOINT_INTERVAL * operating system I/O block size) bytes have been written to the redo log file
-LOG_CHECKPOINT_TIMEOUT is reached
-An ALTER SYSTEM CHECKPOINT command is issued


A checkpoint performs the following three operations:
-Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk. It is DBWR that writes all modified database blocks back to the datafiles.
-The latest SCN is written (updated) into the datafile headers.
-The latest SCN is also written to the controlfiles.

Tuning checkpoints involves four key initialization parameters

- FAST_START_MTTR_TARGET
- LOG_CHECKPOINT_INTERVAL
- LOG_CHECKPOINT_TIMEOUT
- LOG_CHECKPOINTS_TO_ALERT



SELECT SUBSTR(NAME,1,30) nme , SUBSTR(VALUE,1,50) value
from v$parameter
where name in ('log_checkpoint_interval','log_checkpoint_timeout','fast_start_io_target','fast_start_mttr_target');





FAST_START_MTTR_TARGET: Enables you to specify the number of seconds the database takes to perform crash recovery of a single instance. Based on internal statistics, incremental checkpointing automatically adjusts the checkpoint target to meet the requirement of FAST_START_MTTR_TARGET.
You can query V$INSTANCE_RECOVERY for the estimated and target MTTR:

select ESTIMATED_MTTR, TARGET_MTTR from V$INSTANCE_RECOVERY;

From the above query we can decide the initial fast_start_mttr_target value:

alter system set log_checkpoint_interval=0 scope=both;
alter system set log_checkpoint_timeout=0 scope=both;
alter system set fast_start_io_target=0 scope=both;
alter system set fast_start_mttr_target=30 scope=both; #30 is example
alter system set log_checkpoints_to_alert = true;

Then query V$MTTR_TARGET_ADVICE in order to find the optimal fast_start_mttr_target.


Note:When you enable fast-start checkpointing, remove or disable (set to 0)
the following initialization parameters:
- LOG_CHECKPOINT_INTERVAL
- LOG_CHECKPOINT_TIMEOUT
- FAST_START_IO_TARGET


LOG_CHECKPOINT_TIMEOUT:LOG_CHECKPOINT_TIMEOUT specifies the amount of time, in seconds, that has passed since the incremental checkpoint at the position where the last write to the redo log occurred.

LOG_CHECKPOINT_INTERVAL: specifies the maximum number of redo blocks by which the incremental checkpoint target should lag the current log tail.
If FAST_START_MTTR_TARGET is specified, LOG_CHECKPOINT_INTERVAL should not be set, or should be set to 0.
On most Unix systems the operating system block size is 512 bytes. This means that setting LOG_CHECKPOINT_INTERVAL to a value of 10,000 would mean the incremental checkpoint target should not lag the current log tail by more than 5,120,000 bytes (5 MB). If the size of your redo log is 20 MB, you are taking 4 checkpoints for each log.
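
The arithmetic above can be checked directly; a small sketch using the example numbers from the text:

```python
# Relate LOG_CHECKPOINT_INTERVAL to checkpoint frequency per redo log.
# The 512-byte OS block and 20 MB redo log are the example values from
# the text, not measurements from a live system.

OS_BLOCK_SIZE = 512                      # bytes per OS I/O block
log_checkpoint_interval = 10000          # in OS blocks

max_lag_bytes = log_checkpoint_interval * OS_BLOCK_SIZE
print(max_lag_bytes)                     # 5120000 (5 MB)

redo_log_bytes = 20 * 1024 * 1024        # 20 MB redo log
print(redo_log_bytes // max_lag_bytes)   # 4 checkpoints per log
```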

LOG_CHECKPOINTS_TO_ALERT
LOG_CHECKPOINTS_TO_ALERT lets you log your checkpoints to the alert file. Doing so is useful for determining whether checkpoints are occurring at the desired frequency.

REDO LOG NUMBER AND SIZE
A checkpoint occurs at every log switch. If a previous checkpoint is already in progress, the checkpoint forced by the log switch will override the current checkpoint,
so look in alert.log or query v$log_history for the occurrence of log switches.
If redo logs switch every 3 minutes, you will see performance degradation. This indicates the redo logs are not sized large enough to efficiently handle the transaction load.

CHECK ERROR MESSAGES IN ALERT.LOG
Check “Cannot allocate new log” and “Checkpoint not complete” messages in alert.log .
This situation may be encountered if DBWR writes too slowly, or if a log switch happens before the log is completely full,
or if log file sizes are too small.
When the database waits on checkpoints,redo generation is stopped until the log switch is done.
tune the log number and size

SYSTEM WAITS
We can query the system waits from v_$system_event, from Statspack, or from 10g AWR reports if log switch waits have occurred:

select
substr(e.event, 1, 40) event,
e.time_waited,
e.time_waited / decode(
e.event,
'latch free', e.total_waits,
decode(
e.total_waits - e.total_timeouts,
0, 1,
e.total_waits - e.total_timeouts
)
) average_wait
from
sys.v_$system_event e
where event like 'log file switch%';

Monday, September 10, 2007

How to convert a tablespace to ASSM (Automatic Segment Space Management)

The segment space management that you specify at tablespace creation time applies to all segments subsequently created in the tablespace.
You cannot change the segment space management mode of a tablespace.
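
Since the mode cannot be changed in place, the usual workaround is to create a new ASSM tablespace and move the segments into it. The sketch below uses hypothetical tablespace, datafile, table, and index names:

```sql
-- Create a new locally managed tablespace with ASSM
CREATE TABLESPACE users_assm
  DATAFILE '/data01/users_assm01.dbf' SIZE 512M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Move each segment into it; moving a table invalidates its indexes,
-- so rebuild them afterwards
ALTER TABLE scott.emp MOVE TABLESPACE users_assm;
ALTER INDEX scott.pk_emp REBUILD TABLESPACE users_assm;
```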

Friday, September 7, 2007

alert for flash_recovery_area

If you have take the following error in alert.log then
You have following choices to free up space from flash recovery area:1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard, then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating system command was used to delete files, then use RMAN CROSSCHECK and DELETE EXPIRED commands.
This means you do not have enough space in db_recovery_file_dest_sizecheck V$RECOVERY_FILE_DEST
KTP5> SELECT * FROM V$RECOVERY_FILE_DEST;

NAME                                             SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
------------------------------------------------ ----------- ---------- ----------------- ---------------
/kondor2/oracle/oraInventory/flash_recovery_area   2,147E+09  2,147E+09                 0              85
We can also look at dba_outstanding_alerts:

KTP5> SELECT object_type, message_type, message_level, reason, suggested_action
      FROM dba_outstanding_alerts;

OBJECT_TYPE      : RECOVERY AREA
MESSAGE_TYPE     : Warning
MESSAGE_LEVEL    : 1
REASON           : db_recovery_file_dest_size of 2147483648 bytes is 97.37% used and has 56553472 remaining bytes available
SUGGESTED_ACTION : You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard, then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating system command was used to delete files, then use RMAN CROSSCHECK and DELETE EXPIRED commands.

So we can increase DB_RECOVERY_FILE_DEST_SIZE:
KTP5>alter system set DB_RECOVERY_FILE_DEST_SIZE=3g scope=both;

You may also need to consider changing your backup retention policy in RMAN:
RMAN> connect target
connected to target database: KTP5 (DBID=4032373710)
RMAN> configure retention policy to recovery window of 7 days;
RMAN> delete expired backup;

Thursday, September 6, 2007

How to Enable and Disable Automatic Statistics Collection in 10G

The Automatic Statistics collection feature is enabled by default in 10G.
You can verify this by checking the following :

SQL> SELECT STATE FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';
STATE
---------------
SCHEDULED

To disable automatic statistics collection in 10G, execute the following procedure:

SQL> exec DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');

To enable statistics collection again, follow these steps:

SQL> SELECT STATE FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';

STATE
---------------
DISABLED

SQL> Exec DBMS_SCHEDULER.enable('GATHER_STATS_JOB');

ORACLE TNS DEFAULT PORTS

1521: Default port for the TNS Listener. This port number may change in the future, as Oracle has officially registered ports 2483 and 2484 (SSL).
1522 – 1540: Commonly used ports for the TNS Listener
1575: Default port for the Oracle Names Server
1630: Default port for the Oracle Connection Manager – client connections
1830: Default port for the Oracle Connection Manager – administrative connections
2481: Default port for the Oracle JServer/JVM listener
2482: Default port for the Oracle JServer/JVM listener using SSL
2483: New officially registered port for the TNS Listener
2484: New officially registered port for the TNS Listener using SSL

Wednesday, September 5, 2007

Protect the Listener by password

We can set password by two ways

a) Cleartext Password:
Add a PASSWORDS_< your_listener_name > entry to your existing listener.ora file and restart the listener:
PASSWORDS_listener1 = (p1,p2)

Example of a listener stop operation

LSNRCTL > set current_listener listener1
LSNRCTL > set password p1
LSNRCTL > stop

b)Encrypted Password
- Comment out the PASSWORDS_< listener_name > line if a cleartext password is set.
- Restart listener.
- Run lsnrctl
LSNRCTL > set current_listener < listener_name >
LSNRCTL > set save_config_on_stop on
LSNRCTL > change_password
Old password: < enter >
New password: < enter_your_password >
Reenter new password: < reenter_your_password >
Example
LSNRCTL > change_password
Old password: < enter >
New password: e1
Reenter new password: e1
Just hit the < enter > key for the old password, since no previous password is set. The passwords you enter will not be echoed.

- Stop the listener

LSNRCTL > set password
Password: < enter_your_password_here >
LSNRCTL > stop

Example of a listener stop operation
LSNRCTL > set password
Password: e1
LSNRCTL > stop

- Check your listener.ora file to verify that PASSWORDS_< listener1 > exists

How to see Oracle hidden parameters

We can see the Oracle hidden parameters with the following SQL.
Don't change them without the advice of Oracle Support.

SELECT a.ksppinm "Parameter",b.ksppstvl "Session Value",c.ksppstvl "Instance Value"
FROM x$ksppi a,x$ksppcv b,x$ksppsv c
WHERE
   a.indx = b.indx
  and a.indx = c.indx
  and a.ksppinm LIKE '/_%' escape '/'
;