Wednesday, September 11, 2013

Quick configuration of TSM Data Protection for Oracle on AIX


  1. Prerequisites
    1. TDP for Oracle must already be installed.
    2. Policy domain, backup copy group, storage pools, etc. must be defined on the TSM server side.
    3. The TDP for Oracle node must be registered on the server side beforehand (e.g. node name = dbgenomt, domain name = DM_TEST_ORACLE):

      REG NODE dbgenomt  password_1234  dom=DM_TEST_ORACLE  backdel=yes
     
  2. Link the Oracle target database instance with Data Protection for Oracle by performing the following steps (as the oracle user):
    1. Change the LIBPATH environment variable to include $ORACLE_HOME/lib before /usr/lib. If LD_LIBRARY_PATH is set, ensure that it also has $ORACLE_HOME/lib before /usr/lib (a sketch of these settings appears after step 2).
    2. Ensure the SBT_LIBRARY parameter is not set
    3. Shut down all Oracle instances that use $ORACLE_HOME
    4. Link the TDP library file directly into $ORACLE_HOME/lib:
      cd /usr/tivoli/tsm/client/oracle/bin64/
      ln -s /usr/tivoli/tsm/client/oracle/bin64/libobk64.a $ORACLE_HOME/lib/libobk.a
    5. Start the Oracle instances
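    As noted in step 2 above, a minimal sketch of the environment settings, assuming they live in the oracle user's profile (the ORACLE_HOME value below is only an example):

      # profile entries for the oracle user (ORACLE_HOME path is illustrative)
      export ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1
      # $ORACLE_HOME/lib must come before /usr/lib so Oracle picks up the linked libobk
      export LIBPATH=$ORACLE_HOME/lib:/usr/lib:$LIBPATH
      export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:$LD_LIBRARY_PATH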
  3. Configure the tdpo.opt file under /usr/tivoli/tsm/client/oracle/bin64
    1. Copy tdpo.opt.smp64 to tdpo.opt:
       
      cd /usr/tivoli/tsm/client/oracle/bin64
      cp  tdpo.opt.smp64   tdpo.opt
       
    2. Uncomment (remove the leading * character from) the following lines, setting dsmi_orc_config to the dsm.opt path and dsmi_log to a log directory of your choice:

      dsmi_orc_config  /usr/tivoli/tsm/client/oracle/bin64/dsm.opt
      dsmi_log              
  4. Create a dsm.opt file in the same directory that includes the node/server stanza name (e.g. dbgenomt):
    cd /usr/tivoli/tsm/client/oracle/bin64
    echo   "SErvername dbgenomt"  >dsm.opt 
  5. dsm.sys file 
    1. Create a symbolic link so that there is only one copy of the dsm.sys file:
      ln -s /usr/tivoli/tsm/client/ba/bin64/dsm.sys /usr/tivoli/tsm/client/api/bin64/dsm.sys
    2. Edit the dsm.sys file to include another server stanza with the following options (e.g. node name dbgenomt), where x.x.x.x is the IP address of the Tivoli Storage Manager server:


      SErvername dbgenomt
          nodename dbgenomt
          QUERYSCHEDPERIOD 1
          TCPNODELAY NO
          RETRYPERIOD 10
          ERRORLOGNAME "/tmp/dbtmp/dbgenomt_dsmerror.log"
          SCHEDLOGNAME "/tmp/dbtmp/dbgenomt_dsmsched.log"
          SCHEDMODE POLLING
          SCHEDLOGRETENTION 4 D
          ERRORLOGRETENTION 4 D
          PASSWORDACCESS GENERATE
          passworddir /genomtest/genomt
          COMMmethod TCPIP
          tcpserveraddress x.x.x.x
          tcpport 1500
          TXNBYTELIMIT 2097152
          managedservices webclient
  6. Make sure the Oracle user has the following permissions (a quick check is sketched right after this step)
    1. Read (r) permission to the /usr/tivoli/tsm/client/oracle/bin64 and /usr/tivoli/tsm/client/api/bin64 directories
    2. Read (r) permission to the tdpo.opt, dsm.opt, and dsm.sys files located in the /usr/tivoli/tsm/client/oracle/bin64 and /usr/tivoli/tsm/client/api/bin64 directories
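    A minimal check, run as root (the chmod line is only an example of how to grant read access if a file turns out to be too restrictive):

    ls -ld /usr/tivoli/tsm/client/oracle/bin64 /usr/tivoli/tsm/client/api/bin64
    ls -l  /usr/tivoli/tsm/client/oracle/bin64/tdpo.opt \
           /usr/tivoli/tsm/client/oracle/bin64/dsm.opt \
           /usr/tivoli/tsm/client/api/bin64/dsm.sys
    # grant read to others if a file is not readable by the oracle user (example)
    chmod o+r /usr/tivoli/tsm/client/api/bin64/dsm.sys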
  7.  Change to the /usr/tivoli/tsm/client/oracle/bin64 directory and run the tdpoconf password command (as Oracle user) to generate the password file

    cd /usr/tivoli/tsm/client/oracle/bin64
    tdpoconf  password 
  8. Run the tdpoconf showenvironment command to confirm proper configuration
    tdpoconf showenvironment
  9. You can now take a backup. An example RMAN script is shown below, and a sketch of running it from the shell follows the script.
    run
    {
       allocate channel t1 type 'sbt_tape' parms
                'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
          backup
          filesperset 5
          format 'df_%t_%s_%p'
          (database);
    }
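    A minimal sketch of running the script above from the shell (the file names and the ORACLE_SID value are examples):

    # save the run block above as e.g. /home/oracle/tdpo_full_backup.rcv
    export ORACLE_SID=genomt            # example instance name
    rman target / cmdfile=/home/oracle/tdpo_full_backup.rcv log=/tmp/tdpo_full_backup.log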

Friday, August 2, 2013

11g Release 2 RMAN Backup Compression

Oracle Compression
With Oracle 11g, the Oracle Advanced Compression option provides comprehensive data compression capabilities to compress all types of data, backups, and network traffic in an application-transparent manner. The Oracle Advanced Compression option contains the following features:
 
  • Fast RMAN Compression
  • Data Guard Network Compression
  • Data Pump Compression
  • OLTP Table Compression
  • SecureFile Compression and Deduplication
  • Flashback Data Archive (Total Recall)
RMAN Compression
For RMAN backups the compression levels are BASIC, LOW, MEDIUM and HIGH, and each one is a trade-off between backup throughput and the degree of compression achieved. Unfortunately, use of LOW, MEDIUM and HIGH requires the Advanced Compression license.

To configure RMAN to use compression at all, you can use:

RMAN>CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

followed by one of the following:

RMAN>CONFIGURE COMPRESSION ALGORITHM 'BASIC';
RMAN>CONFIGURE COMPRESSION ALGORITHM 'LOW';
RMAN>CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN>CONFIGURE COMPRESSION ALGORITHM 'HIGH';
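For reference, a minimal sketch of how one such test run might look from the shell. The algorithm, tag and timing method below are illustrative and are not the exact commands behind the measurements that follow:

# time a compressed full backup with the chosen algorithm, then list the pieces
timex rman target / <<EOF
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
BACKUP DATABASE TAG 'compr_medium_test';
LIST BACKUP TAG 'compr_medium_test';
EOF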

Here are the results 

Method    Time (sec)    Backup size (GB)
BASIC        266             1.258
LOW           66             1.637
MEDIUM       228             1.437
HIGH         668             1.166
NONE          74             3.241
 
 
Conclusion
Note that without compression the backup time is the best, but the backup size is nearly the size of the database.

There is also another factor that I did not take into account: the CPU load of these methods. We know that a higher compression level generates more CPU load on the server.
Because many factors (CPU resources, disk speed, etc.) influence backup speed and backup size, you have to test backup and backup compression in your own environment, and you must consider your CPU and disk performance as well as the cost of the Advanced Compression license.

Tuesday, March 19, 2013

Configure NTP (Network Time Protocol) clients on AIX

How to configure NTP on a client against an NTP server. Our NTP servers are time_server01 and time_server02.

  1. Verify that you have a server suitable for synchronization; the offset must be smaller than 1000 ms:
    #ntpdate -d time_server01
  2. Check whether xntpd is already running (i.e. already configured):
    #lssrc -ls xntpd
  3. Specify your NTP servers in /etc/ntp.conf and comment out the "broadcastclient" line (if applicable):
    #vi /etc/ntp.conf

    server    time_server01 prefer
    server    time_server02
    driftfile /etc/ntp.drift
    tracefile /etc/ntp.trace
    #broadcastclient
  4. Be sure that you can reach the time servers. If you define a server by name (not by IP), make sure the name resolves; you can add it to /etc/hosts:

    #vi /etc/hosts
    10.1.1.10       time_server01
    10.1.1.11       time_server02

    #ping  time_server01
    #nslookup time_server01
  5. Restart the xntpd daemon (stop, then start):
    #stopsrc -s xntpd
    #startsrc -s xntpd
  6. Check that it is running; "Sys peer" should display the IP address or name of your NTP server. This process may take up to 15 minutes (a small waiting sketch follows this list):

    #lssrc -ls xntpd
    #lssrc -ls xntpd|grep "Sys peer"
    #ntpq -p
  7. Uncomment xntpd from /etc/rc.tcpip so it will start on a reboot.

    # vi /etc/rc.tcpip
    Uncomment the following line:
    start /usr/sbin/xntpd "$src_running"
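  As mentioned in step 6, a small sketch that polls until a Sys peer is selected. The server name and the 15-minute limit come from the steps above; if your peer is shown as an IP address, grep for that instead:

    # poll every 60 seconds, for up to 15 minutes, until a time_server shows up as Sys peer
    i=0
    until lssrc -ls xntpd | grep "Sys peer" | grep -q "time_server"; do
        i=$(expr $i + 1)
        if [ $i -ge 15 ]; then
            echo "Still no sys peer after 15 minutes"
            break
        fi
        sleep 60
    done
    lssrc -ls xntpd | grep "Sys peer"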

Friday, February 22, 2013

Disk performance test with IBM ndisk64 tool

ndisk64 is a free IBM tool to measure the I/O performance of your disks.
You can download it and get more information from http://www.ibm.com/developerworks/wikis/display/WikiPtype/nstress
  • I have created 7 different 10 GB files, one per mount point, in order to spread the I/O:
    dd if=/dev/zero of=/datac1/bigfile1 bs=1m count=10240
    dd if=/dev/zero of=/datac2/bigfile2 bs=1m count=10240
    dd if=/dev/zero of=/datac3/bigfile3 bs=1m count=10240
    dd if=/dev/zero of=/datac4/bigfile4 bs=1m count=10240
    dd if=/dev/zero of=/datac5/bigfile5 bs=1m count=10240
    dd if=/dev/zero of=/datac6/bigfile6 bs=1m count=10240
    dd if=/dev/zero of=/datac7/bigfile7 bs=1m count=10240
  • Put the names of the files into an input file:

    echo "/datac1/bigfile1" >filelist
    echo "/datac2/bigfile2">>filelist
    echo "/datac3/bigfile3">>filelist
    echo "/datac4/bigfile4">>filelist
    echo "/datac5/bigfile5">>filelist
    echo "/datac6/bigfile6">>filelist
    echo "/datac7/bigfile7">>filelist
  • Identify some system values and also define some assumptions about your system. My assumptions and system values are:

    Block size = 8k
    Read/write ratio = 70:30 (read mostly, OLTP)
    Timed duration of the test in seconds = 120
    Number of processes used to generate I/O = 3
  • Start the test for a given number of processes:
       /home/sturgut/ndisk64 -F filelist  -S -r70 -b 8k -t 120  -M3
    Command: /home/sturgut/ndisk64 -F filelist -S -r70 -b 8k -t 120 -M3
            Synchronous Disk test (regular read/write)
            No. of processes = 3
            I/O type         = Sequential
            Block size       = 8192
            Read-WriteRatio: 70:30 = read mostly
            Sync type: none  = just close the file
            Number of files  = 7
            File size        = 33554432 bytes = 32768 KB = 32 MB
            Run time         = 120 seconds
            Snooze %         = 0 percent
    ----> Running test with block Size=8192 (8KB) ...
    Proc - <-----disk io----=""> | <-----throughput------> RunTime
     Num -     TOTAL   IO/sec |    MB/sec       KB/sec  Seconds
       1 -   2523632  21030.3 |    164.30    168242.34 120.00
       2 -   2442193  20351.6 |    159.00    162813.09 120.00
       3 -   2097104  17475.9 |    136.53    139807.03 120.00
    TOTALS   7062929  58857.8 |    459.83 Seq procs=  3 read= 70% bs=  8KB
  • In another session, monitor I/O service times using # iostat -RDTl
  • Increase the number of processes and monitor the service times. Create an IOPS vs. I/O service time chart (a sketch follows this list).
  • Increase the number of threads to get a peak IOPS
  • Be sure your queue_depth is >= number of threads
  • More than queue_depth x 2 threads won't increase throughput
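Following the chart idea above, a sketch that steps through increasing process counts while capturing iostat output alongside each run (the process counts, durations and file names are illustrative):

    # run ndisk64 with an increasing number of processes and capture iostat for each run
    for procs in 3 6 12 24 48
    do
        iostat -RDTl 10 12 > /tmp/iostat_M${procs}.out &
        IOSTAT_PID=$!
        /home/sturgut/ndisk64 -F filelist -S -r70 -b 8k -t 120 -M${procs} > /tmp/ndisk_M${procs}.out
        kill $IOSTAT_PID 2>/dev/null
    done
    # the TOTALS line of each ndisk64 run gives the IOPS value for the chart
    grep TOTALS /tmp/ndisk_M*.out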

Tuesday, July 17, 2012

Disable or Enable all Oracle jobs

There are two kinds of jobs, dbms_job jobs and dbms_scheduler jobs; a quick check of the current state is sketched after the list.
  1. Disable
    • dbms_job
      alter system set  job_queue_processes=0;
    • dbms_scheduler
      exec dbms_scheduler.set_scheduler_attribute('scheduler_disabled','true');
  2. Enable
    • dbms_job
      alter system set job_queue_processes=10;
    • dbms_scheduler
      exec dbms_scheduler.set_scheduler_attribute('scheduler_disabled','false');
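A quick sketch for checking the current state after either change, run as a DBA user (it assumes access to the DBA_SCHEDULER_GLOBAL_ATTRIBUTE view):

sqlplus -s / as sysdba <<EOF
show parameter job_queue_processes
-- the SCHEDULER_DISABLED row is normally present only while the scheduler is disabled
select value from dba_scheduler_global_attribute
where  attribute_name = 'SCHEDULER_DISABLED';
EOF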

Tuesday, January 10, 2012

ORA-08104: this index object 136450 is being online built or rebuilt

Problem: While an online index rebuild was running, your session was killed, so a second attempt gives the following error:

alter index MARDATA.COR_VERGI_NO_HAR_X2  rebuild online
tablespace MARBASINDEX_1M storage(initial 1m next 1m) compress 2;

ERROR at line 1:

ORA-08104: this index object 136450 is being online built or rebuilt
Cause:
You have not installed the patch for Bug 3805539, or you are not running on a release that includes this fix, so SMON is very slow in cleaning up.

Solution: Use dbms_repair.online_index_clean to clean up:

declare
  result boolean;
begin
  result := dbms_repair.online_index_clean(dbms_repair.all_index_id, dbms_repair.lock_wait);
end;
/

After that you can rebuild the index. A variant that targets only the affected index is sketched below.
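If you prefer to clean up only the index reported in the error (object 136450 in this example) instead of all indexes, a sketch like the following should work; the object ID is taken from the error message above:

sqlplus -s / as sysdba <<EOF
declare
  result boolean;
begin
  -- clean up only the affected index object, waiting for the lock if necessary
  result := dbms_repair.online_index_clean(136450, dbms_repair.lock_wait);
end;
/
EOF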

Thursday, November 17, 2011

How to flush a single SQL from shared pool

Use DBMS_SHARED_POOL.PURGE:
exec DBMS_SHARED_POOL.purge('ADDRESS,HASH_VALUE','C',1);
Example :
 select address,hash_value, executions, loads, version_count,
 invalidations, parse_calls
 from v$sqlarea
 where sql_text like 'UPDATE OTPL_BASVURU_KULLANICI SET ORDER_ID%';

ADDRESS          HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS
---------------- ---------- ---------- ---------- ------------- ------------- -----------
070000012EC6FB98 3630826165          1          1             1             0           1
alter session set events '5614566 trace name context forever';    --for  10.2.0.4 or 10.2.0.5
 EXEC SYS.DBMS_SHARED_POOL.purge('070000012EC6FB98,3630826165','C',1);


Or, to generate the purge command for another SQL from its hash value:

alter session set events '5614566 trace name context forever';


select  ' EXEC SYS.DBMS_SHARED_POOL.purge('''||address||','||to_char(hash_value)||''',''C'',1);'
from  v$sqlarea where hash_value=&hash_value;


Note1: Be careful to pass a single string 'ADDRESS,HASH_VALUE', not two separate strings 'ADDRESS','HASH_VALUE'.

Note2: The create statement for this package can be found in the $ORACLE_HOME/rdbms/admin/dbmspool.sql script.

Note3: DBMS_SHARED_POOL.PURGE is available from 11.1. In 10.2.0.4 it is available through the fix for Bug 5614566; however, the fix is event protected, so you need to set event 5614566 to make use of PURGE. Unless the event is set, dbms_shared_pool.purge has no effect.


event="5614566 trace name context forever"   #in init.ora for 10.2.0.4 or 10.2.0.5
or 
alter session set events '5614566 trace name context forever';
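Putting the pieces together, a sketch of a single session that sets the event (needed on 10.2.0.4/10.2.0.5), looks up the cursor by its text and purges it. The SQL text filter is the example from above and is assumed to match exactly one cursor:

sqlplus -s / as sysdba <<EOF
alter session set events '5614566 trace name context forever';
declare
  v_name varchar2(64);
begin
  -- build the single 'ADDRESS,HASH_VALUE' string expected by purge
  select address || ',' || hash_value into v_name
  from   v\$sqlarea
  where  sql_text like 'UPDATE OTPL_BASVURU_KULLANICI SET ORDER_ID%';
  dbms_shared_pool.purge(v_name, 'C', 1);
end;
/
EOF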