Friday, September 7, 2007

How to Resize Redo Logs

Here is the redo log information of a newly created database:

SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
1 1 17 52428800 1 NO INACTIVE 996238 06/09/2007 22:01:59
2 1 18 52428800 1 NO INACTIVE 1006432 06/09/2007 22:13:32
3 1 19 52428800 1 NO CURRENT 1036439 07/09/2007 09:56:44

SQL> select * from v$logfile;
GROUP# STATUS TYPE MEMBER
---------- ------- ------- -------------------------------------------------
3 ONLINE /database/data/redo03.log
2 ONLINE /database/data/redo02.log
1 ONLINE /database/data/redo01.log

Here is how I changed this to five 200M redo log groups:

SQL> alter database add logfile group 4 ('/database/data/redo04.log') size 200M;
SQL> alter database add logfile group 5 ('/database/data/redo05.log') size 200M;

While running the drop commands below, if you hit an error like this:

ORA-01623: log 3 is current log for instance RPTDB (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: '/database/data/redo03.log'

you should run "alter system switch logfile;" until the current log is group 4 or 5, then execute "alter system checkpoint;".
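For example, the switch-and-check cycle looks like this (repeat the switch until v$log shows group 4 or 5 as CURRENT):

```sql
SQL> alter system switch logfile;
SQL> select group#, status from v$log;
SQL> alter system checkpoint;
```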

SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;

Then move (or delete, if you are sure) the old redo log files at the OS level:

mv /database/data/redo01.log /database/data/redo01_old.log
mv /database/data/redo02.log /database/data/redo02_old.log
mv /database/data/redo03.log /database/data/redo03_old.log

Finally, recreate groups 1-3 at the new size:

SQL> alter database add logfile group 1 ('/database/data/redo01.log') size 200M;
SQL> alter database add logfile group 2 ('/database/data/redo02.log') size 200M;
SQL> alter database add logfile group 3 ('/database/data/redo03.log') size 200M;
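Afterwards you can confirm that all five groups are now 200M:

```sql
SQL> select group#, bytes/1024/1024 as size_mb, members, status from v$log;
```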

Thursday, September 6, 2007

How to determine tables with highest data change rate at Oracle 10G

The DBA_HIST_SEG_STAT view can be used to see physical read/write statistics on segments; the statistics are collected by the regular AWR snapshots. The following SQL scripts return tables whose block change rate is higher than 20,000.

select SNAP_ID,OWNER ,OBJECT_NAME, OBJECT_TYPE, DB_BLOCK_CHANGES_DELTA
from dba_objects a,
(select SNAP_ID,obj#,DB_BLOCK_CHANGES_DELTA from DBA_HIST_SEG_STAT where DB_BLOCK_CHANGES_DELTA > 20000 order by snap_id desc, DB_BLOCK_CHANGES_DELTA desc) b
where a.object_id=b.obj# and object_type='TABLE'
order by snap_id desc, DB_BLOCK_CHANGES_DELTA desc

select c.SNAP_ID,c.END_INTERVAL_TIME SNAP_TIME,a.OWNER ,a.OBJECT_NAME, a.OBJECT_TYPE, b.DB_BLOCK_CHANGES_DELTA
from dba_objects a,
(select SNAP_ID,obj#,DB_BLOCK_CHANGES_DELTA from DBA_HIST_SEG_STAT where DB_BLOCK_CHANGES_DELTA > 20000 order by snap_id desc, DB_BLOCK_CHANGES_DELTA desc) b,
DBA_HIST_SNAPSHOT c
where a.object_id=b.obj# and object_type='TABLE' and b.SNAP_ID=c.SNAP_ID
order by snap_id desc, DB_BLOCK_CHANGES_DELTA desc
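As a variant of the queries above, the deltas can also be summed across all retained snapshots to rank tables by total change volume (same 20,000 threshold; adjust to taste):

```sql
select a.owner, a.object_name, sum(b.db_block_changes_delta) total_changes
from dba_objects a, dba_hist_seg_stat b
where a.object_id = b.obj#
and a.object_type = 'TABLE'
group by a.owner, a.object_name
having sum(b.db_block_changes_delta) > 20000
order by total_changes desc;
```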

Saturday, August 25, 2007

ANALYZE & DBMS_STATS

ANALYZE:
analyze table owner.table_name compute statistics for table for all indexes for all indexed columns size 254;

analyze table owner.table_name partition(partition_name) compute statistics for table for all LOCAL indexes for all indexed columns size 254;

DBMS_STATS:
exec DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'owner', TABNAME => 'table_name', PARTNAME => null, ESTIMATE_PERCENT => 100, BLOCK_SAMPLE => null, METHOD_OPT => 'FOR ALL INDEXED COLUMNS size 254', DEGREE => null, GRANULARITY => 'ALL', CASCADE => TRUE, STATTAB => null, STATID => null, STATOWN => null);

ownname Schema of table to analyze.
tabname Name of table.
partname Name of partition.
estimate_percent Percentage of rows to estimate (NULL means compute). The valid range is [0.000001,100]. Use the constant DBMS_STATS.AUTO_SAMPLE_SIZE to have Oracle determine the best sample size for good statistics.
block_sample Whether or not to use random block sampling instead of random row sampling. Random block sampling is more efficient, but if the data is not randomly distributed on disk, then the sample values may be somewhat correlated. Only pertinent when estimating statistics.
method_opt Accepts:

  • FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
  • FOR COLUMNS [size_clause] column|attribute [size_clause] [,column|attribute [size_clause]...], where size_clause is defined as: size_clause := SIZE {integer | REPEAT | AUTO | SKEWONLY}
  • integer—Number of histogram buckets. Must be in the range [1,254].
  • REPEAT—Collects histograms only on the columns that already have histograms.
  • AUTO—Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.
  • SKEWONLY—Oracle determines the columns to collect histograms based on the data distribution of the columns.
degree Degree of parallelism. NULL means use the table default value specified by the DEGREE clause in the CREATE TABLE or ALTER TABLE statement. Use the constant DBMS_STATS.DEFAULT_DEGREE to specify the default value based on the initialization parameters.
granularity Granularity of statistics to collect (only pertinent if the table is partitioned).

  • DEFAULT: Gather global- and partition-level statistics.
  • SUBPARTITION: Gather subpartition-level statistics.
  • PARTITION: Gather partition-level statistics.
  • GLOBAL: Gather global statistics.
  • ALL: Gather all (subpartition, partition, and global) statistics.
cascade Gather statistics on the indexes for this table. Index statistics gathering is not parallelized. Using this option is equivalent to running the GATHER_INDEX_STATS procedure on each of the table’s indexes.
stattab User stat table identifier describing where to save the current statistics.
statid Identifier (optional) to associate with these statistics within stattab
statown Schema containing stattab (if different than ownname).
no_invalidate Dependent cursors are not invalidated if this parameter is set to TRUE. When the 'cascade' argument is specified, this parameter is not relevant with certain types of indexes.
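As a simpler alternative to the call above, Oracle can be left to pick both the sample size and the histogram columns. This is a sketch; the owner and table names are placeholders:

```sql
begin
  dbms_stats.gather_table_stats(
    ownname          => 'owner',
    tabname          => 'table_name',
    estimate_percent => dbms_stats.auto_sample_size,  -- let Oracle choose the sample size
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',  -- let Oracle choose histogram columns
    cascade          => true);                        -- gather index statistics too
end;
/
```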

PROCEDURE TO RECREATE PARTITIONED TABLE STATS

exec DBMS_SESSION.SET_NLS('NLS_SORT','BINARY');    -- necessary for an AIX bug

-- Backup statistics (my backup table is BAKIM.STATTAB)
truncate table bakim.stattab;
exec DBMS_STATS.EXPORT_TABLE_STATS({owner},{table_name},null,'STATTAB','AAA',TRUE,'BAKIM');

declare

cursor cr_x is
select TABLE_owner,table_name,partition_name from dba_tab_partitions where
table_name in ('{table_name}') --and last_analyzed <>trunc(sysdate)
;
begin
for l in cr_x loop
DBMS_STATS.EXPORT_TABLE_STATS(l.TABLE_owner,l.table_name,l.partition_name,'STATTAB','AAA',TRUE,'BAKIM');
 end loop;
end;
/

-- Delete general table statistics; we want the partition statistics to be escalated to table-level statistics
exec DBMS_STATS.DELETE_TABLE_STATS ({owner},{table_name},null,null,null,TRUE,TRUE,TRUE,null);

-- Gather partition statistics
declare

cursor cr_x is
select TABLE_owner,table_name,partition_name from dba_tab_partitions where
table_name in ('{table_name}') --and last_analyzed <>trunc(sysdate)
;
Begin
for l in cr_x loop
DBMS_STATS.GATHER_TABLE_STATS ( l.TABLE_owner,l.table_name,l.partition_name,null,null,'FOR ALL INDEXED COLUMNS SIZE 254', NULL, 'PARTITION',TRUE, null, null, null);
end loop;
end;
/
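To verify that the partition statistics were gathered and escalated to the table level, something like the following can be checked ({owner} and {table_name} are placeholders as above):

```sql
select table_name, num_rows, last_analyzed, global_stats
from dba_tables
where owner = '{owner}' and table_name = '{table_name}';

select partition_name, num_rows, last_analyzed
from dba_tab_partitions
where table_owner = '{owner}' and table_name = '{table_name}';
```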

Oracle 10G enterprise manager (emca)

start stop:
  • emctl start dbconsole
  • emctl stop dbconsole
To create the repository if it is not installed yet:
  • emca -config dbcontrol db -repos create -host {hostname} -SID {Oracle SID} -PORT 1521
To uninstall a previously created repository (Metalink Doc ID 278100.1):

1- Log on to SQL*Plus as user SYS or SYSTEM
2- EXEC sysman.emd_maintenance.remove_em_dbms_jobs;
3- EXEC sysman.setEMUserContext('',5);
4- REVOKE dba FROM sysman;
5-
DECLARE
CURSOR c1 IS
SELECT owner, synonym_name name
FROM dba_synonyms
WHERE table_owner = 'SYSMAN';
BEGIN
FOR r1 IN c1 LOOP
IF r1.owner = 'PUBLIC' THEN
EXECUTE IMMEDIATE 'DROP PUBLIC SYNONYM ' || r1.name;
ELSE EXECUTE IMMEDIATE 'DROP SYNONYM ' || r1.owner || '.' || r1.name;
END IF;
END LOOP;
END;
/
6- DROP USER mgmt_view CASCADE;
7- DROP ROLE mgmt_user;
8- DROP USER sysman CASCADE;
9- Then log in to a shell and execute the following:
emca -deconfig dbcontrol db -repos drop

Shell commands to create and drop em repository

$ORACLE_HOME/sysman/admin/emdrep/bin/RepManager -connect (DESCRIPTION= (ADDRESS_LIST= (ADDRESS= (PROTOCOL=TCP) (HOST={hostname}) (PORT=1521))) (CONNECT_DATA=(SERVICE_NAME={ORACLE SID}))) -repos_user SYSMAN -action drop -verbose -output_file $ORACLE_HOME/cfgtoollogs/emca/{ORACLE_SID}/emca_repos_drop_2006-01-18_03-02-57-PM.log
$ORACLE_HOME/sysman/admin/emdrep/bin/RepManager -connect (DESCRIPTION= (ADDRESS_LIST= (ADDRESS= (PROTOCOL=TCP) (HOST={hostname}) (PORT=1521))) (CONNECT_DATA=(SERVICE_NAME={ORACLE SID}))) -repos_user SYSMAN -action create -verbose -output_file $ORACLE_HOME/cfgtoollogs/emca/{ORACLE_SID}/emca_repos_drop_2006-01-18_03-02-57-PM.log

Monday, August 20, 2007

Batch scripts to set up asynchronous standby log shipping for Oracle on Windows

This setup implements manual compressed log shipping for Oracle using Windows batch scripts.

Prod site:
* Create following directories in log_archive_dest: OLD, TMP
* This assumes archived logs are created with the .ARC extension; otherwise the scripts should be modified accordingly
* Reserve a domain user account and grant it access to both the prod and standby servers. For simplicity, I made it a local administrator on both servers, but you may try to restrict its access. If no domain is available, create a local user account with the same user name and password on both the prod and standby servers.
* Download GNU awk and the arj32 utility and place them somewhere on %PATH%
* Share log_archive_dest folder of the standby and schedule script on prod with the user account you reserved
* You should modify {...} parameters for your environment

logmove.bat
set ORACLE_SID={ORACLE_SID}
set LOG_DEST={log_archive_dest}
set SCRIPT_DEST={where to place this script}
set LOGFILE=%SCRIPT_DEST%\logmove.log
set TEMP_FILE=%SCRIPT_DEST%\logmove.tmp

dir /OD %LOG_DEST%\*.ARC|findstr .ARC|awk "{print $4}">%TEMP_FILE%
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @arj32 a -e %LOG_DEST%\TMP\%%i.ARJ %LOG_DEST%\%%i >> %LOGFILE%
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @move /Y %LOG_DEST%\%%i %LOG_DEST%\OLD\ >> %LOGFILE%
dir /OD %LOG_DEST%\TMP\*.ARC.ARJ |findstr .ARC.ARJ |awk "{print $4}">%TEMP_FILE%
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @move /Y %LOG_DEST%\TMP\%%i \\{standby server}\archive >> %LOGFILE%
rem del /Q %TEMP_FILE% >> %LOGFILE%

Standby Site:
* This assumes archived logs are created with the .ARC extension; otherwise the scripts should be modified accordingly
* Download GNU awk and the arj32 utility and place them somewhere on %PATH%
* You should modify {...} parameters for your environment
* Schedule script with the user account you reserved

logapply.bat
set ORACLE_SID={ORACLE_SID}
set LAGTIME=0.0417
rem set LAGTIME=0
set SCRPTLOC={where to place this script}
set LOG_DEST={log_archive_dest}
set TEMP_FILE=%SCRPTLOC%\logapply.tmp
set LOGFILE=%SCRPTLOC%\logapply.log
dir /OD %LOG_DEST%\*.ARC.ARJ |find ".ARC.ARJ" |awk "{print $5}">%TEMP_FILE%
{Drive of {log_archive_dest}: for example C:, D:, ...}
cd %LOG_DEST%
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @arj32 e -n %%i >> %LOGFILE%
sqlplus /nolog @%SCRPTLOC%\standby_recover.sql %LAGTIME% 1 >> %LOGFILE%
sqlplus /nolog @c:\temp\recover.sql >> %LOGFILE%
if {%ERRORLEVEL%} == {0} (
echo Database %ORACLE_SID% is available >> %SCRPTLOC%\sby.log
) else (
echo Database %ORACLE_SID% is NOT available >> %SCRPTLOC%\sby.log
)
del /Q c:\temp\recover.sql
del /Q %TEMP_FILE%

standby_recover.sql
connect sys/{sys password} as sysdba
set trimspool on linesize 200 head off feedback off sqlprompt "--" verify off
spool c:\temp\recover.sql
select 'connect sys/{sys password} as sysdba'||chr(10)||
'set autorecovery on'||chr(10)||
'recover standby database until time '''||
to_char(sysdate - &1,'YYYY-MM-DD:HH24:MI:SS')||''';'||chr(10)||
'exit;' from dual;
spool off
exit
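With LAGTIME=0.0417 (roughly a one-hour apply lag), the spooled recover.sql comes out looking like this (the timestamp is illustrative):

```sql
connect sys/{sys password} as sysdba
set autorecovery on
recover standby database until time '2007-08-20:10:15:00';
exit;
```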

logremove.bat
set ORACLE_SID={ORACLE_SID}
set SCRPTLOC={where to place this script}
set LOG_DEST={log_archive_dest}
set TEMP_FILE=%SCRPTLOC%\tmp.txt
set TEMP_FILE2=%SCRPTLOC%\tmp2.txt
set LOGFILE=%SCRPTLOC%\logremove.log
set LASTLOG=99999
echo.|time|date|find "current" >> %LOGFILE%
echo.|sqlplus /nolog @%SCRPTLOC%\lastarchno.sql |find "MAKEGREP"|awk "{print $2}" > %TEMP_FILE%
echo Set and Check if last archive no is valid >> %LOGFILE%
set LAST_ARCHNO=NONE
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @set LAST_ARCHNO=%%i
echo last archive is %LAST_ARCHNO% >> %LOGFILE%
rem
rem The following checks whether the log sequence has recycled to the beginning; if so, the script exits. This situation should be handled manually
rem
for %%i in (%LOG_DEST%\*%LASTLOG%.ARC) do goto end
rem
rem The following checks whether the last archive log is valid
rem
for %%i in (%LOG_DEST%\*%LAST_ARCHNO%.ARC) do goto cont
echo Last archive is not valid >> %LOGFILE%
goto end
:cont
echo %LAST_ARCHNO% > %SCRPTLOC%\lastarchno.txt
echo deleting archives >> %LOGFILE%
dir /O-N %LOG_DEST%\*.ARC |find ".ARC" |awk "{print $5}">%TEMP_FILE%
:repeat
set CURRENT_FILE=NONE
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @set CURRENT_FILE=%%i
if "%CURRENT_FILE%" == "NONE" goto end
call set TMP_STR=%%CURRENT_FILE:%LAST_ARCHNO%=%%
if NOT "%TMP_STR%" == "%CURRENT_FILE%" goto end
type %TEMP_FILE%|find /v "%CURRENT_FILE%" > %TEMP_FILE2%
type %TEMP_FILE2% > %TEMP_FILE%
set CURRENT_FILE=%LOG_DEST%\%CURRENT_FILE%
echo deleting %CURRENT_FILE% and %CURRENT_FILE%.ARJ >> %LOGFILE%
del /Q %CURRENT_FILE% >> %LOGFILE%
del /Q %CURRENT_FILE%.ARJ >> %LOGFILE%
goto repeat
:end
del /Q %TEMP_FILE% >> %LOGFILE%
del /Q %TEMP_FILE2% >> %LOGFILE%

lastarchno.sql
connect sys/{sys password} as sysdba
set trimspool on linesize 200 head off feedback off sqlprompt "--" verify off
select 'MAKEGREP' grep,max(sequence#) from v$log_history;
exit

Prod site:
* Share script folder of the standby where lastarchno.txt is in and schedule script on prod with the user account you reserved

logremoveprod.bat
set ORACLE_SID={ORACLE_SID}
set LOG_DEST={log_archive_dest}\OLD
set SCRIPT_DEST={where to place this script}
set LOGFILE=%SCRIPT_DEST%\logremoveprod.log
set TEMP_FILE=%SCRIPT_DEST%\logremoveprod.tmp
set TEMP_FILE2=%SCRIPT_DEST%\logremoveprod2.tmp
set LASTLOG=199999
echo.|time|date|find "current" >> %LOGFILE%
set LAST_ARCHNO=NONE
for /F "eol=; tokens=1" %%i in (\\{standby server}\scripts\lastarchno.txt) do @set LAST_ARCHNO=%%i
echo last archive is %LAST_ARCHNO% >> %LOGFILE%
rem
rem The following checks whether the log sequence has recycled to the beginning; if so, the script exits. This situation should be handled manually
rem
for %%i in (%LOG_DEST%\*%LASTLOG%.ARC) do goto end
rem
rem The following checks whether the last archive log is valid
rem
for %%i in (%LOG_DEST%\*%LAST_ARCHNO%.ARC) do goto cont
echo Last archive is not valid >> %LOGFILE%
goto end
:cont
echo checking archive validity >> %LOGFILE%
dir /O-N %LOG_DEST%\*.ARC |find ".ARC" |awk "{print $4}">%TEMP_FILE%
:rpt
set CURRENT_FILE=NONE
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @set CURRENT_FILE=%%i
if "%CURRENT_FILE%" == "NONE" goto hata
call set TMP_STR=%%CURRENT_FILE:%LAST_ARCHNO%=%%
if NOT "%TMP_STR%" == "%CURRENT_FILE%" goto cnt
type %TEMP_FILE%|find /v "%CURRENT_FILE%" > %TEMP_FILE2%
type %TEMP_FILE2% > %TEMP_FILE%
goto rpt
:hata
echo Applied last archive does not exist in list >> %LOGFILE%
goto end
:cnt
echo deleting archives >> %LOGFILE%
dir /O-N %LOG_DEST%\*.ARC |find ".ARC" |awk "{print $4}">%TEMP_FILE%
:repeat
set CURRENT_FILE=NONE
for /F "eol=; tokens=1" %%i in (%TEMP_FILE%) do @set CURRENT_FILE=%%i
if "%CURRENT_FILE%" == "NONE" goto end
call set TMP_STR=%%CURRENT_FILE:%LAST_ARCHNO%=%%
if NOT "%TMP_STR%" == "%CURRENT_FILE%" goto end
type %TEMP_FILE%|find /v "%CURRENT_FILE%" > %TEMP_FILE2%
type %TEMP_FILE2% > %TEMP_FILE%
set CURRENT_FILE=%LOG_DEST%\%CURRENT_FILE%
echo deleting %CURRENT_FILE% >> %LOGFILE%
del /Q %CURRENT_FILE% >> %LOGFILE%
goto repeat
:end
del /Q %TEMP_FILE% >> %LOGFILE%
del /Q %TEMP_FILE2% >> %LOGFILE%