Monday, April 28, 2008

How to correct UNNAMED{nnnnn} datafile problem

When standby_file_management is set to AUTO and the path of a datafile added to the primary database does not exist on the standby site, the standby database creates a dummy datafile such as

{path}/UNNAMEDnnnnn

To correct this problem, set standby_file_management to MANUAL and create the standby datafile:

alter system set standby_file_management=manual;
alter database create datafile '{path}/UNNAMEDnnnnn' as '{standby file path}/{data file name}.dbf' size xxxM;
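Once the file is created on the standby, switch standby_file_management back and restart managed recovery; a minimal sketch of the follow-up:

alter system set standby_file_management=auto;
alter database recover managed standby database disconnect from session;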

Thursday, April 17, 2008

How to change oracle dba group in unix


cd $ORACLE_HOME/rdbms/lib
cat config.[cs]
The line '#define SS_DBA_GRP "group"' should name the chosen DBA group. If you wish to change the DBA group, change the group name shown in this file. E.g., change:
#define SS_DBA_GRP "dba" to:
#define SS_DBA_GRP "mygroup"
To effect any changes to the DBA group, and to be sure you are using the group defined in this file, relink the Oracle executable as below. Be sure to shut down all databases before relinking:
rm config.o
make -f ins_rdbms.mk config.o ioracle
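To verify the change took effect, a quick check is to connect with OS authentication as a member of the new group; if the relink picked up the group, the connection succeeds:

sqlplus / as sysdba
show user
-- expected output: USER is "SYS"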


Wednesday, April 16, 2008

Getting sql text from hash value

Here is a function, and a SQL script which uses it, to display SQL text given its hash value:

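The function depends on an object type and a collection type that are not shown in the post; minimal definitions inferred from the function body (where user stands for the schema placeholder used throughout) would be:

create or replace type user.sqlline as object (line varchar2(2000));
/
create or replace type user.sqltable as table of user.sqlline;
/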
create or replace function user.hashsql(hnum number) return user.sqltable PIPELINED is
sqlline varchar2(64);
sqlt user.sqlline:=user.sqlline('');
tmptxt varchar2(2000);
ln int;
ind int;
cursor cor_sql is
select SQL_TEXT from v$sqltext_with_newlines where HASH_VALUE=hnum order by PIECE;
begin
open cor_sql;
fetch cor_sql into sqlline;
while cor_sql%FOUND loop
-- SELECT REPLACE(sqlline,chr(32),'') into sqlline from dual;
ln := length(sqlline);
ind:=1;
while (ind <= ln) loop
if substr(sqlline,ind,1) not in (chr(10),chr(13)) then
ind := ind+1;
else
tmptxt := tmptxt||substr(sqlline,1,ind-1); -- take the text before the newline
sqlt.line:=(tmptxt);
PIPE ROW(sqlt);
sqlline:=substr(sqlline,ind+1); -- keep the remainder after the newline
tmptxt:='';
ln := length(sqlline);
ind := 1;
end if;
end loop;
tmptxt:=tmptxt||sqlline;
if length(tmptxt) > 100 then
sqlt.line:=(substr(tmptxt,1,100));
PIPE ROW(sqlt);
tmptxt:=substr(tmptxt,101,length(tmptxt)+1);
end if;
fetch cor_sql into sqlline;
end loop;
close cor_sql;
sqlt.line:=tmptxt;
PIPE ROW(sqlt);
RETURN;
end;
/

set linesize 2000
select * from table(user.hashsql(&hash_value));
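The &hash_value can come from anywhere a hash is exposed; for example, to inspect what a given session is currently running, take it from v$session (sid 123 below is only a sample) and feed it to the script above:

select sql_hash_value from v$session where sid = 123;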

PL/SQL script to generate rcp and rename datafile commands for physical standby

This is a PL/SQL script used to generate rcp and rename-datafile commands in order to create a standby database. It might be useful if (as in my case):
* there are more than 1k datafiles
* source and destination paths are different
* source and destination disk sizes are different
However, you should modify the code to make it suitable for your case:

create table user.cmds (RCP varchar2(1000), MV varchar2(1000));

declare
cursor c_sql is
select name, bytes/(1024*1024) bsize from v$datafile order by name;
sqlline c_sql%ROWTYPE;
tsize number;
cnt int;
begin
open c_sql;
fetch c_sql into sqlline;
tsize:=0;
cnt:=1;
execute immediate 'truncate table user.cmds';
while c_sql%FOUND loop
tsize:=tsize+sqlline.bsize;
if cnt < 10 then
insert into user.cmds values('rcp '||sqlline.name||' userp@server:/userdata0'||cnt||'/userp/','alter database rename file '''||sqlline.name||''' to ''/userdata0'||cnt||'/userp/'||substr(sqlline.name,19,255)||''';');
--dbms_output.put_line('alter database rename file '''||sqlline.name||''' to ''/userdata0'||cnt||'/userp/'||substr(sqlline.name,19,255)||''';');
else
insert into user.cmds values('rcp '||sqlline.name||' userp@server:/userdata'||cnt||'/userp/','alter database rename file '''||sqlline.name||''' to ''/userdata'||cnt||'/userp/'||substr(sqlline.name,19,255)||''';');
end if;
if tsize > 95000 then
tsize:=0;
cnt:=cnt+1;
end if;
fetch c_sql into sqlline;
end loop;
commit;
close c_sql;
end;
/
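Once the block has run, one way to turn user.cmds into runnable files is to spool each column from SQL*Plus (the file names here are just examples):

set pages 0 lines 1000 trimspool on feedback off
spool rcp_datafiles.sh
select rcp from user.cmds;
spool off
spool rename_datafiles.sql
select mv from user.cmds;
spool off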

Tuesday, April 15, 2008

Windows (VBS) and UNIX (Shell) scripts to delete applied archive logs

In a Data Guard configuration, archive logs should stay in the production archive log directory until they are copied and applied to the standby database. Then they should be backed up and removed in order to prevent the archive log directories from getting full. I wrote the following scripts to automate this procedure:

Unix:
There should be a file "$HOME/stby/archive_last_deleted" which is used to keep the sequence number of the last deleted archive log file. You can initially create it with vi and write 1 in it.

The following scheduled script "$HOME/stby/archclean.sh" runs on the standby site to delete applied archived logs:

#!/bin/ksh

if [ "$hosttype" == "SunOS" ]
then
userid=`/usr/xpg4/bin/id -u -n`
else
userid=`/usr/bin/id -u -n`
fi
. ~$userid/.profile

sqlplus '/ AS SYSDBA' > $HOME/stby/out/archclean.out1 <<EOF
set lines 2000
select 'MAKEGREP' GREP,count(1) from V\$MANAGED_STANDBY where PROCESS='MRP0' and SEQUENCE# <> 0;
exit;
EOF
IS_APPLY_RUNNING=`cat $HOME/stby/out/archclean.out1|grep MAKEGREP|awk '{print $2}'`
date >> $HOME/stby/out/archclean.log
if [[ $IS_APPLY_RUNNING -eq 1 ]];then
echo "Apply process (MRP0) is running..." >> $HOME/stby/out/archclean.log
sqlplus '/ AS SYSDBA' > $HOME/stby/out/archclean.out1 <<EOF
set lines 2000
SELECT 'MAKEGREP' GREP,SEQUENCE#-1 FROM V\$MANAGED_STANDBY where PROCESS in ('MRP0') and SEQUENCE# <> 0;
EOF
MAX_SEC_TO_DELETE=`cat $HOME/stby/out/archclean.out1|grep MAKEGREP|awk '{print $2}'`
LAST_DELETED=`cat $HOME/stby/archive_last_deleted`
NEXTTODELETE=$((LAST_DELETED+1))
echo "Archive Sequence Range to Clean: $NEXTTODELETE - $MAX_SEC_TO_DELETE" >> $HOME/stby/out/archclean.log
ARCHIVE_FORMAT=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_FORMAT|cut -d= -f2`
ARCHIVE_DIR=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_DIR|cut -d= -f2`
while [[ $MAX_SEC_TO_DELETE -gt $NEXTTODELETE ]];do
NEXTARCHIVE=`awk 'BEGIN {str="'$ARCHIVE_FORMAT'" ; sub("SEQUENCE","'$NEXTTODELETE'",str) ; print str}'`
FILECOUNT=`ls -l $ARCHIVE_DIR|grep $NEXTARCHIVE|grep ".arc"|wc -l 2>/dev/null`
if [[ $FILECOUNT -eq 1 ]];then
echo "removing $ARCHIVE_DIR/$NEXTARCHIVE" >> $HOME/stby/out/archclean.log
/usr/bin/rm -f $ARCHIVE_DIR/$NEXTARCHIVE
fi
echo $NEXTTODELETE>$HOME/stby/archive_last_deleted
NEXTTODELETE=$(($NEXTTODELETE+1))
done
else
echo "Apply process (MRP0) is not running..." >> $HOME/stby/out/archclean.log
fi
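Before scheduling the script, the apply progress it relies on can be double-checked by hand on the standby, for example:

select process, status, sequence# from v$managed_standby where process = 'MRP0';
select max(sequence#) from v$archived_log where applied = 'YES';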

At the production site my backup procedure updates "$HOME/stby/backupseq" with the last backed-up archive sequence number.
There should also be a file "$HOME/stby/archive_last_deleted" which keeps the sequence number of the last deleted archive log file; initially create it with vi and write 1 in it. The following scheduled script "$HOME/stby/archcleanprod.sh" deletes all archived logs which have been backed up and applied to the standby database:

#!/bin/ksh

if [ "$hosttype" == "SunOS" ]
then
userid=`/usr/xpg4/bin/id -u -n`
else
userid=`/usr/bin/id -u -n`
fi
. ~$userid/.profile

sqlplus / > $HOME/stby/out/archcleanprod.out1 <<EOF
set lines 2000
select 'MAKEGREP' GREP,APPLIED_SEQ#-10 from V\$ARCHIVE_DEST_STATUS where DEST_ID=2;
exit;
EOF
MAX_SEC_APPLIED=`cat $HOME/stby/out/archcleanprod.out1|grep MAKEGREP|awk '{print $2}'`
MAX_SEQ_NMBCOPY_2=`cat $HOME/stby/backupseq`
if [[ $MAX_SEQ_NMBCOPY_2 -gt $MAX_SEC_APPLIED ]];then
MAX_SEC_TO_DELETE=$MAX_SEC_APPLIED
else
MAX_SEC_TO_DELETE=$MAX_SEQ_NMBCOPY_2
fi
LAST_DELETED=`cat $HOME/stby/archive_last_deleted`
NEXTTODELETE=$((LAST_DELETED+1))
date >> $HOME/stby/archcleanprod.log
echo "Archive Sequence Range to Clean: $NEXTTODELETE - $MAX_SEC_TO_DELETE" >> $HOME/stby/archcleanprod.log
ARCHIVE_FORMAT=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_FORMAT|cut -d= -f2`
ARCHIVE_DIR=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_DIR|cut -d= -f2`
while [[ $MAX_SEC_TO_DELETE -gt $NEXTTODELETE ]];do
NEXTARCHIVE=`awk 'BEGIN {str="'$ARCHIVE_FORMAT'" ; sub("SEQUENCE","'$NEXTTODELETE'",str) ; print str}'`
FILECOUNT=`ls -l $ARCHIVE_DIR|grep $NEXTARCHIVE|grep ".arc"|wc -l 2>/dev/null`
if [[ $FILECOUNT -eq 1 ]];then
echo "removing $ARCHIVE_DIR/$NEXTARCHIVE" >> $HOME/stby/archcleanprod.log
/usr/bin/rm -f $ARCHIVE_DIR/$NEXTARCHIVE
fi
echo $NEXTTODELETE>$HOME/stby/archive_last_deleted
NEXTTODELETE=$(($NEXTTODELETE+1))
done
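The sequence numbers this script works from can be verified manually on the primary with a query like:

select dest_id, status, archived_seq#, applied_seq# from v$archive_dest_status where dest_id = 2;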


Windows:
There is a configuration file "E:\dba\orahome\mntdir\config\dba.xml":

<?xml version="1.0" encoding="utf-8" ?>
<config>
<general>
<archive_format>ARC_0628272535_001_SEQUENCE.ARC</archive_format>
<archive_dir>E:\oracle\oradata\archive</archive_dir>
</general>
<stby>
<prodUser>dbuser</prodUser>
<prodPass>password</prodPass>
<prodTNS>PROD</prodTNS>
</stby>
</config>

There is a file "E:\dba\orahome\stby\archive_last_deleted.txt" in which the last deleted archived log's sequence number is kept. At first it can be set to 1.
My scheduled VBS script "E:\dba\orahome\stby\stby_archclean.vbs" uses the configuration file to connect to the primary site, query the Data Guard status, and delete applied archived logs on the standby server:

'********************* CODE TO PARSE DBA.XML FILE ****************************
Set oXMLDoc = CreateObject("MSXML2.DOMDocument.3.0")
oXMLDoc.async = False
If Not oXMLDoc.load("mntdir\config\dba.xml") Then
Set oErr = oXMLDoc.parseError
sErrMsg = "XML Parsing Error. File: " & oErr.url & " Reason : " & oErr.reason & " Line: " & oErr.line & ", Character: " & oErr.linepos & ",Text: " & oErr.srcText
Err.Raise 999, , sErrMsg
End If
'******************************************************************************

Set WshShell = CreateObject("WScript.Shell")
pUSER = oXMLDoc.selectSingleNode("/config/stby/prodUser").text
pPASS = oXMLDoc.selectSingleNode("/config/stby/prodPass").text
ARCHIVE_FORMAT = oXMLDoc.selectSingleNode("/config/general/archive_format").text
ARCHIVE_DIR = oXMLDoc.selectSingleNode("/config/general/archive_dir").text
pTNS = oXMLDoc.selectSingleNode("/config/stby/prodTNS").text

Set objFSO = CreateObject("Scripting.FileSystemObject")
set archive_last_deleted = objFSO.OpenTextFile("stby\archive_last_deleted.txt", 1) 'for reading
DIM LAST_DELETED
if not archive_last_deleted.AtEndOfStream then
LAST_DELETED = CLng(archive_last_deleted.ReadLine) ' CLng instead of CInt: sequence numbers can exceed 32767
else
WScript.Echo "Error on stby\archive_last_deleted file"
WScript.exit
end if
archive_last_deleted.close
NEXTTODELETE=LAST_DELETED+1



'ADO Connection object
Set con = CreateObject("ADODB.Connection")
con.CursorLocation = 3
If con.State = 1 Then con.close
con.Open "PROVIDER=MSDAORA;DATA SOURCE="&pTNS&";USER ID="&pUSER&";PASSWORD="&pPASS&";"

'ADO Command object
Set cmd = CreateObject("ADODB.Command")
cmd.CommandText="select APPLIED_SEQ#-3 from v$ARCHIVE_DEST_STATUS where DEST_ID=2 "
cmd.CommandType=1
cmd.ActiveConnection=con

'ADO Recordset object
set rs=CreateObject("ADODB.Recordset")
Set rs=cmd.Execute


if rs.EOF then
WScript.Echo "Error: No APPLIED_SEQ# record for DEST_ID 2"
rs.Close
con.Close
WScript.Quit
else
DIM MAX_SEC_APPLIED
rs.MoveFirst
MAX_SEC_APPLIED = CLng(rs.Fields(0)) ' CLng instead of CInt: sequence numbers can exceed 32767
rs.Close
con.Close
end if

WScript.Echo "Archive Sequence Range to Clean:" & cStr(NEXTTODELETE) & " - " & cStr(MAX_SEC_APPLIED)
do while MAX_SEC_APPLIED > NEXTTODELETE
WScript.Echo "removing... " & ARCHIVE_DIR & "\" & Replace(ARCHIVE_FORMAT,"SEQUENCE",cStr(NEXTTODELETE))
'WshShell.run("del /Q " & ARCHIVE_DIR & "\" & Replace(ARCHIVE_FORMAT,"SEQUENCE",cStr(NEXTTODELETE)))
objFSO.DeleteFile ARCHIVE_DIR & "\" & Replace(ARCHIVE_FORMAT,"SEQUENCE",cStr(NEXTTODELETE))
set archive_last_deleted = objFSO.OpenTextFile("stby\archive_last_deleted.txt", 2) 'for writing
archive_last_deleted.Write cStr(NEXTTODELETE)
archive_last_deleted.close
NEXTTODELETE = NEXTTODELETE+1
loop

I run exactly the same script on the production side; the only difference is that I added my backup procedure just before the delete.


Wednesday, April 9, 2008

How to find active sqls

The following script can be used to find long-running active SQLs along with their elapsed time and current wait event:

set lines 233
select /*+ ordered */ to_char(s.sid,'99999'),
substr(username,1,8),
substr(t.sql_text,1,60),
nvl(last_call_et,0) TIME_sec,
t.hash_value,
substr(w.event,1,30) WAIT_EV,
substr(machine,1,10) MACHINE
from v$session s,v$sqltext t,v$session_wait w
where s.sql_address=t.address
and s.sql_HASH_VALUE=t.HASH_VALUE
and w.sid=s.sid
and username between 'AAAAAAAAA' and 'ZZZZZZZZZ' -- excludes sessions with a NULL username (background processes)
and s.status='ACTIVE'
and t.piece=0
--and upper(sql_text) not like 'BEGIN%'
order by TIME_SEC
/

In order to find SQLs with high execution counts and short elapsed times:

1-) Take a snap of v$sqlarea:

create table tmp as select sql_id, sql_text, executions, ELAPSED_TIME, CPU_TIME, DISK_READS, DIRECT_WRITES, BUFFER_GETS from v$sqlarea;


2-) Track SQLs based on execution deltas:

select a.sql_id,substr(b.sql_text,1,50),
b.executions-a.executions,
b.ELAPSED_TIME - a.ELAPSED_TIME,
b.CPU_TIME - a.CPU_TIME,
b.DISK_READS - a.DISK_READS,
b.DIRECT_WRITES - a.DIRECT_WRITES,
b.BUFFER_GETS - a.BUFFER_GETS
from tmp a, v$sqlarea b
where a.sql_id=b.sql_id and b.executions-a.executions > 100 order by b.executions-a.executions
/

CREATING A JOB FOR ACTIVE SQL EXECUTIONS HISTORY

One day I needed to monitor SQL executions over time in order to distinguish batch and OLTP transactions. You may wonder why I did not use an AWR report, EM, or something else: usually I spend much more time on those tools trying to find what I need.

One important note: because my focus was SQL execution counts, I put the condition "a.sql_id=b.sql_id and b.executions-a.executions > 10000" in the SQL within the procedure sp_sqlarea_snap_history, so that low-execution SQLs are eliminated. If your focus is something else (for example buffer gets or CPU), I suggest changing this condition to suit your case, as sketched below. Sometimes a SQL executed a few times may consume much more resource than another SQL executed a thousand times.
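For example, if buffer gets were the focus instead, the filter might become something like this (the threshold is only an illustration):

where a.sql_id=b.sql_id and b.buffer_gets-a.buffer_gets > 1000000
order by b.buffer_gets-a.buffer_gets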

create table sqlarea_snap as select sysdate snap_time, sql_id, sql_text, executions, ELAPSED_TIME, CPU_TIME, DISK_READS, DIRECT_WRITES, BUFFER_GETS from v$sqlarea where 1=2;

create table sqlarea_snap_history
(SNAP_BEGIN_TIME DATE,
SNAP_END_TIME DATE,
SQL_ID VARCHAR2(13),
SQL_TEXT VARCHAR2(1000),
EXECUTIONS_DELTA NUMBER,
TPS NUMBER,
ELAPSED_TIME_DELTA NUMBER,
CPU_TIME_DELTA NUMBER,
DISK_READS_DELTA NUMBER,
DIRECT_WRITES_DELTA NUMBER,
BUFFER_GETS_DELTA NUMBER);

create or replace procedure sp_sqlarea_snap_history
as
begin
EXECUTE IMMEDIATE 'insert into sqlarea_snap_history select a.SNAP_TIME, sysdate, a.sql_id, b.sql_text, b.executions-a.executions, '||
'trunc((b.executions-a.executions)/((sysdate-snap_time)*24*60*60)), b.ELAPSED_TIME - a.ELAPSED_TIME, b.CPU_TIME - a.CPU_TIME, '||
'b.DISK_READS - a.DISK_READS, b.DIRECT_WRITES - a.DIRECT_WRITES, b.BUFFER_GETS - a.BUFFER_GETS from sqlarea_snap a, v$sqlarea b '||
'where a.sql_id=b.sql_id and b.executions-a.executions > 10000 order by b.executions-a.executions';
EXECUTE IMMEDIATE 'drop table sqlarea_snap';
EXECUTE IMMEDIATE 'create table sqlarea_snap as select sysdate snap_time, sql_id, sql_text, executions, ELAPSED_TIME, CPU_TIME, '||
'DISK_READS, DIRECT_WRITES, BUFFER_GETS from v$sqlarea';
end;
/


exec dbms_job.isubmit(20,'sp_sqlarea_snap_history;',sysdate,'sysdate+1/144',false);
commit;

Finally, here is the query to report on the collected history:

select SNAP_BEGIN_TIME, EXECUTIONS_DELTA, TPS, SQL_TEXT from sqlarea_snap_history where SNAP_BEGIN_TIME > trunc(sysdate) order by SNAP_BEGIN_TIME desc, EXECUTIONS_DELTA desc;