Thursday, October 30, 2008

Commit from a Procedure/Function Executed Through a Database Link, ORA-02064: distributed operation not supported

Oracle does not support issuing a commit from a procedure with OUT parameters, or from a function with a return value, when the procedure/function is executed through a database link.

Here is a test case that works fine:
1-) At the remote database, create the following procedure:

create or replace procedure erkan_trunc
is
begin
insert into XXX values(1,1,'erkan saka'); -- creates a transaction at the remote db
EXECUTE IMMEDIATE 'TRUNCATE TABLE ERKAN.XXX'; -- this is DDL, which autocommits
end;
/

2-) At the source database, create the following procedure:

create or replace procedure erkan_dene
is
begin
insert into erkan.A values(1); -- Create a transaction at local (source) db
erkan_trunc@{db_link}; -- remote procedure call
rollback; -- End local transaction
end;
/

3-) Working fine:

SQL>exec erkan_dene

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

Here is another test case, this one failing with ORA-02064: distributed operation not supported:
1-) At the remote database, create the following function:

create or replace function erkan_trunc2
RETURN NUMBER  -- the return value is what causes ORA-02064
is
begin
insert into XXX values(1,1,'erkan saka');
EXECUTE IMMEDIATE 'TRUNCATE TABLE ERKAN.XXX';
RETURN (0);
end;
/


2-) At the source database, run the following PL/SQL:

SQL>declare
  2     de_return number;
  3  begin
  4     de_return := erkan_trunc2@{db_link};
  5             dbms_output.put_line('de_return='||de_return);
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-02064: distributed operation not supported
ORA-06512: at "ERKAN.ERKAN_TRUNC2", line 6
ORA-06512: at line 4


Elapsed: 00:00:00.01



Tuesday, October 7, 2008

ORA-02064: distributed operation not supported

If a commit/rollback or an autocommitting (DDL) operation is executed within a procedure or function that is called remotely via a database link, then ORA-02064 may occur. The exact explanation is:
Oracle does not support issuing a commit from a procedure with OUT parameters, or from a function with a return value, when the procedure/function is executed through a database link.
I tested the following two scenarios. The first works fine, while the second fails with ORA-02064:

Scenario 1-)

* Create the following procedure at the remote database:
create or replace procedure erkan_trunc
is
begin
insert into XXX values(1,1,'erkan'); -- in order to create a local transaction at remote database
EXECUTE IMMEDIATE 'TRUNCATE TABLE ERKAN.XXX'; -- autocommits
end;
/

* Create the following procedure at the source database:
create or replace procedure erkan_dene
is
begin
insert into erkan.A values(1); --in order to create a local transaction in source database
BEGIN
erkan_trunc@remote_tns;
EXCEPTION
       WHEN OTHERS
       THEN
dbms_output.put_line('Error while truncate: ' || SUBSTR(SQLERRM, 1, 100));
END;
rollback;
end;
/

* Works fine:
SQL>exec erkan_dene
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.03

If I hold a lock on table XXX at the remote database:
SQL>exec erkan_dene
Error while truncate: ORA-00054: resource busy and acquire with NOWAIT specified
ORA-06512: at "ERKAN.ERKAN_TRUNC", line 5

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

Scenario 2-)

* Create the following function at the remote database:
create or replace function erkan_trunc2
RETURN NUMBER
is
begin
insert into XXX values(1,1,'erkan');
EXECUTE IMMEDIATE 'TRUNCATE TABLE ERKAN.XXX';
RETURN (0);
end;
/

* Run the following PL/SQL script at the source database:
SQL>declare
  2     de_return number;
  3  begin
  4     de_return := erkan_trunc2@remote_tns;
  5             dbms_output.put_line('de_return='||de_return);
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-02064: distributed operation not supported
ORA-06512: at "ERKAN.ERKAN_TRUNC2", line 6
ORA-06512: at line 4

Elapsed: 00:00:00.01
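Since scenario 1 shows that a remote procedure with no OUT parameters works, one possible workaround (just a sketch based on that finding, not an official fix) is to expose the work as a procedure and fetch any needed status through a separate remote query:

declare
de_return number;
begin
erkan_trunc@remote_tns; -- works: no OUT parameter, no return value
select 0 into de_return from dual@remote_tns; -- illustrative: fetch a status separately if needed
dbms_output.put_line('de_return='||de_return);
end;
/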

Tuesday, September 16, 2008

ORA-38029: object statistics are locked

If this error occurs while trying to analyze a table, run the following to unlock the statistics:

exec DBMS_STATS.UNLOCK_TABLE_STATS('{owner}','{table name}');

You can list all tables with locked statistics by running the following query:

select owner, table_name, stattype_locked from dba_tab_statistics where stattype_locked is not null and owner not in ('SYS','SYSTEM');

or

select 'exec DBMS_STATS.UNLOCK_TABLE_STATS('''||owner||''','''||table_name||''');' from dba_tab_statistics where stattype_locked is not null and owner not in ('SYS','SYSTEM');

For further detail, refer to MetaLink Doc ID 433240.1.
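For context, statistics usually become locked because someone ran DBMS_STATS.LOCK_TABLE_STATS explicitly. A quick illustration (SCOTT.EMP is just an example table):

exec DBMS_STATS.LOCK_TABLE_STATS('SCOTT','EMP');
-- this now fails with ORA-38029:
analyze table SCOTT.EMP compute statistics;
exec DBMS_STATS.UNLOCK_TABLE_STATS('SCOTT','EMP');
-- and now it succeeds:
analyze table SCOTT.EMP compute statistics;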

Wednesday, September 10, 2008

LOCALLY MANAGED TABLESPACE with SPACE MANAGEMENT AUTO

CREATE TABLESPACE DATA_1M DATAFILE
'/data01/dbfs/data_1m01.dbf' SIZE 2048M AUTOEXTEND OFF
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
SEGMENT SPACE MANAGEMENT AUTO;

alter tablespace DATA_1M add datafile '/data01/dbfs/data_1m02.dbf' SIZE 1024M AUTOEXTEND ON NEXT 512M MAXSIZE 2048M;


alter database datafile '/data01/dbfs/data_1m02.dbf' AUTOEXTEND ON NEXT 1024M MAXSIZE 3072M;
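You can verify the extent and segment space management settings afterwards (standard dba_tablespaces columns):

select tablespace_name, extent_management, allocation_type, segment_space_management
from dba_tablespaces where tablespace_name = 'DATA_1M';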

Friday, August 29, 2008

ARCHIVED_SEQ# column of v$ARCHIVE_DEST_STATUS is not being updated

Check and correct the fal_client and fal_server system parameters at the standby database ({TNS_stby} and {TNS_prod} are the TNS names pointing to the standby and primary, as in the dataguard setup post below):

alter system set fal_client='{TNS_stby}' scope=both;
alter system set fal_server='{TNS_prod}' scope=both;
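After correcting the parameters, you can watch the column start moving again (checked from the primary, where dest_id 2 points to the standby as in the scripts later in this blog):

select dest_id, status, archived_seq#, applied_seq# from v$archive_dest_status where dest_id=2;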

Thursday, August 28, 2008

Dataguard Switchover

switchover commands:

Prod Site:
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN;

Standby Site:
alter database commit to switchover to primary;
alter database open ;

Old Prod Site:
SHUTDOWN;
STARTUP MOUNT;
alter database recover managed standby database using current logfile disconnect from session;
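Before issuing the switchover command on the prod site, it is worth verifying the database is ready for it; V$DATABASE reports this directly:

SELECT SWITCHOVER_STATUS FROM V$DATABASE;
-- 'TO STANDBY' (or 'SESSIONS ACTIVE') on the primary means it is safe to proceed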

Monday, August 18, 2008

Identity (Autoincrement) Column on Oracle

You can do this by using a sequence and a before-insert trigger on Oracle:

create table ABC (id int, data varchar2(200));

create sequence INCABC
start with 1
increment by 1
maxvalue 999999999999
minvalue 1
nocycle
cache 10
order;

create or replace trigger ABC_IDENTITY
before insert on ABC
FOR EACH ROW
begin
select INCABC.nextval into :new.id from dual;
end;
/

However, in order to insert into the table, all column names except "id" should be listed in the insert statement, like:

insert into ABC (data) values ('abc');

or a dummy value should be provided for the "id" column:

insert into abc values('abc',5);

Otherwise, the following error occurs:

SQL> insert into abc values('abc');
insert into abc values('abc')
ERROR at line 1:
ORA-00947: not enough values

Thursday, August 14, 2008

sqlplus / as sysdba, ORA-01031: insufficient privileges

Follow these steps to correct the problem:
1-) Go to $ORACLE_HOME/rdbms/lib
2-) Correct the dba OS group name in the config.s file:
a) type the "id" command in a shell to get the group name of your OS user
b) correct the group name in the config.s file; it is written just after ".string":
.csect H.12.NO_SYMBOL{RO}, 3
.string "dbat"
# End csect H.12.NO_SYMBOL{RO}
.csect H.14.NO_SYMBOL{RO}, 3
.string "dbat"
# End csect H.14.NO_SYMBOL{RO}
c) run the following commands to back up and relink:
mv config.o config.o.org
make -f ins_rdbms.mk ioracle

Note: sometimes the OS dba group is already correct in the config.s file; in that case you don't need to change anything, and relinking (step c) alone solves the problem.
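For reference, typical "id" output from step (a) looks like this (user and group names illustrative):

$ id
uid=500(oracle) gid=501(dba) groups=501(dba),502(oinstall)

Here "dba" is the group name that should appear after ".string" in config.s.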

Error: OiifbStackLayout: can't understand "FİLL LEFT TOP"

This error may occur while running the Oracle installer if the LANG environment variable is set to tr_TR. When this problem occurs, you see corruption on the GUI screen. Running the following command before starting the installer corrects the problem:

export LANG=en_EN

Monday, April 28, 2008

How to correct UNNAMED{nnnnn} datafile problem

When standby_file_management is auto and the file path of a datafile added to the primary database does not exist at the standby site, the standby database creates a dummy datafile such as

{path}/UNNAMEDnnnnn

To correct this problem, set standby_file_management to manual and create the standby datafile:

alter system set standby_file_management=manual;
alter database create datafile '{path}/UNNAMEDnnnnn' as '{standby file path}/{data file name}.dbf' size xxxM;
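After the datafile is created, set standby_file_management back to auto and restart managed recovery (the same commands used in the dataguard posts below):

alter system set standby_file_management=auto;
alter database recover managed standby database disconnect from session;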

Thursday, April 17, 2008

How to change oracle dba group in unix


cd $ORACLE_HOME/rdbms/lib
cat config.[cs]
The line '#define SS_DBA_GRP "group"' should name the chosen DBA group. If you wish to change the DBA group, change the group name shown in this file. E.g. change:
#define SS_DBA_GRP "dba"
to:
#define SS_DBA_GRP "mygroup"
To make changes to the DBA group take effect, and to be sure you are using the group defined in this file, relink the Oracle executable as below. Be sure to shut down all databases before relinking:
rm config.o
make -f ins_rdbms.mk config.o ioracle


Wednesday, April 16, 2008

Getting sql text from hash value

Here is a function and sql script which uses this function to display sql text from its hash value:
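The function depends on two SQL types that this post does not show; a minimal sketch of what they presumably look like ("user" stands for the owning schema, and 2000 comfortably covers the line chunks the function pipes):

create or replace type user.sqlline as object (line varchar2(2000));
/
create or replace type user.sqltable as table of user.sqlline;
/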

create or replace function user.hashsql(hnum number) return user.sqltable PIPELINED is
sqlline varchar2(64);
sqlt user.sqlline:=user.sqlline('');
tmptxt varchar2(2000);
ln int;
ind int;
cursor cor_sql is
select SQL_TEXT from v$sqltext_with_newlines where HASH_VALUE=hnum order by PIECE;
begin
open cor_sql;
fetch cor_sql into sqlline;
while cor_sql%FOUND loop
-- SELECT REPLACE(sqlline,chr(32),'') into sqlline from dual;
ln := nvl(length(sqlline),0);
ind := 1;
while (ind < ln) loop -- scan the current piece for newline characters
if substr(sqlline,ind,1) not in (chr(10),chr(13)) then
ind := ind+1;
else
tmptxt := tmptxt||substr(sqlline,1,ind); -- text up to and including the newline
sqlt.line := tmptxt;
PIPE ROW(sqlt);
sqlline := substr(sqlline,ind+1); -- keep only the remainder after the newline
tmptxt := '';
ln := nvl(length(sqlline),0);
ind := 1;
end if;
end loop;
tmptxt := tmptxt||sqlline;
if length(tmptxt) > 100 then -- flush long lines in 100-character chunks
sqlt.line := substr(tmptxt,1,100);
PIPE ROW(sqlt);
tmptxt := substr(tmptxt,101);
end if;
fetch cor_sql into sqlline;
end loop;
close cor_sql;
sqlt.line := tmptxt; -- pipe whatever is left
PIPE ROW(sqlt);
RETURN;
end;
/

set linesize 2000
select * from table(user.hashsql(&hash_value));

PL/SQL script to generate rcp and rename datafile commands for physical standby

This is a PL/SQL script used to generate rcp and rename-datafile commands in order to create a standby database. It might be useful if:
* there are more than 1k datafiles
* source and destination paths are different
* source and destination disk sizes are different (as in my case)
However, you should modify the code to make it suitable for your case:

create table user.cmds (RCP varchar2(1000), MV varchar2(1000));

declare
cursor c_sql is
select name, bytes/(1024*1024) bsize from v$datafile order by name;
sqlline c_sql%ROWTYPE;
tsize number;
cnt int;
begin
open c_sql;
fetch c_sql into sqlline;
tsize:=0;
cnt:=1;
execute immediate 'truncate table user.cmds';
while c_sql%FOUND loop
tsize:=tsize+sqlline.bsize;
if cnt < 10 then
insert into user.cmds values('rcp '||sqlline.name||' userp@server:/userdata0'||cnt||'/userp/','alter database rename file '''||sqlline.name||''' to ''/userdata0'||cnt||'/userp/'||substr(sqlline.name,19,255)||''';');
--dbms_output.put_line('alter database rename file '''||sqlline.name||''' to ''/userdata0'||cnt||'/userp/'||substr(sqlline.name,19,255)||''';');
else
insert into user.cmds values('rcp '||sqlline.name||' userp@server:/userdata'||cnt||'/userp/','alter database rename file '''||sqlline.name||''' to ''/userdata'||cnt||'/userp/'||substr(sqlline.name,19,255)||''';');
end if;
if tsize > 95000 then
tsize:=0;
cnt:=cnt+1;
end if;
fetch c_sql into sqlline;
end loop;
commit;
close c_sql;
end;
/
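Once the table is populated, the generated commands can be spooled into flat files from SQL*Plus, for example:

set lines 1100 pages 0 heading off feedback off
spool rcp_datafiles.sh
select RCP from user.cmds;
spool off
spool rename_datafiles.sql
select MV from user.cmds;
spool off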

Tuesday, April 15, 2008

Windows (VBS) and UNIX (Shell) scripts to delete applied archive logs

In a dataguard configuration, archive logs should stay in the production archive log directory until they are copied and applied to the standby database. Then they should be backed up and removed, to prevent the archive log directories from getting full. I wrote the following scripts to automate this procedure:

Unix:
There should be a file "$HOME/stby/archive_last_deleted" which keeps the sequence number of the last deleted archive log file. Initially, create it with vi and write 1 in it.

The following scheduled script "$HOME/stby/archclean.sh" runs at the standby site and deletes applied archived logs:

#!/bin/ksh

if [ "$hosttype" == "SunOS" ]
then
userid=`/usr/xpg4/bin/id -u -n`
else
userid=`/usr/bin/id -u -n`
fi
. ~$userid/.profile

sqlplus '/ AS SYSDBA' > $HOME/stby/out/archclean.out1 <<EOF
set lines 2000
select 'MAKEGREP' GREP,count(1) from V\$MANAGED_STANDBY where PROCESS='MRP0' and SEQUENCE# <> 0;
exit;
EOF
IS_APPLY_RUNNING=`cat $HOME/stby/out/archclean.out1|grep MAKEGREP|awk '{print $2}'`
date >> $HOME/stby/out/archclean.log
if [[ $IS_APPLY_RUNNING -eq 1 ]];then
echo "Apply process (MRP0) is running..." >> $HOME/stby/out/archclean.log
sqlplus '/ AS SYSDBA' > $HOME/stby/out/archclean.out1 <<EOF
set lines 2000
SELECT 'MAKEGREP' GREP,SEQUENCE#-1 FROM V\$MANAGED_STANDBY where PROCESS in ('MRP0') and SEQUENCE# <> 0;
EOF
MAX_SEC_TO_DELETE=`cat $HOME/stby/out/archclean.out1|grep MAKEGREP|awk '{print $2}'`
LAST_DELETED=`cat $HOME/stby/archive_last_deleted`
NEXTTODELETE=$((LAST_DELETED+1))
echo "Archive Sequence Range to Clean: $NEXTTODELETE - $MAX_SEC_TO_DELETE" >> $HOME/stby/out/archclean.log
ARCHIVE_FORMAT=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_FORMAT|cut -d= -f2`
ARCHIVE_DIR=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_DIR|cut -d= -f2`
while [[ $MAX_SEC_TO_DELETE -gt $NEXTTODELETE ]];do
NEXTARCHIVE=`awk 'BEGIN {str="'$ARCHIVE_FORMAT'" ; sub("SEQUENCE","'$NEXTTODELETE'",str) ; print str}'`
FILECOUNT=`ls -l $ARCHIVE_DIR|grep $NEXTARCHIVE|grep ".arc"|wc -l 2>/dev/null`
if [[ $FILECOUNT -eq 1 ]];then
echo "removing $ARCHIVE_DIR/$NEXTARCHIVE" >> $HOME/stby/out/archclean.log
/usr/bin/rm -f $ARCHIVE_DIR/$NEXTARCHIVE
fi
echo $NEXTTODELETE>$HOME/stby/archive_last_deleted
NEXTTODELETE=$(($NEXTTODELETE+1))
done
else
echo "Apply process (MRP0) is not running..." >> $HOME/stby/out/archclean.log
fi
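The script is meant to be run from cron; an illustrative crontab entry to run it every 30 minutes (adjust the path to your $HOME):

0,30 * * * * /home/oracle/stby/archclean.sh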

At the production site, my backup procedure updates "$HOME/stby/backupseq" with the last backed-up archive sequence number.
There should also be a file "$HOME/stby/archive_last_deleted" which keeps the sequence number of the last deleted archive log file; initially, create it with vi and write 1 in it. The following scheduled script "$HOME/stby/archcleanprod.sh" deletes all archived logs which are both backed up and applied to the standby database:

#!/bin/ksh

if [ "$hosttype" == "SunOS" ]
then
userid=`/usr/xpg4/bin/id -u -n`
else
userid=`/usr/bin/id -u -n`
fi
. ~$userid/.profile

sqlplus / > $HOME/stby/out/archcleanprod.out1 <<EOF
set lines 2000
select 'MAKEGREP' GREP,APPLIED_SEQ#-10 from V\$ARCHIVE_DEST_STATUS where DEST_ID=2;
exit;
EOF
MAX_SEC_APPLIED=`cat $HOME/stby/out/archcleanprod.out1|grep MAKEGREP|awk '{print $2}'`
MAX_SEQ_NMBCOPY_2=`cat $HOME/stby/backupseq`
if [[ $MAX_SEQ_NMBCOPY_2 -gt $MAX_SEC_APPLIED ]];then
MAX_SEC_TO_DELETE=$MAX_SEC_APPLIED
else
MAX_SEC_TO_DELETE=$MAX_SEQ_NMBCOPY_2
fi
LAST_DELETED=`cat $HOME/stby/archive_last_deleted`
NEXTTODELETE=$((LAST_DELETED+1))
date >> $HOME/stby/archcleanprod.log
echo "Archive Sequence Range to Clean: $NEXTTODELETE - $MAX_SEC_TO_DELETE" >> $HOME/stby/archcleanprod.log
ARCHIVE_FORMAT=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_FORMAT|cut -d= -f2`
ARCHIVE_DIR=`cat $HOME/mntdir/config/dba.ini|grep -v "##"|grep ARCHIVE_DIR|cut -d= -f2`
while [[ $MAX_SEC_TO_DELETE -gt $NEXTTODELETE ]];do
NEXTARCHIVE=`awk 'BEGIN {str="'$ARCHIVE_FORMAT'" ; sub("SEQUENCE","'$NEXTTODELETE'",str) ; print str}'`
FILECOUNT=`ls -l $ARCHIVE_DIR|grep $NEXTARCHIVE|grep ".arc"|wc -l 2>/dev/null`
if [[ $FILECOUNT -eq 1 ]];then
echo "removing $ARCHIVE_DIR/$NEXTARCHIVE" >> $HOME/stby/archcleanprod.log
/usr/bin/rm -f $ARCHIVE_DIR/$NEXTARCHIVE
fi
echo $NEXTTODELETE>$HOME/stby/archive_last_deleted
NEXTTODELETE=$(($NEXTTODELETE+1))
done


Windows:
There is a configuration file "E:\dba\orahome\mntdir\config\dba.xml":

<?xml version="1.0" encoding="utf-8" ?>
<config>
<general>
<archive_format>ARC_0628272535_001_SEQUENCE.ARC</archive_format>
<archive_dir>E:\oracle\oradata\archive</archive_dir>
</general>
<stby>
<prodUser>dbuser</prodUser>
<prodPass>password</prodPass>
<prodTNS>PROD</prodTNS>
</stby>
</config>

There is a file "E:\dba\orahome\stby\archive_last_deleted.txt" in which the last deleted archived log's sequence number is kept. Initially it can be set to 1.
My scheduled vbs script "E:\dba\orahome\stby\stby_archclean.vbs" uses the configuration file to connect to the primary site, query the dataguard status and delete applied archived logs at the standby server:

'********************* CODE TO PARSE DBA.XML FILE ****************************
Set oXMLDoc = CreateObject("MSXML2.DOMDocument.3.0")
oXMLDoc.async = False
If Not oXMLDoc.load("mntdir\config\dba.xml") Then
Set oErr = oXMLDoc.parseError
sErrMsg = "XML Parsing Error. File: " & oErr.url & " Reason : " & oErr.reason & " Line: " & oErr.line & ", Character: " & oErr.linepos & ",Text: " & oErr.srcText
Err.Raise 999, , sErrMsg
End If
'******************************************************************************

Set WshShell = CreateObject("WScript.Shell")
pUSER = oXMLDoc.selectSingleNode("/config/stby/prodUser").text
pPASS = oXMLDoc.selectSingleNode("/config/stby/prodPass").text
ARCHIVE_FORMAT = oXMLDoc.selectSingleNode("/config/general/archive_format").text
ARCHIVE_DIR = oXMLDoc.selectSingleNode("/config/general/archive_dir").text
pTNS = oXMLDoc.selectSingleNode("/config/stby/prodTNS").text

Set objFSO = CreateObject("Scripting.FileSystemObject")
set archive_last_deleted = objFSO.OpenTextFile("stby\archive_last_deleted.txt", 1) 'for reading
DIM LAST_DELETED
if not archive_last_deleted.AtEndOfStream then
LAST_DELETED = CLng(archive_last_deleted.ReadLine) ' CLng instead of CInt: sequence numbers can exceed 32767
else
WScript.Echo "Error on stby\archive_last_deleted file"
WScript.Quit
end if
archive_last_deleted.close
NEXTTODELETE=LAST_DELETED+1



'ADO Connection object
Set con = CreateObject("ADODB.Connection")
con.CursorLocation = 3
If con.State = 1 Then con.close
con.Open "PROVIDER=MSDAORA;DATA SOURCE="&pTNS&";USER ID="&pUSER&";PASSWORD="&pPASS&";"

'ADO Command object
Set cmd = CreateObject("ADODB.Command")
cmd.CommandText="select APPLIED_SEQ#-3 from v$ARCHIVE_DEST_STATUS where DEST_ID=2 "
cmd.CommandType=1
cmd.ActiveConnection=con

'ADO Recordset object
set rs=CreateObject("ADODB.Recordset")
Set rs=cmd.Execute


if rs.EOF then
WScript.Echo "Error: No APPLIED_SEQ# record for DEST_ID 2"
rs.Close
con.Close
WScript.Quit
else
DIM MAX_SEC_APPLIED
rs.MoveFirst
MAX_SEC_APPLIED = CLng(rs.Fields(0)) ' CLng instead of CInt to avoid overflow
rs.Close
con.Close
end if

WScript.Echo "Archive Sequence Range to Clean:" & cStr(NEXTTODELETE) & " - " & cStr(MAX_SEC_APPLIED)
do while MAX_SEC_APPLIED > NEXTTODELETE
WScript.Echo "removing... " & ARCHIVE_DIR & "\" & Replace(ARCHIVE_FORMAT,"SEQUENCE",cStr(NEXTTODELETE))
'WshShell.run("del /Q " & ARCHIVE_DIR & "\" & Replace(ARCHIVE_FORMAT,"SEQUENCE",cStr(NEXTTODELETE)))
objFSO.DeleteFile ARCHIVE_DIR & "\" & Replace(ARCHIVE_FORMAT,"SEQUENCE",cStr(NEXTTODELETE))
set archive_last_deleted = objFSO.OpenTextFile("stby\archive_last_deleted.txt", 2) 'for writing
archive_last_deleted.Write cStr(NEXTTODELETE)
archive_last_deleted.close
NEXTTODELETE = NEXTTODELETE+1
loop
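The relative paths in the script (mntdir\config\dba.xml, stby\...) assume E:\dba\orahome as the working directory, so a manual or scheduled-task invocation would look like:

cd /d E:\dba\orahome
cscript //nologo stby\stby_archclean.vbs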

I ran exactly the same script at the prod side; however, I added my backup procedure just before the delete.


Wednesday, April 9, 2008

How to find active sqls

The following script can be used to find long-running active SQLs, together with elapsed time and wait event:

set lines 233
select /*+ ordered */ to_char(s.sid,'99999'),
substr(username,1,8),
substr(t.sql_text,1,60),
nvl(last_call_et,0) TIME_sec,
t.hash_value,
substr(w.event,1,30) WAIT_EV,
substr(machine,1,10) MACHINE
from v$session s,v$sqltext t,v$session_wait w
where s.sql_address=t.address
and s.sql_HASH_VALUE=t.HASH_VALUE
and w.sid=s.sid
and username between 'AAAAAAAAA' and 'ZZZZZZZZZ'
and s.status='ACTIVE'
and t.piece=0
--and upper(sql_text) not like 'BEGIN%'
order by TIME_SEC
/

To find SQLs with high execution counts and short elapsed times:

1-) Take a snapshot of v$sqlarea:

create table tmp as select sql_id, sql_text, executions, ELAPSED_TIME, CPU_TIME, DISK_READS, DIRECT_WRITES, BUFFER_GETS from v$sqlarea;


2-) Track SQLs by their execution delta:

select a.sql_id,substr(b.sql_text,1,50),
b.executions-a.executions,
b.ELAPSED_TIME - a.ELAPSED_TIME,
b.CPU_TIME - a.CPU_TIME,
b.DISK_READS - a.DISK_READS,
b.DIRECT_WRITES - a.DIRECT_WRITES,
b.BUFFER_GETS - a.BUFFER_GETS
from tmp a, v$sqlarea b
where a.sql_id=b.sql_id and b.executions-a.executions > 100 order by b.executions-a.executions
/

Creating a Job for Active SQL Executions History

One day I needed to monitor SQL executions over time in order to distinguish batch and OLTP transactions. You may wonder why I did not use an AWR report, EM or something else: because I usually spend much more time on those tools to find what I need.

One important note: because my focus was SQL execution counts, I put the "a.sql_id=b.sql_id and b.executions-a.executions > 10000" condition in the SQL within procedure sp_sqlarea_snap_history, so that low-execution SQLs are ignored. If your focus is on something else (for example buffer gets or CPU), then I suggest changing this condition to suit your case. Sometimes a SQL executed a few times may consume much more resource than another SQL executed a thousand times.

create table sqlarea_snap as select sysdate snap_time, sql_id, sql_text, executions, ELAPSED_TIME, CPU_TIME, DISK_READS, DIRECT_WRITES, BUFFER_GETS from v$sqlarea where 1=2;

create table sqlarea_snap_history
(SNAP_BEGIN_TIME DATE,
SNAP_END_TIME DATE,
SQL_ID VARCHAR2(13),
SQL_TEXT VARCHAR2(1000),
EXECUTIONS_DELTA NUMBER,
TPS NUMBER,
ELAPSED_TIME_DELTA NUMBER,
CPU_TIME_DELTA NUMBER,
DISK_READS_DELTA NUMBER,
DIRECT_WRITES_DELTA NUMBER,
BUFFER_GETS_DELTA NUMBER);

create or replace procedure sp_sqlarea_snap_history
as
begin
EXECUTE IMMEDIATE 'insert into sqlarea_snap_history select a.SNAP_TIME, sysdate, a.sql_id, b.sql_text, b.executions-a.executions, '||
'trunc((b.executions-a.executions)/((sysdate-snap_time)*24*60*60)), b.ELAPSED_TIME - a.ELAPSED_TIME, b.CPU_TIME - a.CPU_TIME, '||
'b.DISK_READS - a.DISK_READS, b.DIRECT_WRITES - a.DIRECT_WRITES, b.BUFFER_GETS - a.BUFFER_GETS from sqlarea_snap a, v$sqlarea b '||
'where a.sql_id=b.sql_id and b.executions-a.executions > 10000 order by b.executions-a.executions';
EXECUTE IMMEDIATE 'drop table sqlarea_snap';
EXECUTE IMMEDIATE 'create table sqlarea_snap as select sysdate snap_time, sql_id, sql_text, executions, ELAPSED_TIME, CPU_TIME, '||
'DISK_READS, DIRECT_WRITES, BUFFER_GETS from v$sqlarea';
end;
/


exec dbms_job.isubmit(20,'sp_sqlarea_snap_history;',sysdate,'sysdate+1/144',false);

Finally, here is the query

select SNAP_BEGIN_TIME, EXECUTIONS_DELTA, TPS, SQL_TEXT from sqlarea_snap_history where SNAP_BEGIN_TIME > trunc(sysdate) order by SNAP_BEGIN_TIME desc, EXECUTIONS_DELTA desc;
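To pause or drop the collection job later, the standard dbms_job calls apply:

exec dbms_job.broken(20, TRUE); -- pause the job
exec dbms_job.remove(20);       -- remove it completely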

Monday, March 24, 2008

How to create a logon trigger for session tracing

First create the log table:

create table dba_user.sessiontrace(LDATE date, MACHINE  VARCHAR2(64), PROGRAM VARCHAR2(48), OSUSER  VARCHAR2(30), MODULE  VARCHAR2(64), SID NUMBER, SERIAL# NUMBER, SPID VARCHAR2(24));

Grant the necessary permissions to dba_user:

grant alter session to dba_user;
grant select on v_$session to dba_user;
grant select on v_$process to dba_user;

Create procedure to set session trace on:


CREATE OR REPLACE PROCEDURE
dba_user.logon_actions IS
CURSOR user_prog IS
SELECT s.machine,s.program,s.osuser,s.module,s.sid, s.serial#,p.spid  FROM v$session s, v$process p where s.audsid=sys_context('USERENV','SESSIONID') and s.paddr=p.addr;
user_rec user_prog%ROWTYPE;
begin
OPEN user_prog;
FETCH user_prog INTO user_rec;
insert into sessiontrace values(sysdate, user_rec.machine,user_rec.program,user_rec.osuser,user_rec.module,user_rec.sid,user_rec.serial#, user_rec.spid);
commit;
execute immediate 'alter session set sql_trace=true';
CLOSE user_prog;
END;
/

 
Finally create the logon trigger to trace the user:

CREATE OR REPLACE TRIGGER dba_user.ON_LOGIN_OF_DBUSER
AFTER LOGON ON DBUSER.SCHEMA
begin
dba_user.logon_actions;
end;
/
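The generated trace files land in the directory pointed to by the user_dump_dest parameter, and the SPID value logged in sessiontrace identifies the file (names typically look like {sid}_ora_{spid}.trc):

show parameter user_dump_dest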

Friday, March 14, 2008

An Example of Transportable Tablespaces

(set oracle environment for source DB)
At source DB:
connect / as sysdba
create tablespace data4 datafile
'/path1/data401.dbf' size 10m autoextend on next 10m,
'/path1/data402.dbf' size 10m autoextend on next 10m;
create tablespace indx4 datafile
'/path1/indx401.dbf' size 10m autoextend on next 10m;
create user tc identified by tc default tablespace data4
temporary tablespace temp quota unlimited on data4 quota unlimited on indx4;
grant create session, create table to tc;
create table tc.testtab (nr number, txt varchar2(10)) tablespace data4;
insert into tc.testtab values (1, 'line 1');
commit;
create index tc.i_testtab on tc.testtab(nr) tablespace indx4;
select owner||'.'||segment_name "OWNER.SEGMENT_NAME", segment_type, tablespace_name from dba_segments where tablespace_name in ('DATA4','INDX4');
alter tablespace data4 read only;
alter tablespace indx4 read only;
execute dbms_tts.transport_set_check('data4,indx4', TRUE);
select * from transport_set_violations;
exit;

At Shell:
exp "'/ as sysdba'" file=exp_tts.dmp log=exp_tts.log transport_tablespace=y tablespaces=data4,indx4
cp /path1/data401.dbf /path2/data401.dbf
cp /path1/data402.dbf /path2/data402.dbf
cp /path1/indx401.dbf /path2/indx401.dbf
cd /path2
chmod g+w *

(set oracle environment for destination DB)
At destination DB:
connect / as sysdba
create user tcnew identified by tcnew default tablespace tools temporary tablespace temp;
grant create session, create table, resource to tcnew;

At Shell:
imp "'/ as sysdba'" file=exp_tts.dmp log=imp_tts.log fromuser=tc touser=tcnew transport_tablespace=y datafiles=/path2/data401.dbf,/path2/data402.dbf,/path2/indx401.dbf

At destination DB:
connect / as sysdba
alter tablespace data4 read write;
alter tablespace indx4 read write;
alter user tcnew quota unlimited on data4;
alter user tcnew quota unlimited on indx4;
revoke resource from tcnew;
select owner||'.'||segment_name "OWNER.SEGMENT_NAME", segment_type, tablespace_name from dba_segments where tablespace_name in ('DATA4','INDX4');
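As a quick sanity check, the row inserted at the source should now be visible through the new owner:

select * from tcnew.testtab;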

How to determine full scan sqls?

Here is a SQL script to find statements doing full scans:

SELECT /*+ ordered */
DISTINCT s.hash_value, s.buffer_gets, s.disk_reads, s.executions,
s.sql_text,
'SELECT * FROM V$SQL_PLAN WHERE ADDRESS='''
|| s.address
|| ''' AND HASH_VALUE='
|| s.hash_value
FROM v$sql_plan p, v$sqlarea s
WHERE p.options LIKE '%FULL%'
AND s.address = p.address
AND s.hash_value = p.hash_value
AND object_name <> 'DUAL'
-- AND EXECUTIONS > 10
ORDER BY s.buffer_gets DESC
/

Tuesday, March 4, 2008

Sql Script to Generate AWR Report

Here is a SQL script to generate an AWR report using SQL*Plus:

select DBID,INSTANCE_NUMBER,SNAP_ID,END_INTERVAL_TIME, BEGIN_INTERVAL_TIME from dba_hist_snapshot order by BEGIN_INTERVAL_TIME;
undefine DBID
undefine INSID
undefine STARTSNAPID
undefine ENDSNAPID
column report_name new_value report_name;
select 'awr_'||(select NAME from v$database where DBID = &&DBID)||'_'||&&INSID||'_'||
(select to_char(END_INTERVAL_TIME,'yyyymmddhh24mi') from dba_hist_snapshot where SNAP_ID=&&STARTSNAPID)||'_'||
(select to_char(END_INTERVAL_TIME,'yyyymmddhh24mi') from dba_hist_snapshot where SNAP_ID=&&ENDSNAPID)
||'.html'
report_name
from dual;
set termout off
spool &&report_name
select output from table(sys.dbms_workload_repository.awr_report_html(&DBID,&INSID,&STARTSNAPID,&ENDSNAPID));
spool off
set termout on

Setting up Dataguard Configuration for Oracle 10G

I assume that the physical standby database has previously been created, configured and brought up-to-date.

Prod configuration
If you use an spfile, add scope=both at the end of all the alter commands. Otherwise, don't forget to change the parameters in the init.ora file.
1-) Be sure logging is enabled at the database level: ALTER DATABASE FORCE LOGGING;
2-) Change the db_unique_name parameter (in init.ora or spfile) if necessary. The default is ORACLE_SID. You may leave it as it is. The change requires a restart to apply.
* alter system set STANDBY_FILE_MANAGEMENT=AUTO;
* alter system set log_archive_dest_state_1=ENABLE;
* alter system set log_archive_dest_state_2=DEFER;
* alter system set log_archive_dest_1='LOCATION={archive_path_prod} MANDATORY REOPEN=60 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME={db_unique_name_prod}';
TNS_stby is the TNS name configured for the standby database. I strongly suggest configuring a new TNS listener on a different port reserved for the dataguard configuration. We will set it later at the standby database.
* alter system set log_archive_dest_2='SERVICE={TNS_stby} LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME={db_unique_name_stby}';
* alter system set FAL_SERVER={TNS_stby};
* alter system set FAL_CLIENT={TNS_prod}; -- TNS_prod: tns name for primary database.
3-)Be sure that remote_login_passwordfile is set to EXCLUSIVE (show parameter remote_login_passwordfile)
4-)On platforms other than Windows, create a password file, and set the password for the SYS user to the same password used by the SYS user on the primary database. The password for the SYS user on every database in a Data Guard configuration must be identical for redo transmission to succeed. See Oracle Database Administrator's Guide.
5-) Add (number of redo log groups + 1) standby logfiles to the prod database. These standby redo logs won't be used unless a switchover is performed.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 20 ('{redo_path}/stbyredo01.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 21 ('{redo_path}/stbyredo02.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 22 ('{redo_path}/stbyredo03.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 23 ('{redo_path}/stbyredo04.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 24 ('{redo_path}/stbyredo05.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 25 ('{redo_path}/stbyredo06.log') SIZE 52428800;

If there is a RAC configuration, then add logfiles for thread 2:

ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 26 ('{redo_path}/stbyredo07.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 27 ('{redo_path}/stbyredo08.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 28 ('{redo_path}/stbyredo09.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 29 ('{redo_path}/stbyredo10.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 30 ('{redo_path}/stbyredo11.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 31 ('{redo_path}/stbyredo12.log') SIZE 52428800;

Standby configuration:
If you use an spfile, add scope=both at the end of all the alter commands. Otherwise, don't forget to change the parameters in the init.ora file.
1-) Change the db_unique_name parameter (in init.ora or spfile) if necessary. It cannot be the same as the primary's. I assume it is {db_unique_name_stby}.
* alter system set STANDBY_FILE_MANAGEMENT=AUTO;
* alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE;
* alter system set LOG_ARCHIVE_DEST_STATE_2=ENABLE;
* alter system set LOG_ARCHIVE_DEST_1='LOCATION={archive_path_stby} VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME={db_unique_name_stby}';
TNS_prod is the TNS name configured for the prod database. I strongly suggest configuring a new TNS listener on a different port reserved for the dataguard configuration.
* alter system set LOG_ARCHIVE_DEST_2='SERVICE={TNS_prod} LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME={db_unique_name_prod}';
* alter system set FAL_SERVER={TNS_prod};
* alter system set FAL_CLIENT={TNS_stby};
2-)Be sure that remote_login_passwordfile is set to EXCLUSIVE (show parameter remote_login_passwordfile)
3-) Configure the listener at the prod site, then make sure "tnsping TNS_prod" works.
4-) Configure the listener at the standby site, then make sure "tnsping TNS_stby" works.
5-) On platforms other than Windows, create a password file, and set the password for the SYS user to the same password used by the SYS user on the primary database. The password for the SYS user on every database in a Data Guard configuration must be identical for redo transmission to succeed. See the Oracle Database Administrator's Guide.
6-) Add (number of redo log groups + 1) standby logfiles to the standby database.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 20 ('{redo_path}/stbyredo01.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 21 ('{redo_path}/stbyredo02.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 22 ('{redo_path}/stbyredo03.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 23 ('{redo_path}/stbyredo04.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 24 ('{redo_path}/stbyredo05.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 25 ('{redo_path}/stbyredo06.log') SIZE 52428800;

If there is a RAC configuration, then add logfiles for thread 2:

ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 20 ('{redo_path}/stbyredo07.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 21 ('{redo_path}/stbyredo08.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 22 ('{redo_path}/stbyredo09.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 23 ('{redo_path}/stbyredo10.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 24 ('{redo_path}/stbyredo11.log') SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 25 ('{redo_path}/stbyredo12.log') SIZE 52428800;


Finally, activate the dataguard configuration (prod):
alter system set log_archive_dest_state_2=ENABLE; -- If you use spfile, add scope=both; otherwise don't forget to update the init.ora file

Tuning Network Configuration:
Network Tuning - Oracle Net Session Data Unit (SDU) Size
Prod (tnsnames.ora):
...
( DESCRIPTION =
(SDU=32767)
( ADDRESS = (PROTOCOL = TCP) (Host = ...) (Port = ...) )
( CONNECT_DATA = (SID = ...) )
)

Standby (listener.ora):
SID_LIST_LISTENER_... =
( SID_LIST=
(SID_DESC =
(SDU=32767)
(ORACLE_HOME = ...)
(SID_NAME =...)
)
)

Network Tuning: Setting TCP Socket Buffer Sizes:
BDP= {available bandwidth}*{Network Round Trip Time-RTT}
Socket Buffer Size = 3*BDP

Example for 34 Mbit bandwidth with 20 ms RTT:
Socket Buffer Size = 3 * ((34,000,000 * 0.02) / 8) = 255,000 bytes

Prod(tnsnames.ora):
... =
( DESCRIPTION =
(SDU=32767)
(SEND_BUF_SIZE=255000)
(RECV_BUF_SIZE=255000)
( ADDRESS = (PROTOCOL = TCP) (Host = ...) (Port = ...) )
( CONNECT_DATA = (SID = ...) )
)

standby(listener.ora):
LISTENER_...=
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS= (PROTOCOL= IPC)(KEY= ...))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST =...)(PORT = 1521) (SEND_BUF_SIZE=255000) (RECV_BUF_SIZE=255000))
)
)
)

Starting and stopping Managed Recovery (Standby):
start
STARTUP MOUNT;
alter database recover managed standby database using current logfile disconnect from session; -- Applies not only completed redo logs, but also redo log currently being copied
or
alter database recover managed standby database disconnect from session; -- Applies completed redo logs

stop
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SHUTDOWN IMMEDIATE;

Monitoring:
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
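Two more monitoring queries that are useful here (standard Data Guard views):

select * from v$archive_gap; -- on the standby: reports any missing log sequence range
select max(sequence#) from v$archived_log where applied='YES'; -- last applied sequence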

A Script to Add Supplemental Logging at Table Level

Here is a script which generates the statements that enable supplemental logging at table level for the selected tables.

declare
out varchar2(2000);
cflag number;
begin
-- You may change this line; I am selecting only tables of 50 to 250 MB owned by OWNR
for l in ( select segment_name from dba_segments where bytes/(1024*1024) between 50 and 250 and owner in ('OWNR') and SEGMENT_TYPE = 'TABLE' and segment_name not like 'BIN%' order by segment_name)
loop
cflag := 0;
out :='alter table OWNR.'||l.segment_name||' ADD SUPPLEMENTAL LOG GROUP "'||substr(l.segment_name,1,25)||'_SLOG" (';


-- If there is a PK then add its columns
for k in (select COLUMN_NAME from DBA_IND_COLUMNS where INDEX_NAME in (select index_name from dba_constraints where CONSTRAINT_TYPE = 'P' and table_name=l.segment_name) order by COLUMN_POSITION)

loop
if (cflag = 0) then out:=out||k.column_name;
else out:=out||','||k.column_name;
end if;
cflag := 1;
end loop;

-- If there is no PK but there is a UI, then add its columns.
-- Normally there should not be more than one UI on any table. If there is, then there is a problem. You may check it with this sql:
-- select TABLE_NAME,count(*) from dba_indexes where INDEX_TYPE='NORMAL' and UNIQUENESS='UNIQUE' and owner='OWNR' and
-- TABLE_NAME not in (select TABLE_NAME from dba_constraints where CONSTRAINT_TYPE = 'P') group by TABLE_NAME having count(*) > 1;

if (cflag = 0) then
for m in (select COLUMN_NAME from DBA_IND_COLUMNS where INDEX_NAME in (select index_name from dba_indexes where INDEX_TYPE='NORMAL' and UNIQUENESS='UNIQUE' and owner='OWNR' and table_name=l.segment_name) order by COLUMN_POSITION)

loop
if (cflag = 0) then out:=out||m.column_name;
else out:=out||','||m.column_name;
end if;
cflag := 1;
end loop;
end if;
-- If there is no PK and UI, then add all columns to supplemental logging
if (cflag = 0) then
for n in (select COLUMN_NAME from dba_tab_columns where table_name=l.segment_name order by COLUMN_ID)
loop
if (cflag = 0) then out:=out||n.column_name;
else out:=out||','||n.column_name;
end if;
cflag := 1;
end loop;
end if;

out := out||') ALWAYS;';
dbms_output.put_line(out);
end loop;
END;
/
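Since the statements are emitted through dbms_output, enable it in SQL*Plus before running the block:

set serveroutput on size 1000000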

Tuesday, January 8, 2008

STANDBY_FILE_MANAGEMENT MANUAL, ORA-01110

If the STANDBY_FILE_MANAGEMENT system parameter is MANUAL and a datafile is added to the primary database, the same datafile must be added manually to the standby database after the ORA-01110 error. However, by setting STANDBY_FILE_MANAGEMENT=AUTO, you allow the standby database to create the datafile automatically whenever one is added to the primary.

SQL> alter system set STANDBY_FILE_MANAGEMENT=MANUAL;
SQL> alter database create datafile '...DBF' as '...DBF' size ...M;
SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO;

SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=spfile; -- or change/add the init.ora parameter