Here is the redo log information of a newly created database:
SQL> select * from v$log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
         1          1         17   52428800          1 NO  INACTIVE                996238 06/09/2007 22:01:59
         2          1         18   52428800          1 NO  INACTIVE               1006432 06/09/2007 22:13:32
         3          1         19   52428800          1 NO  CURRENT                1036439 07/09/2007 09:56:44
SQL> select * from v$logfile;
    GROUP# STATUS  TYPE    MEMBER
---------- ------- ------- -------------------------------------------------
         3         ONLINE  /database/data/redo03.log
         2         ONLINE  /database/data/redo02.log
         1         ONLINE  /database/data/redo01.log
Here is how I changed this to five 200M redo log groups:
SQL> alter database add logfile group 4 ('/database/data/redo04.log') size 200M;
SQL> alter database add logfile group 5 ('/database/data/redo05.log') size 200M;
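At this point a quick query confirms the new groups are there with the expected size (optional, but useful before going on):
SQL> select group#, bytes/1024/1024 mb, status from v$log;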
While running the following SQL commands, you may hit an error like this:
ORA-01623: log 3 is current log for instance RPTDB (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: '/database/data/redo03.log'
If so, run "alter system switch logfile;" until the current log group is 4 or 5, then execute "alter system checkpoint;" before retrying the drop.
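For example (the group shown as CURRENT will differ from system to system):
SQL> select group#, status from v$log;
SQL> alter system switch logfile;
SQL> alter system checkpoint;
Repeat the switch until groups 1, 2 and 3 all show INACTIVE, then continue with the drops.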
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;
Then move (or simply delete) the old redo log files at the operating system level:
mv /database/data/redo01.log /database/data/redo01_old.log
mv /database/data/redo02.log /database/data/redo02_old.log
mv /database/data/redo03.log /database/data/redo03_old.log
Finally, re-create groups 1, 2 and 3 at the new size:
SQL> alter database add logfile group 1 ('/database/data/redo01.log') size 200M;
SQL> alter database add logfile group 2 ('/database/data/redo02.log') size 200M;
SQL> alter database add logfile group 3 ('/database/data/redo03.log') size 200M;
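As a final check, all five groups should now be 200M and point at the expected files:
SQL> select l.group#, l.bytes/1024/1024 mb, l.status, f.member from v$log l, v$logfile f where l.group# = f.group# order by l.group#;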
Thursday, September 6, 2007
How to determine tables with the highest data change rate in Oracle 10g
The DBA_HIST_SEG_STAT view shows physical read/write and block change statistics on segments; the statistics are collected by regular AWR snapshots. The following SQL scripts return tables with a high block change rate, more than 20,000 changed blocks in a single snapshot.
-- tables with more than 20,000 block changes in a single snapshot
select SNAP_ID, OWNER, OBJECT_NAME, OBJECT_TYPE, DB_BLOCK_CHANGES_DELTA
from dba_objects a,
     (select SNAP_ID, obj#, DB_BLOCK_CHANGES_DELTA
      from DBA_HIST_SEG_STAT
      where DB_BLOCK_CHANGES_DELTA > 20000) b
where a.object_id = b.obj#
  and a.OBJECT_TYPE = 'TABLE'
order by b.SNAP_ID desc, b.DB_BLOCK_CHANGES_DELTA desc;
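As a side note, the snapshot interval and retention that drive these statistics can be checked with:
select SNAP_INTERVAL, RETENTION from DBA_HIST_WR_CONTROL;
The next script extends the first one with the snapshot end time from DBA_HIST_SNAPSHOT: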
select c.SNAP_ID, c.END_INTERVAL_TIME SNAP_TIME, a.OWNER, a.OBJECT_NAME, a.OBJECT_TYPE, b.DB_BLOCK_CHANGES_DELTA
from dba_objects a,
     (select SNAP_ID, obj#, DB_BLOCK_CHANGES_DELTA
      from DBA_HIST_SEG_STAT
      where DB_BLOCK_CHANGES_DELTA > 20000) b,
     DBA_HIST_SNAPSHOT c
where a.object_id = b.obj#
  and a.OBJECT_TYPE = 'TABLE'
  and b.SNAP_ID = c.SNAP_ID
order by c.SNAP_ID desc, b.DB_BLOCK_CHANGES_DELTA desc;
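One more variation along the same lines (a sketch, not one of the scripts above): summing the deltas over all retained snapshots ranks tables by total change volume rather than by per-snapshot peaks.
select a.OWNER, a.OBJECT_NAME, sum(b.DB_BLOCK_CHANGES_DELTA) TOTAL_BLOCK_CHANGES
from dba_objects a, DBA_HIST_SEG_STAT b
where a.object_id = b.obj# and a.OBJECT_TYPE = 'TABLE'
group by a.OWNER, a.OBJECT_NAME
order by TOTAL_BLOCK_CHANGES desc;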