To trace an Oracle error ORA-XXXX, first log in to the database with a privileged account and enable the trace:
alter system set events 'XXXX trace name errorstack level 1';
(XXXX is the Oracle error number without leading zeros. You can set the trace level up to 3 for a more detailed trace.)
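For example, to capture a detailed error stack whenever ORA-00942 is raised (a sketch; substitute your own error number and level):
alter system set events '942 trace name errorstack level 3';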
Once the trace is enabled, each occurrence of ORA-XXXX generates a trace file in the trace (dump) directory.
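On 11g you can locate that directory through v$diag_info (on 10g, check the user_dump_dest parameter instead):
select value from v$diag_info where name = 'Diag Trace';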
To disable the trace:
alter system set events 'XXXX trace name errorstack off';
The generated trace file is more like a memory dump than an error stack, so you may not find what you are looking for easily...
Thursday, July 7, 2011
How to configure 10g database with 11g RAC SCAN
If you have a 10g database in an 11.2 cluster and want to connect to the 10g database using SCAN, you must update the local/remote listener parameters of the 10g database manually. Here is an example for a 2-node RAC:
mycluster-crs-scan.mydomain.com: SCAN name (DNS)
mynode1-vip.mydomain.com: VIP of node 1
MYSID1: instance name running on node 1
mynode2-vip.mydomain.com: VIP of node 2
MYSID2: instance name running on node 2
alter system set remote_listener='mycluster-crs-scan.mydomain.com:1521' scope=both SID='*';
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mynode1-vip.mydomain.com)(PORT=1521))))' scope=both SID='MYSID1';
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mynode2-vip.mydomain.com)(PORT=1521))))' scope=both SID='MYSID2';
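With these parameters set, clients can reach the 10g database through the SCAN. A sample tnsnames.ora entry (the alias MYDB and service name MYSERVICE are hypothetical; substitute your own):
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-crs-scan.mydomain.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYSERVICE)
    )
  )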
Thursday, March 3, 2011
How to find CPU-core information on Sun Solaris
Here is an example showing 4 CPUs with 16 cores:
$ prtdiag -v
System Configuration: Sun Microsystems sun4u SPARC Enterprise M9000 Server
System clock frequency: 960 MHz
Memory size: 20480 Megabytes
==================================== CPUs ====================================
CPU CPU Run L2$ CPU CPU
LSB Chip ID MHz MB Impl. Mask
--- ---- ---------------------------------------- ---- --- ----- ----
00 2 16, 17, 18, 19 2280 5.0 6 146
02 1 72, 73, 74, 75 2280 5.0 6 146
03 2 112, 113, 114, 115 2280 5.0 6 146
04 3 152, 153, 154, 155 2280 5.0 6 146
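For a quicker summary, psrinfo can cross-check the same information (assuming Solaris 10 or later; output varies by system):
$ psrinfo -p          # number of physical processor chips
4
$ psrinfo -pv         # each chip with its virtual processor IDs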
Tuesday, February 1, 2011
Incremental statistics
You should gather statistics at the table level, not at the partition level (the "partname" parameter should not be used). Oracle itself determines which partitions have been modified and updates the global statistics by scanning only the modified partitions.
To gather incremental statistics on a table:
1-) Set the INCREMENTAL statistics preference for the table:
exec dbms_stats.set_table_prefs('MYSCHEMA','MYTABLE','INCREMENTAL' ,'TRUE') ;
2-) Check that it is set (should return TRUE):
select dbms_stats.get_prefs('INCREMENTAL','MYSCHEMA','MYTABLE') from dual;
3-) Also check that the PUBLISH attribute is TRUE (the default):
select dbms_stats.get_prefs('PUBLISH','MYSCHEMA','MYTABLE') from dual;
4-) Gather statistics on the table with GRANULARITY set to AUTO and ESTIMATE_PERCENT set to AUTO_SAMPLE_SIZE, as in the sketch below.
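A minimal example of that call (using the same MYSCHEMA/MYTABLE as above):
exec dbms_stats.gather_table_stats('MYSCHEMA','MYTABLE', granularity=>'AUTO', estimate_percent=>dbms_stats.auto_sample_size);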
Here is a query to check whether column-level incremental statistics are enabled:
select o.name, c.name, decode(bitand(h.spare2,8),8,'yes','no') incremental
from sys.hist_head$ h, sys.obj$ o, sys.col$ c
where h.obj#=o.obj#
and o.obj# = c.obj#
and h.intcol#=c.intcol#
and o.name='TDM_KOBI_NBSM_DAILY'
and o.subname is null;
For official documentation:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/stats.htm (13.3.1.3 Statistics on Partitioned Objects)
Setting Global/Database prefs:
1-) To set database-wide statistics gathering preferences, excluding Oracle system tables:
exec DBMS_STATS.SET_DATABASE_PREFS('METHOD_OPT','FOR ALL COLUMNS SIZE 254');
2-) To set statistics gathering preferences globally:
exec DBMS_STATS.SET_GLOBAL_PREFS('METHOD_OPT','FOR ALL COLUMNS SIZE 254');
To see a preference value:
select dbms_stats.get_prefs('METHOD_OPT') from dual;
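Note that table-level preferences, when set, override the database/global values; pass the owner and table name to see the effective setting for a given table:
select dbms_stats.get_prefs('METHOD_OPT','MYSCHEMA','MYTABLE') from dual;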
Preferences:
AUTOSTATS_TARGET
CASCADE
DEGREE
ESTIMATE_PERCENT
METHOD_OPT
NO_INVALIDATE
GRANULARITY
PUBLISH
INCREMENTAL
STALE_PERCENT
Thursday, January 6, 2011
Disabling Parallel Load Balancing
Normally, parallel load balancing is a useful feature that distributes parallel sessions among RAC nodes depending on each node's workload. However, there is a small problem with this algorithm. Node load statistics are recalculated only at short intervals (I don't remember the exact figure; let's say every few milliseconds). What happens if a big batch allocates a large number of parallel sessions within this period? All of the parallel sessions are allocated based on the same stale statistic, which may leave one node overloaded.
Let me explain with an example. Say there is a 3-node RAC cluster and we run a big batch that opens 128 parallel sessions. Assume that before the batch runs, two nodes are 50% loaded and one is idle (<3%). When we run the batch, Oracle starts distributing parallel sessions to the nodes depending on their workload, and it may decide to open all of the parallel sessions on the idle node. This happens because all of the parallel sessions are opened within a very short interval during which the node load statistics are not updated, so Oracle opens every connection on the idle node, which appears to be the best candidate. The result: two nodes at 50% load and one at 100%.
To eliminate this, you can set the following hidden parameter in the batch session:
alter session set "_parallel_load_balancing"=FALSE;
This disables load balancing and distributes the sessions one by one across all nodes, ignoring workload.
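A minimal sketch of a batch session using it (the table name and degree of parallelism are hypothetical):
alter session set "_parallel_load_balancing"=FALSE;
select /*+ parallel(t, 128) */ count(*) from mybigtable t;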
However, you should be very careful with this parameter. If one node is already 100% loaded and you set it, Oracle will try to open a third of the 128 parallel sessions on that already saturated node, which may bring the node down, whereas load balancing would have prevented new sessions from being opened there.