Monday, October 12, 2015

ODI-10165: The RunAsUser MyUser is not a valid user from external user identity store


This error occurred after I switched the authentication mode of the ODI repository to external (Microsoft Active Directory) authentication. While developers could run scenarios successfully using the configured agent, I kept getting ODI-10165.

The cause was that, while switching the ODI authentication mode to external authentication, I added my user to the supervisor user list. My user was therefore authenticating as a supervisor but was not present in the Security > Users list, and that combination causes the problem. After I added my user under Security > Users, the problem was solved.
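A quick way to verify that a user really exists in the security users list is to query the master repository directly; a sketch, assuming the repository's SNP_USER table and its WUSER_NAME column (verify both names against your ODI version's repository model):

select wuser_name from snp_user where wuser_name = 'MyUser';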

Thursday, September 17, 2015

Using Debug Mode in Sql Developer with KERBEROS Debug

SQL Developer debug mode can be enabled in two steps:

1-) Set the Log Directory path in Tools -> Preferences -> Environment.



2-) In the sqldeveloper\sqldeveloper\bin directory, edit the sqldeveloper.conf file: comment out the line "IncludeConfFile  sqldeveloper-nondebug.conf" and uncomment the line "IncludeConfFile  sqldeveloper-debug.conf":

#IncludeConfFile  sqldeveloper-nondebug.conf
IncludeConfFile  sqldeveloper-debug.conf



Restart SQL Developer. Debug logs will now be created under the log directory with the name format SQLDeveloper_YYYYMMDDHH24MISS.log

In order to add Kerberos debugging, add the following Java option to the sqldeveloper.conf file in the sqldeveloper\sqldeveloper\bin directory:

AddVMOption -Dsun.security.krb5.debug=true

However, Kerberos debug information is not written to the SQL Developer debug log file; it is sent to stdout. In order to capture the Kerberos debug output, start SQL Developer from the command line. Set the screen buffer size of the cmd window to a value higher than the system default so that debug output is not lost to a small buffer.
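For example, from a cmd window (the executable name and path may differ between versions, so treat this as a sketch; redirecting stdout/stderr to a file is also an alternative to enlarging the buffer):

cd sqldeveloper\sqldeveloper\bin
sqldeveloper.exe > krb5_debug.log 2>&1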


Sql Developer: How to set up Hive connection with Kerberos authentication on Windows client

By default, Windows restricts applications from retrieving the TGT session key. The following registry values should be set to allow it (a reg.exe one-liner is sketched after the list):

Windows 2003, Windows Vista, and later:
• Key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Kerberos\Parameters
• Value Name: allowtgtsessionkey
• Value Type: REG_DWORD
• Value: 0x01
Windows XP and Windows 2000:
• Key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Kerberos
• Value Name: allowtgtsessionkey
• Value Type: REG_DWORD
• Value: 0x01
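These values can also be set from an elevated command prompt instead of regedit; for the Vista-and-later key (adjust the key path for XP/2000):

reg add HKLM\System\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v allowtgtsessionkey /t REG_DWORD /d 1 /f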

SQL Developer 4.0.3 and later supports Hive connections. My version is 4.1.1.19.

Hive JDBC drivers should be downloaded and registered with SQL Developer as third-party JDBC drivers. My Hadoop distribution is Cloudera and the driver version is 2.5.4. I downloaded the driver from the following URL:
Here is how it looks:

"Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files " should be downloaded. You should extract (overwrite) files local_policy.jar and US_export_policy.jar into sqldeveloper/jdk/jre/lib/security directory. 

A krb5.conf file is also needed by Java for Kerberos authentication. If KRB5_CONFIG is not set, Java first looks in jre/lib/security and then in the %WINDIR% directory. Create this file under sqldeveloper/jdk/jre/lib/security (the same directory where you put the local_policy.jar and US_export_policy.jar files). The parameters in this file depend on your KDC configuration; in general, the /etc/krb5.conf file from any Hadoop cluster node works fine on Windows too. My krb5.conf file looks like this:

[libdefaults]
default_realm = MYDOMAIN.COM
default_tgs_enctypes = aes256-cts aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc
default_tkt_enctypes = aes256-cts aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc
permitted_enctypes = aes256-cts aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc
dns_lookup_realm = true
dns_lookup_kdc = true
passwd_check_s_address = false
noaddresses = true
udp_preference_limit = 1
ccache_type = 3
kdc_timesync = 1
[domain_realm]
.mydomain.com = MYDOMAIN.COM
mydomain.com = MYDOMAIN.COM
kdcserver.mydomain.com = MYDOMAIN.COM
[realms]

Now, you can create a Hive connection with Kerberos authentication on the Windows client:
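For reference, the JDBC URL such a connection resolves to looks something like the following. The host, port, realm, and service name are placeholders, and the AuthMech/Krb* parameter names are those documented for the Cloudera (Simba) driver, so check your driver's documentation:

jdbc:hive2://hiveserver.mydomain.com:10000/default;AuthMech=1;KrbRealm=MYDOMAIN.COM;KrbHostFQDN=hiveserver.mydomain.com;KrbServiceName=hive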




Thursday, March 5, 2015

Oracle Enterprise Manager HTTPS Access Certificate Problem


In order to avoid the certificate problem with Oracle Enterprise Manager HTTPS access, read the following doc:

Installing Browser Certificates

Tuesday, February 24, 2015

Moving Datafile from File System to ASM


If a datafile is created on the file system instead of in ASM, the following procedure can be applied to move it to ASM.


SQL> select NAME, STATUS, ENABLED from v$datafile where TS#=nn;

NAME                                      STATUS      ENABLED
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     ONLINE      READ WRITE
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     ONLINE      READ WRITE

This is how the file ended up on the file system: because the file name 'DATA' does not start with '+', Oracle created it as a plain file under $ORACLE_HOME/dbs instead of in the DATA disk group:

SQL> ALTER TABLESPACE TS_NAME ADD DATAFILE 'DATA' SIZE 1M AUTOEXTEND ON NEXT 1M MAXSIZE 10M;

NAME                                      STATUS      ENABLED
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     ONLINE      READ WRITE
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     ONLINE      READ WRITE
/u01/app/.../DBNAME/dbs/DATA              ONLINE      READ WRITE  <- file created at file system instead of ASM

Take the datafile offline (in ARCHIVELOG mode, OFFLINE DROP behaves like a plain OFFLINE and the file can still be recovered):

SQL> alter database datafile '/u01/app/.../DBNAME/dbs/DATA' offline drop;
Database altered.

SQL> select NAME, STATUS, ENABLED from v$datafile where TS#=nn;

NAME                                      STATUS      ENABLED
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     ONLINE      READ WRITE
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     ONLINE      READ WRITE
/u01/app/.../DBNAME/dbs/DATA              RECOVER     READ WRITE


Copy the datafile to ASM using RMAN. If you use asmcmd for the file copy, the file will be created as a non-Oracle-managed file (with an alias).

RMAN> COPY DATAFILE '/u01/app/.../DBNAME/dbs/DATA' TO '+DATA';

Starting backup at ....
allocated channel...
...
channel ORA_DISK_1: starting datafile copy
input datafile file number=nnnnn name= /u01/app/.../DBNAME/dbs/DATA
output file name=+DATA/DBNAME/datafile/TS_NAME.nnn.nnn  tag=...
channel ORA_DISK_1: datafile copy complete, elapsed time: ...
Finished backup ...

Update the file location:

SQL> alter tablespace TS_NAME  rename  datafile '/u01/app/.../DBNAME/dbs/DATA' to '+DATA/DBNAME/datafile/TS_NAME.nnn.nnn';
Tablespace altered.

Recover the datafile and bring it online:

SQL> recover datafile '+DATA/DBNAME/datafile/TS_NAME.nnn.nnn';
Media recovery complete.

SQL> alter database datafile '+DATA/DBNAME/datafile/TS_NAME.nnn.nnn' online;
Database altered.

SQL> select NAME, CREATION_TIME, STATUS, ENABLED from v$datafile where TS#=nn;

NAME                                      CREATION_TIME   STATUS      ENABLED
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     dd.mm.yy        ONLINE      READ WRITE
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     dd.mm.yy        ONLINE      READ WRITE
+DATA/DBNAME/datafile/TS_NAME.nnn.nnn     dd.mm.yy        ONLINE      READ WRITE  <- the new copy has the most recent CREATION_TIME
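As a side note, on 12c and later the whole procedure can be replaced with a single online statement; a sketch with the same placeholder paths:

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/.../DBNAME/dbs/DATA' TO '+DATA';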



Thursday, February 19, 2015

ORA-00942 while creating materialized view


I hit the following problem while creating a materialized view with the REFRESH FAST option, although SCHEMA2.TABNAME exists and the SCHEMA1 user has the SELECT privilege on it:

CREATE MATERIALIZED VIEW SCHEMA1.MVIEW ....
SELECT .... FROM SCHEMA2.TABNAME....

ORA-12018: following error encountered during code generation for "SCHEMA1"."MVIEW"
ORA-00942: table or view does not exist

The problem is not the source table itself but the materialized view log table created on it earlier. To solve the problem, identify the materialized view log table and grant SELECT on it to the materialized view owner:

select LOG_TABLE from ALL_MVIEW_LOGS where log_owner='SCHEMA2' and master='TABNAME';

grant select on SCHEMA2.LOG_TABLE to SCHEMA1;
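The two steps can also be combined by letting the dictionary generate the grant statement; a small sketch with the same placeholder names:

select 'grant select on '||log_owner||'.'||log_table||' to SCHEMA1;'
from ALL_MVIEW_LOGS where log_owner='SCHEMA2' and master='TABNAME';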




Tuesday, January 27, 2015

ODI Object Queries


The following PL/SQL block prints a mapping's folder path:

set serveroutput on

declare
  mapping_name varchar2(200) := 'MAPPING_NAME';
  folder_id int;
  parent_folder_id int;
  folder_name varchar2(200);
begin
  DBMS_OUTPUT.ENABLE(1000000);
  dbms_output.put_line('Mapping: '||mapping_name);
  -- find the folder that directly contains the mapping
  select i_folder into folder_id from SNP_MAPPING where name = mapping_name;
  select i_folder, folder_name, par_i_folder
    into folder_id, folder_name, parent_folder_id
    from snp_folder where i_folder = folder_id;
  -- walk up the folder tree until the root (par_i_folder is null)
  while parent_folder_id is not null
  loop
    dbms_output.put_line('Folder: '||folder_name);
    select i_folder, folder_name, par_i_folder
      into folder_id, folder_name, parent_folder_id
      from snp_folder where i_folder = parent_folder_id;
  end loop;
  dbms_output.put_line('Folder: '||folder_name);
end;
/



The following block does the same for a package's folder path:


set serveroutput on
declare
  pck_name varchar2(200) := 'PACKAGE_NAME';
  folder_id int;
  parent_folder_id int;
  folder_name varchar2(200);
begin
  DBMS_OUTPUT.ENABLE(1000000);
  dbms_output.put_line('Pck: '||pck_name);
  -- find the folder that directly contains the package
  select i_folder into folder_id from SNP_PACKAGE where pack_name = pck_name;
  select i_folder, folder_name, par_i_folder
    into folder_id, folder_name, parent_folder_id
    from snp_folder where i_folder = folder_id;
  -- walk up the folder tree until the root (par_i_folder is null)
  while parent_folder_id is not null
  loop
    dbms_output.put_line('Folder: '||folder_name);
    select i_folder, folder_name, par_i_folder
      into folder_id, folder_name, parent_folder_id
      from snp_folder where i_folder = parent_folder_id;
  end loop;
  dbms_output.put_line('Folder: '||folder_name);
end;
/
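As an aside, the same walk can be expressed as a single hierarchical query instead of a PL/SQL loop; a sketch for the mapping case, assuming the same SNP_FOLDER columns (level 1 is the mapping's own folder, the last row is the root):

select level, folder_name
from snp_folder
start with i_folder = (select i_folder from snp_mapping where name = 'MAPPING_NAME')
connect by i_folder = prior par_i_folder;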



Monday, January 5, 2015

ODI 10g: ORA-00001: unique constraint (PK_EXP_TXT) violated

There are two types of rows logged in the SNP_EXP_TXT table. The first type has an I_TXT column value ending in "101"; the second type ends in "010". Each type has its own incremental sequence. The first type is used for persistent definitions; there are not many rows of this type, so it does not cause trouble. The second type (ending in "010"), however, is used for logging and history text, and there can be a great many rows of it. When the I_TXT value reaches "9999999010" (really a sequence value of 9999999; ignore the last three digits) it cycles back to "1010", and this problem can occur if the old "010" records have not been purged by then.
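To make the encoding concrete, a made-up I_TXT value of 1234567010 splits into the sequence part 1234567 and the type suffix 010:

select trunc(1234567010/1000) as seq_part,  -- 1234567, the incremental sequence
       mod(1234567010,1000)   as txt_type   -- 10, i.e. the "010" logging/history type
from dual;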


To confirm the wraparound, the following query will print 9999999010:


select max(I_TXT) from SNP_EXP_TXT where mod(I_TXT,1000)=10


The following query prints how many rows each type has:


select mod(I_TXT,1000), count(*) from SNP_EXP_TXT group by mod(I_TXT,1000) order by 1


Say the row at which the sequence wrapped around has a FIRST_DATE value of X. Then the following query prints the I_TXT value up to which the cycled sequence can advance before "ORA-00001: unique constraint (PK_EXP_TXT) violated" occurs (the smallest surviving pre-cycle id):


select min(I_TXT) from SNP_EXP_TXT where mod(I_TXT,1000)=10 and first_date < X


And to find the last row inserted by the cycled sequence (its current position):


select max(I_TXT) from SNP_EXP_TXT where mod(I_TXT,1000)=10 and first_date > X


The following query prints the current sequence number:


select * from SNP_ID where id_tbl= upper('snp_exp_txt')


Here it is explained that there is a "bug" in Oracle Data Integrator that doesn't clean SNP_EXP_TXT when the "parent" records are deleted. The following query can be used to find orphaned records (I added the time and type conditions):


select * from snp_exp_txt where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10
 and not exists
          (select 0
             from (     select distinct I_TXT_TASK_MESS I_TXT
                          from SNP_SESS_TASK_LOG
                   union all
                        select distinct I_TXT_SCEN I_TXT
                          from SNP_SCEN
                   union all
                        select distinct I_TXT_SESS_MESS I_TXT
                          from SNP_SCEN_REPORT
                   union all
                        select distinct I_TXT_DESCRIPTION I_TXT
                          from SNP_SCEN_FOLDER
                   union all
                        select distinct I_TXT_SESS_MESS I_TXT
                          from SNP_SESSION
                   union all
                       select distinct I_TXT_SESS_PARAMS I_TXT
                         from SNP_SESSION 
                   union all
                       select distinct I_TXT_STEP_MESS I_TXT
                         from SNP_STEP_REPORT
                   union all
                        select distinct I_TXT_STEP_MESS I_TXT
                          from SNP_STEP_LOG
                   union all
                        select distinct I_TXT_VAR I_TXT
                          from SNP_VAR_SESS
                   union all
                        select distinct I_TXT_DEF_T I_TXT
                          from SNP_VAR_SESS
                   union all
                        select distinct I_TXT_VAR I_TXT
                          from SNP_VAR_SCEN
                   union all
                         select distinct I_TXT_DEF_T I_TXT
                           from SNP_VAR_SCEN
                   union all
                         select DISTINCT I_TXT as I_TXT     
                          FROM SNP_TXT
                   union all
                         select DISTINCT I_TXT_VAR_IN as I_TXT
                          FROM SNP_VAR
                   union all
                         select DISTINCT I_TXT_VAR_T as I_TXT
                          FROM SNP_VAR_DATA
                  ) UNION_TABLES
              where UNION_TABLES.I_TXT = snp_exp_txt.I_TXT)



You can delete these records (don't forget to take a backup first). However, there may also be records that are still referenced. To find those, execute the previous query with "exists" instead of "not exists". If it returns rows, look in the following tables to find the corresponding records:


select * from SNP_EXP_TXT where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10 and I_TXT in ( select I_TXT_SESS_MESS from snp_session)



select * from SNP_EXP_TXT where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10 and I_TXT in ( select I_TXT_SESS_PARAMS from snp_session)



select * from SNP_EXP_TXT where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10 and I_TXT in ( select I_TXT_VAR from snp_var_sess)



select * from SNP_EXP_TXT where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10 and I_TXT in ( select I_TXT_DEF_T from snp_var_sess)



select * from SNP_EXP_TXT where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10 and I_TXT in ( select I_TXT_VAR_T from SNP_VAR_DATA)



The sum of the record counts from these queries should equal the number of records returned by the union query executed with "exists" instead of "not exists". If not, look further in the repository tables...


All of them except SNP_VAR_DATA should have no effect on scenario execution. SNP_VAR_DATA is important, however, because when a variable is reused it takes its last value from its SNP_EXP_TXT history record. If you delete that record, the variable cannot retrieve its last value.


To find out which variables have history in our range and when they were last refreshed:


select VAR_NAME, max(last_date) from SNP_VAR_DATA where VAR_NAME in (
select VAR_NAME from SNP_VAR_DATA where I_TXT_VAR_T in (select I_TXT from snp_exp_txt where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10))
group by VAR_NAME order by 2 desc



Variables that have history in our range but were refreshed recently should not be affected by deleting the SNP_EXP_TXT rows within our time range. However, if a variable has not been refreshed recently (its max(last_date) falls within our time range), it should be refreshed manually.


Then it is safe to delete the referenced rows from the SNP_EXP_TXT table; history records will simply print null for these expressions (do not forget to take a backup):


delete from snp_exp_txt where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10
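Before running the delete, it is worth counting what will go (and taking that backup); the same predicates:

select count(*) from snp_exp_txt where last_date between to_date('01.01.2013 00:00:00','dd.mm.yyyy hh24:mi:ss') and 
to_date('01.01.2014 00:00:00','dd.mm.yyyy hh24:mi:ss') and mod(I_TXT,1000)=10;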