Monday, July 5, 2010

Creating a New 11gR2 Database on an Existing 11gR2 RAC

It is assumed that the 11gR2 clusterware has already been installed, ASM is configured, and the disks for the new database are prepared.

1-) Create a new OS user for the new database. The new user should be in the same group as grid's OS user. (Using a different OS user per database is optional, not a must.)
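A minimal sketch of this step, run as root. The names below are assumptions, not requirements: oinstall as grid's primary group, dba as a secondary group, and oradb as the new database owner; match them to your own grid setup.

```shell
# Assumed names: oinstall (grid's primary group), dba, user "oradb".
groupadd -f oinstall      # -f: succeed silently if the group already exists
groupadd -f dba
useradd -m -g oinstall -G dba oradb
id oradb                  # verify: primary group oinstall, groups include dba
```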

2-) Set up ssh trust between the nodes for the newly created OS user:

On each node, logged in as the new user:


mkdir ~/.ssh
chmod 755 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa

Example:
$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (.../.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

Your identification has been saved in .../.ssh/id_rsa.
Your public key has been saved in .../.ssh/id_rsa.pub.
The key fingerprint is:
XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX os_user@my.domain.com

$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (.../.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

Your identification has been saved in .../.ssh/id_dsa.
Your public key has been saved in .../.ssh/id_dsa.pub.
 The key fingerprint is:
XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX os_user@my.domain.com
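The prompts above can also be skipped entirely by passing an empty passphrase on the command line. A minimal non-interactive sketch for the RSA key (the DSA key is generated the same way with -t dsa):

```shell
# Non-interactive key generation: -N "" sets an empty passphrase,
# -f gives the key file, -q suppresses the banner output.
mkdir -p ~/.ssh && chmod 755 ~/.ssh
ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
```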

Append all public keys to the authorized_keys file on every node:

FIRST NODE

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh os_user@node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh os_user@node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys

SECOND NODE

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh os_user@node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh os_user@node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys

The ssh commands run in the previous step register each node's host identity in the other node's known_hosts file (the confirmation prompt you saw the first time you ran ssh). However, every node should also have a known_hosts entry for itself. To add it, run the following command on each node and answer yes at the confirmation prompt.

ssh os_user@nodeitself ls

Finally, check that each node can run ssh ls against the other node (and itself) without any password or confirmation prompt.

ssh os_user@othernode ls
ssh os_user@nodeitself ls


3-) Edit the user profile and set the following variables:

ORACLE_HOME
RDBMS_HOME
ASM_HOME
CRS_HOME
ORACLE_BASE
PATH
ORACLE_SID
LD_LIBRARY_PATH
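A minimal profile sketch for the new user on node 1. Every path and the SID below is an assumption; adjust them to your installation:

```shell
# Example ~/.bash_profile additions -- paths and SID are site-specific.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export RDBMS_HOME=$ORACLE_HOME
export CRS_HOME=/u01/app/11.2.0/grid
export ASM_HOME=$CRS_HOME          # in 11gR2, ASM runs from the grid home
export ORACLE_SID=MYDB1            # instance 1 here; MYDB2 on the second node
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
```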
4-) Check the permissions of ORACLE_HOME and ORACLE_BASE so that the new user has write permission on them (on both nodes).
5-) Set the new environment parameters on both nodes.
6-) Start the installer and choose a software-only, Real Application Clusters installation across both nodes... Create the database? Maybe next time...
 

4 comments:

Anonymous said...

Who taught you to create a new oracle home every time you need to build a new database?
Each oracle home can serve any number of database instances (limited by hardware).
Why would one spend hours creating new users and repeating software installations?
Just start DBCA and create the new database in the same oracle home...

Oracle Log said...

You have to patch/upgrade all the databases at the same time if you are using a common oracle home. I don't suggest it, especially for mission-critical databases.

Unknown said...

Man, you rock. Love your blog. Thumbs up!!

Oracle Log said...

Thx.