Contents
1. Introduction
2. Preparing Access to the New Node
2.1 Create the operating system user and group on the new node
2.2 Configuring the Secure Shell
3. Clone the Oracle Clusterware Home Directory
4. Clone the Automatic Storage Management Home Directory
5. Clone Oracle Software Home Directory
6. Creating a Listener on the New Node
7. Create a new cluster instance on the new node
8. Conclusions
1. Introduction
This process describes how to add a new node to an existing Oracle Real Application Clusters (Oracle RAC) environment.
2. Preparing Access to the New Node
The following steps must be completed to prepare the node before it is added to the cluster:
2.1 Create the operating system user and group on the new node
When installing Oracle RAC on UNIX and Linux platforms, the software is installed on one node, and OUI uses the Secure Shell (SSH) to copy the software binary files to the other cluster nodes.
a) If this is the first time Oracle software is being installed on the new node and the dba group does not exist, then create the dba group as follows:
- Log in as the root user
- # /usr/sbin/groupadd dba
b) If the user that owns the Oracle software does not exist on the new node then create the user as follows:
# useradd -u <UID> -g dba -d <Home> -r oracle
Set the password for the oracle account using the following command. The password should be the same as on the other nodes.
# passwd oracle
Note: The UID should be the same as the UID of the oracle user on any existing node.
c) Verify that the attributes of the user oracle are identical on all the existing nodes:
# id oracle
2.2 Configuring the Secure Shell
a) Log in to the new node as the oracle user
b) Determine if a .ssh directory exists in the oracle user's home directory. If not, create the .ssh directory and set the directory permission so that only the oracle user has access to the directory, as shown here:
$ ls -a $HOME
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
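The check-and-create steps above can be collapsed into one idempotent snippet that is safe to re-run (a sketch; it assumes the standard Bourne-shell environment of the oracle account):

```shell
# Create ~/.ssh only if it does not already exist, then make sure only the
# owner can read, write, or enter it.
[ -d "$HOME/.ssh" ] || mkdir "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
```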
c) Create the RSA-type public and private encryption keys on the new node as follows:
$ /usr/bin/ssh-keygen -t rsa
At the prompts:
Accept the default location for the key file by pressing the Enter key.
When prompted for a pass phrase, enter and confirm a pass phrase that is different from the oracle user's password.
This command creates the public key in the /home/oracle/.ssh/id_rsa.pub file and the private key in the /home/oracle/.ssh/id_rsa file.
d) Create the DSA-type public and private keys on the new node as follows:
$ /usr/bin/ssh-keygen -t dsa
At the prompts:
Accept the default location for the key file by pressing the Enter key.
When prompted for a pass phrase, enter and confirm a pass phrase that is different from the oracle user's password.
This command creates the public key in the /home/oracle/.ssh/id_dsa.pub file and the private key in the /home/oracle/.ssh/id_dsa file.
e) Add the Keys to an Authorized Key File
Use Secure Copy (SCP) or Secure FTP (SFTP) to copy the authorized_keys file from any existing cluster node to the oracle user's .ssh directory on the new node. The following example uses SCP to copy the authorized_keys file to the new node from an existing node.
Log on to an existing node as the oracle user
$ scp ~/.ssh/authorized_keys <New Node>:<oracle user HOME>/.ssh/
You are prompted to accept an RSA or DSA key. Enter yes, and you see that the node you are copying to is added to the known_hosts file.
When prompted, provide the password for the oracle user, which should be the same on all the nodes in the cluster (Note: this is the user password, not the newly specified passphrase). The authorized_keys file is then copied to the remote node.
Log on to the new node as the oracle user where you copied the authorized_keys file. Then change to the .ssh directory and, using the cat command, add the RSA and DSA keys for the new node to the authorized_keys file as follows:
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys
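One caveat worth adding here: sshd silently ignores an authorized_keys file that other users can write to, which makes key-based login fall back to password prompts. Tightening the file's permissions after appending the keys avoids this (a sketch; the guard merely creates an empty file when one does not exist yet):

```shell
# Make sure the file exists (a freshly prepared account may not have one yet),
# then restrict it to the owner, since sshd rejects authorized_keys files
# that are group- or world-writable.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/authorized_keys" ] || touch "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```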
Use SCP to copy the authorized_keys file from the new node to all the other existing cluster nodes, overwriting the existing version.
$ scp ~/.ssh/authorized_keys <Existing Node>:<oracle user HOME>/.ssh/
From the new node complete the SSH configuration by using the ssh command to retrieve the date on each node in the cluster.
$ ssh <Existing Node> date
The first time you use SSH to connect to one node from another node, you see a message similar to the following:
The authenticity of host 'docrac1 (143.46.43.100)' can't be established.
RSA key fingerprint is 7z:ez:e7:f6:f4:f2:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Enter yes at the prompt to continue. You should not see this message again when you connect from this node to the other node. If you see any other messages or text apart from the date, the installation can fail.
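The date checks can be scripted over the whole node list. The hostnames below are placeholders for your cluster members; as written, the loop only prints each command so the list can be reviewed first — remove the echo to actually run the checks:

```shell
# Placeholder node list; substitute the hostnames of your cluster members.
NODES="docrac1 docrac2 docrac3"
# Print one ssh check per node; drop the `echo` to execute them for real.
for host in $NODES; do
  echo "ssh $host date"
done
```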
Add the public and private node names for the new node to the /etc/hosts file on the existing nodes.
Verify that the new node can be accessed (using the ping command) from the existing nodes.
Run the following command on any existing node to verify that the new node has been properly configured:
$ cluvfy stage -pre crsinst -n <New Node Name>
3. Clone the Oracle Clusterware Home Directory
Use Oracle Universal Installer (OUI) to add an Oracle Clusterware home to the new node being added to the Oracle RAC cluster.
Go to the <Cluster Home>/oui/bin directory of an existing node and run the addNode.sh script.
$ cd <Cluster Home>/oui/bin
$ ./addNode.sh
OUI starts and first displays the Welcome window.
Click Next.
The Specify Cluster Nodes to Add to Installation window appears.
Select the new node or nodes that you want to add, then click Next.
Verify the entries that OUI displays on the Summary window and click Next.
Run the rootaddNode.sh script from the <Cluster Home>/install/ directory on the existing node when prompted to do so.
This script adds the node applications of the new node to the OCR configuration.
Run the orainstRoot.sh script on the new node if OUI prompts you to do so.
Run the <Cluster Home>/root.sh script on the new node to start Oracle Clusterware on the new node.
Add the new node's Oracle Notification Services (ONS) configuration information to the shared Oracle Cluster Registry (OCR).
- Obtain the ONS port identifier used by the new node by running the following command from the <Cluster Home>/opmn/conf directory on an existing node:
$ cat ons.config
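For reference, an ons.config file typically contains entries like the following (the port numbers here are illustrative; use the remoteport value from your own file in the next step):

```
localport=6113
remoteport=6200
loglevel=3
useocr=on
```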
- After you locate the ONS port number for the new node, you must make sure that the ONS on existing nodes can communicate with the ONS on the new node.
From the <Cluster Home>/bin directory on an existing node, run the Oracle Notification Services configuration utility as shown below, where remote_port is the port number obtained from previous step:
$ ./racgons add_config <New Node>:remote_port
At the end of the cloning process, you should have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the root user on the newly configured node:
$ <Cluster Home>/bin/cluvfy stage -post crsinst -n <New Node Name> -verbose
4. Clone the Automatic Storage Management Home Directory
Use Oracle Universal Installer (OUI) to add ASM home to the new node being added to the Oracle RAC cluster.
Go to the $ASM_HOME/oui/bin directory on an existing node and run the addNode.sh script.
When OUI displays the Node Selection window, select the new node to be added then click Next.
Verify the entries that OUI displays on the Summary window, then click Next.
Run the root.sh script on the new node, from the ASM home directory on that node when OUI prompts you to do so.
You now have a copy of the ASM software on the new node.
5. Clone Oracle Software Home Directory
Use Oracle Universal Installer (OUI) to add Oracle Software home to the new node being added to the Oracle RAC cluster.
Go to the $ORACLE_HOME/oui/bin directory on an existing node and run the addNode.sh script.
When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added, then click Next.
Verify the entries that OUI displays in the Cluster Node Addition Summary window, then click Next.
Run the root.sh script on the new node, from the $ORACLE_HOME directory on that node when OUI prompts you to do so.
After completing these steps, you should have an installed Oracle RAC home on the new node.
6. Creating a Listener on the New Node
To service database instance connection requests on the new node, you must create a Listener on that node. Use the Oracle Net Configuration Assistant (NETCA) to create a Listener on the new node. Before beginning this procedure, ensure that your existing nodes have the $ORACLE_HOME environment variable set correctly.
Start the Oracle Net Configuration Assistant by entering netca at the system prompt from the $ORACLE_HOME/bin directory.
Select Listener configuration, and click Next.
NETCA displays the Listener Configuration, Listener window.
Select Add to create a new Listener, then click Next.
NETCA displays the Listener Configuration, Listener Name window.
Accept the default value of LISTENER for the Listener name by clicking Next.
NETCA displays the Listener Configuration, Select Protocols window.
Choose TCP and move it to the Selected Protocols area, then click Next.
NETCA displays the Listener Configuration, TCP/IP Protocol window.
Choose Use the standard port number of 1521, then click Next.
NETCA displays the Real Application Clusters window.
Select Cluster configuration for the type of configuration to perform, then click Next.
NETCA displays the Real Application Clusters, Active Nodes window.
Select the name of the node you are adding, then click Next.
NETCA creates a Listener using the configuration information provided. You can now exit NETCA.
You should now have a Listener named LISTENER running on the new node.
7. Create a new cluster instance on the new node
Start DBCA by entering dbca at the system prompt from the $ORACLE_HOME/bin directory.
Select Oracle Real Application Clusters database, and then click Next.
DBCA displays the Operations window.
Select Instance Management, and then click Next.
DBCA displays the Instance Management window.
Select Add an Instance, then click Next.
DBCA displays the List of Cluster Databases window, which shows the databases and their current status, such as ACTIVE or INACTIVE.
In the List of Cluster Databases window, select the active Oracle RAC database to which you want to add an instance. Enter the user name and password for the database user that has SYSDBA privileges. Click Next.
DBCA will spend a few minutes performing tasks in the background, then it will display the Instance naming and node selection window.
In the Instance naming and node selection window, enter the instance name in the field at the top of this window if the default instance name provided by DBCA does not match your existing instance naming scheme.
Click Next to accept the instance name.
DBCA displays the Instance Storage window.
In the Instance Storage window, you have the option of changing the default storage options and file locations for the new database instance. In this example, you accept all the default values and click Finish.
DBCA displays the Summary window.
Review the information in the Summary window, then click OK to start the database instance addition operation. DBCA displays a progress dialog box showing DBCA performing the instance addition operation.
During the instance addition operation, if you are using ASM for your cluster database storage, DBCA detects the need for a new ASM instance on the new node.
When DBCA displays a dialog box asking if you want ASM to be extended, click Yes.
After DBCA extends ASM on the new node and completes the instance addition operation, DBCA displays a dialog box asking whether or not you want to perform another operation. Click No to exit DBCA.
8. Conclusions
You should now have a new cluster database instance and ASM instance running on the new node. After you terminate your DBCA session, you should run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges:
$ <Cluster Home>/bin/cluvfy comp admprv -o db_config -d <Oracle Home> -n <New Node Name> -verbose