Note: this article is a hands-on guide to adding and removing nodes in an Oracle 11g RAC cluster. Source: adapted and reorganized from an Oracle MOS note. Tags: RAC add node, RAC remove node, RAC delete node. Tip: if you spot a mistake here or know a better way to do something, please leave a comment or message me so it can be corrected.
★ Main text (adding a node)
* Steps to add a node to the cluster configuration (rac2 will be added)
★ The software has been deployed
- GRID
  You will be prompted to run the below scripts on the new node:
  su - root
  # /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
  # scp rac1:/u01/app/11.2.0/grid/crs/install/crsconfig_params rac2:/u01/app/11.2.0/grid/crs/install/
  # /u01/app/oraInventory/orainstRoot.sh (this command does not normally need to be executed manually)
  # /u01/app/11.2.0/grid/root.sh
  Run both scripts as the root user on the new node.
- If successful, the clusterware daemons, the listener, the ASM instance, etc. should be started by the "root.sh" script.
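The new-node preparation sequence above can be sketched as a small script. This is a dry-run sketch, not the official procedure: the paths match this article's layout (/u01/app/11.2.0/grid) and the `run` helper only prints each step by default so the order can be reviewed before anything is executed.

```shell
#!/bin/sh
# Dry-run sketch of the new-node GRID preparation steps.
# Paths and node names (rac1, rac2) are this article's assumptions.
GRID_HOME=/u01/app/11.2.0/grid
NEW_NODE=rac2
EXISTING_NODE=rac1
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

# 1. Clean any stale clusterware configuration on the new node (as root).
run "$GRID_HOME/crs/install/rootcrs.pl" -deconfig -force -verbose
# 2. Copy the cluster configuration parameters from an existing node.
run scp "$EXISTING_NODE:$GRID_HOME/crs/install/crsconfig_params" "$GRID_HOME/crs/install/"
# 3. Run the root scripts on the new node.
run /u01/app/oraInventory/orainstRoot.sh
run "$GRID_HOME/root.sh"
```

Review the printed commands first, then rerun with DRY_RUN=0 (as root on the new node) once they look right.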
  su - grid
  # crs_stat -t -v
  # crsctl check crs
  # crsctl stat res -t
- DB
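A quick way to turn the "crsctl check crs" step into a pass/fail gate is to count the "is online" messages. In 11.2, a healthy stack prints four of them (CRS-4638, CRS-4537, CRS-4529, CRS-4533); the count-of-four heuristic below is an assumption, so adjust it if your output differs.

```shell
#!/bin/sh
# Sketch: confirm the clusterware stack is up from saved `crsctl check crs` output.
stack_online() {
    # $1 = file containing the output of `crsctl check crs`
    [ "$(grep -c 'is online' "$1")" -ge 4 ]
}

# Usage sketch:
#   crsctl check crs > check.log
#   stack_online check.log && echo "clusterware stack is up"
```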
- On the new node, run the following command to add the new node to the db-cluster:
  su - oracle
  $ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
- On the main node, run dbca to add the new instance on the new node.
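Since the CLUSTER_NODES list has to be retyped every time the cluster changes, a tiny helper that assembles the -updateNodeList invocation from a node list can help avoid typos. The node names and ORACLE_HOME default below are illustrative assumptions from this article.

```shell
#!/bin/sh
# Sketch: build the runInstaller -updateNodeList command for a given node list.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/products/11.2.0/db}

update_node_list_cmd() {
    # $* = all node names that should be registered in the inventory
    nodes=$(echo "$*" | tr ' ' ',')
    echo "$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME \"CLUSTER_NODES={$nodes}\""
}

# Print the command for this article's two-node cluster; run it manually after review.
update_node_list_cmd rac1 rac2
```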
  $ dbca
  + Oracle RAC Database
  + Instance Management
  + Add an Instance
  + Enter SYS user details and proceed with instance addition.
★ The software has not been deployed
Make sure all the prechecks are completed before proceeding.
$ cluvfy stage -pre crsinst -n rac1,rac2 -verbose
$ cluvfy stage -pre nodeadd -n rac2 [-fixup [-fixupdir fixup_dir]] [-verbose]
============================================================
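One way to enforce "all prechecks completed" is to save the cluvfy report and refuse to continue if it contains any failed check. The "failed" marker is how cluvfy flags an unsuccessful check in the 11.2 output I have seen; treat the exact pattern as an assumption and adjust it to your cluvfy version.

```shell
#!/bin/sh
# Sketch: gate the node addition on a clean cluvfy report.
precheck_ok() {
    # $1 = file holding cluvfy output
    # (e.g. cluvfy stage -pre nodeadd -n rac2 -verbose > pre.log)
    ! grep -qi 'failed' "$1"
}

# Usage sketch:
#   cluvfy stage -pre nodeadd -n rac2 -verbose | tee pre.log
#   precheck_ok pre.log || { echo "prechecks failed - fix before addNode.sh"; exit 1; }
```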
-- Extend Clusterware
- From an existing node, run "addNode.sh" as the grid user to extend the clusterware.
  $ export IGNORE_PREADDNODE_CHECKS=Y
  $ cd $ORACLE_HOME/oui/bin
  $ ./addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac2-priv}"
- At the end, if the cluster node addition is successful, you will be prompted to run the below scripts on the new node.
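The three name arguments to addNode.sh have to stay consistent with each other, so a small helper can derive all of them from the node name. This assumes the host naming scheme used in this article (rac2 / rac2-vip / rac2-priv); adjust the suffixes for your environment.

```shell
#!/bin/sh
# Sketch: assemble the addNode.sh arguments for one new node.
add_node_cmd() {
    node=$1
    echo "./addNode.sh \"CLUSTER_NEW_NODES={$node}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$node-vip}\" \"CLUSTER_NEW_PRIVATE_NODE_NAMES={$node-priv}\""
}

# Print the invocation for review; run it from $ORACLE_HOME/oui/bin as the grid user.
add_node_cmd rac2
```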
  /u01/app/oraInventory/orainstRoot.sh (this command does not normally need to be executed manually)
  /u01/app/11.2.0/grid/root.sh
  Run both scripts as the root user on the new node.
- If successful, the clusterware daemons, the listener, the ASM instance, etc. should be started by the "root.sh" script.
# crs_stat -t -v
# crsctl check crs
# crsctl stat res -t
============================================================
-- Extend Oracle Database Software
- From an existing node, as the database software owner, run the following command to extend the Oracle database software to the new node.
  $ cd $ORACLE_HOME/oui/bin
  $ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"
If you hit an error like the one below, run the command given in the solution and then rerun addNode.sh.
Error - "SEVERE:The new nodes 'rac2' are already part of the cluster".
Analysis - The installer was picking up the same hostname rac2 twice, instead of picking up both hostnames (rac1,rac2).
Solution - The Oracle Home was not detached from the existing node. Run the below command as the oracle user on node1.
  $ cd $ORACLE_HOME/oui/bin
  $ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}"
Run addNode.sh again and it should proceed.
- At the end, if the Oracle software extension is successful, you will be prompted to run the root.sh script on the new node.
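When diagnosing this "already part of the cluster" error, it helps to see which nodes the central inventory currently records. The sketch below extracts node names from inventory.xml; the inventory location and the <NODE NAME="..."/> layout follow the usual 11.2 conventions, but verify them on your system before relying on the output.

```shell
#!/bin/sh
# Sketch: list the node names recorded in the central inventory.
inventory_nodes() {
    # $1 = path to inventory.xml
    # (typically /u01/app/oraInventory/ContentsXML/inventory.xml)
    grep -o 'NODE NAME="[^"]*"' "$1" | sed 's/NODE NAME="//; s/"//'
}

# Usage sketch:
#   inventory_nodes /u01/app/oraInventory/ContentsXML/inventory.xml
```

If only rac2 (or only rac1) is listed for the Oracle Home, the node list is out of sync and the -updateNodeList fix above applies.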
  /u01/app/oracle/products/11.2.0/db/root.sh
  Run the script as the root user on the new node.
============================================================
-- Add Instance to Clustered Database
- From an existing node, as the database software owner, run the following command to add the instance:
  $ dbca
  + Oracle RAC Database
  + Instance Management
  + Add an Instance
  + Enter SYS user details and proceed with instance addition.
============================================================
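The interactive dbca flow above can also be scripted with dbca's silent mode. The flag names below follow the 11.2 `dbca -silent -addInstance` syntax as I understand it; verify them with `dbca -help` before relying on this, and note that the database and instance names are placeholders (the SYS password flag is also deliberately left out of the printed command).

```shell
#!/bin/sh
# Sketch: build a silent-mode dbca add-instance command.
dbca_add_instance_cmd() {
    gdb=$1    # global database name, e.g. orcl
    inst=$2   # new instance name, e.g. orcl2
    node=$3   # node that will host the new instance
    echo "dbca -silent -addInstance -nodeList $node -gdbName $gdb -instanceName $inst -sysDBAUserName sys"
}

# Print the invocation for review (append the SYS credential flag when running it).
dbca_add_instance_cmd orcl orcl2 rac2
```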
Possible commands:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
over
Reprinted from: https://blog.csdn.net/zzt_2009/article/details/107846790. If this post infringes your copyright, please leave a comment with the original article's address and it will be taken down; apologies for any inconvenience.