How to restore ASM based OCR after complete loss of the CRS diskgroup
(Doc ID 1062983.1)
Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.1.0 to 11.2.0.4 [Release 11.2]
Information in this document applies to any platform.
Goal
It is not possible to directly restore a manual or automatic OCR backup if the OCR is located in an ASM disk group, because the command 'ocrconfig -restore' requires ASM to be up and running in order to restore an OCR backup to an ASM disk group. However, for ASM to be available, the CRS stack must have been started successfully. For the restore to succeed, the OCR also must not be in use (r/w), i.e. no CRS daemon may be running while the OCR is being restored.
A description of the general procedure to restore the OCR can be found in the documentation. This document explains how to recover from a complete loss of the ASM disk group that held the OCR and Voting files in an 11gR2 Grid Infrastructure environment.
Solution
When using an ASM disk group for CRS there are typically 3 different types of files located in the disk group that potentially need to be restored/recreated:
the Oracle Cluster Registry file (OCR)
the Voting file(s)
the shared SPFILE for the ASM instances
The following example assumes that the OCR was located in a single
disk group used exclusively for CRS. The disk group has just one disk
using external redundancy.
Since the CRS disk group has been lost the CRS stack will not be available on any node.
The following settings used in the example would need to be replaced according to the actual configuration:
GRID user: oragrid
GRID home: /u01/app/11.2.0/grid ($CRS_HOME)
ASM disk group name for OCR: CRS
ASM/ASMLIB disk name: ASMD40
Linux device name for ASM disk: /dev/sdh2
Cluster name: rac_cluster1
Nodes: racnode1, racnode2
This document assumes that the name of the OCR diskgroup remains unchanged. Should a different diskgroup name be required, the name of the OCR diskgroup would have to be modified in /etc/oracle/ocr.loc on all nodes prior to executing the following steps.
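For reference, on Linux the registry location file contains little more than the diskgroup reference and a local-only flag; the following is a sketch assuming the diskgroup name CRS used in this example:
# cat /etc/oracle/ocr.loc
ocrconfig_loc=+CRS
local_only=FALSE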
1. Locate the latest automatic OCR backup
When using a non-shared CRS home, automatic OCR backups can be located on any node of the cluster; consequently, all nodes need to be checked for the most recent backup:
$ ls -lrt $CRS_HOME/cdata/rac_cluster1/
-rw------- 1 root root 7331840 Mar 10 18:52 week.ocr
-rw------- 1 root root 7651328 Mar 26 01:33 week_.ocr
-rw------- 1 root root 7651328 Mar 29 01:33 day.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 day_.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 backup02.ocr
-rw------- 1 root root 7651328 Mar 30 05:33 backup01.ocr
-rw------- 1 root root 7651328 Mar 30 09:33 backup00.ocr
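On a healthy cluster the automatic backups could also be listed with 'ocrconfig -showbackup'; since the OCR itself has been lost here, that command may fail, so checking the cdata directory on each node as shown above is the reliable approach:
# $CRS_HOME/bin/ocrconfig -showbackup auto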
2. Make sure the Grid Infrastructure is shut down on all nodes
Given that the OCR diskgroup is missing, the GI stack will not be functional on any node; however, various daemon processes may still be running. On each node, shut down the GI stack using the force (-f) option:
# $CRS_HOME/bin/crsctl stop crs -f
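To confirm that the stack is completely down before proceeding, a quick process check can be run on each node (a minimal sketch; the exact daemon list varies by version and configuration):
# ps -ef | egrep 'ohasd|ocssd|crsd|evmd' | grep -v grep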
3. Start the CRS stack in exclusive mode
On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without the presence of a Voting disk and without the CRS daemon process (crsd.bin) running.
11.2.0.1:
# $CRS_HOME/bin/crsctl start crs -excl
...
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
Please note:
This document assumes that the CRS diskgroup was completely lost, in which case the CRS daemon (resource ora.crsd) will terminate again due to the inaccessibility of the OCR, even if the above message indicates that the start succeeded.
If this is not the case, i.e. if the CRS diskgroup is still present (but corrupt or incorrect), the CRS daemon needs to be shut down manually using:
# $CRS_HOME/bin/crsctl stop res ora.crsd -init
otherwise the subsequent OCR restore will fail.
11.2.0.2 and above:
# $CRS_HOME/bin/crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
...
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
IMPORTANT:
A new option '-nocrs' has been introduced with 11.2.0.2, which prevents the start of the ora.crsd resource. It is vital that this option is specified; otherwise the failure to start the ora.crsd resource will tear down ora.cluster_interconnect.haip, which in turn will cause ASM to crash.
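Before continuing, it can be verified that the ASM instance is indeed up on this node; a minimal check using the init resources started above:
# $CRS_HOME/bin/crsctl stat res ora.asm -init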
4. Label the CRS disk for ASMLIB use
If using ASMLIB, the disk to be used for the CRS disk group needs to be stamped first; as the root user do:
# /usr/sbin/oracleasm createdisk ASMD40 /dev/sdh2
Writing disk header: done
Instantiating disk: done
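The new label should now be visible on this node; assuming the ASMLIB tools are installed under /usr/sbin as above, the list of stamped disks can be checked with:
# /usr/sbin/oracleasm listdisks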
5. Create the CRS diskgroup via sqlplus
The disk group can now be (re-)created via sqlplus as the grid user. The compatible.asm attribute must be set to 11.2 in order for the disk group to be used by CRS:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:47:24 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create diskgroup CRS external redundancy disk 'ORCL:ASMD40' attribute 'COMPATIBLE.ASM' = '11.2';
Diskgroup created.
SQL> exit
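At this point the new diskgroup should be mounted on the local ASM instance. As a quick sanity check (a sketch, reconnecting as sysasm), v$asm_diskgroup should report the CRS group as MOUNTED:
$ sqlplus / as sysasm
SQL> select name, state from v$asm_diskgroup;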
6. Restore the latest OCR backup
Now that the CRS disk group is created and mounted, the OCR can be restored; this must be done as the root user:
# cd $CRS_HOME/cdata/rac_cluster1/
# $CRS_HOME/bin/ocrconfig -restore backup00.ocr
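The integrity of the restored registry can then be verified with the ocrcheck utility, which should report the +CRS device and a successful integrity check:
# $CRS_HOME/bin/ocrcheck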
7. Start the CRS daemon on the current node (11.2.0.1 only!)
Now that the OCR has been restored, the CRS daemon can be started; this is needed to recreate the Voting file. Skip this step on 11.2.0.2 and above.
# $CRS_HOME/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
8. Recreate the Voting file
The Voting file needs to be initialized in the CRS disk group:
# $CRS_HOME/bin/crsctl replace votedisk +CRS
Successful addition of voting disk 00caa5b9c0f54f3abf5bd2a2609f09a9.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
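The resulting voting file configuration can be confirmed as follows (the File Universal Id shown will differ in each environment):
# $CRS_HOME/bin/crsctl query css votedisk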
9. Recreate the SPFILE for ASM (optional)
Please note:
Starting with 11gR2, ASM can start without a PFILE or SPFILE, so this step can be skipped if you are
- not using an SPFILE for ASM
- not using a shared SPFILE for ASM
- using a shared SPFILE not stored in ASM (e.g. on a cluster file system)
Also use extra care in regard to the asm_diskstring parameter, as it impacts the discovery of the voting disks. Please verify the previous settings using the ASM alert log.
Prepare a pfile (e.g. /tmp/asm_pfile.ora) with the ASM startup parameters; these may vary from the example below. If in doubt, consult the ASM alert log, as the ASM instance startup should list all non-default parameter values. Please note that the last startup of ASM (in step 3 via the CRS start) will not have used an SPFILE, so a startup prior to the loss of the CRS disk group would need to be located.
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oragrid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Now the SPFILE can be created using this PFILE:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:52:39 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create spfile='+CRS' from pfile='/tmp/asm_pfile.ora';
File created.
SQL> exit
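Creating the SPFILE in ASM should also register its location in the GPnP profile; as a cross-check (run as the grid user), asmcmd should report a path inside the +CRS diskgroup:
$ asmcmd spget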
10. Shut down CRS
Since CRS is running in exclusive mode, it needs to be shut down to allow CRS to run on all nodes again. Use of the force (-f) option may be required:
# $CRS_HOME/bin/crsctl stop crs -f
...
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
11. Rescan ASM disks
If using ASMLIB, rescan all ASM disks on each node as the root user:
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMD40"
12. Start CRS
As the root user, submit the CRS startup on all cluster nodes:
# $CRS_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
13. Verify CRS
To verify that CRS is fully functional again:
# $CRS_HOME/bin/crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
# $CRS_HOME/bin/crsctl status resource -t