What to Do When an Oracle 12c Cluster Fails to Start
Many newcomers are unsure how to handle an Oracle 12c cluster startup failure. This article walks through a real troubleshooting case step by step; if you are facing this problem, read on and hopefully you will find it useful.
The operations team changed the size of /dev/shm on Oracle Linux 7, leaving it smaller than the Oracle instance's MEMORY_TARGET (or SGA_TARGET). As a result, the cluster could not start (CRS-4535, CRS-4000):
[grid@jtp1 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
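Before digging into traces, a quick check can confirm whether /dev/shm is large enough. This is a sketch: the 1140850688-byte figure is taken from the ASM alert.log warning quoted later in this article; substitute your own instance's MEMORY_TARGET.

```shell
# Compare the capacity of the tmpfs backing /dev/shm with the instance's
# memory target. df -B1 reports sizes in bytes.
SHM_BYTES=$(df --output=size -B1 /dev/shm | tail -n 1 | tr -d ' ')
MEMORY_TARGET=1140850688   # minimum reported by the ASM alert.log
if [ "$SHM_BYTES" -lt "$MEMORY_TARGET" ]; then
    echo "/dev/shm too small: $SHM_BYTES < $MEMORY_TARGET bytes"
fi
```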
First, check whether the permissions on the ASM disks are correct:
[root@jtp3 ~]# ls -lrt /dev/asm*
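The article does not reproduce the output here. As a sketch, on a healthy node the devices are typically owned by the Grid user, commonly grid:asmadmin with mode 660, though the exact owner, group, and mode depend on your udev rules:

```shell
# Print name, owner, group and mode for each ASM candidate device.
# On a typical Grid install these read: /dev/asm-diskN grid asmadmin 660
for d in /dev/asm*; do
    [ -e "$d" ] || continue     # nothing to do if no such devices exist
    stat -c '%n %U %G %a' "$d"
done
```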
The permissions look fine, so restart CRS:
[root@jtp1 bin]# ./crsctl stop crs -f
[root@jtp1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
The CRS alert.log shows that the disk group cannot be mounted:
[root@jtp1 ~]# tail -f /u01/app/grid/diag/crs/jtp1/crs/trace/alert.log
locations are on ASM disk groups [CRS], and none of these disk groups are mounted
Continue with ohasd_orarootagent_root.trc:
[root@jtp1 ~]# more /u01/app/grid/diag/crs/jtp1/crs/trace/ohasd_orarootagent_root.trc
Trace file /u01/app/grid/diag/crs/jtp1/crs/trace/ohasd_orarootagent_root.trc
Oracle Database 12c Clusterware Release 12.2.0.1.0 - Production Copyright 1996, 2016 Oracle. All rights reserved.
*** TRACE CONTINUED FROM FILE /u01/app/grid/diag/crs/jtp1/crs/trace/ohasd_orarootagent_root_93.trc ***
2018-04-02 18:42:09.165 : CSSCLNT:3554666240: clsssterm: terminating context (0x7f03c0229390)
2018-04-02 18:42:09.165 : default:3554666240: clsCredDomClose: Credctx deleted 0x7f03c0459470
2018-04-02 18:42:09.166 : GPNP:3554666240: clsgpnp_dbmsGetItem_profile: [at clsgpnp_dbms.c:399] Result: (0) CLSGPNP_OK. (:GPNP00401:)got ASM-Profile.Mode='remote'
2018-04-02 18:42:09.253 : CSSCLNT:3554666240: clsssinit: initialized context: (0x7f03c045c2c0) flags 0x115
2018-04-02 18:42:09.253 : CSSCLNT:3554666240: clsssterm: terminating context (0x7f03c045c2c0)
2018-04-02 18:42:09.254 : CLSNS:3554666240: clsns_SetTraceLevel:trace level set to 1.
2018-04-02 18:42:09.254 : GPNP:3554666240: clsgpnp_dbmsGetItem_profile: [at clsgpnp_dbms.c:399] Result: (0) CLSGPNP_OK. (:GPNP00401:)got ASM-Profile.Mode='remote'
2018-04-02 18:42:09.257 : default:3554666240: Inited LSF context: 0x7f03c04f0420
2018-04-02 18:42:09.260 : CLSCRED:3554666240: clsCredCommonInit: Inited singleton credctx.
2018-04-02 18:42:09.260 : CLSCRED:3554666240: (:CLSCRED0101:)clsCredDomInitRootDom: Using user given storage context for repository access.
2018-04-02 18:42:09.294 : USRTHRD:3554666240: {0:9:3} 8033 Error 4 querying length of attr ASM_DISCOVERY_ADDRESS
2018-04-02 18:42:09.300 : USRTHRD:3554666240: {0:9:3} 8033 Error 4 querying length of attr ASM_DISCOVERY_ADDRESS
2018-04-02 18:42:09.356 : CLSCRED:3554666240: (:CLSCRED1079:)clsCredOcrKeyExists: Obj dom : SYSTEM.credentials.domains.root.ASM.Self.5c82286a084bcf37ffa014144074e5dd.root not found
2018-04-02 18:42:09.356 : USRTHRD:3554666240: {0:9:3} 7755 Error 4 opening dom root in 0x7f03c064c980
The ASM alert.log reveals the root cause: /dev/shm is smaller than MEMORY_TARGET, and the warning even states the minimum size /dev/shm must be:
[root@jtp1 ~]# tail -f /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
WARNING: ASM does not support ipclw. Switching to skgxp
WARNING: ASM does not support ipclw. Switching to skgxp
WARNING: ASM does not support ipclw. Switching to skgxp
* instance_number obtained from CSS = 1, checking for the existence of node 0...
* node 0 does not exist. instance_number = 1
Starting ORACLE instance (normal) (OS id: 9343)
2018-04-02T18:31:00.187055+08:00
CLI notifier numLatches:7 maxDescs:2301
2018-04-02T18:31:00.193961+08:00
WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 1140850688 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected. Current available is 1073573888 and used is 167936 bytes. Ensure that the mount point is /dev/shm for this directory.
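The warning gives the exact minimum (1140850688 bytes, roughly 1.1 GiB) while only 1073573888 bytes are available on the 1G tmpfs. Rounding the requirement up to whole gibibytes shows why the current mount cannot work; a quick sketch of the arithmetic:

```shell
REQUIRED=1140850688    # minimum from the warning above
AVAILABLE=1073573888   # current available space, from the same line
# Round the requirement up to the next whole GiB (1 GiB = 1073741824 bytes)
GIB=$(( (REQUIRED + 1073741824 - 1) / 1073741824 ))
echo "need at least ${GIB}G of tmpfs; only $AVAILABLE bytes available"
```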
The size of /dev/shm can be changed persistently by editing /etc/fstab; here it is raised to 12G. First, the current layout:
[root@jtp1 bin]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 49G 42G 7.9G 85% /
devtmpfs 12G 28K 12G 1% /dev
tmpfs 1.0G 164K 1.0G 1% /dev/shm
tmpfs 1.0G 9.3M 1015M 1% /run
tmpfs 1.0G 0 1.0G 0% /sys/fs/cgroup
/dev/sda1 1014M 141M 874M 14% /boot
[root@jtp1 bin]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Mar 18 15:27:13 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root / xfs defaults 0 0
UUID=ca5854cd-0125-4954-a5c4-1ac42c9a0f70 /boot xfs defaults 0 0
/dev/mapper/ol-swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults,size=12G 0 0
tmpfs /run tmpfs defaults,size=12G 0 0
tmpfs /sys/fs/cgroup tmpfs defaults,size=12G 0 0
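An fstab edit only takes effect at the next mount. As a sketch (root required on the cluster node), the new size can also be applied immediately with a remount, so the cluster can be restarted without rebooting the host:

```shell
# Remount /dev/shm in place with the enlarged size; the fstab entry keeps
# the change persistent across reboots.
mount -o remount,size=12G /dev/shm 2>/dev/null \
    || echo "remount failed (run as root on the cluster node)"
df -h /dev/shm
```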
After restarting the cluster, checking the resource status again shows everything back to normal:
--------------------------------------------------------------------------------
[grid@jtp1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.CRS.dg
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.DATA.dg
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.FRA.dg
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.TEST.dg
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.chad
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.net1.network
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.ons
ONLINE ONLINE jtp1 STABLE
ONLINE ONLINE jtp2 STABLE
ora.proxy_advm
OFFLINE OFFLINE jtp1 STABLE
OFFLINE OFFLINE jtp2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE jtp1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE jtp2 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE jtp2 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE jtp2 169.254.237.250 88.8
8.88.2,STABLE
ora.asm
1 ONLINE ONLINE jtp1 Started,STABLE
2 ONLINE ONLINE jtp2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE jtp2 STABLE
ora.jy.db
1 ONLINE OFFLINE STABLE
2 ONLINE OFFLINE STABLE
ora.jtp1.vip
1 ONLINE ONLINE jtp1 STABLE
ora.jtp2.vip
1 ONLINE ONLINE jtp2 STABLE
ora.mgmtdb
1 ONLINE ONLINE jtp2 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE jtp2 STABLE
ora.scan1.vip
1 ONLINE ONLINE jtp1 STABLE
ora.scan2.vip
1 ONLINE ONLINE jtp2 STABLE
ora.scan3.vip
1 ONLINE ONLINE jtp2 STABLE
--------------------------------------------------------------------------------
At this point the cluster has fully recovered.