Oracle 12.2 RAC on RedHat 7.3 Best Practice Documentation
# The project required building several RAC clusters; to lower the management cost, a unified
# Oracle 12.2 RAC template was created. I hope this template helps anyone deploying
# Oracle 12.2 RAC on RedHat 7.3. The document has been validated against our business workload
# and can be used with confidence, but it fits only our business: I accept no responsibility
# for any risk arising from applying it to your production systems, so proceed carefully.
# The 12.2 RAC template covers: RAC installation / OPatch upgrade and patching / post-install RAC tuning
# As this document is long, a table of contents is provided to help readers navigate.
Contents
1. Oracle RAC installation planning
   a) RAC physical architecture diagram
   b) Software environment planning
   c) User, group, and file system planning
   d) Network planning
   e) Storage planning
2. Preparation on all Oracle RAC cluster nodes
   a) Hardware verification
      1. Base operating system environment checks
      2. Dual-NIC bonding
   b) Software verification
   c) Operating system configuration and tuning
      1. Required operating system rpm packages
      2. Creating the installation groups and users (oracle, grid)
      3. Creating the software installation directories
      4. Kernel parameter tuning
      5. Memory tuning
      6. Shared memory tuning
      7. Environment variables for the installation users (oracle, grid)
      8. Hard resource limits for the oracle and grid users
      9. Disabling the firewall and SELinux
      10. SSH mutual trust for the installation users (oracle, grid)
      11. Name resolution via /etc/hosts
      12. Time synchronization with NTP
3. Oracle RAC shared storage preparation
   a) Persisting disk permissions with udev
   b) Aggregating LUN paths into one pseudo-device with PowerPath multipathing
   c) Setting the disk I/O scheduler on Linux
4. GI installation
   a) Install the cvuqdisk package on all nodes
   b) Unzip the GI software directly into the GI $ORACLE_HOME
   c) Upgrading OPatch
   d) GI patching and graphical GI installation
5. RDBMS software installation
   a) Graphical DB software installation as the oracle user
   b) Upgrading OPatch
   c) DB patching
6. Creating disk groups with ASMCA
7. Creating the database with DBCA
8. RAC tuning
   a) Security
      1. Password policy tuning (create per your own security policy)
      2. Allowing older client versions to log in to the 12c database (configure on all nodes)
   b) Instance tuning
      1. processes parameter tuning
      2. Local instance process concurrency tuning
      3. Avoiding login failures caused by heavy library cache lock contention
      4. DBLINK insert optimization
      5. Avoiding heavy 'Library Cache Lock' (Cycle) waits during DataPump imports on 12.2 RAC
      6. Increasing db_files
   c) System tuning
      1. HugePages tuning
9. Appendix
1. Oracle RAC installation planning
a) RAC physical architecture diagram
b) Software environment planning
Software:
GI software: linuxx64_12201_grid_home.zip
DB software: linuxx64_12201_database.zip
GI patch: p29301687_122010_Linux-x86-64.zip
DB patch: p29314339_122010_Linux-x86-64.zip
OPatch update: p6880880_122010_Linux-x86-64.zip
rpm package: compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
Database version:
12.2.0.1.19
Operating system:
64-bit RedHat 7.3
c) User, group, and file system planning
User, group, and file system configuration
Software  User  Primary group  Secondary groups  ORACLE_BASE  ORACLE_HOME
GI grid oinstall dba,asmadmin,asmdba,asmoper,racdba /oracle/app/grid /oracle/app/12.2.0/grid
DB oracle oinstall dba,oper,asmdba,asmadmin,racdba,backupdba,dgdba,kmdba /oracle/app/oracle $ORACLE_BASE/product/12.2.0/db_1
d) Network planning
1. Prefer dual-NIC bonding
2. Prefer 10 GbE NICs for the private network
3. Avoid 169.254.*.* (link-local) addresses
The /etc/hosts entries are as follows
Name  NIC  IP  Purpose
bmcdb1 bond0 10.151.115.71 public
bmcdb1-vip 10.151.115.73 VIP
bmcdb1-priv bond1 192.155.1.171 private
bmcdb2 bond0 10.151.115.72 public
bmcdb2-vip 10.151.115.74 VIP
bmcdb2-priv bond1 192.155.1.172 private
scan-bmcdb 10.151.115.75 scan-ip
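Principle 3 above can be spot-checked on each node before installation. A minimal sketch (`flag_linklocal` is our own helper, not an Oracle tool); it filters the output of `ip -4 addr show` for link-local addresses:

```shell
# flag_linklocal: warn about any 169.254.*.* (link-local) address on stdin;
# feed it the output of `ip -4 addr show` on each cluster node
flag_linklocal() {
  awk '/inet 169\.254\./ {print "WARNING: link-local address on " $NF}'
}

# usage: ip -4 addr show | flag_linklocal   (no output means no link-local addresses)
```

Silence from the pipeline means the interfaces are clean; any warning should be resolved before the GI installation.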
e) Storage planning
1. Set the disk I/O scheduler to deadline
# echo deadline > /sys/block/${ASM_DISK}/queue/scheduler
2. Pin the storage LUN attributes so they persist across system reboots
3. ASM planning table
ASM planning table
Disk group  Partition  Owner  Mode  LUN size  Redundancy  Total capacity
OCR emcpowera grid:asmadmin 660 18G
emcpowerb grid:asmadmin 660 18G Normal 54G
emcpowerc grid:asmadmin 660 18G
DATA emcpowere grid:asmadmin 660 200G External 200G
FRA emcpowerf grid:asmadmin 660 100G External 100G
OCR disk group: stores the CRS information, voting disks, and related data
DATA disk group: stores the data files and control files
FRA disk group: stores each node's archived logs
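For the OCR disk group above, 54G is the raw total of three 18G LUNs; with normal redundancy ASM keeps two copies of every extent, so the usable space is roughly half the raw total (before ASM metadata overhead). A quick sketch of the arithmetic:

```shell
# Raw vs. approximate usable capacity for the OCR disk group
# (3 x 18 GiB LUNs, normal redundancy = two copies of every extent)
LUN_GB=18; LUNS=3
RAW_GB=$((LUN_GB * LUNS))        # 54, the "total capacity" in the table
USABLE_GB=$((RAW_GB / 2))        # roughly 27 before ASM metadata overhead
echo "raw=${RAW_GB}G usable~=${USABLE_GB}G"
```

The DATA and FRA groups use external redundancy, so their usable capacity is simply the raw LUN size.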
2. Preparation on all Oracle RAC cluster nodes
a) Hardware verification
1. Base operating system environment checks
# Operating system release
cat /etc/redhat-release
# Kernel version
uname -a
# Total memory
grep MemTotal /proc/meminfo
# Total swap
grep SwapTotal /proc/meminfo
# Size of the /tmp directory
df -h /tmp
# Current memory and swap usage
free -g
# Shared memory size
df -h /dev/shm
# CPU model, physical socket count, cores per socket, logical CPU count
cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l
cat /proc/cpuinfo| grep "cpu cores"| uniq
cat /proc/cpuinfo| grep "processor"| wc -l
# Per-node information collection omitted
2. Dual-NIC bonding
# Bonding uses active-backup mode; keep the bond mode identical on both nodes,
# then verify with a continuous ping while pulling each physical cable
[oracle@bmcdb1 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-bond0_slave_1
HWADDR=E4:43:4B:72:F7:0C
TYPE=Ethernet
NAME="bond0 slave 1"
UUID=53c72c58-f724-4d50-b27c-71cf218d24ec
DEVICE=em1
MASTER=bond0
SLAVE=yes
[oracle@bmcdb1 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-bond0_slave_2
HWADDR=E4:43:4B:72:F7:0D
TYPE=Ethernet
NAME="bond0 slave 2"
UUID=af3db5d3-323c-411f-8070-f5083f8cffbc
DEVICE=em2
MASTER=bond0
SLAVE=yes
[oracle@bmcdb1 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1
DEVICE=bond0
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
IPADDR=10.151.115.71
PREFIX=24
GATEWAY=10.151.115.1
DNS1=10.156.84.62
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=3d289bb5-e405-4674-a7b3-5abc21fea392
b) Software verification
GI software: linuxx64_12201_grid_home.zip
DB software: linuxx64_12201_database.zip
GI patch: p29301687_122010_Linux-x86-64.zip
DB patch: p29314339_122010_Linux-x86-64.zip
OPatch update: p6880880_122010_Linux-x86-64.zip
EMC PowerPath software: EMCPower.LINUX-6.3.0.01.00-001.RHEL7.x86_64.rpm
Operating system RPM packages: from the OS installation image or DVD
c) Operating system configuration and tuning
1. Required operating system rpm packages
# Minimum operating system requirements
bc
binutils-2.23.52.0.1-12.el7 (x86_64)
compat-libcap1-1.10-3.el7 (x86_64)
compat-libstdc++-33-3.2.3-71.el7 (i686)
compat-libstdc++-33-3.2.3-71.el7 (x86_64)
glibc-2.17-36.el7 (i686)
glibc-2.17-36.el7 (x86_64)
glibc-devel-2.17-36.el7 (i686)
glibc-devel-2.17-36.el7 (x86_64)
ksh
libaio-0.3.109-9.el7 (i686)
libaio-0.3.109-9.el7 (x86_64)
libaio-devel-0.3.109-9.el7 (i686)
libaio-devel-0.3.109-9.el7 (x86_64)
libgcc-4.8.2-3.el7 (i686)
libgcc-4.8.2-3.el7 (x86_64)
libstdc++-4.8.2-3.el7 (i686)
libstdc++-4.8.2-3.el7 (x86_64)
libstdc++-devel-4.8.2-3.el7 (i686)
libstdc++-devel-4.8.2-3.el7 (x86_64)
libxcb-1.9-5.el7 (i686)
libxcb-1.9-5.el7 (x86_64)
libX11-1.6.0-2.1.el7 (i686)
libX11-1.6.0-2.1.el7 (x86_64)
libXau-1.0.8-2.1.el7 (i686)
libXau-1.0.8-2.1.el7 (x86_64)
libXi-1.7.2-1.el7 (i686)
libXi-1.7.2-1.el7 (x86_64)
libXtst-1.2.2-1.el7 (i686)
libXtst-1.2.2-1.el7 (x86_64)
libXrender (i686)
libXrender (x86_64)
libXrender-devel (i686)
libXrender-devel (x86_64)
make-3.82-19.el7 (x86_64)
net-tools-2.0-0.17.20131004git.el7 (x86_64) (for Oracle RAC and Oracle Clusterware)
nfs-utils-1.3.0-0.21.el7.x86_64 (for Oracle ACFS)
smartmontools-6.2-4.el7 (x86_64)
sysstat-10.1.5-1.el7 (x86_64)
1.1 Checking the rpm packages
# Detect and print any rpm packages missing from the system
for i in \
bc binutils compat-libcap1 compat-libstdc++-33 glibc glibc-devel ksh libaio libaio-devel \
libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst libXrender libXrender-devel gcc-c++ \
make net-tools nfs-utils smartmontools sysstat
do
rpm -q $i &>/dev/null || F="$F $i"
done ;echo $F;unset F
1.2 Installing the packages in bulk with yum
# Install the rpm packages in bulk with yum; the detailed output is omitted
yum install bc binutils compat-libcap1 compat-libstdc++ glibc glibc-devel ksh libaio libaio-devel \
libgcc gcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst libXrender libXrender-devel \
make net-tools nfs-utils smartmontools sysstat gcc-c++
# yum configuration
# Extract the tarball into the current directory
tar -xvf media.tar
# Back up the existing repo file
mv /etc/yum.repos.d/redhat.repo /etc/yum.repos.d/redhat.repo_bk
# Create a new local repo
vi /etc/yum.repos.d/redhat7.repo
[redhat7.repo]
name=local
baseurl=file:///soft/media
gpgcheck=0
enabled=1
# Clean the cache, list the repositories, rebuild the metadata cache, and list the packages
yum clean all #Remove cached data
yum repolist #Display the configured software repositories
yum makecache #Generate the metadata cache
yum list #List a package or groups of packages
1.3 Re-checking the rpm packages
# Re-run the script above until it prints no missing packages
2. Creating the installation groups and users (oracle, grid)
groupadd --gid 54321 oinstall
groupadd --gid 54322 dba
groupadd --gid 54323 asmdba
groupadd --gid 54324 asmoper
groupadd --gid 54325 asmadmin
groupadd --gid 54326 oper
groupadd --gid 54327 backupdba
groupadd --gid 54328 dgdba
groupadd --gid 54329 kmdba
groupadd --gid 54330 racdba
useradd --uid 54321 --gid oinstall --groups dba,oper,asmdba,asmadmin,racdba,backupdba,dgdba,kmdba oracle
passwd oracle
useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper,racdba grid
passwd grid
3. Creating the software installation directories
vi /etc/oraInst.loc
inventory_loc=/oracle/app/oraInventory
inst_group=oinstall
# Create the Oracle Inventory Directory
# To create the Oracle Inventory directory, enter the following commands as the root user:
mkdir -p /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/oraInventory
chmod -R 775 /oracle/app/oraInventory
# Creating the Grid Infrastructure Base Directory
mkdir -p /oracle/app/grid
chown -R grid:oinstall /oracle/app/grid
chmod -R 775 /oracle/app/grid
# Creating the Oracle Grid Infrastructure Home Directory
mkdir -p /oracle/app/12.2.0/grid
chown -R grid:oinstall /oracle/app/12.2.0/grid
chmod -R 775 /oracle/app/12.2.0/grid
# Creating the Oracle Base Directory
mkdir -p /oracle/app/oracle
mkdir -p /oracle/app/oracle/cfgtoollogs
# needed to ensure that dbca is able to run after the rdbms installation.
chown -R oracle:oinstall /oracle/app/oracle
chmod -R 775 /oracle/app/oracle
# Creating the Oracle RDBMS Home Directory
mkdir -p /oracle/app/oracle/product/12.2.0/db_1
chown -R oracle:oinstall /oracle/app/oracle/product/12.2.0/db_1
chmod -R 775 /oracle/app/oracle/product/12.2.0/db_1
4. Kernel parameter tuning
# With 128 GB of RAM, set kernel.shmmax=70G for SGA=50G and PGA=10G
# kernel.shmmni: maximum number of shared memory segments system-wide
# kernel.shmmax: maximum size of a single shared memory segment, in bytes
# SHMMAX should be slightly larger than the SGA size
# kernel.shmall: total amount of shared memory, in pages
# kernel.shmall must be greater than shmmax/PAGE_SIZE
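The shmmax/shmall relationship above can be computed instead of hand-set. A minimal sketch of the arithmetic behind this template's values (the 70 GiB shmmax target, slightly above the 50 GiB SGA, is taken from this plan):

```shell
# Derive kernel.shmall (counted in pages) from the planned kernel.shmmax
# (counted in bytes); shmall must cover at least shmmax / PAGE_SIZE
SHMMAX=$((70 * 1024 * 1024 * 1024))   # 70 GiB
PAGE_SIZE=$(getconf PAGE_SIZE)        # 4096 on x86-64
SHMALL=$((SHMMAX / PAGE_SIZE))        # 18350080 with 4 KiB pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
```

With 4 KiB pages this reproduces the 75161927680 / 18350080 pair used in sysctl.conf below.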
cat /etc/sysctl.conf
kernel.shmmni = 4096
kernel.shmmax = 75161927680
kernel.shmall = 18350080
kernel.sem = 250 32000 100 128
fs.file-max = 7553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
# Calculate and add this parameter after the database instance has been created
vm.nr_hugepages = 25026
# Apply immediately (as root)
/sbin/sysctl -p
#NOTE: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/tuning_and_optimizing_red_hat_enterprise_linux_for_oracle_9i_and_10g_databases/index
#NOTE: The latest information on kernel parameter settings for Linux can be found in My Oracle Support Note 169706.1.
5. Memory tuning
# Oracle has long recommended disabling Transparent HugePages to optimize memory use,
# so disable Transparent HugePages:
grubby --default-kernel
grubby --args="transparent_hugepage=never" --update-kernel $(grubby --default-kernel)
grubby --info $(grubby --default-kernel)
cat /sys/kernel/mm/transparent_hugepage/enabled
grep AnonHugePages /proc/meminfo
reboot
# Results from node 1
[root@bmcdb1 ~]# grubby --default-kernel
/boot/vmlinuz-3.10.0-514.el7.x86_64
[root@bmcdb1 ~]# grubby --args="transparent_hugepage=never" --update-kernel /boot/vmlinuz-3.10.0-514.el7.x86_64
[root@bmcdb1 ~]# grubby --info /boot/vmlinuz-3.10.0-514.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-514.el7.x86_64
args="ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet.UTF-8 transparent_hugepage=never"
root=/dev/mapper/rhel-root
initrd=/boot/initramfs-3.10.0-514.el7.x86_64.img
title=Red Hat Enterprise Linux Server (3.10.0-514.el7.x86_64) 7.3 (Maipo)
# Confirm Transparent HugePages are disabled
[root@bmcdb1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@bmcdb1 ~]# grep AnonHugePages /proc/meminfo
AnonHugePages: 0 kB
# reboot output omitted
6. Shared memory tuning
# Since RedHat 7.2, the systemd-logind service removes all IPC objects when a user logs out.
# To keep the system from removing the shared SGA memory on user logout,
# which would bring down the ASM and database instances, set RemoveIPC=no.
echo "RemoveIPC=no" >> /etc/systemd/logind.conf
systemctl restart systemd-logind
# ALERT: Setting RemoveIPC=yes on Redhat 7.2 and higher Crashes ASM and Database Instances as Well as Any Application That Uses a Shared Memory Segment (SHM) or Semaphores (SEM) (Doc ID 2081410.1)
# Results from node 1
[root@bmcdb1 ~]# echo "RemoveIPC=no" >> /etc/systemd/logind.conf
[root@bmcdb1 ~]# cat /etc/systemd/logind.conf
# ... intermediate lines omitted ...
#RemoveIPC=no
RemoveIPC=no
# Configuring shared memory
# Size /dev/shm via /etc/fstab
[root@bmcdb1 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jun 25 01:24:22 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=1d01dfb8-026c-464f-9d8a-7bf9ee5079e5 /boot xfs defaults 0 0
UUID=741B-866B /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
shm /dev/shm tmpfs defaults,size=80G 0 0
[root@bmcdb2 home]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Jul 1 19:01:01 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=39a4f14f-aab6-4e71-9424-c4fe0610c7fb /boot xfs defaults 0 0
UUID=7E8A-CC6A /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
shm /dev/shm tmpfs defaults,size=80G 0 0
7. Environment variables for the installation users (oracle, grid)
# grid
[grid@Bmcdb1 ~]$ vi .bash_profile
export PS1="[`whoami`@`hostname`:"'$PWD]$'
export JAVA_HOME=/usr/local/java; export JAVA_HOME
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/12.2.0/grid
export ORACLE_SID=+ASM1
export ORACLE_PATH=/oracle/app/oracle/common/oracle/sql;
export ORACLE_TERM=xterm;
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS";
export NLS_LANG=american_america.ZHS16GBK ;
export TNS_ADMIN=$ORACLE_HOME/network/admin;
export ORA_NLS11=$ORACLE_HOME/nls/data;
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
# oracle
[oracle@Bmcdb1 ~]$ vi .bash_profile
export PS1="[`whoami`@`hostname`:"'$PWD]$'
export JAVA_HOME=/usr/local/java;
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.2.0/db_1
export ORACLE_SID=bmcdb1;
export ORACLE_UNQNAME=bmcdb;
export ORACLE_PATH=/oracle/app/common/oracle/sql;
export ORACLE_TERM=xterm;
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS";
export NLS_LANG=american_america.ZHS16GBK ;
export TNS_ADMIN=$ORACLE_HOME/network/admin;
export ORA_NLS11=$ORACLE_HOME/nls/data;
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export THREADS_FLAG=native;
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
8. Hard resource limits for the oracle and grid users
# Raise the users' hard limits (the defaults for the oracle and grid users do not meet the GI installation and configuration requirements)
# limits.conf is the configuration file of the pam_limits PAM module, so configure PAM first:
echo "session required pam_limits.so" >> /etc/pam.d/login
# vm.nr_hugepages cannot be measured precisely before RAC is running.
# Rule: number of HugePages * Hugepagesize = minimum memlock < RAM
cat /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
oracle soft memlock 134144000
oracle hard memlock 134144000
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
#NOTE: What is Memlock and How to Calculate the Values for Memlock (Doc ID 2511230.1)
# Raise the limits in the shell startup file
cat /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
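The memlock rule above (number of HugePages * Hugepagesize = minimum memlock < RAM) is simple arithmetic. A sketch assuming this template's 50 GiB SGA and the x86-64 default 2 MiB Hugepagesize; it yields the SGA-derived floor, whereas the production values above (vm.nr_hugepages = 25026, memlock = 134144000) were read back from the running system:

```shell
# Floor for vm.nr_hugepages and the matching memlock value (in KiB),
# derived from the planned SGA size
SGA_GB=50
HUGEPAGESIZE_KB=2048                  # grep Hugepagesize /proc/meminfo
NR_HUGEPAGES=$((SGA_GB * 1024 * 1024 / HUGEPAGESIZE_KB))
MEMLOCK_KB=$((NR_HUGEPAGES * HUGEPAGESIZE_KB))
echo "vm.nr_hugepages floor : $NR_HUGEPAGES"
echo "memlock floor (KiB)   : $MEMLOCK_KB"
```

memlock in limits.conf must be at least MEMLOCK_KB and stay below physical RAM; setting it near the RAM size, as done above, satisfies both.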
9. Disabling the firewall and SELinux
# Disable the firewall
systemctl status firewalld
systemctl start firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld
# Permanently disable SELinux
cat /etc/sysconfig/selinux
SELINUX=disabled
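The SELINUX=disabled edit above can be scripted. A minimal sketch (the function name and parameterized path are our own; on RHEL 7, /etc/sysconfig/selinux is a symlink to /etc/selinux/config):

```shell
# disable_selinux_cfg: set SELINUX=disabled in the given config file
# (defaults to /etc/selinux/config); a reboot is still required, or run
# `setenforce 0` to switch to permissive mode for the current boot
disable_selinux_cfg() {
  cfg="${1:-/etc/selinux/config}"
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```

Run it as root on both nodes, then confirm with `getenforce` after the reboot.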
10. SSH mutual trust for the installation users (oracle, grid)
# Configure ssh mutual trust on node 1 as follows:
# As root, copy the database software into /home/oracle/:
# This script is the preferred method for multi-node installations
# chown oracle:oinstall linuxx64_12201_database.zip
# su - oracle
$ unzip linuxx64_12201_database.zip
$ cd database/sshsetup/
$ ./sshUserSetup.sh -user oracle -hosts "node1 node2" -advanced -noPromptPassphrase
$ ./sshUserSetup.sh -user grid -hosts "node1 node2" -advanced -noPromptPassphrase
# Verify the mutual trust
# Run the verification on node 1, bmcdb1
$ ssh bmcdb1 date;ssh bmcdb2 date;ssh bmcdb1priv date;ssh bmcdb2priv date
# Run the verification on node 2, bmcdb2
$ ssh bmcdb1 date;ssh bmcdb2 date;ssh bmcdb1priv date;ssh bmcdb2priv date
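The date checks above will hang at a password prompt if trust is broken on any node. A small wrapper (our own helper, not part of the Oracle tooling) that reports such nodes as FAIL instead:

```shell
# check_trust: run `date` on each host over non-interactive ssh; BatchMode=yes
# makes ssh fail instead of prompting, so any node without working key-based
# trust is reported as FAIL rather than blocking the check
check_trust() {
  for h in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date >/dev/null 2>&1; then
      echo "OK   $h"
    else
      echo "FAIL $h"
    fi
  done
}

# usage, run as both oracle and grid on every node:
# check_trust bmcdb1 bmcdb2 bmcdb1priv bmcdb2priv
```

All lines must read OK before the GI installation is started.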
11. Name resolution via /etc/hosts
# The /etc/hosts entries are as follows
[oracle@bmcdb1 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# PUBLIC-IP
10.151.115.61 bmcdb1
10.151.115.62 bmcdb2
# VIP-IP
10.151.115.63 bmcdb1vip
10.151.115.64 bmcdb2vip
# PRIVATE-IP
192.155.1.161 bmcdb1priv
192.155.1.162 bmcdb2priv
# SCAN-IP
10.151.115.65 scanbmcdb
# Prefer /etc/hosts for resolution, so files comes before dns
[root@bmcdb1 ~]# cat /etc/nsswitch.conf | grep hosts
# hosts: db files nisplus nis dns
hosts: files dns myhostname
12. Time synchronization with NTP
# Sync with the time server every ten minutes: a crontab entry calls ntpdate
# Check the system time
# date
# In-house company time server
[root@bmcdb1 soft]# crontab -l
*/10 * * * * /usr/sbin/ntpdate 10.151.113.73
[root@bmcdb2 ~]# crontab -l
*/10 * * * * /usr/sbin/ntpdate 10.151.113.73
3. Oracle RAC shared storage preparation
a) Aggregating LUN paths into a single pseudo-device with PowerPath multipathing
# PowerPath installation and storage LUN provisioning should be handled by a storage engineer
# Results from node 1
[root@bmcdb1 ~]# powermt display dev=all
Pseudo name=emcpowera
VPLEX ID=CKM00182201323
Logical device ID=6000144000000010F0127E4709356BC6 [device_Unity0004_New_BmcDB_ocr3_1_vol]
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
16 lpfc sdaw CL1-0B active alive 0 0
16 lpfc sdaq CL1-0A active alive 0 0
16 lpfc sdak CL1-08 active alive 0 0
16 lpfc sdae CL1-09 active alive 0 0
15 lpfc sdy CL1-0F active alive 0 0
15 lpfc sds CL1-0E active alive 0 0
15 lpfc sdm CL1-0D active alive 0 0
15 lpfc sdg CL1-0C active alive 0 0
Pseudo name=emcpowerb
VPLEX ID=CKM00182201323
Logical device ID=6000144000000010F0127E4709356BC4 [device_Unity0004_New_BmcDB_ocr2_1_vol]
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
16 lpfc sdav CL1-0B active alive 0 0
16 lpfc sdap CL1-0A active alive 0 0
16 lpfc sdaj CL1-08 active alive 0 0
16 lpfc sdad CL1-09 active alive 0 0
15 lpfc sdx CL1-0F active alive 0 0
15 lpfc sdr CL1-0E active alive 0 0
15 lpfc sdl CL1-0D active alive 0 0
15 lpfc sdf CL1-0C active alive 0 0
Pseudo name=emcpowerc
VPLEX ID=CKM00182201323
Logical device ID=6000144000000010F0127E4709356BC2 [device_Unity0004_New_BmcDB_ocr1_1_vol]
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
16 lpfc sdau CL1-0B active alive 0 0
16 lpfc sdai CL1-08 active alive 0 0
16 lpfc sdao CL1-0A active alive 0 0
16 lpfc sdac CL1-09 active alive 0 0
15 lpfc sdw CL1-0F active alive 0 0
15 lpfc sdq CL1-0E active alive 0 0
15 lpfc sdk CL1-0D active alive 0 0
15 lpfc sde CL1-0C active alive 0 0
Pseudo name=emcpowerd
VPLEX ID=CKM00182201323
Logical device ID=6000144000000010F0127E4709356BC3 [device_Unity0004_New_BmcDB_dsg_1_vol]
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
16 lpfc sdat CL1-0B active alive 0 0
16 lpfc sdan CL1-0A active alive 0 0
16 lpfc sdah CL1-08 active alive 0 0
16 lpfc sdab CL1-09 active alive 0 0
15 lpfc sdv CL1-0F active alive 0 0
15 lpfc sdp CL1-0E active alive 0 0
15 lpfc sdj CL1-0D active alive 0 0
15 lpfc sdd CL1-0C active alive 0 0
Pseudo name=emcpowere
VPLEX ID=CKM00182201323
Logical device ID=6000144000000010F0127E4709356BC7 [device_Unity0004_New_BmcDB_data1_1_vol]
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
16 lpfc sdas CL1-0B active alive 0 0
16 lpfc sdam CL1-0A active alive 0 0
16 lpfc sdag CL1-08 active alive 0 0
16 lpfc sdaa CL1-09 active alive 0 0
15 lpfc sdu CL1-0F active alive 0 0
15 lpfc sdo CL1-0E active alive 0 0
15 lpfc sdi CL1-0D active alive 0 0
15 lpfc sdc CL1-0C active alive 0 0
Pseudo name=emcpowerf
VPLEX ID=CKM00182201323
Logical device ID=6000144000000010F0127E4709356BC5 [device_Unity0004_New_BmcDB_arch2_1_vol]
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
16 lpfc sdaf CL1-08 active alive 0 0
16 lpfc sdar CL1-0B active alive 0 0
16 lpfc sdal CL1-0A active alive 0 0
16 lpfc sdz CL1-09 active alive 0 0
15 lpfc sdt CL1-0F active alive 0 0
15 lpfc sdn CL1-0E active alive 0 0
15 lpfc sdh CL1-0D active alive 0 0
15 lpfc sdb CL1-0C active alive 0 0
b) Persisting disk permissions with udev
# NOTE: How To Set Device Name Using UDEV on OL 7.X (Doc ID 2056427.1)
# Edit the udev rules file
# Note: the uuid values in the rules file are case-sensitive,
# so use the output of scsi_id -g -u -d $devnode verbatim.
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000144000000010f0127e4709356bc5", RUN+="/bin/sh -c 'mknod /dev/asmarchdisk1 b $major $minor; chown grid:asmadmin /dev/asmarchdisk1; chmod 0660 /dev/asmarchdisk1'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000144000000010f0127e4709356bc7", RUN+="/bin/sh -c 'mknod /dev/asmdatadisk1 b $major $minor; chown grid:asmadmin /dev/asmdatadisk1; chmod 0660 /dev/asmdatadisk1'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000144000000010f0127e4709356bc2", RUN+="/bin/sh -c 'mknod /dev/asmocrdisk1 b $major $minor; chown grid:asmadmin /dev/asmocrdisk1; chmod 0660 /dev/asmocrdisk1'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000144000000010f0127e4709356bc4", RUN+="/bin/sh -c 'mknod /dev/asmocrdisk2 b $major $minor; chown grid:asmadmin /dev/asmocrdisk2; chmod 0660 /dev/asmocrdisk2'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000144000000010f0127e4709356bc6", RUN+="/bin/sh -c 'mknod /dev/asmocrdisk3 b $major $minor; chown grid:asmadmin /dev/asmocrdisk3; chmod 0660 /dev/asmocrdisk3'"
# Trigger udev
/sbin/udevadm trigger --type=devices --action=change
# Check the disks
[root@pcmdb1 rpm]# /sbin/udevadm trigger --type=devices --action=change
[root@pcmdb1 rpm]# ls -l /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Nov 24 15:19 /dev/asmarchdisk
brw-rw---- 1 grid asmadmin 8, 32 Nov 24 15:19 /dev/asmdatadisk
brw-rw---- 1 grid asmadmin 8, 48 Nov 24 15:19 /dev/asmocrdisk1
brw-rw---- 1 grid asmadmin 8, 64 Nov 24 15:19 /dev/asmocrdisk2
brw-rw---- 1 grid asmadmin 8, 80 Nov 24 15:19 /dev/asmocrdisk3
c) Setting the disk I/O scheduler on Linux
# Set the I/O scheduler of the shared disks to [deadline]
echo deadline > /sys/block/${ASM_DISK}/queue/scheduler
# Results from node 1: confirm the current scheduler is [deadline]
[root@bmcdb1 ~]# cat /sys/block/emcpowera/queue/scheduler
noop [deadline] cfq
[root@bmcdb1 ~]# cat /sys/block/emcpowerb/queue/scheduler
noop [deadline] cfq
[root@bmcdb1 ~]# cat /sys/block/emcpowerc/queue/scheduler
noop [deadline] cfq
[root@bmcdb1 ~]# cat /sys/block/emcpowerd/queue/scheduler
noop [deadline] cfq
[root@bmcdb1 ~]# cat /sys/block/emcpowere/queue/scheduler
noop [deadline] cfq
[root@bmcdb1 ~]# cat /sys/block/emcpowerf/queue/scheduler
noop [deadline] cfq
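The echo in this section does not survive a reboot. One common way to make the setting persistent (a sketch: the rule file name is our choice, and the KERNEL=="emcpower*" match assumes the PowerPath pseudo-devices used here) is a udev rule:

```shell
# write_sched_rule: create a udev rule that sets the deadline scheduler on
# every emcpower pseudo-device when it appears (default path shown)
write_sched_rule() {
  rules="${1:-/etc/udev/rules.d/60-emc-scheduler.rules}"
  cat > "$rules" <<'EOF'
# set the deadline elevator for all PowerPath pseudo-devices
ACTION=="add|change", KERNEL=="emcpower*", ATTR{queue/scheduler}="deadline"
EOF
}

# after writing the rule on both nodes:
# /sbin/udevadm trigger --type=devices --action=change
```

Re-run the `cat /sys/block/.../queue/scheduler` checks above after a reboot to confirm the rule took effect.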
4. GI installation
a) Install the cvuqdisk package on all nodes
[root@bmcdb1 rpm]# ls
cvuqdisk-1.0.10-1.rpm
[root@bmcdb1 rpm]# pwd
/home/oracle/database/rpm
[root@bmcdb1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing... ################################# [100%]
Updating / installing...
1:cvuqdisk-1.0.10-1 ################################# [100%]
[root@bmcdb1 rpm]# scp cvuqdisk-1.0.10-1.rpm root@bmcdb2:/home/oracle
The authenticity of host 'bmcdb2 (10.151.115.72)' can't be established.
ECDSA key fingerprint is e5:25:bb:51:c6:da:76:25:1d:4a:59:a8:e4:6d:92:ef.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bmcdb2' (ECDSA) to the list of known hosts.
root@bmcdb2's password:
cvuqdisk-1.0.10-1.rpm
[root@bmcdb2 rules.d]# cd /home/oracle
You have new mail in /var/spool/mail/root
[root@bmcdb2 oracle]# ls
cvuqdisk-1.0.10-1.rpm
[root@bmcdb2 oracle]# pwd
/home/oracle
[root@bmcdb2 oracle]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing... ################################# [100%]
Updating / installing...
1:cvuqdisk-1.0.10-1 ################################# [100%]
b) Unzip the GI software directly into the GI $ORACLE_HOME
su - grid
unzip -d $ORACLE_HOME /soft/linuxx64_12201_grid_home.zip
# Installation steps omitted
# Validate the environment with runcluvfy.sh (the script only becomes available after the GI package is unzipped)
./runcluvfy.sh stage -pre crsinst -n bmcdb1,bmcdb2 -verbose
# Output omitted
c) Upgrading OPatch
# Overwrite the old OPatch directory with the new one
# Fix the file permissions
[root@bmcdb1 soft]# chmod 775 p6880880_122010_Linux-x86-64.zip
[grid@bmcdb1:/home/grid]$unzip -d /oracle/app/12.2.0/grid/ /soft/p6880880_122010_Linux-x86-64.zip
Archive: /soft/p6880880_122010_Linux-x86-64.zip
inflating: /oracle/app/12.2.0/grid/OPatch/emdpatch.pl
replace /oracle/app/12.2.0/grid/OPatch/oplan/oplan [y]es, [n]o, [A]ll, [N]one, [r]ename: A
# Output omitted
d) GI patching and graphical GI installation
# Fix the patch file permissions
[root@bmcdb1 12.2.0.1_grid_2019aprpatch]# chmod a+x p29301687_122010_Linux-x86-64.zip
[root@bmcdb1 12.2.0.1_grid_2019aprpatch]# ls -l p29301687_122010_Linux-x86-64.zip
-rwxr-xr-x 1 root root 1736326653 Jul 18 15:02 p29301687_122010_Linux-x86-64.zip
# Create the patch staging directory
[grid@bmcdb1:/oracle/app/12.2.0/grid]$mkdir -p /oracle/app/12.2.0/grid/gridpsu
# Unzip the patch
[grid@bmcdb1:/oracle/app/12.2.0/grid]$unzip -d /oracle/app/12.2.0/grid/gridpsu /soft/12.2.0.1_grid_2019aprpatch/p29301687_122010_Linux-x86-64.zip
# Set up the X display, then apply the patch and install GI
/usr/bin/xauth: file /home/grid/.Xauthority does not exist
[grid@bmcdb1:/home/grid]$export DISPLAY=10.156.84.215:0.0
[grid@bmcdb1:/home/grid]$xclock
Warning: Missing charsets in String to FontSet conversion
# Apply the patch and run the graphical GI installation as the grid user
[grid@bmcdb1:/oracle/app/12.2.0/grid]$./gridSetup.sh -applyPSU /oracle/app/12.2.0/grid/gridpsu/29301687
...graphical installation steps omitted...
Run the root.sh script as root
# root.sh execution log on node 1 (19 steps in total)
...root.sh must complete successfully on both nodes; the individual steps are omitted...
GI installed successfully
# After a successful GI installation, the cluster resource status is as follows:
[grid@bmcdb1:/home/grid]$crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE bmcdb1 STABLE
ONLINE ONLINE bmcdb2 STABLE
ora.OCR.dg
ONLINE ONLINE bmcdb1 STABLE
ONLINE ONLINE bmcdb2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE bmcdb1 STABLE
ONLINE ONLINE bmcdb2 STABLE
ora.net1.network
ONLINE ONLINE bmcdb1 STABLE
ONLINE ONLINE bmcdb2 STABLE
ora.ons
ONLINE ONLINE bmcdb1 STABLE
ONLINE ONLINE bmcdb2 STABLE
ora.proxy_advm
OFFLINE OFFLINE bmcdb1 STABLE
OFFLINE OFFLINE bmcdb2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE bmcdb1 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE bmcdb1 Started,STABLE
2 ONLINE ONLINE bmcdb2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE bmcdb1 STABLE
ora.bmcdb1.vip
1 ONLINE ONLINE bmcdb1 STABLE
ora.bmcdb2.vip
1 ONLINE ONLINE bmcdb2 STABLE
ora.qosmserver
1 ONLINE ONLINE bmcdb1 STABLE
ora.scan1.vip
1 ONLINE ONLINE bmcdb1 STABLE
--------------------------------------------------------------------------------
Verify the patch status
# Run opatch on node 1; node 2 output omitted (results are identical on both nodes)
[grid@bmcdb1:/oracle/app/12.2.0/grid/OPatch]$./opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2019, Oracle Corporation. All rights reserved.
Oracle Home : /oracle/app/12.2.0/grid
Central Inventory : /oracle/app/oraInventory
from : /oracle/app/12.2.0/grid/oraInst.loc
OPatch version : 12.2.0.1.17
OUI version : 12.2.0.1.4
Log file location : /oracle/app/12.2.0/grid/cfgtoollogs/opatch/opatch2019-08-05_16-56-31PM_1.log
Lsinventory Output file location : /oracle/app/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2019-08-05_16-56-31PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: bmcdb1
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c 12.2.0.1.0
There are 1 products installed in this Oracle Home.
Interim patches (5) :
Patch 26839277 : applied on Mon Aug 05 13:48:23 CST 2019
Unique Patch ID: 21578760
Patch description: "DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"
Created on 21 Sep 2017, 03:13:10 hrs PST8PDT
Bugs fixed:
26584906
Patch 29314339 : applied on Mon Aug 05 13:48:16 CST 2019
Unique Patch ID: 22821655
Patch description: "Database Apr 2019 Release Update : 12.2.0.1.190416 (29314339)"
Created on 24 Mar 2019, 03:28:52 hrs PST8PDT
Bugs fixed:
26362155, 28023399, 25741955, 25873336, 26966616, 27097854, 28617631
... remaining fixed-bug numbers omitted ...
28951382, 28960211, 28987439, 28991884, 28993590, 29027694, 29189889
29250230
Patch 28566910 : applied on Mon Aug 05 13:47:53 CST 2019
Unique Patch ID: 22413436
Patch description: "TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)"
Created on 28 Aug 2018, 23:01:25 hrs PST8PDT