Redhat 5.8 ORACLE 11gR2 RAC Installation Guide, Part 1 - Environment Configuration and Preparation


1. Lab Environment Overview

This lab is built entirely on physical hardware, using three servers in total:

Two of them serve as RAC nodes and one acts as the storage server (production environments have dedicated storage arrays; the test environment has none, and using a separate server purely for storage also makes the overall structure easier to understand).

Operating system:    Oracle Enterprise Linux 5 update 5 (64-bit)

Storage:             Oracle Enterprise Linux 5 update 5 (64-bit), simulating shared storage

Database version:    Oracle 11gR2 11.2.0.3 (64-bit)

1.1 Lab Topology:

[Topology diagram]

Because this is a test environment, the router and switches are omitted from the diagram: the two RAC nodes are connected directly with a network cable. In a real production environment, connecting them through fibre switches is recommended. The DNS service will be built on node1 and node2 (as a master/slave pair).

The IP plan is as follows:

                    Node 1              Node 2
Hostname            orcl1               orcl2
Public NIC          eth0                eth0
Private NIC         eth1                eth1
Public IP           192.168.120.58      192.168.120.59
VIP                 192.168.120.60      192.168.120.61
Private IP          192.168.121.58      192.168.121.59
SCAN IPs            192.168.120.62, 192.168.120.63, 192.168.120.66

Note:

the network layout in production differs from this test setup. Normally both the Private IP interconnect and the links to the storage run over fibre interfaces through fibre switches. The Private network carries the heartbeat between the two nodes, and RAC's core technology, Cache Fusion, uses it to ship SGA data blocks from one node to the other; the interconnect therefore has to be faster than reading a block from disk into memory, otherwise RAC loses its fundamental point. The same applies, even more obviously, to the path between the database servers and the storage, which should naturally go through fibre switches.

2. Pre-installation Preparation

2.1 Server Preparation

The operating system is Redhat 5.8 and must be installed in advance. Installing the English version with Chinese language support added is recommended. Do not enable UTC for the system clock; the Shanghai time zone is fine. Disable iptables and SELinux; a minimal sketch of the commands follows.
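For reference, the firewall and SELinux might be turned off roughly like this on every server (run as root; SELinux is only fully disabled after a reboot):

# stop the firewall now and keep it off across reboots
service iptables stop
chkconfig iptables off

# disable SELinux permanently, and also for the current session
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0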

2.2 Network Configuration

2.2.1 Storage server

[root@storage ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

HWADDR=00:0c:29:e7:88:c0

BOOTPROTO=static

BROADCAST=192.168.120.255

IPADDR=192.168.120.67

NETMASK=255.255.255.0

NETWORK=192.168.120.0

ONBOOT=yes 

 

[root@storage ~]# cat /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=storage

 

[root@storage ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.120.67 storage 

2.2.2 Node 1

[root@orcl1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=static

BROADCAST=192.168.120.255

HWADDR=00:0C:29:C1:3E:5F

IPADDR=192.168.120.58

NETMASK=255.255.255.0

NETWORK=192.168.120.0

ONBOOT=yes

[root@orcl1 ~]#  

 

[root@orcl1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

BOOTPROTO=static

BROADCAST=192.168.121.255

HWADDR=00:0C:29:C1:3E:55

IPADDR=192.168.121.58

NETMASK=255.255.255.0

NETWORK=192.168.121.0

ONBOOT=yes 

 

[root@orcl1 ~]# cat /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=orcl1

 

[root@orcl1 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

 

#storage

192.168.120.67 storage

 

#orcl1

192.168.120.58 orcl1.demo.com orcl1

192.168.120.60 orcl1-vip.demo.com orcl1-vip

192.168.121.58 orcl1-priv.demo.com orcl1-priv

 

#orcl2

192.168.120.59 orcl2.demo.com orcl2

192.168.120.61 orcl2-vip.demo.com orcl2-vip

192.168.121.59 orcl2-priv.demo.com orcl2-priv

2.2.3 Node 2

[root@orcl2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

HWADDR=00:0c:29:9a:bf:3c

BOOTPROTO=static

BROADCAST=192.168.120.255

IPADDR=192.168.120.59

NETMASK=255.255.255.0

NETWORK=192.168.120.0

ONBOOT=yes

 

[root@orcl2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

HWADDR=00:0c:29:9a:bf:32

BOOTPROTO=static

BROADCAST=192.168.121.255

IPADDR=192.168.121.59

NETMASK=255.255.255.0

NETWORK=192.168.121.0

ONBOOT=yes

 

[root@orcl2 ~]# cat /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=orcl2

 

[root@orcl2 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

 

#storage

192.168.120.67 storage

 

#orcl1

192.168.120.58 orcl1.demo.com orcl1

192.168.120.60 orcl1-vip.demo.com orcl1-vip

192.168.121.58 orcl1-priv.demo.com orcl1-priv

 

#orcl2

192.168.120.59 orcl2.demo.com orcl2

192.168.120.61 orcl2-vip.demo.com orcl2-vip

192.168.121.59 orcl2-priv.demo.com orcl2-priv

2.2.4 SCAN IP Configuration

The SCAN can resolve to one IP address or to several. A single SCAN IP can simply be put in /etc/hosts; multiple SCAN IPs must be configured in DNS, where three addresses are registered for the SCAN name and handed out to clients in round-robin fashion.
When three SCAN IPs are used, do not also register the SCAN name in /etc/hosts: doing so prevents the SCAN from resolving to the three different addresses, and the grid installation will complain. To stay as close to production as possible, the SCAN is therefore not put into the hosts file here, and a DNS service is built in the steps below.

2.2.4.1 Configure the DNS resolver addresses

Node1 and Node2 use the same configuration.

[root@orcl1 ~]# cat /etc/resolv.conf

search demo.com

nameserver 192.168.120.58

nameserver 192.168.120.59

options rotate # leaving out these last three option lines may cause errors later

options timeout:2

options attempts:5

2.2.4.2 Configure the Yum repository

The yum repository can be built from a locally mounted installation DVD, or an existing yum server in the environment can be used; here an existing yum server is used.

See http://www.cnblogs.com/kaodaxia/p/4575507.html for how to set up a yum server.
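If no yum server is available, a local repository can be built from the installation media instead. A rough sketch, assuming the DVD ISO sits under /usr/local/src (the ISO path and repo ids are placeholders); both the Server and the ClusterStorage channels are enabled because section 2.12 needs packages from the latter:

mkdir -p /mnt/cdrom
mount -o loop /usr/local/src/rhel-server-5.8-x86_64-dvd.iso /mnt/cdrom   # or: mount /dev/cdrom /mnt/cdrom

cat > /etc/yum.repos.d/rhel-dvd.repo <<EOF
[dvd-server]
name=RHEL 5.8 DVD - Server
baseurl=file:///mnt/cdrom/Server
enabled=1
gpgcheck=0

[dvd-clusterstorage]
name=RHEL 5.8 DVD - ClusterStorage
baseurl=file:///mnt/cdrom/ClusterStorage
enabled=1
gpgcheck=0
EOF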

2.2.4.3 Resize tmpfs

Storage, Node1 and Node2 are configured the same way; Node1 is shown here. Note the size=2048M option on the tmpfs line.

[root@orcl1 yum.repos.d]# cat /etc/fstab

LABEL=/ / ext3 defaults 1 1

LABEL=/boot /boot ext3 defaults 1 2

tmpfs /dev/shm tmpfs defaults,size=2048M 0 0

devpts /dev/pts devpts gid=5,mode=620 0 0

sysfs /sys sysfs defaults 0 0

proc /proc proc defaults 0 0

LABEL=SWAP-sda3 swap swap defaults 0 0

[root@orcl1 yum.repos.d]# mount -o remount /dev/shm/

[root@orcl1 yum.repos.d]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda2 18G 2.4G 14G 15% /

/dev/sda1 289M 17M 258M 6% /boot

tmpfs 2.0G 0 2.0G 0% /dev/shm

/dev/scd0 3.5G 3.5G 0 100% /mnt/cdrom

[root@orcl1 yum.repos.d]# mount -a

Note: in production, size tmpfs according to the actual requirements.

2.2.4.4 Configure the DNS master and slave

Node1 (orcl1): configure the DNS master

[root@orcl1 ~]# yum install bind bind-chroot caching-nameserver -y

[root@orcl1 ~]# chkconfig named on

[root@orcl1 yum.repos.d]# cd /var/named/chroot/etc/

[root@orcl1 etc]# ls

localtime named.caching-nameserver.conf named.conf named.rfc1912.zones rndc.key

[root@orcl1 etc]# cp -p named.caching-nameserver.conf named.conf # -p preserves ownership so named can still read the file

[root@orcl1 etc]# vi named.conf

options {

directory "/var/named";

};

zone "." IN {

type hint;

file "/dev/null";

};

zone "demo.com" IN {

type master;

file "demo.com.zone";

};

zone "120.168.192.in-addr.arpa" IN {

type master;

file "192.168.120.local";

};

zone "121.168.192.in-addr.arpa" IN {

type master;

file "192.168.121.local";

};

Save and exit.

 

[root@orcl1 etc]# cd /var/named/chroot/var/named/

[root@orcl1 named]# cp -p localhost.zone demo.com.zone

[root@orcl1 named]# vi demo.com.zone

$TTL 86400

@ IN SOA demo.com. root.demo.com. (

42 ; serial (d. adams)

3H ; refresh

15M ; retry

1W ; expiry

1D ) ; minimum

 

IN NS dns.demo.com.

orcl1 IN A 192.168.120.58

orcl2 IN A 192.168.120.59

orcl1-vip IN A 192.168.120.60

orcl2-vip IN A 192.168.120.61

rac-scan IN A 192.168.120.62

rac-scan IN A 192.168.120.63

rac-scan IN A 192.168.120.66

storage IN A 192.168.120.67

orcl1-priv IN A 192.168.121.58

orcl2-priv IN A 192.168.121.59

dns IN A 192.168.120.58 ; must be the last record

Save and exit.

 

[root@orcl1 named]# cp -p demo.com.zone 192.168.120.local

[root@orcl1 named]# vi 192.168.120.local

$TTL 86400

@ IN SOA demo.com. root.demo.com. (

42 ; serial (d. adams)

3H ; refresh

15M ; retry

1W ; expiry

1D ) ; minimum

IN NS dns.demo.com.

58 IN PTR orcl1.demo.com.

59 IN PTR orcl2.demo.com.

60 IN PTR orcl1-vip.demo.com.

61 IN PTR orcl2-vip.demo.com.

62 IN PTR rac-scan.demo.com.

63 IN PTR rac-scan.demo.com.

66 IN PTR rac-scan.demo.com.

67 IN PTR storage.demo.com.

Save and exit.

 

[root@orcl1 named]# cp 192.168.120.local 192.168.121.local

[root@orcl1 named]# vi 192.168.121.local

$TTL 86400

@ IN SOA demo.com. root.demo.com. (

42 ; serial (d. adams)

3H ; refresh

15M ; retry

1W ; expiry

1D ) ; minimum

 

IN NS dns.demo.com.

58 IN PTR orcl1-priv.demo.com.

59 IN PTR orcl2-priv.demo.com.

Save and exit.

[root@orcl1 named]# service named restart # restart the named service

Stopping named: [ OK ]

Starting named: [ OK ] 

Node2 (orcl2): configure the DNS slave

[root@orcl2 ~]# yum install bind bind-chroot caching-nameserver -y

[root@orcl2 ~]# chkconfig named on

[root@orcl2 ~]# cd /var/named/chroot/etc/

[root@orcl2 etc]# cp -p named.caching-nameserver.conf named.conf

[root@orcl2 etc]# vi named.conf

options {

directory "/var/named";

};

zone "." IN {

type hint;

file "/dev/null";

};

zone "demo.com" IN {

type slave;

file "slaves/demo.com.zone";

masters { 192.168.120.58; };

};

zone "120.168.192.in-addr.arpa" IN {

type slave;

file "slaves/192.168.120.local";

masters { 192.168.120.58; };

};

zone "121.168.192.in-addr.arpa" IN {

type slave;

file "slaves/192.168.121.local";

masters { 192.168.120.58; };

};

Save and exit.

 

[root@orcl2 etc]# service named restart

Stopping named: [ OK ]

Starting named: [ OK ]

[root@orcl2 etc]# cd ../var/named/slaves/

[root@orcl2 slaves]# ls

192.168.120.local 192.168.121.local demo.com.zone

As shown above, the zone files transferred from the master are visible, which confirms that the slave DNS server is configured correctly.

This completes the DNS setup. On every node, use nslookup to verify that forward and reverse resolution both work; a few example checks are shown below.
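For example, the following lookups could be run on each node; any failing answer means the zone files need another look (the expected answers follow from the zone files above):

nslookup rac-scan.demo.com      # should return 192.168.120.62, .63 and .66, in rotating order
nslookup orcl1.demo.com         # should return 192.168.120.58
nslookup orcl2-vip.demo.com     # should return 192.168.120.61
nslookup 192.168.120.62         # reverse lookup, should return rac-scan.demo.com
nslookup 192.168.121.59         # reverse lookup, should return orcl2-priv.demo.com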

Note: because this is a test environment, the DNS service is built on the RAC nodes themselves to keep things simple. In a real production environment it should run on dedicated DNS servers so that every client in the environment can resolve the names. See the separate article on the SCAN IP for more details.

2.3 Configure NTP (time synchronization)

2.3.1 Configure the server (Node1)

[root@orcl1 ~]# vim /etc/ntp.conf


Uncomment the restrict line that was originally on line 13 and change the network to 192.168.120.0, then comment out the default server lines on lines 17, 18 and 19. Save and exit. The edited lines are sketched below.
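Since the screenshot is not reproduced here, the relevant parts of /etc/ntp.conf after editing would look roughly as follows (line numbers may differ on other installs):

# line 13: allow hosts on the public network to synchronize from this node
restrict 192.168.120.0 mask 255.255.255.0 nomodify notrap

# lines 17-19: comment out the default Internet time servers
#server 0.rhel.pool.ntp.org
#server 1.rhel.pool.ntp.org
#server 2.rhel.pool.ntp.org

# keep the undisciplined local clock as the fallback time source
server 127.127.1.0      # local clock
fudge  127.127.1.0 stratum 10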

 

[root@orcl1 ~]# vim /etc/sysconfig/ntpd


In /etc/sysconfig/ntpd, add the -x flag to the OPTIONS line (line 2) and set line 5 (SYNC_HWCLOCK) to yes. Save and exit. The result is sketched below.
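The edited /etc/sysconfig/ntpd would then look roughly like this; -x makes ntpd slew the clock instead of stepping it (which is what the clusterware expects), and SYNC_HWCLOCK=yes writes the time back to the hardware clock:

# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=yes

# Additional options for ntpdate
NTPDATE_OPTIONS=""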

 

[root@orcl1 ~]# /etc/init.d/ntpd restart # may print errors; as long as the final line reports OK it is fine

[root@orcl1 ~]# chkconfig ntpd on

2.3.2 Configure the client (Node2)

 

[root@orcl2 ~]# vim /etc/ntp.conf


Comment out the default server lines on lines 17, 18 and 19, then add the NTP server address on line 20: server 192.168.120.58 (sketched below).
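On the client the edited server section of /etc/ntp.conf would, roughly, read:

# lines 17-19: comment out the default Internet time servers
#server 0.rhel.pool.ntp.org
#server 1.rhel.pool.ntp.org
#server 2.rhel.pool.ntp.org
# line 20: point this node at the NTP server running on node1
server 192.168.120.58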

 

[root@orcl2 ~]# vim /etc/sysconfig/ntpd


As on the server, add the -x flag to the OPTIONS line (line 2) and set line 5 to yes. Save and exit.

 

[root@orcl2 ~]# /etc/init.d/ntpd restart # this will fail at first; see the note below

[root@orcl2 ~]# chkconfig ntpd on

Note! The client cannot synchronize the moment the NTP server has been set up. Run ntpq -p on node1 and watch the reach value; once it reaches 17 or higher, running /etc/init.d/ntpd restart on the client will synchronize successfully.

2.3.3 Setting the system time manually

 

[root@orcl1 ~]# date -s 03/23/2015 # set the date to March 23, 2015

[root@orcl1 ~]# date -s 11:13:00

[root@orcl1 ~]# clock -w # write the time back to the hardware clock (CMOS)

2.4 Create users and groups

Node1 and Node2 are configured identically; Node1 is shown.

[root@orcl1 ~]# groupadd -g 1100 oinstall

[root@orcl1 ~]# groupadd -g 1200 dba

[root@orcl1 ~]# groupadd -g 1300 oper

[root@orcl1 ~]# groupadd -g 2100 asmadmin

[root@orcl1 ~]# groupadd -g 2200 asmdba

[root@orcl1 ~]# groupadd -g 2300 asmoper

[root@orcl1 ~]# useradd -u 777 -g oinstall -G dba,oper,asmadmin,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

[root@orcl1 ~]# echo "oracle" | passwd --stdin oracle

[root@orcl1 ~]# useradd -u 888 -g oinstall -G dba,asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid

[root@orcl1 ~]# echo "grid" | passwd --stdin grid
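After creating both users it is worth confirming the group memberships; based on the commands above, the output should look like this:

id oracle   # uid=777(oracle) gid=1100(oinstall) groups=1100(oinstall),1200(dba),1300(oper),2100(asmadmin),2200(asmdba)
id grid     # uid=888(grid) gid=1100(oinstall) groups=1100(oinstall),1200(dba),2100(asmadmin),2200(asmdba),2300(asmoper)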

2.5 Configure the grid and oracle user environment variables

Both Node1 and Node2 need this; note the values that differ between the two nodes (marked with comments below).

[root@orcl1 ~]# su - oracle

[oracle@orcl1 ~]$ cat .bash_profile

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

 

# User specific environment and startup programs

 

PATH=$PATH:$HOME/bin

 

export PATH

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_HOSTNAME=orcl1.demo.com # change to orcl2.demo.com on Node2

export ORACLE_SID=orcl1 # change to orcl2 on Node2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export ORACLE_UNQNAME=racdb # identical on every node

export TNS_ADMIN=$ORACLE_HOME/network/admin

export ORACLE_TERM=xterm

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export EDITOR=vi

export LANG=en_US

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'

umask 022

Save and exit, then re-read the file with source .bash_profile.

 

[root@orcl1 ~]# su - grid

[grid@orcl1 ~]$ cat .bash_profile

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

 

# User specific environment and startup programs

 

PATH=$PATH:$HOME/bin

 

export PATH

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=+ASM1 # change to +ASM2 on Node2

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export ORACLE_TERM=xterm

export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'

export TNS_ADMIN=$ORACLE_HOME/network/admin

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export EDITOR=vi

export LANG=en_US

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

umask 022

Save and exit, then re-read the file with source .bash_profile.

 

2.6 Create directories and set permissions

 

[root@orcl1 ~]# mkdir -p /u01/app/grid

[root@orcl1 ~]# mkdir -p /u01/app/11.2.0/grid

[root@orcl1 ~]# mkdir -p /u01/app/oracle

[root@orcl1 ~]# chown -R oracle:oinstall /u01

[root@orcl1 ~]# chown -R grid:oinstall /u01/app/grid

[root@orcl1 ~]# chown -R grid:oinstall /u01/app/11.2.0

[root@orcl1 ~]# chmod -R 775 /u01

[grid@orcl1 ~]$ ll /u01/ -d

drwxrwxr-x 3 oracle oinstall 4096 Mar 23 12:28 /u01/

[grid@orcl1 ~]$ ll /u01/app/

total 24

drwxrwxr-x 3 grid oinstall 4096 Mar 23 12:28 11.2.0

drwxrwxr-x 2 grid oinstall 4096 Mar 23 12:28 grid

drwxrwxr-x 2 oracle oinstall 4096 Mar 23 12:28 oracle

2.7 Edit /etc/security/limits.conf

Configure as root on both Node1 and Node2; using ORCL1 as the example, append the following:

#add by martin for install oracle rac

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

 

2.8 Edit /etc/pam.d/login

Configure as root on both Node1 and Node2; using ORCL1 as the example, append the following:

#add by martin for install oracle rac

session required /lib/security/pam_limits.so

session required pam_limits.so

2.9 Edit /etc/profile

Configure as root on both Node1 and Node2; using ORCL1 as the example, append the following:

#add by martin for install oracle rac

if [ $USER = "oracle" ]||[ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

fi

Save and exit, then re-read the file with source /etc/profile.

2.10 Edit /etc/sysctl.conf

Configure as root on both Node1 and Node2; ORCL1 is shown. The rule for merging:

if an existing value is already larger than the one listed below, leave it; if it is smaller, raise it to the value below; if the parameter is missing, add it.

[root@orcl1 ~]# vim /etc/sysctl.conf

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 4294967295 # physical memory minus 1 GB, in bytes

kernel.shmall = 268435456 # physical memory divided by the 4 KB page size, in pages

kernel.shmmni = 4096

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

net.ipv4.tcp_wmem = 262144 262144 262144

net.ipv4.tcp_rmem = 4194304 4194304 4194304

Save and exit, and afterwards do not forget to run sysctl -p to make the parameters take effect.
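As a worked example of the two commented parameters, assuming a node with 8 GB of RAM and the usual 4 KB page size:

# kernel.shmmax = physical memory - 1 GB, in bytes:
#   8 GB = 8589934592 bytes; 8589934592 - 1073741824 = 7516192768
# kernel.shmall = physical memory / page size, in pages:
#   8589934592 / 4096 = 2097152
kernel.shmmax = 7516192768
kernel.shmall = 2097152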

2.11 Configure SSH user equivalence

This has to be done for both the grid and the oracle user on Node1 and Node2.

oracle user:

Node1:

[root@orcl1 ~]# su - oracle

[oracle@orcl1 ~]$ ssh-keygen -t rsa # press Enter at every prompt, leaving the passphrase empty

[oracle@orcl1 ~]$ ssh-keygen -t dsa # press Enter at every prompt, leaving the passphrase empty

[oracle@orcl1 ~]$ cd /home/oracle/.ssh/    

[oracle@orcl1 .ssh]$ ls

id_dsa id_dsa.pub id_rsa id_rsa.pub

 

Node2:

[root@orcl2 ~]# su - oracle

[oracle@orcl2 ~]$ ssh-keygen -t rsa # press Enter at every prompt, leaving the passphrase empty

[oracle@orcl2 ~]$ ssh-keygen -t dsa # press Enter at every prompt, leaving the passphrase empty

[oracle@orcl2 ~]$ cd /home/oracle/.ssh/ # the .ssh directory only exists after running the commands above

[oracle@orcl2 .ssh]$ ls

id_dsa id_dsa.pub id_rsa id_rsa.pub 

 

Node1:

[oracle@orcl1 .ssh]$ cat id_rsa.pub >> authorized_keys

[oracle@orcl1 .ssh]$ cat id_dsa.pub >> authorized_keys

[oracle@orcl1 .ssh]$ ssh 192.168.120.59 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

[oracle@orcl1 .ssh]$ ssh 192.168.120.59 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys

[oracle@orcl1 .ssh]$ scp authorized_keys 192.168.120.59:/home/oracle/.ssh/authorized_keys

 

Test:

Node1 (run the same tests on Node2 as well):

[oracle@orcl1 .ssh]$ ssh 192.168.120.58 date

[oracle@orcl1 .ssh]$ ssh 192.168.120.59 date

[oracle@orcl1 .ssh]$ ssh 192.168.121.58 date

[oracle@orcl1 .ssh]$ ssh 192.168.121.59 date

[oracle@orcl1 .ssh]$ ssh orcl1 date

[oracle@orcl1 .ssh]$ ssh orcl2 date

[oracle@orcl1 .ssh]$ ssh orcl1.demo.com date

[oracle@orcl1 .ssh]$ ssh orcl2.demo.com date

[oracle@orcl1 .ssh]$ ssh orcl1-priv date

[oracle@orcl1 .ssh]$ ssh orcl2-priv date

[oracle@orcl1 .ssh]$ ssh orcl1-priv.demo.com date

[oracle@orcl1 .ssh]$ ssh orcl2-priv.demo.com date

[oracle@orcl1 .ssh]$ ls

authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts

 

Note: run the tests on both nodes; the goal is that none of the commands above prompts for yes/no.

If any of the tests still fails or prompts, the following workaround can be used:

[oracle@orcl1 .ssh]$ pwd

/home/oracle/.ssh

[oracle@orcl1 .ssh]$ cat config

StrictHostKeyChecking no

[oracle@orcl1 .ssh]$

 

grid user:

Node1:

[root@orcl1 ~]# su - grid

[grid@orcl1 ~]$ ssh-keygen -t rsa # press Enter at every prompt, leaving the passphrase empty

[grid@orcl1 ~]$ ssh-keygen -t dsa # press Enter at every prompt, leaving the passphrase empty

[grid@orcl1 ~]$ cd /home/grid/.ssh/

[grid@orcl1 .ssh]$ ls

id_dsa id_dsa.pub id_rsa id_rsa.pub

 

Node2:

[root@orcl2 ~]# su - grid

[grid@orcl2 ~]$ ssh-keygen -t rsa # press Enter at every prompt, leaving the passphrase empty

[grid@orcl2 ~]$ ssh-keygen -t dsa # press Enter at every prompt, leaving the passphrase empty

[grid@orcl2 ~]$ cd /home/grid/.ssh/    

[grid@orcl2 .ssh]$ ls

id_dsa id_dsa.pub id_rsa id_rsa.pub 

 

Node1:

[grid@orcl1 .ssh]$ cat id_rsa.pub >> authorized_keys

[grid@orcl1 .ssh]$ cat id_dsa.pub >> authorized_keys

[grid@orcl1 .ssh]$ ssh 192.168.120.59 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys

[grid@orcl1 .ssh]$ ssh 192.168.120.59 cat /home/grid/.ssh/id_dsa.pub >> authorized_keys

[grid@orcl1 .ssh]$ scp authorized_keys 192.168.120.59:/home/grid/.ssh/authorized_keys

 

Test:

Node1 (run the same tests on Node2 as well):

[grid@orcl1 .ssh]$ ssh 192.168.120.58 date

[grid@orcl1 .ssh]$ ssh 192.168.120.59 date

[grid@orcl1 .ssh]$ ssh 192.168.121.58 date

[grid@orcl1 .ssh]$ ssh 192.168.121.59 date

[grid@orcl1 .ssh]$ ssh orcl1 date

[grid@orcl1 .ssh]$ ssh orcl2 date

[grid@orcl1 .ssh]$ ssh orcl1.demo.com date

[grid@orcl1 .ssh]$ ssh orcl2.demo.com date

[grid@orcl1 .ssh]$ ssh orcl1-priv date

[grid@orcl1 .ssh]$ ssh orcl2-priv date

[grid@orcl1 .ssh]$ ssh orcl1-priv.demo.com date

[grid@orcl1 .ssh]$ ssh orcl2-priv.demo.com date

[grid@orcl1 .ssh]$ ls

authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub known_hosts

Note: run the tests on both nodes; the goal is that none of the commands above prompts for yes/no.

If any of the tests still fails or prompts, the following workaround can be used:

[grid@orcl1 .ssh]$ pwd

/home/grid/.ssh

[grid@orcl1 .ssh]$ cat config

StrictHostKeyChecking no

[grid@orcl1 .ssh]$

2.12 Configure the storage server and export the shared disk needed for the ASM disk groups

Start the storage server and prepare to export its disk as shared storage.

Configure yum and install the packages needed on the export side. Note that the yum repository configuration must include the ClusterStorage channel in addition to Server.

[root@storage ~]# yum install scsi-target-utils -y

[root@storage ~]# vim /etc/tgt/targets.conf # edit the configuration file as follows:

24 #<target iqn.2008-09.com.example:server.target1>

25 # backing-store /dev/LVM/somedevice

26 #</target>

27 <target iqn.2015-06.demo.com:rac-disk>

28 backing-store /dev/sdb

29 write-cache off

30 initiator-address 192.168.120.58

31 initiator-address 192.168.120.59

32 </target>

Following the commented example on lines 24-26 above, add lines 27-32. The initiator-address entries form an access control list so that the exported storage can only be discovered by the listed client machines (important!). On a server that is not freshly installed the line numbers may differ; search for the target keyword to find the section.

[root@storage ~]# /etc/init.d/tgtd restart # restart the service

Stopping SCSI target daemon: [ OK ]

Starting SCSI target daemon: Starting target framework daemon

Check the exported storage:

[root@storage ~]# tgtadm --lld iscsi --mode target --op show

Target 1: iqn.2015-06.demo.com:rac-disk

System information:

Driver: iscsi

State: ready

I_T nexus information:

I_T nexus: 1

Initiator: iqn.1994-05.com.redhat:74ec56d54535

Connection: 0

IP Address: 192.168.120.58

I_T nexus: 2

Initiator: iqn.1994-05.com.redhat:f0797424c53b

Connection: 0

IP Address: 192.168.120.59

LUN information:

LUN: 0

Type: controller

SCSI ID: IET 00010000

SCSI SN: beaf10

Size: 0 MB, Block size: 1

Online: Yes

Removable media: No

Readonly: No

Backing store type: null

Backing store path: None

Backing store flags:

LUN: 1

Type: disk

SCSI ID: IET 00010001

SCSI SN: beaf11

Size: 3293837 MB, Block size: 512

Online: Yes

Removable media: No

Readonly: No

Backing store type: rdwr

Backing store path: /dev/sdb

Backing store flags:

Account information:

ACL information:

192.168.120.58

192.168.120.59

As shown above, LUN: 1 under Target 1 is the exported storage.

[root@storage ~]# chkconfig tgtd on # finally, do not forget to start tgtd at boot

2.13 Import the storage

Both Node1 and Node2 need this; Node1 is shown.

[root@orcl1 ~]# yum install iscsi-initiator-utils -y

[root@orcl1 ~]# rm -rf /var/lib/iscsi/*

[root@orcl1 ~]# /etc/init.d/iscsid start

Starting iSCSI daemon: [ OK ]

[ OK ]

Discover the storage

[root@orcl1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.120.67:3260

192.168.120.67:3260,1 iqn.2015-06.demo.com:rac-disk

Log in to (import) the storage

[root@orcl1 ~]# iscsiadm -m node -T iqn.2015-06.demo.com:rac-disk -p 192.168.120.67:3260 -l

Logging in to [iface: default, target: iqn.2015-06.demo.com:rac-disk, portal: 192.168.120.67,3260]

Login to [iface: default, target: iqn.2015-06.demo.com:rac-disk, portal: 192.168.120.67,3260]: successful

[root@node1 ~]# chkconfig iscsi on

[root@node1 ~]# chkconfig iscsid on

 

Check

[root@orcl1 ~]# fdisk -l

Disk /dev/sda: 598.3 GB, 598342631424 bytes

255 heads, 63 sectors/track, 72744 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sda1 * 1 25 200781 83 Linux

/dev/sda2 26 72744 584115367+ 8e Linux LVM

 

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

 

 

Disk /dev/sdb: 2193.9 GB, 2193922981888 bytes

255 heads, 63 sectors/track, 266729 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdb1 1 266730 2142502911+ ee EFI GPT

 

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

 

 

WARNING: The size of this disk is 3.3 TB (3293837262848 bytes).

DOS partition table format can not be used on drives for volumes

larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID

partition table format (GPT).

 

 

Disk /dev/sdc: 3293.8 GB, 3293837262848 bytes

255 heads, 63 sectors/track, 400452 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

Note: /dev/sdc is the newly imported disk. The warning above appears because on Linux a disk larger than 2 TB must be partitioned with parted. That is specific to this environment, so the output may differ from yours; in a virtual machine or in most production setups a single disk rarely exceeds 2 TB, and fdisk can be used directly.

Finally, do not forget to run the same import steps on the other node!

2.14 Bind the device with UDEV

Node1 and Node2

Generate the device path

[root@orcl1 ~]# udevinfo -q path -n /dev/sdc

/block/sdc

Compute the WWID of /block/sdc

[root@orcl1 ~]# scsi_id -u -g -s /block/sdc

1IET_00010001

Bind with udev

[root@orcl1 rules.d]# cd /etc/udev/rules.d/

[root@orcl1 rules.d]# cat 99-iscsi.rules # create this new file with the following content

KERNEL=="sd*", BUS=="scsi", ENV{ID_SERIAL}=="1IET_00010001", SYMLINK+="rac-disk%n", OWNER="grid", GROUP="asmadmin", MODE="0660"

 

Copy it to Node2

[root@orcl1 rules.d]# scp 99-iscsi.rules 192.168.120.59:/etc/udev/rules.d/

The authenticity of host '192.168.120.59 (192.168.120.59)' can't be established.

RSA key fingerprint is 03:b0:a1:86:2f:8a:e6:2e:ee:bd:f0:d9:f9:82:18:0c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.120.59' (RSA) to the list of known hosts.

root@192.168.120.59's password:

99-iscsi.rules 100% 128 0.1KB/s 00:00

 

Reload the rules and restart udev on Node1 and Node2

[root@orcl1 rules.d]# udevcontrol reload_rules

[root@orcl1 rules.d]# start_udev

Starting udev: [ OK ]

Node2

[root@orcl2 ~]# udevcontrol reload_rules

[root@orcl2 ~]# start_udev

Starting udev: [ OK ]

 

Verify (both Node1 and Node2 should see it)

[root@orcl1 udev]# ll /dev/rac-disk

lrwxrwxrwx 1 root root 3 Apr 23 00:53 /dev/rac-disk -> sdc

[root@orcl2 ~]# ll /dev/rac-disk

lrwxrwxrwx 1 root root 3 Apr 23 00:54 /dev/rac-disk -> sdc

Note: once the udev binding is in place, always use the udev alias for any later operation on this storage, such as partitioning, inspecting or binding.

2.15 Partitioning

This only needs to be done on Node1.

Check the current partition state

[root@orcl1 rules.d]# fdisk -l /dev/rac-disk

 

WARNING: GPT (GUID Partition Table) detected on '/dev/rac-disk'! The util fdisk doesn't support GPT. Use GNU Parted.

 

 

WARNING: The size of this disk is 3.3 TB (3293837262848 bytes).

DOS partition table format can not be used on drives for volumes

larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID

partition table format (GPT).

 

 

Disk /dev/rac-disk: 3293.8 GB, 3293837262848 bytes

255 heads, 63 sectors/track, 400452 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

Create the partitions

Normally fdisk /dev/rac-disk would be enough, but this case is more awkward: the disk is larger than 2 TB, so parted has to be used instead.

The full parted steps are omitted here; see my separate article on parted partitioning, or search for it. A rough sketch is given below.
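For reference, a minimal parted sketch for this kind of GPT-labelled disk might look like the following. The partition sizes are placeholders, not the values used in this environment: disk1 and disk2 are large (data and archive logs), disk3-5 are small (OCR and voting disks). Note that mklabel wipes any existing partition table.

parted -s /dev/rac-disk mklabel gpt
parted -s /dev/rac-disk mkpart primary 1MB 2000GB       # rac-disk1: data
parted -s /dev/rac-disk mkpart primary 2000GB 3200GB    # rac-disk2: archive logs
parted -s /dev/rac-disk mkpart primary 3200GB 3230GB    # rac-disk3: OCR / voting disk
parted -s /dev/rac-disk mkpart primary 3230GB 3260GB    # rac-disk4: OCR / voting disk
parted -s /dev/rac-disk mkpart primary 3260GB 3290GB    # rac-disk5: OCR / voting disk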

Check the result after partitioning

[root@orcl1 ~]# partprobe /dev/rac-disk # run on both nodes

[root@orcl1 ~]# ll /dev/rac-disk*

lrwxrwxrwx 1 root root 3 2015-06-17 13:29 /dev/rac-disk -> sdc

lrwxrwxrwx 1 root root 4 2015-06-17 13:29 /dev/rac-disk1 -> sdc1

lrwxrwxrwx 1 root root 4 2015-06-17 13:29 /dev/rac-disk2 -> sdc2

lrwxrwxrwx 1 root root 4 2015-06-17 13:29 /dev/rac-disk3 -> sdc3

lrwxrwxrwx 1 root root 4 2015-06-17 13:29 /dev/rac-disk4 -> sdc4

lrwxrwxrwx 1 root root 4 2015-06-17 13:29 /dev/rac-disk5 -> sdc5

Note: rac-disk3, rac-disk4 and rac-disk5 will hold the OCR and the voting disks;

rac-disk1 and rac-disk2 will hold the data and the archive logs respectively.

After partitioning, be sure to reboot and verify that the imported storage and the udev aliases come back automatically, for example with the checks below.
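After the reboot, a quick way to confirm on each node that the iSCSI LUN and the udev symlinks are back:

iscsiadm -m session     # the rac-disk target should appear as a logged-in session
ll /dev/rac-disk*       # the alias and all five partition symlinks should exist again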

2.16 Configure ASM (ASMLIB)

2.16.1 Install the ASMLIB packages

Both nodes need the packages; node 1 is shown.

Upload the ASMLIB packages and install them; make sure the oracleasm kernel package matches the running kernel version.

Check the kernel version

[root@orcl1 ~]# uname -a

Linux orcl1.demo.com 2.6.18-308.el5 #1 SMP Fri Jan 27 17:17:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

After uploading, confirm that the package versions match the kernel

[root@orcl1 asmlib]# ls -lrt

total 484

-rw-r--r-- 1 root root 137897 2015-05-05 13:14 oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm

-rw-r--r-- 1 root root 90225 2015-06-17 14:21 oracleasm-support-2.1.8-1.el5.x86_64.rpm

-rw-r--r-- 1 root root 14176 2015-06-17 14:21 oracleasmlib-2.0.4-1.el5.x86_64.rpm

Install the rpm packages

[root@orcl1 asmlib]# rpm -ivh oracleasm* --force --nodeps

warning: oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing... ########################################### [100%]

1:oracleasm-support ########################################### [ 33%]

2:oracleasm-2.6.18-308.el########################################### [ 67%]

3:oracleasmlib ########################################### [100%]

[root@orcl1 asmlib]#

Copy to node 2 and install there

[root@orcl1 asmlib]# scp asmlib-2.6.18-308.el5.zip orcl2:/usr/local/src/asmlib

root@orcl2's password:

asmlib-2.6.18-308.el5.zip 100% 227KB 227.5KB/s 00:00

[root@orcl2 src]# cd asmlib # note: now operating on node 2

[root@orcl2 asmlib]# ls

asmlib-2.6.18-308.el5.zip

[root@orcl2 asmlib]# unzip asmlib-2.6.18-308.el5.zip

Archive: asmlib-2.6.18-308.el5.zip

inflating: oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm

inflating: oracleasm-support-2.1.8-1.el5.x86_64.rpm

inflating: oracleasmlib-2.0.4-1.el5.x86_64.rpm

[root@orcl2 asmlib]# rpm -ivh oracleasm* --force --nodeps

warning: oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing... ########################################### [100%]

1:oracleasm-support ########################################### [ 33%]

2:oracleasm-2.6.18-308.el########################################### [ 67%]

3:oracleasmlib ########################################### [100%]

2.16.2 Initialize the ASMLIB configuration

Both nodes need this; node 1 is shown.

[root@orcl1 asmlib]# oracleasm configure -i

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM library

driver. The following questions will determine whether the driver is

loaded on boot and what permissions it will have. The current values

will be shown in brackets ('[]'). Hitting <ENTER> without typing an

answer will keep that current value. Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Check the configuration

[root@orcl1 asmlib]# oracleasm configure

ORACLEASM_ENABLED=true

ORACLEASM_UID=grid

ORACLEASM_GID=asmadmin

ORACLEASM_SCANBOOT=true

ORACLEASM_SCANORDER=""

ORACLEASM_SCANEXCLUDE=""

ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

Remember to repeat the same steps on node 2.

The oracleasm configure -i step performs the following tasks:

creates the /etc/sysconfig/oracleasm configuration file

creates the /dev/oracleasm mount point

mounts the ASMLib driver filesystem

2.16.3 Load the oracleasm kernel module

Do this on both nodes; node 1 is shown. Do not skip node 2, otherwise creating the ASM disks later will fail with errors.

[root@orcl1 asmlib]# oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Mounting ASMlib driver filesystem: /dev/oracleasm

2.16.4 Create the ASM disks

Node 1

[root@orcl1 asmlib]# oracleasm createdisk data_dg /dev/rac-disk1

Writing disk header: done

Instantiating disk: done

[root@orcl1 asmlib]# oracleasm createdisk arch_dg /dev/rac-disk2

Writing disk header: done

Instantiating disk: done

[root@orcl1 asmlib]# oracleasm createdisk ovddata1 /dev/rac-disk3

Writing disk header: done

Instantiating disk: done

[root@orcl1 asmlib]# oracleasm createdisk ovddata2 /dev/rac-disk4

Writing disk header: done

Instantiating disk: done

[root@orcl1 asmlib]# oracleasm createdisk ovddata3 /dev/rac-disk5

Writing disk header: done

Instantiating disk: done

[root@orcl1 asmlib]# oracleasm scandisks # scan for ASM disks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

[root@orcl1 asmlib]# oracleasm listdisks # list the ASM disks

ARCH_DG

DATA_DG

OVDDATA1

OVDDATA2

OVDDATA3

On node 2 only a scandisks is required; do not run createdisk again.

[root@orcl2 asmlib]# oracleasm scandisks # scan

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "DATA_DG"

Instantiating disk "ARCH_DG"

Instantiating disk "OVDDATA1"

Instantiating disk "OVDDATA2"

Instantiating disk "OVDDATA3"

[root@orcl2 asmlib]# oracleasm listdisks # list

ARCH_DG

DATA_DG

OVDDATA1

OVDDATA2

OVDDATA3

Note: oracleasm createdisk creates an ASM disk; data_dg and the other names are the chosen ASM disk names, and /dev/rac-disk* are the udev aliases of the corresponding devices.

Once the disks are created, oracleasm listdisks shows the same set of new disks on both nodes.

 

This completes the pre-installation configuration and preparation!