I recently started working with Oracle 11g R2 RAC and found that a great deal has changed, so before doing any hands-on experiments it seemed best to study its new features first.
1. Overview from the Official Documentation
Let's first look at what Oracle's official documentation says about the new RAC features.
Oracle Database 11g Release 2 (11.2.0.2) New Features in Oracle RAC
http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/whatsnew.htm#CHDJAGEE
This section describes the Oracle Database 11g release 2 (11.2) features for Oracle RAC administration and deployment.
Oracle RAC One Node
Oracle Real Application Clusters One Node (Oracle RAC One Node) provides enhanced high availability for single-instance databases, protecting them from both planned and unplanned downtime. Oracle RAC One Node provides the following:
(1)Always-on single-instance database services
(2)Better consolidation for database servers
(3)Enhanced server virtualization
(4)Lower cost development and test platform for full Oracle RAC
In addition, Oracle RAC One Node facilitates the consolidation of database storage, standardizes your database environment, and, when necessary, enables you to upgrade to a full, multinode Oracle RAC database without downtime or disruption.
Online database relocation is a tool you can use to relocate an Oracle RAC One Node database from one node to another while maintaining service availability.
This feature includes enhancements to the Server Control Utility (SRVCTL) for both Oracle RAC One Node and online database relocation.
This feature also includes enhancements to the Database Configuration Assistant (DBCA) to enable you to use the tool to add an Oracle RAC One Node database.
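For illustration, relocating an Oracle RAC One Node database with SRVCTL might look like this sketch (the database name rone and node name node2 are assumptions, not taken from the documentation above):
$ srvctl relocate database -d rone -n node2 -w 30   # move the RAC One Node database to node2, allowing 30 minutes for sessions to drain
$ srvctl status database -d rone                    # confirm where the single instance now runs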
Edition-Based Redefinition
You can specify an edition attribute for a database service using SRVCTL. When you specify an edition attribute for a service, all subsequent connections that specify the service use this edition as the initial session edition.
Specifying an edition as a service attribute can help to manage resource usage. For example, services associated with an edition can be placed on a separate instance in an Oracle RAC environment, and the Oracle Database Resource Manager can manage resources used by different editions by associating resource plans with the corresponding services.
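A sketch of associating an edition with a service via SRVCTL (the database orcl, service hr_svc, and edition hr_v2 are placeholders, and -t is assumed here to be the 11.2 switch for the edition attribute; check srvctl add service -h on your release):
$ srvctl add service -d orcl -s hr_svc -r orcl1,orcl2 -t hr_v2   # new connections through hr_svc start in edition hr_v2
$ srvctl config service -d orcl -s hr_svc                        # verify the service attributes, including the edition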
Enhancements to SRVCTL for Grid Infrastructure Management
Enhancements to SRVCTL simplify the management of various new Oracle grid infrastructure and Oracle RAC resources.
Oracle Database Quality of Service Management Server
The Oracle Database Quality of Service Management server allows system administrators to manage application service levels hosted in Oracle Database clusters by correlating accurate runtime performance and resource metrics and analyzing them with an expert system to produce recommended resource adjustments that meet policy-based performance objectives.
Oracle Database 11g Release 2 (11.2.0.1) New Features in Oracle RAC
This section describes the Oracle Database 11g release 2 (11.2.0.1) features for Oracle RAC administration and deployment.
Grid Plug and Play
Grid Plug and Play reduces per-node configuration data and the need for explicit add and delete nodes steps, where possible. This allows a system administrator to take a template system image and run it on a node to be added with no further configuration. This removes many manual operations, reduces the opportunity for errors, and encourages configurations that can be changed more easily. Removal of the per-node configuration makes the nodes easier to replace because it is not required that they contain individual states that must be managed.
Grid Plug and Play also introduces simplified instance addition. When your databases are backed with Oracle Managed Files (OMF) and Oracle Automatic Storage Management (Oracle ASM), recovery threads and undo tablespaces are automatically created for an instance that you add explicitly with the srvctl add instance command, or implicitly when a policy-managed database brings up a new instance.
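For example, explicitly adding an instance to an administrator-managed database is a single SRVCTL call (database, instance, and node names are placeholders):
$ srvctl add instance -d orcl -i orcl3 -n node3   # register a third instance of orcl on node3
$ srvctl start instance -d orcl -i orcl3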
All tools and utilities such as DBCA, the Oracle Net Configuration Assistant (NETCA), and SRVCTL have been updated to support Grid Plug and Play. Oracle Enterprise Manager, the graphical interface for managing Oracle RAC, provides management and monitoring for the Grid Plug and Play environment.
Grid Plug and Play reduces the cost of installing, configuring, and managing database nodes by making their per-node state disposable. Nodes can easily be replaced with regenerated state.
Policy-based cluster and capacity management
Oracle Clusterware allocates and reassigns capacity based on policies you define, enabling faster resource failover and dynamic capacity assignment using policy-based management.
Policy-based cluster and capacity management allows the efficient allocation of different types of applications in the cluster. Various applications can be hosted on a shared infrastructure, being isolated regarding their resource consumption by policies and, therefore, behave as if they were deployed in single-system environments. Policy-managed Oracle RAC databases use policy-based cluster management to provide the required resources for the workloads the database supports.
Role-separated management
Role-separated management for Oracle Clusterware allows certain administrative tasks to be delegated to different people, representing different roles in the company. It is based on the idea of a clusterware administrator, who can grant administrative tasks on a per resource basis. For example, if two databases are placed into the same cluster, the cluster administrator can manage both databases in the cluster. But, the cluster administrator can also decide to grant different administrative privileges to each DBA responsible for each one of those databases.
Role-separated management enables multiple applications and databases to share the same cluster and hardware resources, but ensures that different administration groups do not interfere with each other.
Improved Cluster Resource Modeling
Oracle Clusterware can manage different types of applications and processes. You can create dependencies among the applications and processes and manage them as a single entity.
Oracle Enterprise Manager-based Oracle Clusterware resource management
You can use Oracle Enterprise Manager to manage Oracle Clusterware resources. You can create and configure resources in Oracle Clusterware and also monitor and manage resources after they are deployed in the cluster.
Oracle Cluster Registry performance enhancements
Improvements in the way Oracle Clusterware accesses Oracle Cluster Registry (OCR) speed up relocation of services when a node fails. Oracle Clusterware now supports up to five copies of OCR for improved availability of the cluster and OCR can now be stored in Oracle ASM.
The tools to manage OCR have changed to support the new management options. Consistent storage management automation provides improved performance in Oracle Clusterware and Oracle RAC environments, and easier management of the cluster.
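For example, an additional OCR copy stored in ASM can be added and verified with the standard OCR tools (the disk group name +OCRVD is a placeholder; run as root):
# ocrconfig -add +OCRVD   # register another OCR location in ASM (up to five in total)
# ocrcheck                # report all configured OCR locations and their integrity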
SRVCTL support for single-instance database
Server Control Utility (SRVCTL) commands have been enhanced to manage the configuration in a standalone server using Oracle Restart. The new SRVCTL functionality enables you to register a single-instance database that can be managed by Oracle Clusterware. Once registered, Oracle Clusterware can start, stop, monitor, and restart the database instance.
The new SRVCTL functionality simplifies management of Oracle Database through a consistent interface that can be used from the console or scripted. An improved management interface makes it easy to provide higher availability for single-instance databases that run on a server that is part of a cluster.
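As a sketch, registering and controlling a single-instance database under Oracle Restart uses the same SRVCTL verbs as Oracle RAC (the database name and Oracle home path are placeholders):
$ srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1   # register the database with Oracle Restart
$ srvctl start database -d orcl
$ srvctl status database -d orcl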
Enhanced Cluster Verification Utility
New Cluster Verification Utility (CVU) functionality checks certain storage types and configurations. Also, more consideration is given to user-specific settings.
In addition to command-line commands, these checks are done through the Oracle Universal Installer, DBCA, and Oracle Enterprise Manager. These enhancements facilitate implementation and configuration of cluster environments and provide assistance in diagnosing problems in a cluster environment, improving configuration and installation.
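A typical pre-installation check with the enhanced CVU, including its fixup support, might look like this sketch (node names are placeholders):
$ cluvfy stage -pre crsinst -n node1,node2 -fixup -verbose   # verify prerequisites and generate fixup scripts where possible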
Oracle Enterprise Manager support for Grid Plug and Play
You can use Oracle Enterprise Manager:
(1)To support the Grid Plug and Play environment
(2)To administer dynamic configuration use
(3)To manage Grid Plug and Play profiles and targets, such as hosts, clusters, and Oracle RAC databases and Oracle RAC database instances
Additionally, Oracle Enterprise Manager supports other Oracle RAC administration tasks, including:
(1)Monitoring
(2)Startup
(3)Shutdown
(4)Backup and recovery
(5)Tablespace management
(6)Node addition
Oracle Enterprise Manager provisioning for Oracle Clusterware and Oracle RAC
The Oracle Enterprise Manager provisioning framework has been updated to reflect the changes to the installation and configuration of Oracle Clusterware and Oracle RAC. You can achieve easier implementation and management of a clustered database environment using the Oracle Enterprise Manager provisioning framework.
Zero downtime for patching Oracle RAC
Patching Oracle Clusterware and Oracle RAC can be completed without taking the entire cluster down. This also allows for out-of-place upgrades to the cluster software and Oracle Database, reducing the planned maintenance downtime required in an Oracle RAC environment.
Integrated support for application failover in an Oracle Data Guard configuration
Applications connected to a primary database transparently failover to a new primary database when Oracle Data Guard changes roles. Clients integrated with Fast Application Notification (FAN) can achieve fast failover between primary and standby databases, in addition to fast failover within the cluster. Services have an attribute with which you can associate the service with a database role, such as PHYSICAL_STANDBY, so that the service is only active when the database is mounted in the associated role.
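For example, a role-based service that is only started while the database runs in the primary role could be created roughly as follows (all names and the chosen failover options are placeholders):
$ srvctl add service -d orcl -s oltp -r orcl1,orcl2 -l PRIMARY -q TRUE -e SELECT -m BASIC -w 10 -z 150
$ srvctl start service -d orcl -s oltp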
Oracle ASM Dynamic Volume Manager
The Oracle ASM Dynamic Volume Manager is a kernel-loadable device driver that provides a standard device driver interface to clients, such as the Oracle Automatic Storage Management Cluster File System (Oracle ACFS). Oracle ASM Dynamic Volume Manager is the primary I/O interface for Oracle ACFS to perform I/O and build a file system using Oracle Automatic Storage Management (Oracle ASM) as a volume manager. Oracle ASM Dynamic Volume Manager is loaded upon Oracle ASM startup, is cluster aware, and communicates with Oracle ASM for extent map information, extent rebalancing, and I/O failures.
Oracle ASM Dynamic Volume Manager provides a standard I/O interface allowing general-purpose file systems to leverage the full functionality of Oracle ASM as a volume manager. Files not directly supported by Oracle ASM, such as Oracle binaries, can now reside on ACFS on Oracle ASM volumes. This eliminates the need for third-party file systems or volume managers to host general-purpose files.
Oracle Enterprise Manager support for Oracle Automatic Storage Management Cluster File System
Oracle Enterprise Manager provides a comprehensive management solution that extends Oracle ASM technology to support all customer application data files, both database and non-database, and in both single-host and cluster configurations. It also enhances existing Oracle Enterprise Manager support for Oracle ASM, and adds features to support the Oracle ASM Dynamic Volume Manager (ADVM) and Oracle ASM Cluster File System (ACFS) technology.
Oracle Enterprise Manager provides a graphical user interface that makes it easier to manage the environment, whether it is a standalone server or a cluster deployment of Oracle ASM. The centralized console provides a consistent interface for managing volumes, database files, file systems, and the Oracle Database.
Oracle Automatic Storage Management Cluster File System
The Oracle Automatic Storage Management Cluster File System (Oracle ACFS) provides a robust, modern, general purpose file system for files beyond the Oracle database files. Oracle ACFS also provides support for files such as Oracle binaries, report files, trace files, alert logs, and other application data files. With the addition of Oracle ACFS, Oracle ASM becomes a complete storage management solution for both Oracle database and non-database files.
Additionally, Oracle ACFS
(1)Supports large files with 64-bit file and file system data structure sizes leading to exabyte-capable file and file system capacities
(2)Uses extent-based storage allocation for improved performance
(3)Uses a log-based metadata transaction engine for file system integrity and fast recovery
(4)Can be exported to remote clients through industry standard protocols such as NFS and CIFS
Oracle ACFS complements and leverages Oracle ASM and provides a general purpose journaling file system for storing and managing non-Oracle database files. This eliminates the need for third-party cluster file system solutions, while streamlining, automating, and simplifying all file type management in both single node and Oracle RAC and Grid computing environments.
Oracle ACFS supports dynamic file system expansion and contraction without any downtime and is also highly available, leveraging the Oracle ASM mirroring and striping features in addition to hardware RAID functionality.
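Resizing a mounted ACFS file system online is a single command; a sketch (the mount point is a placeholder):
$ /sbin/acfsutil size +10G /u01/acfs_mount   # grow the mounted ACFS file system by 10 GB without unmounting it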
Automatic Storage Management file access control
This feature implements access control on Oracle ASM files on UNIX platforms to isolate Oracle ASM and different database instances from each other and prevent unauthorized access. The feature includes SQL statements to grant, modify, and deny file permissions.
This feature enables multiple database instances to store their Oracle ASM files in the same disk group and enables consolidation of multiple databases, securely, to prevent database instances from accessing or overwriting files belonging to other database instances.
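A sketch of the SQL side of ASM file access control, run in the ASM instance (disk group, operating system user, and file names are placeholder assumptions):
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'true';
SQL> ALTER DISKGROUP data ADD USER 'oracle1';
SQL> ALTER DISKGROUP data SET PERMISSION OWNER=read write, GROUP=read only, OTHER=none FOR FILE '+data/orcl/datafile/users.259.712345678';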
Universal Connection Pool
Universal Connection Pool (UCP) is a Java connection pool that replaces the JDBC Implicit Connection Cache, which was deprecated in Oracle Database 11g (11.1.0.7). UCP is integrated with Oracle RAC to provide the following benefits:
(1)A single UCP can be leveraged by any Oracle component or user.
(2)Eliminates redundant connection pools from several Oracle Components, such as AOL/J, ADF Business Components, and TopLink.
(3)Provides consistent connection pool behavior for an Oracle component or product. For example, the connection pool sizes can be configured to provide consistent connection management behavior for an application.
(4)Provides JMX interfaces for the UCP Manager, which delivers a consistent management interface to manage the connection pool.
(5)UCP adapters can provide standards compliance for a specific connection type being pooled.
(6)Supports connection pooling for Oracle and non-Oracle connections.
(7)Supports pooling for any type of connections, including JDBC or JCA connections.
Expose high availability events through a Java API
You can access fast application notification (FAN) events with a simplified Java API if you are not using the Oracle connection pool features.
SRVCTL enhancements to support Grid Plug and Play
This feature includes enhancements to the server control utility (SRVCTL) for the Grid Plug and Play feature.
2. Material from Around the Web:
2.1 Three Important Changes in 11gR2 RAC Installation and Configuration
2.1.1. ASM and CRS are now integrated: they are configured together when the GRID is installed. On a newly installed 11g R2 RAC, the OCR and voting disk must be placed on ASM or NFS; raw and block devices are no longer supported. If you upgrade from 11g R1 or 10g to 11g R2, an existing OCR and voting disk on raw or block devices continue to be supported.
2.1.2. OUI and DBCA no longer support raw devices when creating a database; if you want raw devices, you have to create the database manually.
2.1.3. The client-side tnsnames.ora uses the SCAN IP instead of listing the VIPs directly, which simplifies client configuration; after nodes are added or removed, the tnsnames.ora file no longer needs to be adjusted. A sample entry is sketched below.
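For illustration, a minimal client tnsnames.ora entry that connects through the SCAN might look like the following sketch (the SCAN name scan-ip.tianlesoftware.com and port 1521 reuse the examples later in this article; the alias ORCL and service name orcl are placeholder assumptions):
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-ip.tianlesoftware.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )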
2.2 Oracle 11gR2 New Features
Oracle 11gR2 integrates Automatic Storage Management (ASM) and Oracle Clusterware into the Oracle Grid Infrastructure. Oracle ASM and Oracle Database 11gR2 offer a storage solution that is considerably enhanced over earlier releases: the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the voting files (VF, also called voting disks), can now be stored in ASM. This makes ASM a unified storage solution that can hold all clusterware and database data without a third-party volume manager or cluster file system.
Oracle 11gR2 introduces SCAN (Single Client Access Name), a single client access name that provides a convenient connection interface for clients. Before 11gR2, clients connected to the database through the VIPs: if the cluster had four nodes, the client's tnsnames.ora contained a connect descriptor listing the four host VIPs, and whenever a node was added to the cluster, tnsnames.ora had to be updated on every client. SCAN simplifies this: a client only needs to know this one name to connect. Each SCAN VIP has a corresponding SCAN listener, every service in the cluster registers with every SCAN listener, and a SCAN listener accepts the client request and hands it off to one of the local listeners, which then services the client.
The GRID installation process is also much simpler. Kernel parameters only need to meet the minimum values for installation, and after the installation check you simply run fixup.sh; SSH user equivalence can be set up automatically; and, above all, OCFS and its complicated configuration are no longer needed because ASM is used for storage directly. On HP-UX 11.31 no extra cluster software (such as Serviceguard Extension for RAC) is required for the installation.
2.2.1 Clustering Everywhere
In earlier releases, Oracle Clusterware had to be installed in its own separate ORACLE_HOME and could only be used in a RAC environment. Oracle 11g R2 turns this completely around: it supports installing the Oracle Grid Infrastructure in a single, separate ORACLE_HOME that contains both Oracle Clusterware and Oracle Automatic Storage Management (ASM). After installing the Grid Infrastructure through the updated Oracle Universal Installer, you get a whole new matrix of features and services, such as:
Single-instance RAC (Oracle Restart): Oracle 11g R2 extends Oracle Clusterware to provide high availability for any single-instance database, essentially turning it into a single-instance RAC database. The Oracle Restart feature lets the Grid Infrastructure's high-availability services control which listeners, ASM instances, and databases are started when the server reboots, completely replacing the dbstart script DBAs used in the past. Likewise, when a single database instance crashes or otherwise terminates abnormally, Oracle Restart automatically detects the failed instance and restarts it, just as in a real RAC environment.
SRVCTL upgrades: If you have administered older RAC environments, you already know SRVCTL, the RAC maintenance tool. In 11g R2 it has been extended to manage single-instance (Oracle Restart) databases as well as listeners and ASM instances.
Cluster Time Synchronization Service: Oracle 11g R2 now requires time synchronization across all RAC nodes. If you have ever had a node evicted from a RAC cluster, you know how hard that is to troubleshoot, especially when the clocks of two servers differ and the timestamps in the log files are out of sync. Earlier releases relied on the operating system's Network Time Protocol (NTP) to synchronize all nodes, which requires access to a standard time server on the network. As an alternative to NTP, Oracle 11g R2 provides a new Cluster Time Synchronization Service (CTSS) that keeps the time consistent on every node of the cluster.
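As a quick check, Clusterware can report whether CTSS is merely observing (NTP still in charge) or actively synchronizing the nodes; a sketch:
$ crsctl check ctss     # reports whether CTSS is in Observer mode (NTP in use) or Active mode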
Grid Plug and Play: In earlier releases, the most complicated part of configuring RAC was working out and setting up the public, private, and virtual IP addresses needed by every node. To simplify RAC installation, Oracle 11g R2 offers a new Grid Naming Service (GNS) that works with the DNS server to handle IP address assignment for each grid component; this new feature is especially useful when the cluster environment hosts multiple databases.
Clean removal of RAC components: If you have ever tried to remove every trace of RAC from multiple nodes, you will love this feature. In Oracle 11g R2 all of the installation and configuration assistants, in particular the Oracle Universal Installer (OUI), the Database Configuration Assistant (DBCA), and the Network Configuration Assistant (NETCA), have been enhanced so that when RAC components need to be deinstalled, they are removed completely and cleanly.
2.2.2 ASM Joins the Cluster
ASM is now installed in the same Oracle home as the Oracle 11g R2 Clusterware, which does away with the previously recommended separate (redundant) Oracle home; ASM has also been split out of DBCA and now has its own Automatic Storage Management Configuration Assistant (ASMCA).
Intelligent Data Placement: In earlier releases, configuring ASM disks could require a storage administrator to configure the disk I/O subsystem. Oracle 11g R2 lets ASM allocation units benefit directly from the faster outer cylinders of the disks, so data files, redo log files, and control files can be placed on the outer edge of the disks for better performance.
EM Support Workbench extensions: This release extends the Automatic Diagnostic Repository (ADR) that Oracle 11g R1 brought into the Enterprise Manager console, adding support for ASM diagnostics and the ability to package all diagnostic information and send it straight to Oracle Support for faster resolution of ASM performance problems.
ASMCMD enhancements: The Automatic Storage Management command-line utility (ASMCMD) has also gained quite a few enhancements (examples follow the list below), including:
1) starting and stopping ASM instances;
2) backing up, restoring, and maintaining the ASM instance's server parameter file (spfile);
3) using iostat to monitor the performance of ASM disk groups;
4) maintaining disk volumes, directories, and file storage in the new ASM Cluster File System (ACFS), which is my next topic.
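A few representative ASMCMD commands in 11gR2, as a sketch only (the disk group name DATA, the spfile path, and the backup destination are placeholder assumptions):
$ asmcmd startup                           # start the local ASM instance
$ asmcmd shutdown --immediate              # stop it
$ asmcmd spbackup +DATA/asm/asmparameterfile/registry.253.712345678 /tmp/asmspfile.bak   # back up the ASM spfile
$ asmcmd iostat -G DATA                    # I/O statistics for disk group DATA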
2.2.3 ACFS – A Robust Cluster File System
Oracle previously shipped its own cluster file system (OCFS), followed by the enhanced OCFS2, which let Oracle RAC instances read and write database files, redo log files, and control files on shared storage.
OCFS also allowed a RAC database's Oracle Cluster Registry (OCR) files and voting disks to live on the cluster file system. In Oracle 10g R2 that requirement was lifted and the OCR files and voting disks could be stored on raw or block devices; if you have ever lost every copy of these files on raw devices, you know how tedious they are to recover. In Oracle 11g R2, storing these files on raw devices is therefore no longer supported.
To improve the survivability of these critical files, Oracle 11g R2 formally introduces a new cluster file system, the ASM Cluster File System (ACFS). In a RAC environment the new storage model protects the OCR files and voting disks much better: up to five copies of the OCR can now be kept, whereas the earlier cluster file system allowed only two, one primary OCR and one mirror. ACFS itself is not meant for the RAC database files; beyond those, though, almost every operating-system- and application-related file can benefit from ACFS's security and file-sharing features.
Dynamic Volume Manager: Oracle 11g R2 provides a new ASM Dynamic Volume Manager (ADVM) to configure and maintain the storage that underlies ACFS file systems. With ADVM you can build an ADVM volume device inside an ASM disk group, manage the files stored on that volume device, and resize the volume device on demand. Most importantly, because ADVM is built on the ASM architecture, the files stored on these volumes are well protected against unexpected loss, since ASM provides RAID-like striping and mirroring.
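A sketch of creating an ADVM volume and an ACFS file system on top of it (the disk group DATA, volume name vol1, size, mount point, and the /dev/asm/vol1-123 device name are placeholder assumptions; the actual device name is generated by ADVM):
$ asmcmd volcreate -G DATA -s 10G vol1           # create a 10 GB ADVM volume in disk group DATA
$ asmcmd volinfo -G DATA vol1                    # note the volume device name it reports
# /sbin/mkfs -t acfs /dev/asm/vol1-123           # format the volume as ACFS (run as root)
# /sbin/mount -t acfs /dev/asm/vol1-123 /u01/acfs_mount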
File access control: Read, write, and execute permissions on ACFS directories and files can be granted either with traditional Windows-style access control lists (ACLs) or with the Unix/Linux user/group/other permission model, and ACFS directory and file security can be managed through the graphical Enterprise Manager console or the ASMCMD command-line utility.
File system snapshots (FSS): Oracle 11g R2 can take snapshots of an ACFS file system. A snapshot is a read-only copy of the selected ACFS file system, and up to 63 separate snapshots are kept per ACFS file system. This is very handy when a file on ACFS is accidentally updated, deleted, or otherwise damaged: with the 11g R2 Enterprise Manager console or the ACFS acfsutil command-line tool you can find the appropriate version of the file and restore it.
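For example, a snapshot can be created and inspected roughly as follows (the snapshot name and mount point are placeholders):
$ /sbin/acfsutil snap create before_upgrade /u01/acfs_mount    # create a read-only snapshot
$ /sbin/acfsutil snap info /u01/acfs_mount                     # list existing snapshots
$ /sbin/acfsutil snap delete before_upgrade /u01/acfs_mount    # remove it when no longer needed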
2.2.4 Improved Software Installation and Patching
One of the most stressful jobs a DBA has is patching the Oracle database: if a patch might introduce behavior harmful to the database, I have to spend a great deal of time and effort identifying and reviewing it.
Cluster Verification Utility integration: The Cluster Verification Utility (CVU), introduced in Oracle 10g, is now fully integrated into the Oracle Universal Installer (OUI) and the other configuration assistants (such as DBCA and DBUA).
Zero-downtime cluster patching: When patching Oracle Clusterware, Oracle 11g R2 applies the patch as an out-of-place upgrade. That means there are two Oracle homes, one of which holds the patched software, but only one of them is active at a time. In Oracle 11g R2 you no longer have to shut down the whole cluster for an upgrade, which makes genuinely zero-downtime patching possible.
2.2.5 DBMS_SCHEDULER Upgrades
The DBMS_SCHEDULER package, which DBAs use all the time to schedule jobs, has been thoroughly updated.
File watcher: Earlier releases could not detect most of the events that trigger batch processing, such as a file arriving in a directory. In Oracle 11g R2 the new file watcher fixes this: as soon as the expected file arrives in the directory, DBMS_SCHEDULER detects it and records the arrival in the new object type SCHEDULER_FILEWATCHER_RESULT. The watcher is defined with the new CREATE_FILE_WATCHER procedure and signals DBMS_SCHEDULER to trigger the job.
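A minimal sketch of defining a file watcher (the credential name, OS account, directory, and file pattern below are placeholder assumptions; an event-based job would then reference the watcher through its queue_spec/event_condition to consume the SCHEDULER_FILEWATCHER_RESULT payload):
BEGIN
  -- OS credential the file watcher uses to look into the directory
  DBMS_SCHEDULER.CREATE_CREDENTIAL(
    credential_name => 'watch_cred',
    username        => 'oracle',
    password        => 'secret');
  -- watch for incoming *.dat files in /u01/incoming
  DBMS_SCHEDULER.CREATE_FILE_WATCHER(
    file_watcher_name => 'incoming_dat_fw',
    directory_path    => '/u01/incoming',
    file_name         => '*.dat',
    credential_name   => 'watch_cred',
    enabled           => TRUE);
END;
/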
Built-in email notification: Whenever a DBMS_SCHEDULER job starts, fails, or completes, its status can immediately be sent out by email. This was possible in earlier releases too, but only by calling the UTL_MAIL or UTL_SMTP packages; the capability is now built into DBMS_SCHEDULER itself.
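A sketch of attaching an email notification to an existing job (the SMTP server, recipient, and the job name my_job are placeholder assumptions):
BEGIN
  -- point the scheduler at an SMTP server (done once per database)
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('email_server', 'smtp.example.com:25');
  -- mail out status changes of my_job
  DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
    job_name   => 'my_job',
    recipients => 'dba@example.com',
    events     => 'JOB_STARTED, JOB_SUCCEEDED, JOB_FAILED');
END;
/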
Remote jobs: DBMS_SCHEDULER now lets a DBA create and schedule jobs on remote databases. I can finally have the DBMS_SCHEDULER of one database run a stored procedure on another database, which means scheduled tasks can now be created and maintained centrally from a single database.
Multiple job destinations: Finally, a DBMS_SCHEDULER job can now be scheduled on several database instances at once. In a RAC environment this is extremely useful, because a long-running task can be split into smaller pieces that run on different database instances.
2.3 SCAN(Single Client Access Name)
Before Oracle 11gR2, if the database used a RAC architecture, the client-side tnsnames had to contain the connection information of several nodes to obtain RAC features such as load balancing and failover. Whenever nodes were added to or removed from the RAC cluster, the tns entries on the client machines therefore had to be updated promptly to avoid problems.
In 11gR2 the SCAN (Single Client Access Name) feature was introduced to simplify this configuration. Its benefit is that a virtual service layer, the SCAN IPs and their SCAN IP listeners, is inserted between the clients and the database: the client only needs the SCAN in its tns configuration and connects to the backend cluster database through the SCAN IP listener. Adding or removing cluster nodes therefore has no impact on the clients.
2.3.1 SCAN Architecture and Configuration
The RAC installation changed dramatically in 11gR2. In the 10g and 11gR1 days you installed CRS first and then the database software; in 11gR2, CRS and ASM are packaged together as the GRID (Grid Infrastructure), which must be installed before the database software.
In 11gR2 the SCAN IP appears as an additional, new IP address, while the VIPs of the existing CRS design remain. In the 11gR2 RAC architecture the SCAN IP does not stand alone but works together with the existing VIPs. They cooperate as follows:
In essence, the SCAN IP is a new connection layer that Oracle inserts between the client and the database. When a client connects, it reaches a SCAN IP listener; when the SCAN IP listener receives the connection request, it applies the load-balancing algorithm (it picks the least loaded instance) and forwards the request to the VIP listener of the corresponding instance, which completes the connection between client and server. Simplified, the flow is:
Client -> scan listener -> local listener -> local instance
SCAN is configured as part of the GRID installation. The SCAN can be defined in one of two ways:
1. Define the name in DNS.
2. Use DHCP together with the Grid Naming Service (GNS) supplied by Oracle.
If DNS is used, three SCAN IP addresses must be defined in the network, all resolving to the same name; the three addresses must be in the same subnet, and the name should not be too long. Also, the SCAN IPs are managed by Oracle Clusterware, so they must not be configured in any host-level cluster software (such as IBM HACMP or HP Serviceguard); like the 10g VIP, the SCAN IP cannot be pinged before the GRID installation.
Example:
scan-ip.tianlesoftware.com IN A 192.168.1.111
IN A 192.168.1.112
IN A 192.168.1.113
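Once the DNS entries are in place, the round-robin resolution can be verified; a sketch (output order will rotate):
$ nslookup scan-ip.tianlesoftware.com     # should return 192.168.1.111, .112 and .113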
If GNS is used instead, a DHCP service is required: during the cluster configuration, three IP addresses are requested automatically from the DHCP server to be used as SCAN IPs.
Besides the SCAN IPs, the SCAN IP listeners are also created during the cluster configuration. Each SCAN IP has its own SCAN IP listener, and to improve availability the three SCAN IPs and their SCAN IP listeners are spread across the cluster nodes; if a node running one of the SCAN IPs fails, the remaining SCAN IP nodes take it over automatically.
Note that an 11gR2 client only needs the name in its tns configuration to get failover. A client older than 11gR2 cannot resolve the name into the three SCAN IP addresses, so to get failover it must list all three SCAN IP addresses in its tns configuration (an example for such clients follows the srvctl output below). This is why Oracle strongly recommends using 11gR2 clients with an 11gR2 database.
For example:
$srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
$srvctl config scan
SCAN name: scan-ip, Network: 1/192.168.1.0/255.255.255.0/
SCAN VIP name: scan1, IP: /scan-ip.tianlesoftware.com/192.168.1.111
SCAN VIP name: scan2, IP: /scan-ip.tianlesoftware.com/192.168.1.112
SCAN VIP name: scan3, IP: /scan-ip.tianlesoftware.com/192.168.1.113
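For a pre-11gR2 client, a tnsnames.ora entry that lists all three SCAN addresses explicitly might look like this sketch (the alias ORCL and service name orcl are placeholder assumptions; the addresses come from the output above):
ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.111)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.112)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.113)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )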
2.3.2 SCAN IP Configuration and Usage
The SCAN IPs are independent of the database instances: a 12-node RAC, for example, still has only 3 SCAN IPs, and they are distributed across the instances more or less arbitrarily. So how do the SCAN IP listeners learn about the databases on every node, and how do you configure which instance registers with which SCAN IP listener?
The SCAN IPs and SCAN listeners are independent of the individual RAC nodes, whereas each node's VIP and VIP listener are tied to an instance: every node's VIP listener serves only the instance on its own node.
Therefore the remote_listener parameter has to be set in the database, and its value deserves some care: there are three SCAN IPs and three SCAN listeners, but they all correspond to one name. The database must therefore use the easy connect naming method, which means NAMES.DIRECTORY_PATH=(tnsnames,ezconnect) must be present in the sqlnet.ora configuration file.
The way remote_listener is set also changes. In earlier releases we usually put the remote listener's address and port into tnsnames.ora, but for the SCAN listener you must not do that: the parameter has to follow the standard REMOTE_LISTENER=SCAN:PORT format,
for example REMOTE_LISTENER=scan-ip.tianlesoftware.com:1521, with no additional entry in tnsnames.ora (see the sketch below).
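Putting the two pieces together, a minimal sketch (the SCAN name and port reuse the example above; the ALTER SYSTEM is run against the RAC database):
# sqlnet.ora in the database home
NAMES.DIRECTORY_PATH=(tnsnames, ezconnect)
SQL> ALTER SYSTEM SET remote_listener='scan-ip.tianlesoftware.com:1521' SCOPE=BOTH SID='*';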
With these settings in place, the PMON process on every RAC node registers with each SCAN listener by broadcast, while the CRS background process ONS collects the load of each node and reports it to the SCAN listeners, so that the SCAN listeners can route new connections to the node that currently has the lowest load.
2.4 Three Differences Between Installing 10gR2 RAC and 11gR2 RAC
2.4.1 NETCA configuration differs
(1) With Oracle 10gR2 RAC, you had to run netca manually after installing Oracle Clusterware and the Oracle Database software before the listeners were configured.
(2) With Oracle 11gR2 RAC, netca runs automatically after the root.sh script completes during the Oracle Grid installation; the listeners are then configured, and there is an additional listener named listener_scan1.
2.4.2 DBCA configuration differs
(1) Oracle 10gR2 RAC DBCA supported three ways of storing the data files when creating a database: ASM, a shared (cluster) file system, or raw devices.
(2) Oracle 11gR2 RAC DBCA supports only ASM and a shared (cluster) file system; creating a database on raw devices is no longer supported.
2.4.3 TAF configuration differs
(1) In Oracle 10gR2 RAC, TAF could be configured with the DBCA graphical tool.
(2) In Oracle 11gR2 RAC, TAF can only be configured through the EM graphical tool.
Reposted from: http://blog.csdn.net/tianlesoftware/article/details/5982972