An OPS instance is said to have affinity for a device if the device is directly accessible from the node on which the instance is running. Figure 13.2 shows two instances on two different nodes: disk A is directly connected to node 1, and disk B is directly connected to node 2. A high-speed interconnect makes this configuration a shared disk architecture in which both disks are accessible from both nodes.
Since disk A is local to node 1, the instance running on that node — instance 1 — is said to have affinity for disk A. Similarly, instance 2 has affinity for disk B. I/O operations are faster and more efficient when an instance accesses disks for which it has affinity.
Extending the device-to-instance affinity concept to datafiles, an instance has affinity for a file if it has affinity for the disk on which that file is stored. In Figure 13.2, instance 1 has affinity for files stored on disk A, and instance 2 has affinity for files stored on disk B.
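This derivation — file affinity follows from disk affinity — can be sketched as a simple lookup. All of the names below (the dictionaries and the datafile names) are invented for illustration and do not correspond to any actual Oracle structure:

```python
# Hypothetical sketch of deriving file-to-instance affinity from
# disk-to-instance affinity, mirroring the Figure 13.2 configuration.

# Which instance has affinity for each disk (the disk local to its node).
disk_affinity = {"disk_A": "instance_1", "disk_B": "instance_2"}

# Which disk each datafile is stored on (invented file names).
file_location = {"users01.dbf": "disk_A", "sales01.dbf": "disk_B"}

def file_affinity(datafile):
    """An instance has affinity for a file if it has affinity for
    the disk on which that file is stored."""
    return disk_affinity[file_location[datafile]]

print(file_affinity("users01.dbf"))  # instance_1
print(file_affinity("sales01.dbf"))  # instance_2
```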
When allocating parallel execution tasks to parallel slave processes on multiple nodes, Oracle takes disk affinity into account. However, this affinity is transparent to the application or user invoking the task.
When disk affinity is not used, Oracle balances the allocation of parallel slave processes evenly across all available instances. When disk affinity is used, Oracle attempts to allocate parallel slave processes to the instances that are “nearest” to the data required by those processes. Disk affinity can reduce inter-node communication and improve the performance of parallel operations.
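The two allocation policies described above can be sketched as follows. This is a conceptual illustration only — the function and data structures are invented for this sketch and are not Oracle's actual allocation algorithm:

```python
from itertools import cycle

def allocate_slaves(tasks, instances, affinity=None):
    """Assign each parallel task to an instance.

    Without affinity information, spread tasks evenly across all
    available instances (round-robin). With affinity information
    (a task -> preferred-instance map), send each task to the
    instance "nearest" the data it needs.
    """
    if affinity is None:
        rr = cycle(instances)
        return {task: next(rr) for task in tasks}
    return {task: affinity[task] for task in tasks}

instances = ["instance_1", "instance_2"]
tasks = ["scan_part_1", "scan_part_2"]

# Even spread when no affinity information is available.
print(allocate_slaves(tasks, instances))

# Affinity-aware: scan_part_2 reads data local to instance_1's node.
print(allocate_slaves(tasks, instances,
                      affinity={"scan_part_1": "instance_2",
                                "scan_part_2": "instance_1"}))
```

With affinity in play, each slave reads mostly local disks, which is what reduces inter-node traffic over the interconnect.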
Even when disk affinity is supported, it is not used for all parallel operations. Table 13.1 identifies operations in which Oracle will make use of disk affinity information and those in which it won’t.