Oracle RAC: How Does It Work?
How does it work? Oracle RAC is a software component that allows multiple database engine instances to run independently while sharing the same storage. Oracle RAC, by itself, does not guarantee transparent failover; it only guarantees database availability. Failover is implemented by the Oracle OCI client, with some restrictions.
Before you can create an Oracle RAC database, you must first deploy Oracle Grid Infrastructure on all nodes that are meant to be part of the cluster.
Manually cloning Oracle RAC is not a replacement for cloning using Oracle Enterprise Manager as part of the Provisioning Pack. When you clone Oracle RAC using Oracle Enterprise Manager, the provisioning process includes a series of steps in which details about the home you want to capture, the location to which you want to deploy, and various other parameters are collected. The cloning process assumes that you have successfully installed an Oracle Clusterware home and an Oracle home with Oracle RAC on at least one node.
In addition, all root scripts must have run successfully on the node from which you are extending your cluster database. Oracle RAC One Node, a single-instance deployment option of Oracle RAC, adds to the flexibility that Oracle offers for database consolidation while reducing management overhead by providing a standard deployment for Oracle databases in the enterprise. With Oracle RAC One Node, there is no limit to server scalability and, if applications grow to require more resources than a single node can supply, you can upgrade your applications online to Oracle RAC.
If the node that is running Oracle RAC One Node becomes overloaded, then you can relocate the instance to another node in the cluster. Alternatively, you can limit the CPU consumption of individual database instances per server within the cluster using Resource Manager Instance Caging and dynamically change this limit, if necessary, depending on the demand scenario.
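Instance Caging itself is configured with the CPU_COUNT and RESOURCE_MANAGER_PLAN initialization parameters. The following is a minimal sketch; the instance SID, plan name, and CPU limit are illustrative, not values from this article:

    -- Cap the CPU usage of one instance (instance SID and limit are examples)
    ALTER SYSTEM SET cpu_count = 2 SCOPE=BOTH SID='orcl1';

    -- Instance Caging only takes effect while a Resource Manager plan is active
    ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH SID='orcl1';

Because CPU_COUNT is dynamic, the limit can be raised or lowered while the instance is running, which is what makes it suitable for reacting to changing demand.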
Relocating an Oracle RAC One Node instance is mostly transparent to the client, depending on the client connection. Oracle recommends using either Application Continuity with Oracle Fast Application Notification, or Transparent Application Failover, to minimize the impact of a relocation on the client. For administrator-managed Oracle RAC One Node databases, you must monitor the candidate node list and make sure a server is always available for failover, if possible.
Candidate servers reside in the Generic server pool, and the database and its services will fail over to one of those servers. For policy-managed Oracle RAC One Node databases, you must ensure that the server pools are configured such that a server will be available for the database to fail over to in case its current node becomes unavailable.
In this case, the destination node for online database relocation must be located in the server pool in which the database is hosted. Alternatively, you can use a server pool of size 1 (one server in the server pool), setting the minimum size to 1 and the importance high enough relative to all other server pools used in the cluster, to ensure that, upon failure of the one server in the server pool, a new server from another server pool or from the Free server pool is relocated into the server pool, as required.
Oracle Clusterware provides a complete, integrated clusterware management solution on all Oracle Database platforms. This clusterware functionality provides all of the features required to manage your cluster database including node membership, group services, global resource management, and high availability functions.
Oracle Database features, such as services, use the underlying Oracle Clusterware mechanisms to provide advanced capabilities. Oracle Database also continues to support select third-party clusterware products on specified platforms.
You can use Oracle Clusterware to manage high-availability operations in a cluster. The resources that Oracle Clusterware manages are automatically started when the node starts and are automatically restarted if they fail. The Oracle Clusterware daemons run on each node. Oracle Clusterware also provides the framework that enables you to create CRS resources to manage any process running on servers in the cluster that is not predefined by Oracle.
Oracle Clusterware stores the information that describes the configuration of these components in the Oracle Cluster Registry (OCR), which you can administer. Oracle Flex Clusters provide a platform for a variety of applications, including Oracle RAC databases with large numbers of nodes.
Oracle Flex Clusters also provide a platform for other service deployments that require coordination and automation for high availability. This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery.
An Oracle Flex Cluster consists of Hub Nodes and Leaf Nodes, and the number of Leaf Nodes can be much larger than the number of Hub Nodes. Hub Nodes and Leaf Nodes can host different types of applications. An advantage of using reader nodes is that these instances are not affected if you need to reconfigure a Hub Node.
Parallel query jobs running on reader nodes are not subject to the brownout times that can occur when you reconfigure a Hub Node, and they can continue to service clients connected to the reader nodes, as long as the Hub Node to which each reader node is attached is not evicted from the cluster.
You can scale up to 64 reader nodes per Hub Node, and reader nodes can use massively parallel queries to speed up operations on large data sets. Database instances running on reader nodes have different characteristics than database instances running on Hub Nodes. Hub Node instances are similar to Oracle RAC database instances in previous releases, whereas instances running on reader nodes have several notable differences.
Database instances running on reader nodes are unaffected if you reconfigure the Hub Node clusterware. They are not subject to brownout times and can continue to service the clients connected to the reader nodes, as long as the Hub Node to which each reader node is attached is not evicted from the cluster. You can scale up to 64 reader nodes per Hub Node, and because database instances running on the reader nodes are read-only, you receive an error if you attempt any DDL or DML operations.
You can create services to direct queries to read-only instances running on reader nodes. These services can use parallel query to further speed up performance. Oracle recommends that you size the memory in these reader nodes as high as possible so that parallel queries can use the memory for best performance.
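As a sketch of the usage pattern, assume a service has been created that places connections on the reader nodes (service creation itself is normally done with srvctl and is not shown here). A session connected through that service can run read-only parallel queries; the sales table and the degree of parallelism below are illustrative, not objects from this article:

    -- Read-only reporting query executed on a reader-node instance;
    -- "sales" and the parallel degree are illustrative.
    SELECT /*+ PARALLEL(8) */ prod_id, SUM(amount_sold) AS total_sold
    FROM   sales
    GROUP  BY prod_id;

    -- DML such as the following fails on a reader-node instance,
    -- because the instance is open read-only:
    -- UPDATE sales SET amount_sold = amount_sold * 1.1;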
It is still possible for SQL operations such as hash aggregation, sorting, hash joins, creation of cursor-duration temporary tables for the WITH clause, and star transformation to spill over to disk, specifically to the global temporary tablespace on shared disks. Management of a local temporary tablespace is similar to that of an existing temporary tablespace. The temporary tablespaces used for the WITH clause and star transformation reside on the shared disk.
A set of parallel query child processes loads intermediate query results into these temporary tablespaces, which are then read later by a different set of child processes. There is no restriction on how the child processes that read these results are allocated: any parallel query child process on any instance can read the temporary tablespaces residing on the shared disk. In an architecture with both read-write and read-only instances, parallel query child processes load intermediate results into the local temporary tablespaces of their own instances; because reads of those intermediate results have affinity with the instance on which they are stored, they are read back by parallel query child processes belonging to that same instance.
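For illustration, the following query materializes its WITH subquery into a cursor-duration temporary table, and with enough data the aggregation and sort can spill into temporary space. The orders table is hypothetical, and the MATERIALIZE hint is a commonly used but undocumented hint, shown here only to force the behavior being described:

    -- Hypothetical example: the WITH subquery is materialized into a
    -- cursor-duration temporary table; large sorts and aggregations may
    -- spill to the (shared or local) temporary tablespace.
    WITH order_totals AS (
      SELECT /*+ MATERIALIZE */ customer_id, SUM(order_amount) AS total
      FROM   orders
      GROUP  BY customer_id
    )
    SELECT customer_id, total
    FROM   order_totals
    ORDER  BY total DESC;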
Creation of a local temporary tablespace results in the creation of local temporary files on every instance, not a single file as is the case for shared global temporary tablespaces. You can create local temporary tablespaces for both read-only and read-write instances, as in the following example.
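A minimal sketch of the creation syntax, assuming Oracle Database 12.2 or later; the tablespace names and file names are illustrative:

    -- Local temporary tablespace usable by all instances (Hub and reader nodes)
    CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_for_all
      TEMPFILE 'temp_for_all.dbf' SIZE 1G AUTOEXTEND ON;

    -- Local temporary tablespace reserved for read-only (reader node) instances
    CREATE LOCAL TEMPORARY TABLESPACE FOR RIM temp_for_rim
      TEMPFILE 'temp_for_rim.dbf' SIZE 1G AUTOEXTEND ON;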
When you define a local temporary tablespace alongside an existing shared temporary tablespace, there is a hierarchy in which they are used. To understand the hierarchy, remember that there can be multiple shared temporary tablespaces in a database, such as the default shared temporary tablespace for the database and multiple temporary tablespaces assigned to individual users. If a user has a shared temporary tablespace assigned, then that tablespace is used first; otherwise, the database default temporary tablespace is used. Once a tablespace has been selected for spilling during query processing, there is no switching to another tablespace. For example, if a user has a shared temporary tablespace assigned and it runs out of space during spilling, then there is no switching to an alternative tablespace; the spill, in that case, results in an error.
Additionally, remember that shared temporary tablespaces are shared among instances. The allocation of temporary space for spilling to a local temporary tablespace differs between read-only and read-write instances.
For read-only instances, temporary space for spills is allocated from local temporary tablespaces (the user's local temporary tablespace, if one is assigned, and otherwise the database default local temporary tablespace). For read-write instances, the priority of allocation differs, because shared temporary tablespaces are given priority over local ones. Instances cannot share local temporary tablespace, so one instance cannot take local temporary space from another. If an instance runs out of temporary space during spilling, then the statement results in an error.
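When a statement fails because an instance has exhausted its temporary space, it helps to check how much temporary space is actually free. One simple way is a query against the standard DBA_TEMP_FREE_SPACE view:

    -- Free and allocated space per temporary tablespace
    SELECT tablespace_name, tablespace_size, allocated_space, free_space
    FROM   dba_temp_free_space
    ORDER  BY tablespace_name;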
To address contention issues arising from having only one BIGFILE-based local temporary tablespace, multiple local temporary tablespaces can be assigned to different users as their defaults. A user then effectively has two default temporary tablespaces: one local temporary tablespace, created with the FOR RIM option, used when the user is connected to a read-only instance running on a reader node, and one shared temporary tablespace, used when the same user is connected to a read-write instance running on a Hub Node. Currently, temporary file information, such as the file name, creation size, creation SCN, temporary block size, and file status, is stored in the control file, along with the initial and maximum file sizes and the autoextend attributes.
However, the information about local temporary files in the control file is common to all applicable instances (read-write and read-only, depending on whether the tablespace was created FOR ALL or FOR RIM). Instance-specific information, such as the allocation bitmap, the current size of a temporary file, and the file status, is stored in the SGA of each instance rather than in the control file, because this information can differ between instances. When an instance starts up, it reads the information in the control file and creates the temporary files that constitute the local temporary tablespace for that instance.
If there are two or more instances running on a node, then each instance will have its own local temporary files. For local temporary tablespaces, there is a separate file for each involved instance. The local temporary file names follow a naming convention such that the instance numbers are appended to the temporary file names specified while creating the local temporary tablespace.
For example, assume that a read-only node, N1, runs two Oracle read-only database instances with instance numbers 3 and 4; for a local temporary tablespace created with a temporary file named, say, temp_ro.dbf, each instance gets its own file, with names along the lines of temp_ro_3.dbf and temp_ro_4.dbf. All DDL commands related to local temporary tablespace management and creation are run from the read-write instances.
Running all other DDL commands affects all instances in a homogeneous manner. For local temporary tablespaces, Oracle supports the same allocation options, and their restrictions, that currently apply to temporary files. A database administrator can specify the default temporary tablespaces when creating the database; when you create a database, its default local temporary tablespace initially points to the default shared temporary tablespace, and you can later point it elsewhere, as sketched below.
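A minimal sketch, assuming the ALTER DATABASE syntax for default temporary tablespaces and a local temporary tablespace named temp_for_rim as created earlier; the names are illustrative:

    -- Set the database default (shared) temporary tablespace
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_ts;

    -- Point the database default local temporary tablespace at a
    -- local temporary tablespace created with the FOR RIM option
    ALTER DATABASE DEFAULT LOCAL TEMPORARY TABLESPACE temp_for_rim;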
When you create a user without explicitly specifying a shared or local temporary tablespace, the user inherits the shared and local temporary tablespaces from the corresponding database defaults. You can also specify a default local temporary tablespace for a user explicitly, as in the sketch below.
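A sketch of per-user assignment; the user name and tablespace name are illustrative, and the LOCAL TEMPORARY TABLESPACE clause is assumed to be available (Oracle Database 12.2 or later):

    -- Assign a local temporary tablespace as the user's default for
    -- read-only (reader node) instances
    ALTER USER scott LOCAL TEMPORARY TABLESPACE temp_for_rim;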
As previously mentioned, the default user local temporary tablespace can be a shared temporary tablespace. You can change the user default local temporary tablespace to any existing local temporary tablespace. If you want to set the user default local temporary tablespace to a shared temporary tablespace T, then T must be the same as the default shared temporary tablespace. If a default user local temporary tablespace points to a shared temporary tablespace, then, when you change the default shared temporary tablespace of the user, you also change the default local temporary tablespace to that tablespace.
Some read-only instances may be down when you run any of the preceding commands. This does not prevent the commands from succeeding because, when a read-only instance starts up later, it creates the temporary files based on information in the control file.
Creation is fast because Oracle reformats only the header block of the temporary file, recording information about the file size, among other things. If you cannot create any of the temporary files, then the read-only instance stays down.
Commands that were submitted from a read-write instance are replayed immediately on all open read-only instances. All the commands that you run from the read-write instances are performed in an atomic manner, which means a command succeeds only when it succeeds on all live instances.
Oracle extended the dictionary views to display information about local temporary tablespaces and local temporary files. All the diagnosability information related to temporary tablespaces and temporary files that is exposed through AWR, SQL Monitor, and other utilities is also available for local temporary tablespaces and local temporary files. For local temporary files, the relevant views report information about the temporary files per instance, such as the size of the file in bytes (the BYTES column).
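As an example of using the dictionary views, the following query lists the temporary files per tablespace. The SHARED column, which indicates whether a temporary file belongs to a shared or a local temporary tablespace, is assumed here to be present in releases that support local temporary tablespaces:

    -- Temporary files and whether they are shared or local
    -- (the SHARED column is assumed; drop it on releases that lack it)
    SELECT tablespace_name, file_name, bytes, shared
    FROM   dba_temp_files
    ORDER  BY tablespace_name, file_name;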
At a minimum, Oracle RAC requires Oracle Clusterware as the software infrastructure, concurrent access to the same storage and the same set of data files from all nodes in the cluster, a communications protocol for enabling interprocess communication (IPC) across the nodes in the cluster, the ability for multiple database instances to process data as if the data resided in a logically combined, single cache, and a mechanism for monitoring and communicating the status of the nodes in the cluster.
An Oracle RAC database is a shared-everything database.
All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks, so that all of the cluster database instances can access these storage components. In Oracle RAC, the Oracle Database software manages disk access and is certified for use on a variety of storage architectures.
It is your choice how to configure your storage, but you must use a supported cluster-aware storage solution, such as Oracle Automatic Storage Management (Oracle ASM) or a third-party cluster file system on a cluster-aware volume manager that is certified for Oracle RAC.
All nodes in an Oracle RAC environment must connect to at least one Local Area Network (LAN), commonly referred to as the public network, to enable users and applications to access the database. In addition to the public network, Oracle RAC requires private network connectivity used exclusively for communication between the nodes and the database instances running on those nodes.
This network is commonly referred to as the interconnect. The interconnect network is a private network that connects all of the servers in the cluster.
The interconnect network must use at least one switch and a Gigabit Ethernet adapter. Oracle supports interfaces with higher bandwidth but does not support using crossover cables for the interconnect. Do not use the interconnect (the private network) for user communication, because Cache Fusion uses the interconnect for inter-instance communication. A network used for storage communication, if present, is an additional communication channel that should be independent of the other communication channels used by Oracle RAC (the public and private network communication).
If the storage network communication must be converged with one of the other network communication channels, then you must ensure that storage-related communication gets first priority. Applications should use the Dynamic Database Services feature to connect to an Oracle database over the public network. Dynamic Database Services enable you to define rules and characteristics to control how users and applications connect to database instances.
These characteristics include a unique name, workload balancing and failover options, and high-availability characteristics. A typical connect attempt from a database client to an Oracle RAC database instance can be summarized as follows: the client connects using the Single Client Access Name (SCAN), and the request reaches one of the SCAN listeners. The SCAN listener then determines which database instance hosts the requested service and routes the client to the local or node listener on the respective node. The node listener, listening on a node VIP and a given port, retrieves the connection request and connects the client to an instance on the local node.
If multiple public networks are used on the cluster to support client connectivity through multiple subnets, then the preceding operation is performed within a given subnet.
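In an Oracle RAC environment, services themselves are usually created and managed with srvctl so that Oracle Clusterware controls them. Purely as an illustrative sketch of the service concept, the DBMS_SERVICE package can also define and start a service from SQL; the service name below is hypothetical, and in a clusterware-managed RAC database srvctl remains the recommended tool:

    -- Illustrative only: define and start a service from a SQL session.
    -- In an Oracle RAC deployment, services are normally managed with srvctl.
    BEGIN
      DBMS_SERVICE.CREATE_SERVICE(service_name => 'reporting_svc',
                                  network_name => 'reporting_svc');
      DBMS_SERVICE.START_SERVICE(service_name => 'reporting_svc');
    END;
    /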
Clients that attempt to connect to a VIP address that is not running on its home node (because it has failed over) receive a rapid connection refused error instead of waiting for TCP connect timeouts. When the network on which the VIP is configured comes back online, Oracle Clusterware fails the VIP back to its home node, where connections are accepted again. Generally, VIP addresses fail over when the node on which a VIP address runs fails, or when all interfaces for the VIP address fail or are disconnected from the network. Oracle RAC 12c supports multiple public networks to enable access to the cluster through different subnets.
Each network resource represents its own subnet and each database service uses a particular network to access the Oracle RAC database. Each network resource is a resource managed by Oracle Clusterware, which enables the VIP behavior previously described.
Incoming connections are load balanced across the active instances providing the requested service through the three SCAN listeners. With SCAN, you do not have to change the client connection configuration even if the configuration of the cluster changes (nodes are added or removed).
The valid node checking feature provides the ability to configure and dynamically update a set of IP addresses or subnets from which registration requests are allowed by the listener. Database instance registration with a listener succeeds only when the request originates from a valid node.
The network administrator can specify a list of valid nodes, a list of excluded nodes, or disable valid node checking altogether. The list of valid nodes explicitly lists the nodes and subnets that can register with the database. The shared disk cannot be just a simple file system, because it needs to be cluster-aware; this is the reason for Oracle Clusterware. RAC still supports third-party cluster managers, but Oracle Clusterware provides the hooks for newer features such as provisioning and deployment of new nodes and rolling patches.
The shared disk for the clusterware comprises two components: a voting disk for recording node membership and the Oracle Cluster Registry (OCR), which contains the cluster configuration. Oracle Clusterware is the key piece that allows all of the servers to operate together. Without the interconnect, the servers have no way to talk to each other; without the clustered disk, there is no way for another node to access the same information. Figure 1 shows a basic setup with these key components.