CHAPTER 11


Storage Reconfiguration

An Exadata system, by default, divides the bulk of the physical storage between the +DATA_MYEXA1 and +RECO_MYEXA1 disk groups, using an 80/20 split. During the preinstallation phase, Oracle supplies a worksheet to the DBAs, the network administrators, and the system administrators. Each group supplies information for the configuration process, with the DBAs and the network admins providing the bulk of that data. Once the worksheet is completed, discussions are held to clarify any ambiguous settings and to determine how the storage will be divided. It may not be apparent at that time that the default settings need adjusting, so the system is configured with the standard 80/20 allocation. A third disk group, DBFS_DG, is also provided, with approximately 32GB of space per grid disk, and this disk group does not change in size.

It is possible, after configuration is complete, to change those allocations, but doing so is a destructive process that requires careful consideration. We have done this on an Exadata system destined for production, but it was done long before any production databases were created, and it was done with the assistance of Oracle Advanced Customer Support. Though we provide the steps to perform this reconfiguration, we strongly recommend you involve Oracle Advanced Customer Support in the process. The actual steps will be discussed, so that you will understand the entire process from start to finish; the goal of this chapter is to provide working knowledge of the storage-reconfiguration process. If you are not comfortable performing such actions, it may be best to arrange for Oracle Advanced Customer Support to do this for you, should the need arise. Even if you don’t do this yourself, it will be helpful to know what tasks will be run and what the end results will be. With that thought in mind, we’ll proceed.

I Wish We Had More…

It may become apparent after the configuration has been completed, but before any databases (aside from DBM, the default database provided at installation) have been created, that you want or need more recovery storage than was originally allocated. Or you may not be using the recovery area (an unlikely event) and would like to make the data disk group a bit larger. There is a way to do that before anything important gets created. We state again that reconfiguring the storage allocations is a destructive process that requires planning long before execution. We will demonstrate a process to distribute storage more evenly between the +DATA_MYEXA1 and +RECO_MYEXA1 disk groups, again with the caveat that proper planning is absolutely necessary and that Oracle Advanced Customer Support should be involved.

Redistributing the Wealth

Each disk in an Exadata system is logically divided into areas sometimes known as partitions, each containing a fixed amount of storage determined at configuration time. Looking at how a single disk is configured on a standard Exadata installation, the following output shows how the various partitions are allocated:

CellCLI> list griddisk attributes name, size, offset where name like '.*CD_11.*'
         DATA_MYEXA1_CD_11_myexa1cel05   2208G           32M
         DBFS_DG_CD_11_myexa1cel05       33.796875G      2760.15625G
         RECO_MYEXA1_CD_11_myexa1cel05   552.109375G     2208.046875G
 
CellCLI>

Note  For UNIX/Linux systems, the terms slice and partition are used interchangeably in some sources. Both are used to indicate logical divisions of the physical disk. We will use the term partition in this text, as it is the most accurate terminology.

Notice that for a given disk, there are three partitions that ASM can see and use. Also note that for a given partition, the size plus the offset gives the starting “point” for the next partition. Unfortunately, the output from CellCLI cannot be ordered the way you might like to see it; the default ordering in this example is by name.
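As a quick check using the listing above, the RECO_MYEXA1 partition’s offset of 2208.046875G plus its size of 552.109375G comes to 2760.15625G, which is exactly the offset reported for the DBFS_DG partition.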

From the output provided, the first partition is assigned as the +DATA_MYEXA1 storage partition, starting after the first 32MB of storage. That 32MB is reserved for disk management by the operating system. The next partition available to ASM is assigned to the +RECO_MYEXA1 disk group, and the third partition is assigned to the +DBFS_DG disk group. The size of the disk groups is determined by the type of storage you select and the size of the Exadata rack that you’ve ordered. The system illustrated in this chapter uses high capacity disks, providing 3TB of storage per disk; high performance disks provide 600GB of storage per disk. The configuration process sizes the partitions for the data and recovery disk groups based on a parameter named SizeArr in the /opt/oracle.SupportTools/onecommand/onecommand.params file. The two systems we configured used the default storage division between +DATA_MYEXA1 and +RECO_MYEXA1, resulting in an 80/20 allocation between these two disk groups. As mentioned previously, the output provided is for a standard Exadata configuration.

To change how ASM sees the storage, it will be necessary to delete the original configuration as the first step, so new partitions can be created across the physical disks. The next basic step is to create the new partitions and after that configure the ASM disk groups. Because the first step is to delete all configured partitions, this is not a recommended procedure when development, test, or production databases have been created on the Exadata system. We state again that this process is a destructive process and that no databases will remain after the initial steps of deleting the original logical disk configuration have completed. The GRID infrastructure is also deleted; it will be re-created as one of the 26 steps executed to define the new logical storage partitions.

Note  We ran this procedure on a newly configured Exadata system, with the assistance of Oracle Advanced Customer Support. No important databases had been created on this system. This is not a recommended operation when such databases exist on your system.

Looking at the V$ASM_DISKGROUP output, you can see the original storage allocations, as illustrated in the following output:

SQL> select name, free_mb, usable_file_mb
  2  from v$asm_diskgroup
  3  where name like '%MYEXA%';
 
NAME                              FREE_MB USABLE_FILE_MB
------------------------------  --------- --------------
DATA_MYEXA1                      81050256       26959176
RECO_MYEXA1                      20324596        6770138
 
SQL>

You will see that when this process completes, the output from the illustrated query will report different values that are the result of the new storage settings. You should run this query both before the process starts and after the process finishes, to document the original storage and the newly configured storage.
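One convenient way to document both sets of values is to spool the query output to a file. A minimal sketch, using file names of our own choosing, follows:

SQL> spool /tmp/asm_storage_before.lst
SQL> select name, free_mb, usable_file_mb
  2  from v$asm_diskgroup
  3  where name like '%MYEXA%';
SQL> spool off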

With those notes in mind, we proceed with the steps necessary to reconfigure the storage.

Prepare

Not only are the logical storage areas redefined, but the “oracle” O/S user home is also re-created, so any changes made to that location must be preserved before executing this procedure. This needs to be done on all available database servers in your Exadata system, even if you have Oracle Advanced Customer Support perform the actual storage work. Thus, the first step is to preserve the “oracle” user home, either with tar (originally an acronym for Tape ARchiver, from the days when tape was the only media available) or cpio (an acronym for CoPy In/Out, a “newer” utility relative to tar, designed to operate not only on tape but also on block devices such as disk drives). Both utilities are provided at the operating-system level. If you choose tar, the following command, run from the “oracle” O/S user’s home directory, will preserve its contents:

$ tar cvf /tmp/oraclehome.tar ./*

In case you are not familiar with the tar command, the options provided perform the following actions:

c -- Create the archive
v -- Use verbose mode.  This will display all of the directories and files processed with their associated paths
f -- The filename to use for the archive

Using the ./* syntax to specify the files to archive makes it easier to restore the files to any base directory you choose. It isn’t likely that the “oracle” O/S user home will change names, but knowing this syntax can be helpful if you want to copy a complete directory tree to another location, and it is our usual way to invoke tar to copy files and directories.
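One caveat worth noting: the shell expands ./* before tar runs, so hidden files and directories such as .bash_profile and .ssh are not picked up by that pattern. If you want them included, a minimal alternative, run from the “oracle” home directory, is to archive the current directory itself:

$ tar cvf /tmp/oraclehome.tar .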

Using tar is not mandatory; you can also use cpio. If you are more comfortable with the cpio utility, the following command will archive the “oracle” O/S user home:

$ find . -depth -print | cpio -ov > /tmp/oraclehome.cpio

The options provided do the following:

o -- Copy out to the listed archive
v -- Verbose output, listing the files and paths processed

We recommend preserving the “oracle” user home directory before starting any storage reconfiguration, as it will be overwritten by the reconfiguration process. Preserve the existing “oracle” O/S user home on all available database servers, as all “oracle” O/S user homes will be rebuilt. Additionally, the “oracle” O/S user password will be reset to its original value, listed in the Exadata worksheet completed prior to delivery, installation, and initial configuration.
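Whichever utility you choose, it is worth confirming that the archive is complete and readable before the original files are removed. A quick sanity check, assuming the archive names used above, follows:

$ tar tvf /tmp/oraclehome.tar | head
$ cpio -itv < /tmp/oraclehome.cpio | head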

The file used for the Exadata configuration only allows you to specify the size of the data disk group partition. The recovery disk group partition size is derived from the value provided to size the data disk group. The DBFS_DG size is not affected by changing this value. This file cannot be modified until the original storage configuration has been deleted.

Note  The commands listed for storage reconfiguration must be run as the “root” user. As this user has unlimited power at the O/S level, it cannot be stressed enough that caution must be exercised when executing these commands. If you are not comfortable using the “root” account, it would be best to make arrangements with Oracle Advanced Customer Support to reconfigure the storage to your desired levels.

You should have a good idea of what you want the storage reconfiguration to provide in terms of +DATA_MYEXA1 and +RECO_MYEXA1 disk group sizes, and you should also have the “oracle” O/S user home preserved, either with tar or cpio. We now proceed with the actual process of reconfiguring the storage.

Additional tasks at this point include knowing what OEM agents are installed and noting whether or not the SCAN listener is using the default port. Reinstalling the OEM agents and reconfiguring the SCAN listener for a non-default port, if necessary, will have to be done as part of the restore steps, after the storage reconfiguration completes and the “oracle” O/S user home is restored. Steps to reinstall the OEM agents and reconfigure the SCAN listener will not be covered in this chapter, but it is good to know that they may need to be executed.
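As an example of noting the current SCAN listener configuration before you begin, the srvctl utility in the GRID home reports the configured port. A minimal check, run as the GRID infrastructure owner, might look like the following:

$ srvctl config scan
$ srvctl config scan_listener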

Act

You should be sufficiently prepared to begin the actual reconfiguration process. This first step drops the original storage configuration, including the configuration for the storage cells. Let’s begin.

Act 1

Connect to database server 1 as the “root” user. You should see a # in the shell prompt indicating the login/su process was successful. You will now have to change directories to /opt/oracle.SupportTools/onecommand. This is where the scripts and configuration files that you will need to complete this operation reside. It’s a good idea to know the available commands you can execute for this reconfiguration, and that information can be found by executing the following script with the listed parameters:

./deploy11203.sh -a -l

The following list will be displayed:

Step  0 = CleanworkTmp
Step  1 = DropUsersGroups
Step  2 = CreateUsers
Step  3 = ValidateGridDiskSizes
Step  4 = CreateOCRVoteGD
Step  5 = TestGetErrNode
Step  6 = DeinstallGI
Step  7 = DropCellDisk
Step  8 = DropUsersGroups
Step  9 = CheckCssd
Step 10 = testfn
Step 11 = FixAsmAudit
Step 12 = DropExpCellDisk
Step 13 = testHash
Step 14 = DeconfigGI
Step 15 = GetVIPInterface
Step 16 = DeleteDB
Step 17 = ApplyBP3
Step 18 = CopyCrspatchpm
Step 19 = SetupMultipath
Step 20 = RemovePartitions
Step 21 = SetupPartitions
Step 22 = ResetMultipath
Step 23 = DoT4Copy
Step 24 = DoAlltest
Step 25 = ApplyBPtoGridHome
Step 26 = Apply112CRSBPToRDBMS
Step 27 = RelinkRDSGI
Step 28 = RunConfigAssistV2
Step 29 = NewCreateCellCommands
Step 30 = GetPrivIpArray
Step 31 = SetASMDefaults
Step 32 = GetVIPInterface
Step 33 = Mychknode
Step 34 = UpdateOPatch
Step 35 = GetGroupHash
Step 36 = HugePages
Step 37 = ResetPermissions
Step 38 = CreateGroups
Step 39 = FixCcompiler
Step 40 = CollectLogFiles
Step 41 = OcrVoteRelocate
Step 42 = OcrVoteRelocate
Step 43 = FixOSWonCells
Step 44 = Dbcanew
Step 45 = testgetcl
Step 46 = setupASR
Step 47 = CrossCheckCellConf
Step 48 = SetupCoreControl
Step 49 = dropExtraCellDisks
Step 50 = CreateCellCommands

The following are three commands of interest for this step of the reconfiguration:

Step  6 = DeinstallGI
Step  7 = DropCellDisk
Step  8 = DropUsersGroups

These are the commands that will be executed to “wipe clean” the existing storage configuration and prepare it for the next step. Each command is run individually from the deploy11203.sh script. You must run these steps in the listed order, to ensure a complete removal of the existing configuration. The actual commands follow:

./deploy11203.sh -a -s 6
./deploy11203.sh -a -s 7
./deploy11203.sh -a -s 8

These steps will take approximately an hour to run on an Eighth Rack or Quarter Rack configuration and will provide output to the screen describing pending and completed tasks. We recommend that you run each command and watch the output provided. There should not be any errors generated from these processes; however, we feel it’s best to be safe by watching the progress.
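The script writes its progress to the screen; if you also want a record to review later, one approach (our habit, not a requirement of the script) is to pipe each run through tee with a log file name of your choosing:

./deploy11203.sh -a -s 6 2>&1 | tee /tmp/step6_deinstallgi.log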

What do these steps do? Step 6 removes the GRID Infrastructure, basically the heart and soul of ASM (and RAC, but it’s the link to ASM that we are concerned with in this operation). Once that infrastructure is gone, we are free to run Step 7 to drop the grid disks and the definitions for the partitions that ASM uses to recognize and access the storage. Last is the removal of the users and groups that access these partitions and grid disks; this is where the “oracle” user account gets dropped, along with its O/S home. Notice that only three steps are necessary to drop the existing storage configuration, but 26 steps are necessary to re-create it.

Act 2

Once these operations have finished, it will be time to modify the storage array parameter to provide the storage division you want. The configuration file to edit is /opt/oracle.SupportTools/onecommand/onecommand.params. We use vi for editing such files, but you may wish to use emacs instead. Use whichever editor you are comfortable with. The parameter value to change is named SizeArr. For a standard default install, this line should look like the following entry:

SizeArr=2208G

It is good to know what value to use to obtain the desired storage division. Table 11-1 shows several SizeArr values and the approximate storage divisions created.

Table 11-1. Some SizeArr Values and the Resulting Storage Allocations

SizeArr Setting    Data Disk Group pct    Recovery Disk Group pct
2208               80                     20
1932               70                     30
1656               60                     40
1487               55                     45
1380               50                     50
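To see where these figures come from, recall from the grid disk listing earlier in the chapter that each disk contributes roughly 2208G + 552G = 2760G to the data and recovery disk groups combined. Dividing a SizeArr value by that total gives the approximate data disk group percentage: 2208/2760 is 80 percent, and 1380/2760 is 50 percent.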

Remember that these percentages are approximations based on the default 80/20 storage allocation. We recommend that you use the standard shell script comment character, #, and comment that line, so you can preserve the original setting. You can now either copy the commented line or open a new line just below it and provide the following text:

SizeArr=<your chosen value>G

As an example, if you want to configure a 55/45 split on the available storage, the new line, based on the information in Table 11-1, would be as follows:

SizeArr=1487G

Once your edits are complete, there should be two lines listing SizeArr, as follows:

# SizeArr=2208G
SizeArr=1487G

This is the only change you will need to make to this parameter file, so save your work and exit the editor. You are now prepared to establish a new storage configuration.
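If you prefer a scripted change to an interactive edit, a minimal sketch using GNU sed follows; it assumes the default SizeArr=2208G entry and a target of 1487G, and it makes a backup copy of the file first:

# cd /opt/oracle.SupportTools/onecommand
# cp onecommand.params onecommand.params.orig
# sed -i 's/^SizeArr=2208G/# SizeArr=2208G\nSizeArr=1487G/' onecommand.params
# grep SizeArr onecommand.params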

Act 3

As in the previous operation, the deploy11203.sh script will be used, although with a greater number of steps. The following is the complete list of available steps for this part of the process:

Step  0 = ValidateEnv
Step  1 = CreateWorkDir
Step  2 = UnzipFiles
Step  3 = setupSSHroot
Step  4 = UpdateEtcHosts
Step  5 = CreateCellipinitora
Step  6 = ValidateIB
Step  7 = ValidateCell
Step  8 = PingRdsCheck
Step  9 = RunCalibrate
Step 10 = CreateUsers
Step 11 = SetupSSHusers
Step 12 = CreateGridDisks
Step 13 = GridSwInstall
Step 14 = PatchGridHome
Step 15 = RelinkRDSGI
Step 16 = GridRootScripts
Step 17 = DbSwInstall
Step 18 = PatchDBHomes
Step 19 = CreateASMDiskgroups
Step 20 = DbcaDB
Step 21 = DoUnlock
Step 22 = RelinkRDSDb
Step 23 = LockUpGI
Step 24 = ApplySecurityFixes
Step 25 = setupASR
Step 26 = SetupCellEmailAlerts
Step 27 = ResecureMachine

Unlike the first list, there are only 28 steps shown. As this is an existing Exadata installation, Step 0, which validates the environment, and Step 27, which secures the machine, will not be run. This leaves the remaining 26 steps to reconfigure the storage and the storage cells. The following command will be used to complete the storage reconfiguration portion of this process:

./deploy11203.sh -r 1-26

The -r option executes the provided steps in order, as long as no step generates an error. If a step does generate an error, execution stops at that step; take any corrective action required, then restart the deploy11203.sh script with a modified list of steps. For example, if execution stopped at Step 21 due to errors and the cause of those errors has been corrected, the following command restarts the reconfiguration at the point where it failed:

./deploy11203.sh -r 21-26

There should be no need to start the entire process from the beginning, should a problem arise midway through the execution.

For an Eighth Rack or Quarter Rack Exadata configuration, this last execution of the deploy11203.sh script will run for one and one-half to two hours. For larger Exadata configurations, this will run longer, as there will be a greater number of disks. Feedback will be provided by the script for each of the steps executed.

On a “clean” system—one that is still at the original configuration and without any new databases—we have experienced no errors or difficulties when reconfiguring the storage. It is a time-consuming process, but it should execute without issues.

Note  We have executed this on a newly configured Exadata system, and it ran without difficulties. If you are planning on doing this on a system where other databases have been created or other ORACLE_HOME locations exist, be advised that the steps provided here may experience problems related to those changes. The included database, commonly named DBM, will be re-created from scripts during the reconfiguration process. Be aware that restoring databases that were created after initial configuration was completed may cause errors due to the reallocated storage, especially if the +DATA_MYEXA1 disk group has been made smaller. In this situation, it is best to involve Oracle Advanced Customer Support.

Let’s look at what each step accomplishes, at a basic level. Steps 1 and 2 are basic “housekeeping” actions, creating a temporary work directory and unzipping the software archives into that temporary work area. Step 3 begins the important aspects of this operation, setting up user equivalence (meaning passwordless ssh access) to all servers and switches for the “root” account. This is required so that the remaining steps can proceed without asking for a password. Step 4 updates the /etc/hosts file with the preconfigured storage cell IP addresses and server names for the Exadata system. This makes it easier to connect to other servers in the system, by using the server name rather than the IP address.

The next step, Step 5, creates the cellinit.ora file on the available storage cells. A sample of what the cellinit.ora contains (with actual IP addresses and port numbers obscured) follows.

#CELL Initialization Parameters
version=0.0
HTTP_PORT=####
bbuChargeThreshold=800
SSL_PORT=#####
RMI_PORT=#####
ipaddress1=###.###.###.###/##
bbuTempThreshold=60
DEPLOYED=TRUE
JMS_PORT=###
BMC_SNMP_PORT=###

This step also creates the cellip.ora file on the available database servers. Sample contents follow.

cell="XXX.XXX.XXX.3"
cell="XXX.XXX.XXX.4"
cell="XXX.XXX.XXX.5"
cell="XXX.XXX.XXX.6"
cell="XXX.XXX.XXX.7"

This file informs ASM of the available storage cells in the cluster.
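If you want to verify the result of this step yourself, the cellip.ora file typically resides under /etc/oracle/cell/network-config on each database server:

$ cat /etc/oracle/cell/network-config/cellip.ora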

Steps 6, 7, and 8 do necessary validation of the InfiniBand connectivity, storage cell configuration, and InfiniBand inter-process communication (IPC) between the database servers and the storage cells. Step 8 performs this validation by checking how the oracle executables are linked. IPC over InfiniBand is set by linking the oracle kernels with the ipc_rds option, and if this option is missing, IPC occurs over the non-InfiniBand network layer. Unfortunately, most database patches that relink the kernel do not include this parameter because, we suspect, they are intended for non-Exadata systems. Thus, patching the oracle software can result in a loss of this connectivity, which can create performance issues by using a much slower network connection for IPC. The situation is easily rectified by relinking the oracle kernel with the ipc_rds option. One way to ensure that this occurs is to edit the script from the patchset where the oracle kernel is relinked. Another option is to simply perform another relink with the ipc_rds option before releasing the database to the users. The exact statement to use, copied from the command line on the database server, follows:

$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds

This is also a check that the exachk script executes; its report lists any oracle kernels that are not linked to provide RDS over InfiniBand connectivity.
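For 11.2.0.3 and later homes, a quick way to perform this check yourself is the skgxpinfo utility shipped in the database home, which prints rds when the oracle binary is linked for IPC over InfiniBand and udp when it is not:

$ $ORACLE_HOME/bin/skgxpinfo
rds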

Step 9 calibrates the storage, and it is a very important step; it is run on each available storage cell. First, CELLSRV is shut down on the cell, then the calibrate process is run. Output similar to the following will be generated.

CellCLI> calibrate;
Calibration will take a few minutes...
Aggregate random read throughput across all hard disk luns: 137 MBPS
Aggregate random read throughput across all flash disk luns: 2811.35 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk luns: 1152
Aggregate random read IOs per second (IOPS) across all flash disk luns: 143248
Controller read throughput: 5477.08 MBPS
Calibrating hard disks (read only) ...
Calibrating flash disks (read only, note that writes will be significantly slower) ...
...

Once the calibrate step is complete for all storage cells, the cell disks and grid disks can be created, which is Step 12. Before that, though, the “oracle” O/S user is created, and the associated passwordless ssh access is established with Steps 10 and 11. After these steps complete, the grid disk partitions are created in Step 12, so they can be recognized by ASM. This is where your new setting takes effect and the desired ratio between +DATA_MYEXA1 and +RECO_MYEXA1 is established.
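Once Step 12 has completed, you can confirm that the new grid disk sizes reflect your SizeArr setting by rerunning the CellCLI query shown at the beginning of the chapter on a storage cell, for example:

CellCLI> list griddisk attributes name, size, offset where name like '.*CD_11.*'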

Next in line is setting up the GRID home by installing the GRID software (Step 13), patching the GRID home (Step 14), relinking the GRID oracle executable to provide IPC over InfiniBand (Step 15), and running the GRID root.sh scripts (Step 16). These are the same actions you would take (with the exception of Step 15) on any other GRID installation. After the GRID software is installed and configured, the database software is installed and patched (Steps 17 and 18). Again, these are steps you would take on any system when you install and patch Oracle database software.

Since the grid disks are now created, it’s a simple task to get ASM to recognize them and create the +DATA_MYEXA1, +RECO_MYEXA1, and +DBFS_DG disk groups in Step 19. This paves the way for creation of the DBM cluster database in Step 20, which is the default database provided with a “bare metal” Exadata installation. It is possible to change this database name in the configuration worksheet, but we haven’t seen any installations where DBM isn’t an available database. Once the DBM database is created, selected accounts are unlocked (Step 21), and a final relink of the DBM oracle executable, to ensure IPC over InfiniBand is functional, is executed (Step 22). The GRID Infrastructure is “locked” against any configuration changes in Step 23, then, in Step 24, any security fixes/patches are applied to the database home.

Step 25 may or may not be executed, depending on whether your enterprise has decided to allow Oracle direct access to Exadata via the Auto Service Request (ASR) server. This requires that a separate server be configured to communicate with Exadata to monitor the system and report back to Oracle any issues it finds, automatically generating a service request to address the issue. Such issues would be actual or impending hardware failures (disk drives, Sun PCIe cards, memory). The service request generates a parts order and dispatches a service technician to install the failed or failing parts. If you don’t have a server configured for this, the step runs and then terminates successfully, without performing any actions. If you do want this service and have the required additional server installed, this step configures the service with the supplied My Oracle Support credentials and tests to ensure the credentials are valid.

Step 26 sets up any desired cell alerts via e-mail. It requires a valid e-mail address, which it verifies. Having cell alerts e-mailed to you is convenient; you won’t need to connect to the individual cells to list the alert history, so it saves time. This, too, is an optional step; you do not have to configure cell alerts to be e-mailed to you, but it is recommended.
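For reference, the cell attributes this step populates can also be set manually with CellCLI. A sketch with placeholder mail server and addresses follows (the validate command sends a test message to confirm the settings):

CellCLI> alter cell smtpServer='mailhost.example.com', -
         smtpFromAddr='exadata@example.com', -
         smtpToAddr='dba-team@example.com', -
         notificationMethod='mail', -
         notificationPolicy='critical,warning,clear'
CellCLI> alter cell validate mail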

At this point, your Exadata system is back to the state it was in when it was installed, with the exception of the storage percentages that you modified. It’s now time to restore the “oracle” O/S user home you had prior to starting this process.

Restore

Now that the storage has been reconfigured, it’s time to address the re-created “oracle” O/S user homes and the reset “oracle” O/S password. Remember that the “oracle” O/S user was dropped as the last step of clearing the old storage configuration. Presumably you are still connected as “root,” so at the shell prompt on each database server, you will type the following:

# passwd oracle

This will prompt you for the new “oracle” O/S user password, twice for confirmation. Because passwordless ssh access is configured for the “root” account, you can use ssh to connect to all other database servers from your “home base” on node 1. For those not familiar with ssh, the following command will connect you to database server 2:

# ssh myexa1db02

Change the destination server name to connect to the remaining database servers.

It is now time to restore the previous “oracle” O/S user home files and configuration. Use su - oracle to connect as “oracle”; the - option provides a login environment and places you in the user’s home directory. If you used tar to archive the directory and its contents, the following command will restore the archived items:

$ tar xvf /tmp/oraclehome.tar . | tee oraclehome_restore.log

You may be prompted to overwrite existing files. With the exception of the .ssh directory, you should do so, as the files you are restoring are those which you configured prior to the storage reallocation. Remember that this process reestablished passwordless ssh connectivity for “oracle,” and replacing the .ssh directory contents with those you saved prior to the reconfiguration could disable passwordless ssh, because the signatures in the saved authorized_keys file don’t match the current values. Check the logfile for any errors you may have missed as the verbose output scrolled by and correct the listed errors. Use tar again to restore files or directories not created because of errors that have now been corrected. Should a second pass with tar be required, do not replace existing files when prompted.

If you used cpio to archive the directory and its contents, the following command will restore the archived items:

$ cpio -ivf ./.ssh < /tmp/oraclehome.cpio | tee oraclehome_restore.log

The cpio utility reads the archive specified; extracts the files and directories, except for the .ssh directory; and puts them in the current location. As explained in the tar example, you do not want to overwrite the .ssh directory contents, because the reconfiguration process reestablished passwordless ssh connectivity for “oracle,” and overwriting those current files will disable that configuration. Once the archive restore is finished, you need to check the logfile for errors. Presuming there are none, you should be back to the “oracle” O/S user configuration you established before making changes to the storage allocations. If you do find errors in the restore, correct the conditions causing those errors and use cpio again to restore the files or directories that failed to create, due to errors.

Open another session and log in as “oracle,” to see that all of the restored settings function as intended. If you have configured a way to set various ORACLE_HOME environments, that method should be tested, to ensure that nothing was missed in the archive of the “oracle” O/S user home or in the restore of those archived files and directories.
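A minimal check, assuming the standard oraenv script and the default DBM database (instance dbm1 on the first database server), might look like this:

$ . oraenv
ORACLE_SID = [oracle] ? dbm1
$ echo $ORACLE_HOME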

Act 4

Congratulations, you have completed the storage reconfiguration of your Exadata system. Looking at V$ASM_DISKGROUP, you should see output similar to the following, if you used the SizeArr setting from the example:

SYS> select name, free_mb, usable_file_mb
  2  from v$asm_diskgroup
  3  where name like '%MYEXA%';
 
NAME                              FREE_MB USABLE_FILE_MB
------------------------------ ---------- --------------
DATA_MYEXA1                      52375300       17051522
RECO_MYEXA1                      46920568       15638300
 
SYS>

The values of interest are those in the USABLE_FILE_MB column. From earlier in the chapter, the output of that query against the original storage allocations follows.

SQL> select name, free_mb, usable_file_mb
  2  from v$asm_diskgroup
  3  where name like '%MYEXA%';
 
NAME                              FREE_MB USABLE_FILE_MB
------------------------------  --------- --------------
DATA_MYEXA1                      81050256       26959176
RECO_MYEXA1                      20324596        6770138
 
SQL>

There may be other tasks that you need to perform, depending on how you have the Exadata system configured, including reinstalling any OEM agents you had running and, if it’s not using the default port, reconfiguring the SCAN listener. Such tasks are part of the preparation phase and should be noted before starting any storage reallocation.
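If the SCAN listener needs to be moved back to a non-default port, one way to do so (a sketch using a hypothetical port of 1525, run as the GRID infrastructure owner) is with srvctl:

$ srvctl modify scan_listener -p 1525
$ srvctl stop scan_listener
$ srvctl start scan_listener
$ srvctl config scan_listener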

Things to Know

Reconfiguring the storage allocations is possible on Exadata, but it is a destructive process that requires planning prior to execution.

It will be necessary to drop the existing storage configuration using the /opt/oracle.SupportTools/onecommand/deploy11203.sh script. This script will also be used, with different parameters, to create the new storage configuration.

Dropping the existing storage configuration is performed in three steps: dropping the GRID Infrastructure, dropping the cell disks, then dropping the associated user accounts and groups. Since the “oracle” O/S user will be dropped in this process, it is necessary to preserve the existing O/S user home on all available database servers in the Exadata system. This can be done with tar or cpio to create an archive of the directories and files in those locations.

There is a configuration file under /opt/oracle.SupportTools/onecommand named onecommand.params. It is this file that has to be edited to provide the new storage array size. The size provided in this file is for the +DATA disk group (the +DATA and +RECO disk group names usually include the Exadata machine name, for example, +DATA_MYEXA1 and +RECO_MYEXA1 for an Exadata machine named MYEXA1); the recovery disk group size is derived from this parameter’s value.

The actual reconfiguration takes place using, once again, the /opt/oracle.SupportTools/onecommand/deploy11203.sh script and encompasses 26 separate steps. These steps reestablish the user accounts, connectivity, and software installations; create the disk partitions, grid disks, and ASM disk groups; create the default DBM database; and apply database and security patches across the various homes. This part of the process also ensures that inter-process communication (IPC) uses InfiniBand, rather than the standard network, to avoid performance issues.

The next step you must execute is the restoration of all but the .ssh directory in the “oracle” O/S user’s home directory. Since passwordless ssh connectivity was established by the reconfiguration procedure, overwriting those files would disable that functionality.

Finally, you must connect as the “oracle” user, preferably using an additional session, to verify that all functionality you established prior to the reconfiguration works as expected.

Additional tasks you may need to perform include reinstalling OEM agents and reconfiguring the SCAN listener, if you changed it from using the default port.
