Upgrade Oracle RAC GRID + Database from 11.2.0.3.0 to 11.2.0.4.0
One tries to evade the inevitable, but there will always be a day when you just have to do it: upgrade the database and/or grid software. In our case the trigger was simple: the support of 11.2.0.3.0 is ending.
However we want to take things SLOW and in small steps. The idea is not to go for the big bang and then scramble for the restore procedures. In short:
– Install a new Oracle 11.2.0.4.0 grid home (the Oracle software).
– Upgrade the GRID/ASM infrastructure.
– Install a new Oracle 11.2.0.4.0 database home (the Oracle software).
– Upgrade the database.
Some questions answered in advance (for downtime/impact planning):
– Can an Oracle 11.2.0.3.0 database run on an ASM home/instance of version 11.2.0.4.0?
-> Yes, this is not an issue. The reason for this question: in our situation we have one RAC which services three databases, but there was only downtime available for the ASM and one database. Thus the 11.2.0.3.x databases are required to run for a while on the upgraded ASM version. So far this has been running for three weeks without any hitches.
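As a quick sanity check for such a mixed-version setup, the disk group compatibility attributes can be queried from the ASM instance; as long as compatible.rdbms is not raised above what the not-yet-upgraded databases can satisfy, they can keep mounting the disk groups. A minimal check (run as sysasm):
SQL> select name, compatibility, database_compatibility from v$asm_diskgroup;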
– When and how much do the databases/ASM need to bounce?
-> The upgrade of ASM needs a bounce, and renders all the databases it services inaccessible. The upgrade of the database can be done per instance, but in effect the whole database was still inaccessible on all the nodes, although node 2 was on-line. This can be important for SLA fine-print; from the user point of view, no access to the database means downtime.
– What can be expected in regards to prerequisites?
-> When upgrading from the vanilla 11.2.0.3.0 version: no specific need for extra patching. However, when applying the October PSU to the 11.2.0.4.0 software, the OPatch version needs to be at least 11.2.0.3.6, which must be downloaded separately.
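Checking the installed OPatch version is a one-liner; if it is too old, replace the OPatch directory in the new home with the one from patch 6880880 (the standard OPatch download):
$ $ORACLE_HOME/OPatch/opatch version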
NOTE: in this post an out-of-place upgrade will be done, since this is what Oracle recommends; with an in-place upgrade some components will not work as advertised, according to the Oracle documentation.
Section 1: Grid upgrade.
First we install the software by unpacking the downloaded files in a temp directory (we use /oracle/patch in this post). This process is straightforward, so no need to document this.
Since the database can’t be upgraded before the grid (yes, a warning will be issued when you try), we start with upgrading grid/ASM.
After the unpacking, we change to the unpacked grid software directory, and fire up the installer:
$ cd /oracle/patch/grid
$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 11085 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5951 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-11-07_03-33-19PM. Please wait ...
Prerequisites
All these will be done on NODE 1 of the RAC unless stated otherwise.
1) Unset Oracle environment variables.
Check if the variable ORA_CRS_HOME is set. If set, unset it before starting an installation or upgrade.
Check to ensure that installation owner login shell profiles (for example .profile or .cshrc) do not have ORA_CRS_HOME set.
We have an existing ASM installation running, and we use the same ‘user’ to install the upgrade, so at the least the following environment variables need to be unset:
ORA_CRS_HOME;
ORACLE_HOME;
ORA_NLS10;
TNS_ADMIN;
$ unset ORA_CRS_HOME
$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_HOSTNAME
$ unset ORACLE_SID
$ unset ORACLE_UNQNAME
$ unset ORA_NLS10
$ unset TNS_ADMIN
The setting ‘ORACLE_TERM=xterm’ stays set.
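To double-check that nothing Oracle-related lingers in the session before launching the installer, a quick scan of the environment helps (a simple sketch; any leftover variable besides ORACLE_TERM should be investigated):
$ env | egrep 'ORA_|ORACLE_|TNS_'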
2) Pre-check the upgrade process to determine if any patches are required.
$ /oracle/patch/grid/runcluvfy.sh stage -pre crsinst -upgrade -n bedc-odb01,bedc-odb02 -rolling -src_crshome /oracle/grid -dest_crshome /oracle/11.2.0.4/grid/ -dest_version 11.2.0.4.0 -fixup -fixupdir /home/oracle/fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "bedc-odb02"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node01                                yes
  node02                                yes
Result: Node reachability check passed from node "node02"

Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  bedc-odb02                            passed
  bedc-odb01                            passed
Result: User equivalence check passed for user "oracle"

Check: Time zone consistency
Result: Time zone consistency check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed
Clusterware version consistency passed

Pre-check for cluster services setup was successful.
This is what we need to see. If there is anything wrong, no matter how small it seems:
FIX IT!
After any issues are fixed, start the installer:
$ /oracle/patch/grid/runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 27009 MB Passed
Checking swap space: must be greater than 150 MB. Actual 49983 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-11-11_11-08-36AM. Please wait ...
An error showed up during this phase, which is strange, since the previous check was OK. The log shows:
INFO: Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "rac-scan" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 10.3.28.50) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan"
INFO: Verification of SCAN VIP and Listener setup failed
This is not critical for this procedure; the inconsistent-resolution warning typically shows up when the SCAN name is resolved through /etc/hosts instead of round-robin DNS. It can be considered a warning only and safely be ignored, so continue.
ASM Upgrade Finished!
Now this part is done, proceed with some sanity checks:
Check if ASM is upgraded to the correct version:
Log in with “sqlplus / as sysasm”:
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
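The clusterware can confirm the same from the OS side; after a successful upgrade both of these should report 11.2.0.4.0 (run from the new grid home):
$ /oracle/11.2.0.4/grid/bin/crsctl query crs activeversion
$ /oracle/11.2.0.4/grid/bin/crsctl query crs softwareversion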
Check if all the processes of the cluster are running:
[root@ ~]# /oracle/11.2.0.4/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.DATACCPRDG.dg
               ONLINE  ONLINE       bedc-odb02
ora.DATACCUTDG.dg
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.FRA.dg
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.FRACCPRDG.dg
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.FRACCUTDG.dg
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.LISTENER.lsnr
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.asm
               ONLINE  ONLINE       bedc-odb01               Started
               ONLINE  ONLINE       bedc-odb02               Started
ora.gsd
               OFFLINE OFFLINE      bedc-odb01
               OFFLINE OFFLINE      bedc-odb02
ora.net1.network
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.ons
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
ora.registry.acfs
               ONLINE  ONLINE       bedc-odb01
               ONLINE  ONLINE       bedc-odb02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       bedc-odb01
ora.bedc-odb01.vip
      1        ONLINE  ONLINE       bedc-odb01
ora.bedc-odb02.vip
      1        ONLINE  ONLINE       bedc-odb02
ora.cvu
      1        ONLINE  ONLINE       bedc-odb01
ora.erpccpr0.db
      1        ONLINE  ONLINE       bedc-odb01               Open
      2        ONLINE  ONLINE       bedc-odb02               Open
ora.erpccpr0.erpccprd.svc
      1        ONLINE  ONLINE       bedc-odb01
      2        ONLINE  ONLINE       bedc-odb02
ora.erpccut0.db
      1        ONLINE  ONLINE       bedc-odb01               Open
      2        ONLINE  ONLINE       bedc-odb02               Open
ora.erpccut0.erpccuat.svc
      1        ONLINE  ONLINE       bedc-odb01
      2        ONLINE  ONLINE       bedc-odb02
ora.erpemea.db
      1        ONLINE  ONLINE       bedc-odb01               Open
      2        ONLINE  ONLINE       bedc-odb02               Open
ora.oc4j
      1        ONLINE  ONLINE       bedc-odb02
ora.scan1.vip
      1        ONLINE  ONLINE       bedc-odb01
Section 2: Database upgrade
Before starting the upgrade, a word of caution:
Your mileage may vary, but we found that while the database was running happily with Huge Pages configured (use_large_pages set to ‘only’ in the init file) and could bounce without any issues, this wreaked havoc with the Database Upgrade Assistant, probably somewhere in the ‘startup upgrade’ mode.
As a precaution we put our databases in shared memory (temporarily 10G instead of the 48G in Huge Pages) and made sure they bounced without any issues.
The trial runs all failed with:
Memlock limit too small: 32768 to accommodate segment size: 268435456
With the database in shared memory, we did not run into this issue.
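If you prefer to keep Huge Pages during the upgrade instead, the memlock limit for the oracle user must at least cover the SGA; the values below are only an example assuming the 48G mentioned above (memlock in /etc/security/limits.conf is expressed in KB, 48G = 50331648 KB):
$ ulimit -l
32768
oracle soft memlock 50331648
oracle hard memlock 50331648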
Run the root script on node02-xx. When done:
Make sure the correct database is selected when more databases are available in this Oracle home.
Press Next.
Validate the invalid objects, if applicable and make sure you at least check the warnings.
Press Yes when done.
Optionally this screen can appear; check/uncheck it by choice. In this post it will not be checked, since the database is already registered in OEM 12c.
Press Next.
And the database upgrade is done.
The database should be up and running again, but there are a couple of settings in need of change.
* Put the database back into Huge Pages if it was previously configured like this.
* Alter some parameter settings which still point to the old Oracle home path.
A small check before shutting down the database node:
SQL> select * from gv$version;

   INST_ID BANNER
---------- --------------------------------------------------------------------------------
         1 Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
         1 PL/SQL Release 11.2.0.4.0 - Production
         1 CORE    11.2.0.4.0      Production
         1 TNS for Linux: Version 11.2.0.4.0 - Production
         1 NLSRTL Version 11.2.0.4.0 - Production
         2 Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
         2 PL/SQL Release 11.2.0.4.0 - Production
         2 CORE    11.2.0.4.0      Production
         2 TNS for Linux: Version 11.2.0.4.0 - Production
         2 NLSRTL Version 11.2.0.4.0 - Production

10 rows selected.
Nice!
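For a slightly deeper check, verify that all database components survived the upgrade with a VALID status; anything INVALID in this list deserves investigation before continuing:
SQL> select comp_name, version, status from dba_registry;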
Beware of your environment variables:
ORACLE_HOME and PATH should be updated to the NEW Oracle Home!
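For example (the path below is the new database home used in this post; adjust to your own):
$ export ORACLE_HOME=/oracle/11_4/base/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH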
Now, shut down the node and alter the pfile/spfile. Specific parameters to alter/check are:
cluster_database = true
audit_file_dest
background_dump_dest
core_dump_dest
user_dump_dest
spfile
Double check the /etc/oratab file.
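As an illustration of such a change, updating the audit destination could look like this (the adump path is hypothetical; substitute the path under your new Oracle base):
SQL> alter system set audit_file_dest='/oracle/11_4/base/admin/VMRAC/adump' scope=spfile sid='*';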
Do the regular database checks to see if all still looks correct, and keep an eye on the alert file.
If the DBUA bails out after the new Oracle home is installed and the database is not upgraded: yes, it happened to me also. A lot. Just do the upgrade manually:
Shut down the database, set cluster_database=false, then FROM THE NEW ORACLE_HOME:
SQL> startup upgrade
SQL> @?/rdbms/admin/catupgrd.sql
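After catupgrd.sql completes it shuts the database down; the usual follow-up (generic 11.2 practice, not specific to this environment) is to start the instance again, recompile invalid objects, and set cluster_database back to true:
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql
SQL> alter system set cluster_database=true scope=spfile;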
Removing the old HOMES:
Before removing/renaming the old GRID and DB homes:
GRID:
alter system set background_dump_dest='/oracle/11_4/base/diag/asm/+asm/+ASM1/trace' scope=spfile;
alter system set core_dump_dest='/oracle/11_4/base/diag/asm/+asm/+ASM1/cdump' scope=spfile;
alter system set diagnostic_dest='/oracle/11_4/base' scope=spfile;
alter system set user_dump_dest='/oracle/base/diag/asm/+asm/+ASM1/trace' scope=spfile;
cp ab_+ASM1.dat hc_+ASM1.dat /oracle/11_4/base/db_1/dbs/
cp ab_+ASM1.dat hc_+ASM1.dat /oracle/grid_11_4/dbs/
DATABASE:
Alter the pfile/spfile:
cluster_database = true
audit_file_dest
core_dump_dest
Edit /etc/oratab:
VMRAC:/oracle/11_4/base/db_1:N # NEW HOME!!!
Edit $ORACLE_HOME/network/admin/sqlnet.ora (NEW)
ADR_BASE = /oracle/11_4/base # new home!
Bounce both the GRID and the DATABASE after renaming (not yet deleting!) the old homes to see if all is still working as expected. If there are no errors in any alert log, the old homes can be deleted. Or put on tape.
This should cover most of the upgrade procedure; at the least it gives an idea of what to expect. Of course every situation is different, so test, test and test to build up confidence before upgrading the production environment. And did someone mention backups? Might be a good idea also; just keep in mind to create a backup OUTSIDE of ASM before upgrading this software, to prevent panic attacks.
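A minimal RMAN sketch of such a backup to a regular filesystem (the /backup destination is just an example path outside ASM):
$ rman target /
RMAN> backup database format '/backup/%U';
RMAN> backup current controlfile format '/backup/ctl_%U';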
PS: if there are any third-party patches applied to the ORACLE_HOME (as in the ERPLN NLS patch) you need to re-apply them! This is a NEW vanilla Oracle home.
Success!