Anuj Singh Oracle DBA


Friday, 8 May 2026

Oracle Database Patch 39036936 - GI Release Update 19.31.0.0.260421






Check the OPatch version:

[grid@srv1 ~]$ export ORACLE_HOME=/u01/app/19.0.0/grid
[grid@srv1 ~]$ export PATH=$ORACLE_HOME/OPatch:$PATH

[grid@srv1 ~]$ opatch version
OPatch Version: 12.2.0.1.49

OPatch succeeded.
[grid@srv1 ~]$
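This RU requires OPatch 12.2.0.1.49 or later (see the OPatch Utility Information section of the README below). As a rough sketch, the installed version string can be compared field by field in plain shell; the version values are taken from this session:

```shell
#!/bin/sh
# Sketch: compare the installed OPatch version against the minimum
# required by this RU. Pure shell; version strings are illustrative.
version_ge() {
    # True (exit 0) if $1 >= $2, comparing dot-separated numeric fields.
    [ "$(printf '%s\n%s\n' "$2" "$1" \
        | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n -k5,5n | head -1)" = "$2" ]
}

installed=12.2.0.1.49   # from: opatch version
minimum=12.2.0.1.49     # required by patch 39036936

if version_ge "$installed" "$minimum"; then
    echo "OPatch $installed meets the minimum $minimum"
else
    echo "update OPatch first (My Oracle Support patch 6880880)"
fi
```

If the check fails, update OPatch in every home being patched before continuing.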



Bugs Resolved by this Patch
10121473 INCORRECT WAIT EVENT PARAMETER DESCRIPTION FOR "LIBRARY CACHE LOCK"
10123661 CURSOR SHARING OF "AS OF SCN" CURSORS
12608302 BTREE SDATA: CTX_REPORT.CREATE_INDEX/POLICY_SCRIPT GIVES WRONG VALUES
1297945 QH:FOLDER ERORR WHEN ATTEMPTING TO PLACE ITEMS ON EIT TAB
13087312 DBMS_SQLTUNE.REPORT_SQL_MONITOR THROWS EXCEPTION IF BIND VALUES HAVE AMPERSAND
13742922 DI:PROVIDE COMMAND TO CLEAN OUT CSS LEASES
13801211 "LATCH FREE" CONTENTION WITHOUT SETTING _RESOURCE_MANAGER_ALWAYS_OFF
14219141 ACFS FILESYSTEM FULL DUE TO INODE TABLE
14570574 TKPROF RETURNS INCORRECT PARSING USERID FOR ANY ID > 65535
14735102 AC: SQLPLUS WITH TAC


=====

Check the patches currently applied to the Grid home:

$ORACLE_HOME/OPatch/opatch lsinventory|grep -i 19.
Oracle Home       : /u01/app/19.0.0/grid
   from           : /u01/app/19.0.0/grid/oraInst.loc
Log file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2026-05-08_11-33-33AM_1.log
Lsinventory Output file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2026-05-08_11-33-33AM.txt
Oracle Grid Infrastructure 19c                                       19.0.0.0.0
Unique Patch ID:  28319643
Patch description:  "TOMCAT RELEASE UPDATE 19.0.0.0.0 (38729293)"
     32625073, 33121445, 33655429, 33846688, 34300543, 34519419, 34816344
     38082506, 38162614, 38311920, 38640885
Patch description:  "OCW RELEASE UPDATE 19.30.0.0.0 (38661284)"



[root@srv1 39036936]# df -Ph
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             3.8G     0  3.8G   0% /dev
tmpfs                3.8G  1.1G  2.7G  30% /dev/shm
tmpfs                3.8G  9.7M  3.8G   1% /run
tmpfs                3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   50G   37G   14G  74% /    <-- check space: the patch requires 14046.416MB free
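The apply attempt later in this post fails the CheckSystemSpace prerequisite, reporting that 14046.416MB is required. A rough pre-check of available space can be sketched in shell; the mount point and requirement below are taken from this session, with the requirement rounded up to whole megabytes:

```shell
#!/bin/sh
# Sketch: compare free space on the Grid home filesystem against the
# requirement OPatch reported (14046.416MB, rounded up to 14047MB).
have_space() {
    # True (exit 0) if available MB ($1) covers required MB ($2).
    [ "$1" -ge "$2" ]
}

required_mb=14047
mount_point=/       # the Grid home lives under / on this server

# df -Pm prints sizes in 1MB blocks; field 4 of line 2 is "Available".
avail_mb=$(df -Pm "$mount_point" | awk 'NR==2 {print $4}')

if have_space "$avail_mb" "$required_mb"; then
    echo "enough space: ${avail_mb}MB available, ${required_mb}MB required"
else
    echo "free up space: ${avail_mb}MB available, ${required_mb}MB required"
fi
```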



List the applied patches and check the size of the OPatch backup area under the Grid home:

$ORACLE_HOME/OPatch/opatch lsinventory|grep -E "(^Patch.*applied)|(^Sub-patch)"

du -sh .patch_storage
16G     .patch_storage
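The backup area shrinks from 16G here to 4.9G by the end of this post, although the cleanup step itself is not captured in the session. One likely approach is the inactive-patch cleanup feature that the README below mentions for OPatch 12.2.0.1.37+; the threshold in this sketch is an arbitrary example:

```shell
#!/bin/sh
# Sketch: flag an oversized .patch_storage and suggest the OPatch
# cleanup utility. Sizes are hard-coded from this session; in practice
# parse `du -sm $ORACLE_HOME/.patch_storage`.
used_gb=16          # from: du -sh .patch_storage
threshold_gb=10     # arbitrary example threshold

if [ "$used_gb" -gt "$threshold_gb" ]; then
    action="cleanup"
    echo "consider: \$ORACLE_HOME/OPatch/opatch util deleteinactivepatches"
else
    action="none"
fi
echo "action: $action"
```

Run the cleanup as the Grid home owner, and only after confirming no rollback to an inactive patch will be needed.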




Record the CRS patch level before patching:

[grid@srv1 ~]$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]

[grid@srv1 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [19.0.0.0.0]

[grid@srv1 ~]$ crsctl query has releasepatch
Oracle Clusterware release patch level is [1191252804] and the complete list of patches [36758186 38632161 38653268 38661284 38729293 ] have been applied on the local node. The release patch string is [19.30.0.0.0].

[grid@srv1 ~]$ crsctl query has softwarepatch
Oracle Clusterware patch level on node srv1 is [1191252804].




Subpatches shipped in 39036936:

/home/grid/39036936/39039430
/home/grid/39036936/39055473
/home/grid/39036936/39107855
/home/grid/39036936/39107825
/home/grid/39036936/39034528


$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39039430 | grep checkConflictAgainstOHWithDetail

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39055473 | grep checkConflictAgainstOHWithDetail

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39107855 | grep checkConflictAgainstOHWithDetail

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39107825 | grep checkConflictAgainstOHWithDetail

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39034528 | grep checkConflictAgainstOHWithDetail




[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39039430 | grep checkConflictAgainstOHWithDetail
Prereq "checkConflictAgainstOHWithDetail" passed.

[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39055473 | grep checkConflictAgainstOHWithDetail
Prereq "checkConflictAgainstOHWithDetail" passed.

[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39107855 | grep checkConflictAgainstOHWithDetail
Prereq "checkConflictAgainstOHWithDetail" passed.

[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39107825 | grep checkConflictAgainstOHWithDetail
Prereq "checkConflictAgainstOHWithDetail" failed.

[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/grid/39036936/39034528 | grep checkConflictAgainstOHWithDetail
Prereq "checkConflictAgainstOHWithDetail" passed.
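The five prereq invocations above differ only in the subpatch directory, so they can be generated from a list. This dry-run sketch just prints the command lines rather than executing OPatch:

```shell
#!/bin/sh
# Dry-run sketch: print one conflict-check command per subpatch of
# 39036936 instead of maintaining five hand-written copies.
PATCH_BASE=/home/grid/39036936
SUBPATCHES="39039430 39055473 39107855 39107825 39034528"

for sub in $SUBPATCHES; do
    printf '%s\n' "\$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/$sub | grep checkConflictAgainstOHWithDetail"
done
```

Pipe the output to a shell (or drop the printf and run opatch directly) to execute the checks.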



[root@srv1 grid]# . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid

[root@srv1 grid]# ps -ef|grep -i smon
grid      5427     1  0 10:30 ?        00:00:00 asm_smon_+ASM


 echo $ORACLE_HOME
/u01/app/19.0.0/grid


[root@srv1 39036936]# pwd
/home/grid/39036936


[root@srv1 39036936]# id
uid=0(root) gid=0(root) groups=0(root)


$ORACLE_HOME/OPatch/opatchauto apply /home/grid/39036936 -oh $ORACLE_HOME



The first apply attempt failed the analysis phase for lack of space:

==Following patches FAILED in analysis for apply:

Patch: /home/grid/39036936/39039430
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2026-05-08_11-46-24AM_1.log
Reason: Failed during Analysis: CheckSystemSpace Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Required amount of space(14046.416MB) is not available.]




After freeing space (the cleanup step itself was not captured in this session), / has headroom again:

[root@srv1 grid]# df -Ph
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             3.8G     0  3.8G   0% /dev
tmpfs                3.8G  1.1G  2.7G  30% /dev/shm
tmpfs                3.8G  9.7M  3.8G   1% /run
tmpfs                3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   50G   22G   29G  44% /
/dev/mapper/ol-home  339G   26G  314G   8% /home
/dev/sda1           1014M  233M  782M  23% /boot
tmpfs                771M   24K  771M   1% /run/user/54323
tmpfs                771M     0  771M   0% /run/user/0
[root@srv1 grid]#



Retry the patch application as root:
[root@srv1 grid]# cd 39036936
[root@srv1 39036936]# pwd
/home/grid/39036936
[root@srv1 39036936]# echo $ORACLE_HOME
/u01/app/19.0.0/grid
[root@srv1 39036936]# $ORACLE_HOME/OPatch/opatchauto apply /home/grid/39036936 -oh $ORACLE_HOME

OPatchauto session is initiated at Fri May  8 12:26:30 2026

System initialization log file is /u01/app/19.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2026-05-08_12-27-26PM.log.

Session log file is /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/opatchauto2026-05-08_12-28-20PM.log
The id for this session is 98KA

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.0.0/grid
Patch applicability verified successfully on home /u01/app/19.0.0/grid


Executing patch validation checks on home /u01/app/19.0.0/grid
Patch validation checks successfully completed on home /u01/app/19.0.0/grid


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.0.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/srv1/crsconfig/hapatch_2026-05-08_12-36-59AM.log
CRS service brought down successfully on home /u01/app/19.0.0/grid


Start applying binary patch on home /u01/app/19.0.0/grid
Binary patch applied successfully on home /u01/app/19.0.0/grid


Running rootadd_rdbms.sh on home /u01/app/19.0.0/grid
Successfully executed rootadd_rdbms.sh on home /u01/app/19.0.0/grid




Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.0.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/srv1/crsconfig/hapatch_2026-05-08_12-59-18AM.log
CRS service started successfully on home /u01/app/19.0.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:srv1
SIHA Home:/u01/app/19.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /home/grid/39036936/39034528
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2026-05-08_12-38-35PM_1.log

Patch: /home/grid/39036936/39039430
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2026-05-08_12-38-35PM_1.log

Patch: /home/grid/39036936/39055473
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2026-05-08_12-38-35PM_1.log

Patch: /home/grid/39036936/39107825
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2026-05-08_12-38-35PM_1.log

Patch: /home/grid/39036936/39107855
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2026-05-08_12-38-35PM_1.log



OPatchauto session completed at Fri May  8 13:05:18 2026
Time taken to complete the session 37 minutes, 53 seconds



[grid@srv1 ~]$ . oraenv
ORACLE_SID = [+ASM] ?
The Oracle base remains unchanged with value /u01/app/grid


Verify the inventory after patching:

[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory|grep -i 19.
Oracle Home       : /u01/app/19.0.0/grid
   from           : /u01/app/19.0.0/grid/oraInst.loc
Log file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2026-05-08_13-07-16PM_1.log
Lsinventory Output file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2026-05-08_13-07-16PM.txt
Oracle Grid Infrastructure 19c                                       19.0.0.0.0
Patch description:  "TOMCAT RELEASE UPDATE 19.0.0.0.0 (39107855)"
     32625073, 33121445, 33655429, 33846688, 34300543, 34519419, 34816344
     38082506, 38162614, 38311920, 38640885, 39066568
Patch description:  "DBWLM RELEASE UPDATE 19.0.0.0.0 (39107825)"
Patch description:  "ACFS RELEASE UPDATE 19.31.0.0.0 (39055473)"

 $ORACLE_HOME/OPatch/opatch lsinventory|grep -i "Patch description"
Patch description:  "TOMCAT RELEASE UPDATE 19.0.0.0.0 (39107855)"
Patch description:  "DBWLM RELEASE UPDATE 19.0.0.0.0 (39107825)"
Patch description:  "ACFS RELEASE UPDATE 19.31.0.0.0 (39055473)"
Patch description:  "OCW RELEASE UPDATE 19.31.0.0.0 (39039430)"
Patch description:  "Database Release Update : 19.31.0.0.260421 (39034528)"


[grid@srv1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory|grep -E "(^Patch.*applied)|(^Sub-patch)"
Patch  39107855     : applied on Fri May 08 12:57:42 GST 2026
Patch  39107825     : applied on Fri May 08 12:57:00 GST 2026
Patch  39055473     : applied on Fri May 08 12:55:55 GST 2026
Patch  39039430     : applied on Fri May 08 12:54:47 GST 2026
Patch  39034528     : applied on Fri May 08 12:47:38 GST 2026

Confirm the new CRS patch level:

[grid@srv1 ~]$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]

[grid@srv1 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [19.0.0.0.0]

[grid@srv1 ~]$ crsctl query has releasepatch
Oracle Clusterware release patch level is [3307946645] and the complete list of patches [39034528 39039430 39055473 39107825 39107855 ] have been applied on the local node. The release patch string is [19.31.0.0.0].

[grid@srv1 ~]$ crsctl query has softwarepatch
Oracle Clusterware patch level on node srv1 is [3307946645].
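The releasepatch output can be checked mechanically against the expected subpatch list. This sketch parses the bracketed patch list from the line captured above:

```shell
#!/bin/sh
# Sketch: confirm every expected subpatch ID appears in the crsctl
# releasepatch output. The sample line is copied from this session.
line='Oracle Clusterware release patch level is [3307946645] and the complete list of patches [39034528 39039430 39055473 39107825 39107855 ] have been applied on the local node.'

# Extract the space-separated IDs between the second pair of brackets.
applied=$(printf '%s\n' "$line" | sed 's/.*list of patches \[\([0-9 ]*\)\].*/\1/')

missing=0
for p in 39034528 39039430 39055473 39107825 39107855; do
    case " $applied " in
        *" $p "*) ;;
        *) echo "missing patch $p"; missing=1 ;;
    esac
done
[ "$missing" -eq 0 ] && echo "all expected subpatches applied"
```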


[grid@srv1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADISK.dg
               ONLINE  ONLINE       srv1                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       srv1                     STABLE
ora.OCRDISK.dg
               ONLINE  ONLINE       srv1                     STABLE
ora.asm
               ONLINE  ONLINE       srv1                     Started,STABLE
ora.ons
               OFFLINE OFFLINE      srv1                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       srv1                     STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       srv1                     STABLE
ora.oradb.db
      1        ONLINE  OFFLINE                               STABLE
ora.oradb.prodb_srvp.svc
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------
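Note that ora.oradb.db shows target ONLINE but state OFFLINE after patching. Assuming the database name oradb (inferred from the resource name, so treat it as a placeholder), the follow-up would be to start it with srvctl; printed here as a dry run rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: print the srvctl follow-up for the OFFLINE database
# resource. "oradb" is inferred from ora.oradb.db and may differ.
db=oradb
for cmd in \
    "srvctl start database -d $db" \
    "srvctl status database -d $db"; do
    printf '%s\n' "$cmd"
done
```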

[grid@srv1 ~]$ df -Ph
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             3.8G     0  3.8G   0% /dev
tmpfs                3.8G  1.1G  2.7G  30% /dev/shm
tmpfs                3.8G  9.7M  3.8G   1% /run
tmpfs                3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   50G   28G   23G  55% /
/dev/mapper/ol-home  339G   26G  314G   8% /home
/dev/sda1           1014M  233M  782M  23% /boot
tmpfs                771M   24K  771M   1% /run/user/54323
tmpfs                771M     0  771M   0% /run/user/0


[grid@srv1 grid]$ du -sh .patch_storage/
4.9G    .patch_storage/









Oracle® Database

Patch 39036936 - GI Release Update 19.31.0.0.260421

In this document, Oracle Database home refers to Oracle Database Enterprise Edition and Oracle Database Standard Edition.

The GI Release Update 19.31.0.0.260421 includes updates for both the Oracle Grid Infrastructure home and Oracle Database home that can be applied in a rolling fashion.

This patch is Oracle Data Guard Standby-First Installable. See Installing Patch in Oracle Data Guard Standby-First Mode for more information.

The database subpatch includes the JDK fixes released in the prior cycle and will update the JDK in the Oracle home. For the most recent JDK fixes, a separate patch is available and needs to be installed in addition to this patch. Refer to My Oracle Support document KB106822 Primary Note for Database Quarterly Release Updates for the JDK patch number.

Beginning with the 19.21 Oct2023 RU, the UTL_URL.ESCAPE function is compliant with RFC 3986 and treats "#" as a reserved character. See My Oracle Support Note PALRT1208 for more details.

Beginning with 19.22 Jan2024 RU, the 19c database is now certified on Oracle Linux 9.x and RHEL9.x.

Beginning with 19.30 Jan2026 RU, the 19c database will include Micronaut 3.8.5-11.

Beginning with the 19.31 Apr2026 RU, the TOMCAT directories have been deleted from the subpatch and their functionality is replaced with Micronaut.

Beginning with the 19.31 Apr2026 RU, the QOS / WLM directory has been removed.

For the latest Bundle Patch with security fixes that should be used on client-only installations, see the "Oracle Database Client" row for your Database version in the "Oracle Database" section of the most recent Critical Patch Update (CPU) Program Patch Availability Document (PAD).

This document is accurate at the time of release. For any changes and additional information regarding GI Release Update 19.31.0.0.260421, see this related document that is available at My Oracle Support (http://support.oracle.com/):

  • Document KB869205 Oracle Database 19c RU Apr 2026 Known Issues

  • Document KB188772 Oracle Database 19c and Oracle AI Database 26ai Important Recommended One-off Patches

  • Document KB106822 Primary Note for Database Quarterly Release Updates

  • Document KA19 19c Database Upgrade - Self Guided Assistance with Best Practices

This document includes the following sections:

1.1 Patch Information

  • The Oracle Grid Infrastructure patches are cumulative and include the database CPU program security content.

  • Any new or changed database initialization parameters that might be included into a quarterly patch bundle would be documented in the Oracle Database Reference manual section on initialization parameters. For Oracle Database 19c refer to Oracle Database Reference.

  • This GIRU contains the following CORE DST patches:

    Patch Number   Patch
    28852325       DSTV33 UPDATE - TZDATA2018G
    29997937       DSTV34 UPDATE - TZDATA2019B
    31335037       DSTV35 UPDATE - TZDATA2020A
    32327201       DSTV36 UPDATE - TZDATA2020E
    33613829       DSTV37 UPDATE - TZDATA2021E
    34006614       DSTV38 UPDATE - TZDATA2022A
    34533061       DSTV39 UPDATE - TZDATA2022C
    34698179       DSTV40 UPDATE - TZDATA2022E
    35099667       DSTV41 UPDATE - TZDATA2022G
    35220732       DSTV42 UPDATE - TZDATA2023C
    36260493       DSTV43 UPDATE - TZDATA2024A
    37537949       DSTV44 UPDATE - TZDATA2025A
    39070269       DSTV45 UPDATE - TZDATA2026A

Table 1-1 lists the various configurations and the patch that should be used to patch that configuration.

Table 1-1 Configuration and Database Patch Mapping

  • Grid home in conjunction with Oracle RAC, Oracle RAC One Node, or single-instance home (Grid version 19, database version 19): apply the Grid RU with opatchauto (Footnote 1). The Grid home and all Oracle homes are patched.

  • Grid home in conjunction with Oracle RAC, Oracle RAC One Node, or single-instance home (Grid version 19, database versions 19 and prior): apply the Grid RU with opatchauto (Footnote 1). The Grid home and the Oracle homes at version 19 are patched. For an Oracle home at any other version, apply the appropriate database RU for that version.

  • Grid home in conjunction with Oracle RAC, Oracle RAC One Node, or single-instance home (Grid version 19, database versions prior to 19): apply the Grid RU with opatchauto (Footnote 1). The Grid home alone is patched. For each Oracle home, apply the appropriate database RU for that version.

  • Oracle Restart home (Grid version 19, database version 19): apply the Grid RU with opatchauto (Footnote 1). The Grid home and all Oracle homes are patched.

  • Database single-instance home (database version 19): apply the Database RU with opatch apply.

  • Database client home (database version 19): apply the Database RU with opatch apply.

Footnote 1 OPatchAuto does not support patching in Oracle Data Guard environments. See Installing Patch in Oracle Data Guard Standby-First Mode for more information.

Table 1-2 lists the various patches by patch number that are installed as part of this bundle patch.

Table 1-2 Patch Numbers Installed as Part of this Bundle Patch

  • 39034528 (Database Release Update 19.31.0.0.260421): only the Oracle home for non-Oracle RAC setups; both the Oracle home and the Grid home for Oracle RAC setups.

  • 39039430 (OCW Release Update 19.31.0.0.260421): both the Oracle home and the Grid home.

  • 39055473 (ACFS Release Update 19.31.0.0.260421, Footnote 2): only the Grid home.

  • 39107855 (Tomcat Release Update 19.0.0.0.0, Footnote 2): only the Grid home. Beginning with 19.31, the TOMCAT version is deleted.

  • 39107825 (DBWLM Release Update 19.0.0.0.0, Footnote 2): only the Grid home. Beginning with the 19.31 Apr2026 RU, the QOS / WLM directory has been removed.

Footnote 2 Oracle Automatic Storage Management Cluster File System (Oracle ACFS), Apache Tomcat (TOMCAT), and Database Workload Management (DBWLM) subpatches are not applicable to the HP-UX Itanium and Linux on IBM System z platforms.

2.1.1 Patch Installation Prerequisites

It is highly recommended to take a backup of the Oracle home binaries, the Grid home binaries, and Central Inventory prior to applying patches. For further information, refer to My Oracle Support document KB137807 How to Perform ORACLE_HOME Backup?.

You must satisfy the conditions in the following sections before applying the patch:

2.1.1.1 OPatch Utility Information

You must use the OPatch utility version 12.2.0.1.49 or later to apply this patch. Oracle recommends that you use the latest released OPatch version for 19c, which is available for download from My Oracle Support patch 6880880 by selecting "OPatch for DB 19.0.0.0.0" from the Select a Release dropdown. It is recommended that you download the OPatch utility and the patch to a shared location in order to access them from any node in the cluster for the patch application on each node.

When patching the Grid home, a shared location on Oracle ACFS only needs to be unmounted on the node where the Grid home is being patched.

The new OPatch utility should be updated in all of the Oracle RAC database homes and the Grid home that are being patched.

For each Oracle RAC database home and the Oracle Grid Infrastructure home that are being patched, as the respective home owner, extract the OPatch utility.

For exact instructions to install OPatch, follow the readme included with the tool download.

A new feature has been added to OPatch to increase performance by deleting inactive patches. See My Oracle Support document KB104015 OPatch 12.2.0.1.37+ Introduces a New Feature to Delete Inactive Patches in the ORACLE_HOME/.patch_storage directory.

For information about OPatch documentation, including any known issues, see My Oracle Support document KB133615 Primary Note For OPatch.

2.1.1.2 Validation of Oracle Inventory

Before beginning patch application, check the consistency of inventory information for Grid home and each Oracle home to be patched. Run this command as the respective Oracle home owner to check the consistency:

$ <ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so that you have the status prior to the patch application.

If this command fails, contact Oracle Support for assistance.

2.1.1.3 Download and Unzip the Patch

To apply the patch, it must be accessible from all nodes in the Oracle cluster. Download the patch and unzip it to a shared location called the <UNZIPPED_PATCH_LOCATION>. This directory must be empty and cannot be /tmp. Additionally, the directory should have read permission for the ORA_INSTALL group:

$ cd <UNZIPPED_PATCH_LOCATION>

Ensure that the directory is empty:

$ ls

Unzip the patch as the Grid home owner except for installations that do not have any Grid homes. For installations where this patch is applied to the Oracle home only, the patch must be unzipped as the Oracle home owner:

$ unzip p39036936_190000_<platform>.zip

2.1.1.4 Run OPatch Conflict Check

Determine whether any currently installed one-off patches conflict with this patch 39036936 as follows:

  • As the Grid home user:

    % $ORACLE_HOME/OPatch/opatch prereq CheckMinimumOPatchVersion -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39034528
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39034528
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39039430
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39055473
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39107855
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39107825

    Note:

    For HP-UX Itanium and Linux on IBM System z platforms, the last two checks in the previous example do not need to be done.

  • For Oracle home, as home user:

    % $ORACLE_HOME/OPatch/opatch prereq CheckMinimumOPatchVersion -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39034528
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39034528
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/39036936/39039430

The report will indicate the interim patches that conflict with the patch 39036936 and the interim patches for which patch 39036936 is a superset.

Note:

When OPatch starts, it validates the patch and ensures that there are no conflicts with the software already installed in the ORACLE_HOME. OPatch categorizes conflicts into the following types:

  • Conflicts with a patch already applied to the ORACLE_HOME.

    In this case, stop the patch installation and contact Oracle Support Services.

  • Conflicts with subset patch already applied to the ORACLE_HOME.

    In this case, continue with the patch installation, because the new patch contains all the fixes from the existing patch in the ORACLE_HOME. The subset patch is automatically rolled back prior to the installation of the new patch.

2.1.1.5 Run OPatch System Space Check

Check that enough free space is available on the ORACLE_HOME filesystem before applying the patches:

  • For Grid Infrastructure home, as home user:

    1. Create file /tmp/patch_list_gihome.txt with the following content:

      % cat /tmp/patch_list_gihome.txt
      <UNZIPPED_PATCH_LOCATION>/39036936/39034528
      <UNZIPPED_PATCH_LOCATION>/39036936/39039430
      <UNZIPPED_PATCH_LOCATION>/39036936/39055473
      <UNZIPPED_PATCH_LOCATION>/39036936/39107855
      <UNZIPPED_PATCH_LOCATION>/39036936/39107825

      Note:

      For HP-UX Itanium and Linux on IBM System z platforms, the last two rows in the previous example should not be added to the patch_list_gihome.txt file.

    2. Run the OPatch command to check if enough free space is available in the Grid Infrastructure home:

      % $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
  • For Oracle home, as home user:

    1. Create file /tmp/patch_list_dbhome.txt with the following content:

      % cat /tmp/patch_list_dbhome.txt
      <UNZIPPED_PATCH_LOCATION>/39036936/39034528
      <UNZIPPED_PATCH_LOCATION>/39036936/39039430
    2. Run OPatch command to check if enough free space is available in the Oracle home:

      % $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt

The command output reports pass and fail messages as per the system space availability:

  • If OPatch reports Prereq "checkSystemSpace" failed., then free up space on the filesystem, because the required amount is not available.

  • If OPatch reports Prereq "checkSystemSpace" passed., then no action is needed. Proceed with patch installation.
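The steps above can be combined into a single sketch that writes the Grid home patch-list file from the subpatch IDs; <UNZIPPED_PATCH_LOCATION> is represented here by a shell variable with a placeholder default:

```shell
#!/bin/sh
# Sketch: generate /tmp/patch_list_gihome.txt for the CheckSystemSpace
# prereq. UNZIPPED_PATCH_LOCATION below is a placeholder default.
UNZIPPED_PATCH_LOCATION=${UNZIPPED_PATCH_LOCATION:-/home/grid}

: > /tmp/patch_list_gihome.txt
for sub in 39034528 39039430 39055473 39107855 39107825; do
    echo "$UNZIPPED_PATCH_LOCATION/39036936/$sub" >> /tmp/patch_list_gihome.txt
done

cat /tmp/patch_list_gihome.txt
# Then, as the Grid home owner:
#   $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
```

On HP-UX Itanium and Linux on IBM System z, drop the last two IDs from the loop, per the note above.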

2.1.2 One-off Patch Conflict Detection and Resolution

The fastest and easiest way to determine whether you have one-off patches in the Oracle home that conflict with the patch, and to get the necessary conflict resolution patches, is as follows:

  •  If you are using the My Oracle Support Patch Plan, then use the Patch Recommendations and Patch Plans features on the Patches & Updates tab in My Oracle Support.
  •  If you are not using My Oracle Support Patch Plans, the My Oracle Support Conflict Checker tool enables you to upload an OPatch inventory and check the patches that you want to apply to your environment for conflicts.

If no conflicts are found, you can download the patches. If conflicts are found, the tool finds an existing resolution to download. If no resolution is found, it will automatically request a resolution, which you can monitor in the Plans and Patch Requests region of the Patches & Updates tab.

For more information, see Knowledge Document KB135057 How to Use My Oracle Support Conflict Checker Tool for Patches Installed with OPatch.

Note that Oracle proactively generates interim patches for common conflicts.

See My Oracle Support document KB862698 Patch Set Updates - One-off Patch Conflict Resolution to determine, for each conflicting patch, whether a conflict resolution patch is already available, and if you need to request a new conflict resolution patch or if the conflict may be ignored.

2.1.3 Patch Installation Checks

The Cluster Verification Utility (CVU) command line interface (CLUVFY) may be used to verify the readiness of the Grid_Home to apply the patch. The CLUVFY command may be issued from the configured Grid_Home or from the latest version of the standalone CVU release (preferred), available from My Oracle Support patch 30839369.

Before applying the patch, the readiness of the Grid_Home can be verified by issuing the cluvfy stage -pre patch command from any one of the cluster nodes. This command reports issues, if any are detected, in the Grid_Home which may affect the patching process.

After applying the patch, the sanity of the patching operation can be verified by issuing the cluvfy stage -post patch command from any of the cluster nodes upon completion of the patch application process.

The CLUVFY command line for patching ensures that the Grid_Home can receive the new patch and also ensures that the patch application process completed successfully leaving the home in the correct state.

2.1.4 OPatchAuto Out-of-Place Patching

Out-of-place patching is a mechanism where patching is done by creating a clone of the Oracle home, applying patches on the cloned home, and switching services to the newly created cloned home. This approach to patching reduces unavailability or downtime of the service by separating the step to create the patched clone home from the step of switching services when they must be restarted.

Out-of-place patching documentation can be found at this link:

Oracle OPatch User's Guide

Note: Users can check OPatchAuto help for syntax and examples to execute out-of-place patching.

2.1.5 OPatchAuto

The OPatch utility has automated the patch application for the Oracle Grid Infrastructure (Grid) home and the Oracle RAC database homes. It operates by querying existing configurations and automating the steps required for patching each Oracle RAC database home of the same version and the Grid home.

The utility must be executed by an operating system (OS) user with root privileges, and it must be executed on each node in the cluster if the Grid home or Oracle RAC database home is in non-shared storage. The utility can be run in parallel on the cluster nodes, except for the first node (which can be any node).

Depending on command line options specified, one invocation of OPatchAuto can patch the Grid home, Oracle RAC database homes, or both Grid and Oracle RAC database homes of the same Oracle release version as the patch. You can also roll back the patch with the same selectivity.

Task: Add the directory containing OPatchAuto to the $PATH environment variable. For example:

# export PATH=$PATH:<GI_HOME>/OPatch
# cd <UNZIPPED_PATCH_LOCATION>

When using the -oh flag:

# export PATH=$PATH:<oracle_home_path>/OPatch
# cd <UNZIPPED_PATCH_LOCATION>

Task: Patch the Grid home and all Oracle RAC database homes of the same version:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936

Task: Patch only the Grid home:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936 -oh <GI_HOME>

Task: Patch one or more Oracle RAC database homes:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936 -oh <oracle_home1_path>,<oracle_home2_path>

Task: Roll back the patch from the Grid home and each Oracle RAC database home:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936

Task: Roll back the patch from the Grid home only:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936 -oh <GI_HOME>

Task: Roll back the patch from one or more Oracle RAC database homes:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936 -oh <oracle_home1_path>,<oracle_home2_path>

For more information about opatchauto, see Oracle OPatch User's Guide.

For detailed patch installation instructions, see Patch Installation.

2.1.6 Patch Installation

The patch instructions differ based on the configuration of the Grid infrastructure and the Oracle RAC database homes. Instructions for patching the Oracle RAC database homes and the Grid home together are listed below for the most common configurations (Cases 1 through 3).

For the other configurations listed below, see My Oracle Support document KB627956 Supplemental Readme - Grid Infrastructure Release Update 12.2.0.1.x / 18c / 19c:

  • Grid home is not shared, the Oracle home is not shared, Oracle ACFS may be used.

  • Patching Oracle RAC database homes.

  • Patching Grid home alone.

  • Patching Grid home together with Oracle RAC One Node and clusterware-managed single-instance databases.

  • Patching Oracle Restart home.

  • Patching a software only Grid home installation or before the Grid home is configured.

Case 1: Oracle RAC, where the Grid home and the Oracle homes are not shared and Oracle ACFS file system is not configured

As root user, execute the following command on each node of the cluster:

1. # <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936
2. # cd <UNZIPPED_PATCH_LOCATION>
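Since Case 1 runs the same command on every node, the per-node invocations can be generated with a short loop; a sketch that only prints what would be run (node names, Grid home, and patch paths are placeholder assumptions, and nothing is executed remotely):

```shell
#!/bin/sh
# Illustrative only: builds the Case 1 per-node opatchauto command.
# NODES, GI_HOME and PATCH_DIR are placeholder assumptions.
NODES=${NODES:-"srv1 srv2"}
GI_HOME=${GI_HOME:-/u01/app/19.0.0/grid}
PATCH_DIR=${PATCH_DIR:-/u01/stage/39036936}

cmd_for_node() {
    # The command that would be run as root on the given node.
    echo "[$1] $GI_HOME/OPatch/opatchauto apply $PATCH_DIR"
}

for node in $NODES; do
    cmd_for_node "$node"
done
```

In a real run each printed command would be executed as root on its node in turn, for example over ssh.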

Case 2: Oracle RAC, where the Grid home is not shared, Oracle home is shared, and Oracle ACFS may be used

Patching instructions:

  1. From the Oracle home, make sure to stop the Oracle RAC databases running on all nodes. As the Oracle home owner execute:

    $ <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
  2. On the first node, unmount the Oracle ACFS file systems. See My Oracle Support document KB86783 How to Mount or Unmount ACFS File System While Applying GI Patches? for unmounting Oracle ACFS file systems.

  3. On the first node, apply the patch to the Grid home using the opatchauto command. As root user, execute the following command:

    1. # <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936 -oh <GI_HOME>
    2. # cd <UNZIPPED_PATCH_LOCATION>
  4. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.

  5. On the first node, remount Oracle ACFS file systems. See My Oracle Support document KB86783 How to Mount or Unmount ACFS File System While Applying GI Patches? for mounting Oracle ACFS file systems.

  6. On the first node, apply the patch to the Oracle home using the opatchauto command. Because the Oracle home is shared, this operation patches the Oracle home across the cluster. Note that a USM-only patch cannot be applied to an Oracle home. As root user, execute the following command:

    1. # <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936 -oh <ORACLE_HOME>
    2. # cd <UNZIPPED_PATCH_LOCATION>
  7. On the first node only, restart the Oracle instance, which you have previously stopped in Step 1. As the Oracle home owner execute:

    $ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
  8. On the second (next) node, unmount the Oracle ACFS file systems. See My Oracle Support document KB86783 for unmounting Oracle ACFS file systems.

  9. On the second node, apply the patch to Grid home using the opatchauto command. As root user, execute the following command:

    1. # <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/39036936 -oh <GI_HOME>
    2. # cd <UNZIPPED_PATCH_LOCATION>
  10. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.

  11. On the second node, note that the opatchauto command in Step 9 restarts the stack automatically; no separate restart is required.

  12. On the second node, remount Oracle ACFS file systems. See My Oracle Support document KB86783 for mounting Oracle ACFS file systems.

  13. On the second node only, restart the Oracle instance, which you have previously stopped in Step 1. As the Oracle home owner execute:

    $ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    
  14. Repeat steps 8 through 13 for all remaining nodes of the cluster.

Case 3: Single-instance homes not managed by Oracle Grid Infrastructure

Follow these steps:

  1. If you are using a Data Guard Physical Standby database, you must install this patch on both the primary database and the physical standby database, as described by My Oracle Support document KB147888 How To Apply DBRU in a Data Guard Physical Standby Configuration (Non Standby-First Installable).

  2. Shut down all instances and listeners associated with the Oracle home that you are updating. For more information, see Oracle Database Administrator's Guide.

  3. Set your current directory to the directory where the patch is located and then run the OPatch utility by entering the following commands:

    1. cd <UNZIPPED_PATCH_LOCATION>/39036936/39034528
    2. opatch apply
  4. If there are errors, refer to Known Issues.
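The single-instance steps above can be sketched as a dry-run script that only prints the commands in order (the Oracle home and patch paths are placeholder assumptions):

```shell
#!/bin/sh
# Dry-run sketch of the single-instance patching flow above; it only prints
# the steps in order. ORACLE_HOME and PATCH_DIR are placeholder assumptions.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/19.0.0/dbhome_1}
PATCH_DIR=${PATCH_DIR:-/u01/stage/39036936/39034528}

plan() {
    # Order matters: stop listeners and instances first, then apply.
    printf '%s\n' \
        "$ORACLE_HOME/bin/lsnrctl stop" \
        "sqlplus / as sysdba  -- shutdown immediate" \
        "cd $PATCH_DIR" \
        "$ORACLE_HOME/OPatch/opatch apply"
}

plan
```

The printed lines mirror steps 2 and 3 of the procedure; they are a checklist, not a turnkey implementation.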

2.1.7 Installing Patch in Oracle Data Guard Standby-First Mode

For Data Guard Standby-First patching, see My Oracle Support document KB137118. For Standby-First patching for Oracle database RU 12.2 and higher, the following points need to be considered:

  1. The database RU subpatch 39034528 must be applied to the Data Guard standby using OPatch.

  2. Datapatch must not be invoked on the Data Guard standby environment to apply post patch SQL actions for the database RU. If datapatch is run on a standby, it will error while trying to call the SYS.DBMS_QOPATCH interface. For more details about this error, see My Oracle Support document KB142500.

  3. Datapatch must be invoked on the primary database only after all the databases in the configuration (the primary and the Data Guard standbys) have been patched and the binary deployment of the database RU is complete.
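Points 2 and 3 amount to a simple rule: run datapatch only where the database role is PRIMARY. A sketch of that guard (the role string is assumed to come from querying database_role in v$database):

```shell
#!/bin/sh
# Illustrative guard encoding points 2 and 3: datapatch runs only when the
# database role is PRIMARY. In a real script the role string would come from:
#   select database_role from v$database;
should_run_datapatch() {
    case "$1" in
        PRIMARY) return 0 ;;  # primary: run datapatch after all homes are patched
        *)       return 1 ;;  # e.g. PHYSICAL STANDBY: OPatch only, never datapatch
    esac
}

if should_run_datapatch "PHYSICAL STANDBY"; then
    echo "run datapatch"
else
    echo "skip datapatch (standby)"
fi
```

The guard keeps an automation script from tripping the SYS.DBMS_QOPATCH error described in point 2.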

2.1.8 Patch Post Installation Instructions

After installing the patch, perform the following actions:

  1. Apply conflict resolution patches, as explained in Section 2.1.8.1.

  2. If you are not using OPatchAuto, load modified SQL files into the database, as explained in Section 2.1.8.2.

  3. Upgrade the Oracle Recovery Manager catalog, as explained in Section 2.1.8.3.

  4. Review bug fixes that may change an existing optimizer execution plan, as explained in Section 2.1.8.4.

2.1.8.1 Applying Conflict Resolution Patches

Apply the patch conflict resolution interim patches that were determined to be needed when you performed the steps in One-off Patch Conflict Detection and Resolution.

2.1.8.2 Load Modified SQL Files into the Database

The following steps load modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.

Datapatch is run to complete the post-install SQL deployment for the RU. For further details about Datapatch, including known issues and workarounds to common problems, see My Oracle Support document KB148594 Datapatch: Database 12c or later Post Patch SQL Automation and My Oracle Support document KB123801 Datapatch User Guide.

  1. For each separate database running on the same shared Oracle home being patched, run the datapatch utility as described in Table 1-3.

    Table 1-3 Steps to Run the Datapatch Utility for Non-CDB or Non-PDB Database Versus Multitenant (CDB/PDB) Oracle Database

    Non-CDB or Non-PDB Database:

    1. sqlplus /nolog
    2. SQL> Connect / as sysdba
    3. SQL> startup
    4. SQL> quit
    5. cd $ORACLE_HOME/OPatch
    6. ./datapatch -sanity_checks (optional)
    7. ./datapatch -verbose

    Multitenant (CDB/PDB) Oracle Database:

    1. sqlplus /nolog
    2. SQL> Connect / as sysdba
    3. SQL> startup
    4. SQL> alter pluggable database all open; (see Footnote 3)
    5. SQL> quit
    6. cd $ORACLE_HOME/OPatch
    7. ./datapatch -sanity_checks (optional)
    8. ./datapatch -verbose

    • Footnote 3: It is recommended that the Post Install step be run on all pluggable databases; however, the command SQL> alter pluggable database PDB_NAME open could be substituted to open only certain PDBs in the multitenant database. Doing so results in the Post Install step being run only on the CDB and the opened PDBs. To update a pluggable database at a later date (skipped or newly plugged in), open it using the alter pluggable database command mentioned previously and rerun the datapatch utility.

      • See My Oracle Support document KB150931 Multitenant Unplug/Plug Best Practices for more information about the procedure for unplugging/plugging with different patch releases (in both directions).

    • Recommended: The datapatch -sanity_checks optional step runs a series of environment and database checks to validate if conditions are optimal for patching. Results are shown on screen with severity and potential actions to take.

      • For more information, refer to My Oracle Support document KB123801 Datapatch User Guide. Oracle highly recommends that you perform this step.

    • The datapatch utility will then run the necessary apply scripts to load the modified SQL files into the database. An entry will be added to the dba_registry_sqlpatch view reflecting the patch application. In the dba_registry_sqlpatch view, verify the Status for the APPLY is SUCCESS.

      • For any other status, refer to My Oracle Support document KA1374 Troubleshooting Assistant: 12c Datapatch Issues for additional information and actions.
  2. Check the following log files in $ORACLE_BASE/cfgtoollogs/sqlpatch/39034528/<unique patch ID> for errors:

    39034528_apply_<database SID>_<CDB name>_<timestamp>.log

    where <database SID> is the database SID, <CDB name> is the name of the multitenant container database, and <timestamp> is of the form YYYYMMDD_HH_MM_SS.

  3. Any (pluggable) database that has invalid objects after the execution of datapatch should have catcon.pl run to revalidate those objects. For example:

    1. export PATH=$PATH:$ORACLE_HOME/bin
    2. cd $ORACLE_HOME/rdbms/admin
    3. $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -e -b utlrp -d $ORACLE_HOME/rdbms/admin utlrp.sql
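After datapatch completes, the APPLY status in dba_registry_sqlpatch should be SUCCESS, as noted above. A minimal sketch that only prepares the verification query (the column list is an illustrative choice, not exhaustive):

```shell
#!/bin/sh
# Prepares the dba_registry_sqlpatch verification query described above;
# illustrative only, nothing connects to a database here.
verify_sql() {
    printf '%s\n' \
        "select patch_id, action, status, action_time" \
        "from   dba_registry_sqlpatch" \
        "order by action_time;"
}

# In a real session this would be piped into SQL*Plus, e.g.:
#   verify_sql | sqlplus -s / as sysdba
verify_sql
```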

2.1.8.3 Upgrade Oracle Recovery Manager Catalog

If you are using Oracle Recovery Manager with a recovery catalog, the catalog must be upgraded. Enter the following commands to upgrade it. The UPGRADE CATALOG command must be entered twice to confirm the upgrade.

1. $ rman catalog username/password@alias
2. RMAN> UPGRADE CATALOG;
3. RMAN> UPGRADE CATALOG;
4. RMAN> EXIT;
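Because UPGRADE CATALOG must be entered twice, the interaction scripts easily with an RMAN command file; a minimal sketch (the file location and the connect string in the comment are placeholder assumptions):

```shell
#!/bin/sh
# Generates an RMAN command file for the catalog upgrade; the duplicated
# UPGRADE CATALOG line is the confirmation the note above requires.
cat > /tmp/upgrade_catalog.rman <<'EOF'
UPGRADE CATALOG;
UPGRADE CATALOG;
EXIT;
EOF

# A real invocation would look like:
#   rman catalog username/password@alias cmdfile=/tmp/upgrade_catalog.rman
cat /tmp/upgrade_catalog.rman
```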

2.1.8.4 Bug Fixes That May Change an Existing Optimizer Execution Plan

At the successful conclusion of the patching event, none of the database bug fixes that may change an existing optimizer execution plan are enabled by default. The status of any such bug fixes that were already enabled before the patching event is preserved, but no new plan-changing bug fixes are activated automatically.

Details on this, including the commands to explicitly enable such bug fixes, are provided in My Oracle Support document KB148297.
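The enabled/disabled state of such plan-changing fixes can be inspected through the fix controls. A sketch that only prepares the query (v$system_fix_control is the standard view; the value = 0 filter, which shows disabled fixes, is an illustrative choice):

```shell
#!/bin/sh
# Prepares a query against v$system_fix_control, which lists optimizer fix
# controls and whether each is enabled (value = 1) or disabled (value = 0).
# Illustrative only, nothing connects to a database here.
fix_control_sql() {
    printf '%s\n' \
        "select bugno, value, description" \
        "from   v\$system_fix_control" \
        "where  value = 0;"
}

fix_control_sql
```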

2.1.9 Patch Post Installation Instructions for Databases Created or Upgraded After Installation of Patch in the Oracle Home

You must execute the steps in Load Modified SQL Files into the Database for any new or upgraded database.

2.1.10 Patch Deinstallation

Datapatch is run to complete the post-deinstall SQL deployment for the database subpatch. For further details about Datapatch, including known issues and workarounds to common problems, see My Oracle Support document KB148594 Datapatch: Database 12c or later Post Patch SQL Automation.

The patch rollback instructions differ based on the configuration of the Grid infrastructure and the Oracle RAC database homes. Rollback instructions for the Oracle RAC database homes and the Grid home together are listed below.

The most common configurations are listed as follows:

  • Case 1: Oracle RAC, where the Grid home and Oracle homes are not shared and Oracle ACFS file system is not configured

  • Case 2: Oracle RAC, where the Grid home is not shared, Oracle home is shared and Oracle ACFS may be used

  • Case 3: Single-instance homes not managed by Oracle Grid Infrastructure

For other configurations listed below, see My Oracle Support document KB627956:

  • Grid home is not shared, the Oracle home is not shared, Oracle ACFS may be used.

  • Rolling back from Oracle RAC database homes.

  • Rolling back from Grid home alone.

  • Rolling back from Grid home together with Oracle RAC One Node and clusterware-managed single-instance databases.

  • Rolling back the patch from Oracle Restart home.

  • Rolling back the patch from a software only Grid home installation or before the Grid home is configured.

Roll Back the Oracle RAC Database Homes and Grid Together

  • Case 1: Oracle RAC, where the Grid home and Oracle homes are not shared and Oracle ACFS file system is not configured.

    As root user, execute the following command on each node of the cluster.

    # <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936

    If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.

  • Case 2: Oracle RAC, where the Grid home is not shared, Oracle home is shared and Oracle ACFS may be used.

    1. From the Oracle home, make sure to stop the Oracle RAC databases running on all nodes. As the Oracle home owner execute:

      $ <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
    2. On the first node, unmount the Oracle ACFS file systems. See My Oracle Support document KB86783 for unmounting Oracle ACFS file systems.

    3. On the first node, roll back the patch from the Grid home using the opatchauto command. As root user, execute the following command:

      # <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936 -oh <GI_HOME>
    4. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.

    5. On the first node, remount Oracle ACFS file systems. See My Oracle Support document KB86783 for mounting Oracle ACFS file systems.

    6. On the first node, roll back the patch from the Oracle home using the opatchauto command. Because the Oracle home is a shared Oracle ACFS home, this operation rolls the patch back across the cluster. Note that a USM-only patch cannot be applied to an Oracle home. As root user, execute the following command:

      # <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936 -oh <ORACLE_HOME>
    7. On the first node only, restart the Oracle instance, which you have previously stopped in Step 1. As the Oracle home owner execute:

      $ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    8. On the second (next) node, unmount the Oracle ACFS file systems. See My Oracle Support document KB86783 for unmounting Oracle ACFS file systems.

    9. On the second node, roll back the patch from the Grid home using the opatchauto command. As root user, execute the following command:

      # <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/39036936 -oh <GI_HOME>
    10. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.

    11. On the second node, note that the opatchauto command in Step 9 restarts the stack automatically; no separate restart is required.

    12. On the second node, remount Oracle ACFS file systems. See My Oracle Support document KB86783 for mounting Oracle ACFS file systems.

    13. On the second node only, restart the Oracle instance, which you have previously stopped in Step 1. As the Oracle home owner execute:

      $ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    14. Repeat Steps 8 through 13 for all remaining nodes of the cluster.

  • Case 3: Single-instance homes not managed by Oracle Grid Infrastructure

    Follow these steps:

    1. Shut down all instances and listeners associated with the Oracle home that you are updating. For more information, see Oracle Database Administrator's Guide.

    2. Run the OPatch utility specifying the rollback argument as follows.

      opatch rollback -id 39034528
    3. If there are errors, refer to Known Issues.

2.1.11 Patch Post Deinstallation Instructions

After deinstalling the patch, perform the following actions.

2.1.11.1 Run the Datapatch Utility

Perform the following steps:

  1. For each separate Oracle database running on the Oracle home being patched, run the datapatch utility as described in Table 1-4. If this is Oracle RAC, run datapatch on only one instance.

    Table 1-4 Steps to Run the Datapatch Utility for Non-CDB or Non-PDB Database Versus Multitenant (CDB/PDB) Oracle Database

    Non-CDB or Non-PDB Database:

    1. sqlplus /nolog
    2. SQL> Connect / as sysdba
    3. SQL> startup
    4. SQL> quit
    5. cd $ORACLE_HOME/OPatch
    6. ./datapatch -sanity_checks (optional)
    7. ./datapatch -verbose

    Multitenant (CDB/PDB) Oracle Database:

    1. sqlplus /nolog
    2. SQL> Connect / as sysdba
    3. SQL> startup
    4. SQL> alter pluggable database all open; (see Footnote 4)
    5. SQL> quit
    6. cd $ORACLE_HOME/OPatch
    7. ./datapatch -sanity_checks (optional)
    8. ./datapatch -verbose

    • Footnote 4: It is recommended that the Post Install step be run on all pluggable databases; however, the command SQL> alter pluggable database PDB_NAME open could be substituted to open only certain PDBs in the multitenant database. Doing so results in the Post Install step being run only on the CDB and the opened PDBs. To update a pluggable database at a later date (skipped or newly plugged in), open it using the alter pluggable database command mentioned previously and rerun the datapatch utility.

      • See My Oracle Support document KB150931 Multitenant Unplug/Plug Best Practices for more information about the procedure for unplugging/plugging with different patch releases (in both directions).

    • Recommended: The datapatch -sanity_checks optional step runs a series of environment and database checks to validate if conditions are optimal for patching. Results are shown on screen with severity and potential actions to take.

      • For more information, refer to My Oracle Support document KB123801 Datapatch User Guide. Oracle highly recommends that you perform this step.

    • The datapatch utility runs the necessary rollback scripts. An entry is added to the dba_registry_sqlpatch view reflecting the patch application. In the dba_registry_sqlpatch view, verify the Status for the ROLLBACK is SUCCESS.

      • For any other status, refer to My Oracle Support document KA1374 Troubleshooting Assistant: 12c Datapatch Issues for additional information and actions.
  2. Check the following log files in $ORACLE_HOME/sqlpatch/39034528/ for errors:

    39034528_rollback_<database SID>_<CDB name>_<timestamp>.log

    where <database SID> is the database SID, <CDB name> is the name of the multitenant container database, and <timestamp> is of the form YYYYMMDD_HH_MM_SS.

  3. Any (pluggable) database that has invalid objects after the execution of datapatch should have catcon.pl run to revalidate those objects. For example:
    1. export PATH=$PATH:$ORACLE_HOME/bin
    2. cd $ORACLE_HOME/rdbms/admin
    3. $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -e -b utlrp -d $ORACLE_HOME/rdbms/admin utlrp.sql

2.1.11.2 Upgrade Oracle Recovery Manager Catalog

If you are using Oracle Recovery Manager with a recovery catalog, the catalog must be upgraded. Enter the following commands to upgrade it. The UPGRADE CATALOG command must be entered twice to confirm the upgrade.

1. $ rman catalog username/password@alias
2. RMAN> UPGRADE CATALOG;
3. RMAN> UPGRADE CATALOG;
4. RMAN> EXIT;

3.1 Known Issues

For issues documented after the release of this patch, see My Oracle Support document KB869205 Oracle Database 19c RU Apr 2026 Known Issues.

4.1 References

The following documents are references for this patch:

Document KB869205 Oracle Database 19c RU Apr 2026 Known Issues

Document KA19 19c Database Upgrade - Self Guided Assistance with Best Practices

Document KB627956 Supplemental Readme - Grid Infrastructure Release Update 12.2.0.1.x / 18c / 19c

Document KB86783 How to Mount or Unmount ACFS While Applying GI Patches?

Document KB148594 Datapatch: Database 12c or later Post Patch SQL Automation

Document KB590650 Impact of Java SE Security Vulnerabilities on Oracle Database and Fusion Middleware Products

Document KB718940 Grid Infrastructure 19 Release Updates and Revisions Bugs Fixed Lists

Document KB141463 genclntsh: Could not locate $ORACLE_HOME/network/admin/shrept.lst

Oracle OPatch User's Guide

5.1 Manual Steps for Applying or Rolling Back the Patch

See My Oracle Support document KB627956 for cases where opatchauto cannot be used.

6.1 Bugs Fixed by This Patch

See My Oracle Support document KB718940 for the list of bugs fixed in this patch.
