Recently I have been focusing on vulnerabilities found by a security scan. Most of them are related to the web server, especially Tomcat. Here I will record how to verify and fix these vulnerabilities.

Expired/Untrusted/Self-signed Certificate:

Verify:

$ openssl s_client -connect localhost:7004
CONNECTED(00000003)
depth=0 C = US, ST = VA, L = VVV, O = xxx, OU = yyy, CN = zzz.prod.xxx.edu
-->> verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = VA, L = VVV, O = xxx, OU = yyy, CN = zzz.prod.xxx.edu
-->> verify error:num=10:certificate has expired
-->> notAfter=Mar 29 12:45:03 2017 GMT
verify return:1
depth=0 C = US, ST = VA, L = VVV, O = xxx, OU = yyy, CN = zzz.prod.xxx.edu
notAfter=Mar 29 12:45:03 2017 GMT
verify return:1

………

No client certificate CA names sent
---
SSL handshake has read 1052 bytes and written 589 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.2
Cipher : AES128-SHA
Session-ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Session-ID-ctx:
Master-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1544020132
Timeout : 300 (sec)
-->> Verify return code: 10 (certificate has expired)
---
read:errno=0

Fix:

Generate a certificate request; the password of the old keystore is needed.
Normally, the password is in Tomcat's server.xml. If it cannot be found, create a new keystore instead.

$ keytool -genkey -alias tara -keyalg RSA -keysize 2048 -keystore ./.keystore
$ keytool -certreq -alias tara -file ./tara.csr -keystore ./.keystore -ext SAN=dns:tara.pprd.yyy.edu

Send the contents of tara.csr to the CA for signing.

After the CA sends back the signed certificate, download tara.cer and import it:

$ keytool -import -alias tara -file tara.cer -keystore ./.keystore

If you recreated the keystore, the new keystore name/password must be updated in server.xml.
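For reference, a minimal HTTPS connector in server.xml looks roughly like this (a sketch only; the port, keystore path, alias, and password are placeholders, not values from this server):

<Connector port="7004" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/.keystore" keystorePass="changeit"
           keyAlias="tara" sslProtocol="TLS" />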

Weak Cipher Suites

Verify:
Run it from a Windows client:

>nmap -sV --script ssl-enum-ciphers -p 7004 128.82.96.201
Starting Nmap 7.70 ( https://nmap.org ) at 2018-12-05 09:26 Eastern Standard Time
Nmap scan report for homer.prod.odu.edu (128.82.96.201)
Host is up (0.0010s latency).

……...

| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack
|_  least strength: C

……

Or use openssl: with -cipher MEDIUM the handshake should fail; only HIGH should be accepted.

$ openssl s_client -connect 127.0.0.1:4903 -cipher MEDIUM
CONNECTED(00000003)
139781540525968:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:769:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 171 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1544157174
Timeout : 300 (sec)
Verify return code: 0 (ok)
---

Or, we can use the following script:

#!/usr/bin/env bash

# OpenSSL requires the port number.
SERVER=$1
DELAY=1
ciphers=$(openssl ciphers 'ALL:eNULL' | sed -e 's/:/ /g')

echo Obtaining cipher list from $(openssl version).

for cipher in $ciphers
do
    echo -n "Testing $cipher..."
    result=$(echo -n | openssl s_client -cipher "$cipher" -connect $SERVER 2>&1)
    if [[ "$result" =~ ":error:" ]] ; then
        error=$(echo -n $result | cut -d':' -f6)
        echo "NO ($error)"
    else
        if [[ "$result" =~ "Cipher is ${cipher}" || "$result" =~ "Cipher :" ]] ; then
            echo YES
        else
            echo "UNKNOWN RESPONSE"
            echo $result
        fi
    fi
    sleep $DELAY
done

Unfortunately, this only tests the cipher suites known to the local openssl build, so not every suite the server accepts will necessarily be listed.

Fix:

Remove the mentioned cipher suites from server.xml for the specific connector, as sketched below.
Do it on a test machine first to make sure clients can still access the web server.
The cipher suites Tomcat supports are determined by Java, so it is better to use an up-to-date JDK.
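One way to do this, as a sketch (attribute name per the Tomcat connector documentation; the suite list below is only an example, keep the ones your clients actually need):

<Connector port="7004" SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/.keystore" keystorePass="changeit"
           ciphers="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" />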

TLS 1.0 / 1.1 / SSLv3 are supported

Verify:
If the protocol is not supported, you will see output like the following; otherwise, the output looks like the verification in “Expired/Untrusted/Self-signed Certificate” above.

$ openssl s_client -connect localhost:8441 -tls1_1
CONNECTED(00000003)
140204650911560:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:339:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 7 bytes
---
New, (NONE), Cipher is (NONE)
-->> Secure Renegotiation IS NOT supported <<--
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1544026583
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---

Fix:
Specify sslProtocol="TLSv1.2" for the connector in Tomcat's server.xml (on JSSE connectors, the sslEnabledProtocols attribute is what actually restricts the accepted protocol versions).
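For example (a sketch with placeholder paths):

<Connector port="8441" SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/.keystore" keystorePass="changeit"
           sslProtocol="TLS" sslEnabledProtocols="TLSv1.2" />

After restarting Tomcat, re-run the openssl checks above with -ssl3, -tls1, and -tls1_1 to confirm that each handshake now fails.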


We have a test ndbcluster; the whole cluster failed every night between 11:00 pm and 1:00 am. The data nodes then restarted automatically, so no one knew about it until last week, when one of the two data nodes failed to start successfully.

When I checked the cluster log, I found that management node 6 had disconnected from node 1 (the other management node), node 2 (data node), and node 3 (data node).

2018-11-16 23:11:30 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 3 Disconnected
2018-11-16 23:11:30 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 1 Disconnected
2018-11-16 23:11:30 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 2 Disconnected
2018-11-16 23:12:36 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 1 Disconnected
2018-11-18 00:23:49 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 2 Disconnected
2018-11-18 00:23:49 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 1 Disconnected
2018-11-18 00:24:16 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 6: Node 1 Disconnected

The data nodes and management nodes share machines: Node 1 (M) and Node 2 (D) on one box, Node 3 (D) and Node 6 (M) on the other.
I don't know why the former DBA set it up this way; it is not safe for ndbcluster, because one of the management nodes acts as the arbitrator.
If the machine hosting the arbitrator goes down, the whole cluster shuts down to prevent data corruption.

Interestingly, Node 6 also lost its connection to Node 3, which is on the same machine.
Although I suspected a network-related problem, it was hard to bring this to the network engineers without evidence.

So, I deployed a script to ping each machine and log the results.
In the log, I can see the packet loss; at the same times, the data nodes report missed heartbeats.
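The monitor was roughly like the following (a hypothetical reconstruction, not the original script; the peer IP, log path, and interval are examples chosen to match the log format parsed below):

#!/usr/bin/env bash
# Ping each peer periodically and append a timestamp plus the ping
# summary lines to the log. The awk one-liner below reads exactly
# these three lines (date, "statistics", "packet loss") per sample.
LOG=/usr1/dba/appadm11/xxx.log
PEERS="192.168.92.81"
while true; do
    for p in $PEERS; do
        date >> "$LOG"                  # e.g. "Fri Nov 16 23:11:31 EST 2018"
        ping -c 3 -W 1 "$p" 2>&1 |
            grep -E "statistics|packet loss" >> "$LOG"
    done
    sleep 10
done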

/usr1/dba/appadm11>>cat xxx.log|grep -E "EST 2018|loss|statistics"|awk '{
> if($0 ~ /EST 2018/){
>   TIME=$2" "$3" "$4
>   getline;
>   IP=$2;
>   getline;
>   LOSS=$6;
>   printf "%s %s %s\n",TIME,IP,LOSS;
> }
> }'|grep -v " 0%"
Nov 16 23:11:31 192.168.92.81 100%
Nov 16 23:11:45 192.168.92.81 100%
Nov 16 23:11:59 192.168.92.81 100%
Nov 16 23:12:57 192.168.92.81 33%
Nov 17 00:27:17 192.168.92.81 33%
Nov 17 00:45:32 192.168.92.81 33%
Nov 17 00:45:37 192.168.92.81 100%
Nov 17 00:45:51 192.168.92.81 100%
Nov 17 00:46:05 192.168.92.81 100%
Nov 18 00:21:41 192.168.92.81 33%
Nov 18 00:23:10 192.168.92.81 100%
Nov 18 00:23:24 192.168.92.81 100%
Nov 18 00:23:38 192.168.92.81 100%
Nov 19 00:23:18 192.168.92.81 33%
Nov 19 00:23:23 192.168.92.81 100%
Nov 19 00:23:37 192.168.92.81 100%
Nov 19 00:23:51 192.168.92.81 100%
-- cluster log
2018-11-16 23:12:05 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 1 missed heartbeat 2
2018-11-16 23:12:06 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 1 missed heartbeat 3
2018-11-16 23:12:08 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 1 missed heartbeat 4
2018-11-16 23:12:08 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 3: Node 1 declared dead due to missed heartbeat
2018-11-18 00:23:59 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 1 missed heartbeat 2
2018-11-18 00:24:00 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 1 missed heartbeat 3
2018-11-18 00:24:02 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 1 missed heartbeat 4
2018-11-18 00:24:02 [/l01/mysql/cluster_data/6.log] ALERT    -- Node 3: Node 1 declared dead due to missed heartbeat
2018-11-19 00:24:02 [/l01/mysql/cluster_data/6.log] WARNING  -- Node 3: Node 4 missed heartbeat 2 

With this evidence in hand, we talked to the network engineer, and he found a nightly snapshot job that could hang the whole machine for several seconds.
After removing this job, the cluster has been running smoothly.


MySQL doesn't offer a documented, easy way to check the usage of NDB resources such as tables, indexes, scans, etc.

Fortunately, there is an internal table in the ndbinfo database, named ndb$pools, that can be used to check the current and historical high usage of most of these resources.

Below, I list the description of each parameter and the name of the resource it maps to.

Use the following statements to check:

ndb_mgm> 2 DUMP 2514 0
select count(*) from ndbinfo.cluster_operations;
select count(distinct fq_name) from ndbinfo.memory_per_fragment where type = 'Ordered index';
select * from ndbinfo.ndb$pools where pool_name in(
'Attribute Record',
'Index Op',
'TC Scan Record',
'Scan Fragment',
'SubOp',
'Fired Trigger',
'Subscriber',
'Subscription',
'Table Record',
'Trigger Record',
'Index') and block_instance=0;
show engine ndbcluster status;

MaxNoOfAttributes

  • Default: 1000
  • Description: Tables, secondary unique hash indexes, ordered indexes, and BLOBs all consume attributes.
  • Usage: Attribute Record

MaxNoOfConcurrentIndexOperations

  • Default: 8192
  • Description: Used by queries going through a unique hash index. The record is allocated only while executing that part of a query, and is released as soon as that part has been executed.
  • Usage: Index Op

MaxNoOfConcurrentOperations

  • Default: 32768
  • Description: In a transaction, the DML on each row needs an entry on each replica. So, updating 4 rows on data node 1 with 2 replicas needs 8 entries.
  • Usage: ndbinfo.cluster_operations for the current usage

MaxNoOfConcurrentScans

  • Default: 256
  • Description: All operations other than primary key or unique key lookups are scan operations. In addition, a scan record is also required for full table scans. It is recommended to set this to 500, which is the upper limit.
  • Usage: TC Scan Record

MaxNoOfLocalScans

  • Default: 0 = MaxNoOfConcurrentScans * number_of_data_nodes * 4 + 2
  • Description: In contrast to the scan record, which takes care of scans accessing the entire cluster, the local scan record is responsible for accessing data on its own data node.
  • Usage: Scan Fragment

MaxNoOfConcurrentSubOperations

  • Default: 256
  • Description: Maximum number of concurrent subscriber operations.
  • Usage: SubOp
  • mysql> show engine ndbcluster status;
    | ndbcluster | NdbOperation          | created=332, free=332, sizeof=944 
    

MaxNoOfConcurrentTransactions

  • Default: 4096
  • Description:
    • It doesn't represent the number of transactions; it represents the number of transaction records. A transaction record is a memory buffer used by the Transaction Coordinator (TC). One TC on a certain data node takes care of each transaction throughout its lifecycle, and every data node has a TC. Transaction records are used for coordinating transactions between the data nodes.
    • Each active transaction needs a record, and each table the thread accesses in the transaction needs a record.
    • The data nodes do cache transaction objects, so if a (mysqld) connection has used more transaction objects than connections usually require, you can reconnect to release the cached objects.
    • It is recommended to set it to max_connections * (average_number_of_tables_accessed_per_transaction + 1) * number_of_sql_nodes / number_of_node_groups; for example, 100 connections touching 4 tables per transaction through 2 SQL nodes with 1 node group gives 100 * (4 + 1) * 2 / 1 = 1000.
  • Usage:
mysql> show engine ndbcluster status;
| ndbcluster | NdbTransaction        | created=5, free=0, sizeof=368   

ndb_mgm> 2 DUMP 2514 0
-- log on the data node.
Start of ApiConnectRec summary (4096 total allocated)
Api node 4 connect records seized : 6 stateless : 0 stateful : 1 scan : 0
Api node 5 connect records seized : 0 stateless : 0 stateful : 0 scan : 0

MaxNoOfFiredTriggers

  • Default: 4000
  • Description: A record is created when an operation affects a unique hash index. Inserting or deleting a record in a table with unique hash indexes, or updating a column that is part of a unique hash index, fires an insert or a delete in the index table.
  • Usage: Fired Trigger

MaxNoOfOrderedIndexes

  • Default: 128
  • Description: Total number of ordered indexes that can be defined in the system.
  • Usage:

select count(distinct fq_name) from ndbinfo.memory_per_fragment where type = 'Ordered index';

MaxNoOfSubscribers

  • Default: 0 = 2 * MaxNoOfTables
  • Description: For NDB replication only. There is a replication subscription for each table in each direction, so the default value is enough for two nodes with bidirectional replication. For a circular replication setup using three nodes, this needs to be 3 * MaxNoOfTables.
  • Usage: Subscriber

MaxNoOfSubscriptions

  • Default: 0 = MaxNoOfTables
  • Description: Each NDB table in an NDB Cluster requires a subscription in the NDB kernel.
  • Usage: Subscription

MaxNoOfTables

  • Default: 128. This is not the exact number of objects you can have; the actual limit is calculated from this value.
  • Description: Every object, including tables, secondary unique hash indexes, ordered indexes, and BLOBs, consumes one entry of this resource.
  • Usage: Table Record. The exact limit is also shown there.

MaxNoOfTriggers

  • Default: 768
  • Description: Total number of triggers that can be defined in the system. The number of required triggers is adjusted internally based on the following options: MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes.
  • Usage: Trigger Record

MaxNoOfUniqueHashIndexes

  • Default: 64
  • Description: Total number of unique hash indexes that can be defined in the system. However, this parameter does not seem to be enforced; it does not actually restrict the number of unique hash indexes in the system.
  • Usage: Index

What is DST?
DST is the abbreviation of Daylight Saving Time.

DST and Timezone in Oracle
There is nothing special about DST/time zones when we use the numeric offset format, for example -5:00 for New York and +8:00 for Beijing; the application decides which offset applies to a particular city/location.
But we can also use a readable area/city name, for example Europe/Moscow, which (at the time of this example) is the same as +4:00. This is called a NAMED time zone.

SQL> with t1(tm) as(
  2    select '2013-11-08T10:11:31+02:00' from dual
  3  )
  4  select cast(to_timestamp_tz(tm, 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM')
  5           at time zone '+4:00' as date)   as this_way
  6       , cast(to_timestamp_tz(tm, 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM')
  7           at time zone 'Europe/Moscow' as date) as or_this_way
  8    from t1
  9  ;

THIS_WAY            OR_THIS_WAY
------------------- -------------------
2013-11-08 12:11:31 2013-11-08 12:11:31

If we are using NAMED time zones, we face a problem: the time zone of a city, or the start/end dates of DST for a city/area, may change.

For example, we can see this from the DSTv31 update:

  • Northern Cyprus has decided to resume EU rules starting 2017-10-29.
  • Fiji ends DST 2018-01-14 instead of the 2018-01-21 previously predicted.
  • Namibia will switch from +01 with DST to +02 all year on 2017-09-03 at 02:00.
  • Sudan will switch from +03 to +02 on 2017-11-01. South Sudan is not switching.
  • Tonga cancels DST and will not adjust its clocks on 2017-11-05.
  • Turks & Caicos will switch from -04 all year to -05 with US DST on 2018-03-11 at 03:00.

So, we need to apply DST patches to keep up with these unpredictable changes.

DST Version
Because named time zone definitions may change, Oracle provides DST patches to keep them current.

To check the current version for RDBMS:

SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
        31

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  2  FROM DATABASE_PROPERTIES
  3  WHERE PROPERTY_NAME LIKE 'DST_%'
  4  ORDER BY PROPERTY_NAME;   

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         31
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE

To check the current DST version of the JVM, see:

Testing your Oracle JVM Time Zone Setup (Doc ID 416860.1)

More information, including the patch list, can be found in:

Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1)

Do I need to apply a DST patch?
That depends on whether you are using NAMED time zones.
In an Oracle database, there are two places related to NAMED time zones.
One is tables or views with a “* WITH TIME ZONE” data type.

For example:

SQL> create table ctais2.test1(id number, tdata TIMESTAMP WITH TIME ZONE);
SQL> insert into ctais2.test1 values(1,to_timestamp_tz('2013-11-08T10:11:31 Europe/Moscow', 'yyyy-mm-dd"T"hh24:mi:ss TZR TZD'));

SQL> select tdata, to_char(tdata,'TZR') from ctais2.test1
  2  ;

TDATA                                                        TO_CHAR(TDATA,'TZR')
------------------------------------------------------------ --------------------------------
08-NOV-13 10.11.31.000000 AM EUROPE/MOSCOW                   EUROPE/MOSCOW

The other is SQL like the example in the “DST and Timezone in Oracle” part above: simple conversions done in SQL without any table access.

For the first situation, we can use the script cre_tz_views.sql to figure out whether any table or view uses a NAMED time zone.

SCRIPT: cre_tz_views.sql - Assess Time Zone usage in a Database (Doc ID 412971.1)
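As a rough manual cross-check (a sketch only; the MOS script is far more thorough and also covers views):

SELECT owner, table_name, column_name, data_type
  FROM dba_tab_columns
 WHERE data_type LIKE 'TIMESTAMP%WITH TIME ZONE';

Note that only TIMESTAMP WITH TIME ZONE columns can store region names; TIMESTAMP WITH LOCAL TIME ZONE stores values normalized to the database time zone, with no region.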

For the second situation, we may find some clues in the shared pool, but that is not a 100% reliable way to determine whether the application uses NAMED time zones. The best way is to ask the vendor.
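Still, one crude heuristic looks like this (a sketch; it only sees statements currently cached in the shared pool and can produce false positives and negatives):

SELECT sql_id, SUBSTR(sql_text, 1, 80) AS snippet
  FROM v$sql
 WHERE UPPER(sql_text) LIKE '%AT TIME ZONE%'
    OR UPPER(sql_text) LIKE '%TZR%';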

If you are using NAMED time zones, you should consider upgrading if you want correct times for the affected time zones.

Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1)

  • If you use only US named time zones and no dates beyond 2020, then DST versions 4 to 8 are enough.
  • If you are on DSTv9 or higher with US named time zones, you will have correct results up to the year 2040.
  • Otherwise, after upgrading the database from an older release to 11.2.0.x or 12.1.0.x, we need to upgrade the RDBMS DST version to the latest. The simplest way is to run the script in Doc ID 1585343.1, which does it automatically.
  • If the older Oracle RDBMS version (11.2.0.x, 11.1.0.x, 10.2.0.x, etc.) is using an RDBMS DST version higher than DSTv18, you need to install the SAME RDBMS DST patch on the 12.1.0.1 side before the Oracle RDBMS version upgrade. Note the wording: “the same RDBMS DST version”, not a higher/newer DST version.

Default DST version for each database release

  • 8.1.7.4 and lower: has no timezone information
  • 9.0.1 up to 9.2.0.8: includes and uses RDBMS DSTv1, includes and uses OJVM DSTv1
  • 10.1 up to 10.2.0.2: includes and uses RDBMS DSTv2, includes and uses OJVM DSTv2
  • 10.2.0.3: includes and uses RDBMS DSTv3, includes and uses OJVM DSTv3
  • 10.2.0.4, 10.2.0.5 , 11.1.0.6, 11.1.0.7: includes and uses RDBMS DSTv4, includes and uses OJVM DSTv4
  • 11.2.0.1: includes all RDBMS DST updates from DSTv1 up to DSTv11, the default RDBMS DST version used is DSTv11. includes and uses OJVM DSTv4
  • 11.2.0.2: includes all RDBMS DST updates from DSTv1 up to DSTv14, the default RDBMS DST version used is DSTv14. includes and uses OJVM DSTv14
  • 11.2.0.3: includes all RDBMS DST updates from DSTv1 up to DSTv14, the default RDBMS DST version used is DSTv14. includes and uses OJVM DSTv16
  • 11.2.0.4: includes all RDBMS DST updates from DSTv1 up to DSTv14, the default RDBMS DST version used is DSTv14. includes and uses OJVM DSTv20
  • 12.1.0.1: includes all RDBMS DST updates from DSTv1 up to DSTv18, the default RDBMS DST version used is DSTv18. includes and uses OJVM DSTv19
  • 12.1.0.2: includes all RDBMS DST updates from DSTv1 up to DSTv18, the default RDBMS DST version used is DSTv18. includes and uses OJVM DSTv22
  • 12.2.0.1: includes all RDBMS DST updates from DSTv1 up to DSTv26, the default RDBMS DST version used is DSTv26. includes and uses OJVM DSTv26

How to upgrade DST?
There are two DST patches: one for the RDBMS and one for the OJVM.
If you use time zones in Java stored procedures, you need the OJVM DST patch; otherwise, you can skip it and apply only the RDBMS DST patch.

Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1)

  • Pre-requisite checklist
  • DST patches list
  • Related Document for OEM, ERP
  • Updated information
  • DST patches are cumulative.
  • DST patches cannot be downgraded.

Before an upgrade, we should check the current DST version. If it is higher than the default DST version of the new Oracle release, patch the new Oracle home to the same DST version first. If it is lower, the DST version will not change automatically after the upgrade, and we have to upgrade it manually.

After a database upgrade, the DST version is not updated automatically in the data dictionary. If you are using DBUA to upgrade the data dictionary, you can select the “Upgrade Timezone Data” option; otherwise, the upgrade has to be done with DBMS_DST (see the notes below and the sketch that follows):

  • Updating the RDBMS DST version in 11g Release 2 (11.2.0.1 and up) using DBMS_DST (Doc ID 977512.1)
  • Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 and up) using DBMS_DST (Doc ID 1509653.1)
  • Scripts to automatically update the RDBMS DST (timezone) version in an 11gR2 or 12c database . (Doc ID 1585343.1)
  • For the releases from 12.2.0.x, the timezone upgrade scripts are included in the target ORACLE_HOME under rdbms/admin directory. Refer to Database Globalization Support Guide in "Upgrading the Time Zone File and Timestamp with Time Zone Data" Section.
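For the manual route, the core DBMS_DST flow from Doc ID 977512.1 looks roughly like this (heavily abbreviated sketch: the note's prepare window, sanity checks, and the handling of affected TSTZ tables are omitted; 18 is just the example target version):

-- the new time zone files must already be present in the ORACLE_HOME
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
EXEC DBMS_DST.BEGIN_UPGRADE(18);
SHUTDOWN IMMEDIATE
STARTUP
SET SERVEROUTPUT ON
DECLARE
  l_failures PLS_INTEGER;
BEGIN
  DBMS_DST.UPGRADE_DATABASE(l_failures);  -- upgrade TSTZ data in user tables
  DBMS_OUTPUT.PUT_LINE('upgrade failures: ' || l_failures);
  DBMS_DST.END_UPGRADE(l_failures);
  DBMS_OUTPUT.PUT_LINE('end failures: ' || l_failures);
END;
/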

For example, after upgrading database O11204 from 11.2.0.4 to 12.1.0.2, we can see that the time zone version is not updated to v18; it is still v14:

SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
        14

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  2  FROM DATABASE_PROPERTIES
  3  WHERE PROPERTY_NAME LIKE 'DST_%'
  4  ORDER BY PROPERTY_NAME;

PROPERTY_NAME
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
DST_PRIMARY_TT_VERSION
14

DST_SECONDARY_TT_VERSION
0

DST_UPGRADE_STATE
NONE

So, if we try to get a time in “Pacific/Chuuk”, which was introduced in v15, an error is reported:

SQL> select cast(to_timestamp_tz('2013-11-08T10:11:31+02:00', 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM')
  2           at time zone 'Pacific/Chuuk' as date) as t
  3    from dual;
select cast(to_timestamp_tz('2013-11-08T10:11:31+02:00', 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM')
*
ERROR at line 1:
ORA-01882: timezone region not found

And if we try to get a time after 2011 in Egypt, whose DST policy changed in v16, we get a different result than in 18c, whose DST version is v31:

-- after update to 12.1.0.2
SQL> select cast(to_timestamp_tz('2013-05-08T10:11:31+02:00', 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM')
  2           at time zone 'Egypt' as date) as t
  3    from dual;

T
-------------------
2013-05-08 11:11:31

-- in 18c
SQL> select cast(to_timestamp_tz('2013-05-08T10:11:31+02:00', 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM')
  2           at time zone 'Egypt' as date) as t
  3    from dual;

T
-------------------
2013-05-08 10:11:31

In this situation, after upgrading to 12.1.0.2, if we also expect to get correct times, we should update the RDBMS / OJVM DST version to the default version of 12.1.0.2 (v18). This process cannot be done in a rolling fashion; downtime is needed.

I am using the script from:

Scripts to automatically update the RDBMS DST (timezone) version in an 11gR2 or 12c database . (Doc ID 1585343.1)

If you cannot upgrade the database to the latest release but still want up-to-date DST information, you can apply the DST patches. There is no downtime for just patching the ORACLE_HOME, but upgrading the DST version in the database cannot be done in a rolling fashion; downtime is needed.

The versions and patch IDs can be found here:

Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1)

  • Version 1 N/A N/A
  • Version 2 patch 3171059 N/A
  • Version 3 patch 4689959 N/A
  • Version 4 - tzdata2006g update - patch 5632264 patch 5865568
  • Version 5 patch 5880630 N/A
  • Version 6 - tzdata2007f update - patch 6113507 patch 6113568
  • Version 7 - tzdata2007k update - patch 6672979 patch 6708377
  • Version 8 N/A N/A
  • Version 9 - tzdata2008f update - patch 7181943 patch 7449057
  • Version 10 - tzdata2008i update - patch 7580744 patch 7708501
  • Version 11 - tzdata2009g update - patch 8524113 * patch 8603182
  • Version 12 N/A N/A
  • Version 13 - tzdata2009s update - patch 9146268 * patch 9197935
  • Version 14 - tzdata2010j update - patch 9742718 * patch 9751299
  • Version 15 - tzdata2010o update - patch 10272702 * patch 10406470
  • Version 16 - tzdata2011g update - patch 12320006 * patch 12565410
  • Version 17 - tzdata2011j update - patch 12949905 * patch 12949919 ( tzdata2011i for OJVM )
  • Version 18 - tzdata2012c update - patch 13417321 * patch 14112098
  • Version 19 - tzdata2012j update - patch 15897859 * patch 15897884
  • Version 20 - tzdata2013c update - patch 16799735 * patch 16799766
  • Version 21 - tzdata2013e update - patch 17432124 * patch 17443054
  • Version 22 - tzdata2014c update - patch 18759211 * patch 18759260
  • Version 23 - tzdata2014f update - patch 19396455 * patch 19397646
  • Version 24 - tzdata2015d update - patch 20875898 * patch 20875943
  • Version 25 - tzdata2015g update - patch 22037014 * patch 22037067
  • Version 26 - tzdata2016d update - patch 22873635 * patch 22873666
  • Version 27 - tzdata2016f update - patch 23614158 * patch 23614160
  • Version 28 - tzdata2016g update - patch 24701840 * patch 24701882
  • Version 29 - tzdata2016j update - patch 25173124 * patch 25173172
  • Version 30 - tzdata2017b update - patch 25881255 * patch 25881271
  • Version 31 - tzdata2017c update - patch 27015449 * patch 27015468

Oracle ASM Filter Driver (AFD) is a new way to bind disks to a permanent path, replacing ASMLib or udev rules. In addition, it protects the disk devices against accidental overwrites (as demonstrated below).

ASMCMD> afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'database01'

ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRSDG1                      ENABLED   /dev/asm-sdd
CRSDG2                      ENABLED   /dev/asm-sdc
CRSDG3                      ENABLED   /dev/asm-sdb
MGMT1                       ENABLED   /dev/asm-sdj

Enable:
The simplest way is to choose it during the installation of the Grid Infrastructure.
We can also enable it afterwards using afd_configure in asmcmd (with CRS stopped), as sketched below.
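A rough sketch of enabling it on an existing installation (run as root; the exact sequence varies by version, so check the documentation for your release first):

crsctl stop crs
asmcmd afd_configure
crsctl start crs
asmcmd afd_state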

Label a new disk:

/u01/app/12.2.0/grid/bin/asmcmd afd_label DAT01 /dev/sde

Label an existing disk:

/u01/app/12.2.0/grid/bin/asmcmd afd_label CRSDG1 /dev/sdd --migrate

Unlabel:

/u01/app/12.2.0/grid/bin/asmcmd afd_unlabel DATA1

Scan and list (like ASMLib):

asmcmd afd_lsdsk
asmcmd afd_scan

The status in v$asm_disk

1* select NAME, PATH, mount_status from v$asm_disk
SQL> /

NAME       PATH                 MOUNT_S
---------- -------------------- -------
           /dev/asm-sdj         IGNORED
           /dev/asm-sdc         IGNORED
           /dev/asm-sdd         IGNORED
           /dev/asm-sdb         IGNORED
CRSDG1     AFD:CRSDG1           CACHED
CRSDG2     AFD:CRSDG2           CACHED
MGMT1      AFD:MGMT1            CACHED
CRSDG3     AFD:CRSDG3           CACHED

8 rows selected.

Filter test -- try to overwrite the disk:

[root@database01 ~]# dd if=/dev/zero of=/dev/oracleafd/disks/CRSDG1 bs=128 count=8000
8000+0 records in
8000+0 records out
1024000 bytes (1.0 MB) copied, 0.0274736 s, 37.3 MB/s

-- read the disk back; the data appears to have been wiped
[root@database01 ~]# dd if=/dev/oracleafd/disks/CRSDG1 |od -xv|tail -n 10
2000+0 records in
2000+0 records out
1024000 bytes (1.0 MB) copied, 0.358082 s, 2.9 MB/s
3717560 0000 0000 0000 0000 0000 0000 0000 0000
3717600 0000 0000 0000 0000 0000 0000 0000 0000
3717620 0000 0000 0000 0000 0000 0000 0000 0000
3717640 0000 0000 0000 0000 0000 0000 0000 0000
3717660 0000 0000 0000 0000 0000 0000 0000 0000
3717700 0000 0000 0000 0000 0000 0000 0000 0000
3717720 0000 0000 0000 0000 0000 0000 0000 0000
3717740 0000 0000 0000 0000 0000 0000 0000 0000
3717760 0000 0000 0000 0000 0000 0000 0000 0000
3720000

-- after a reboot, the original data remains; the writes were filtered
[root@database01 ~]# dd if=/dev/oracleafd/disks/CRSDG1 |od -xv|tail -n 10
0+1 records in
0+1 records out
13 bytes (13 B) copied, 3.3587e-05 s, 387 kB/s
0000000 642f 7665 612f 6d73 732d 6464 000a
0000015

-- the disk is still a voting disk member
[grid@database01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                  File Name     Disk group
--  -----    -----------------                  ---------     ----------
1. ONLINE 88f23183af1f4fdbbf65f93a8b927f27 (AFD:CRSDG1) [CRSDG]
2. ONLINE c761b16740504fd0bfdc280f4cbef571 (AFD:CRSDG2) [CRSDG]
3. ONLINE 24e27d7bf82f4fa4bf8bd69dd3fde7e9 (AFD:CRSDG3) [CRSDG]
Located 3 voting disk(s).

The path of the disks:

[grid@database01 ~]$ ls -l /dev/oracleafd/disks
total 16
-rwxrwx--- 1 grid oinstall 13 Jun 11 22:54 CRSDG1
-rwxrwx--- 1 grid oinstall 13 Jun 11 22:54 CRSDG2
-rwxrwx--- 1 grid oinstall 13 Jun 11 22:54 CRSDG3
-rwxrwx--- 1 grid oinstall 13 Jun 11 22:54 MGMT1