Grid logging in 12.1.0.2

Most of the talk about Oracle's 12.1.0.2 release is about the In-Memory feature, but more has changed; for example, some essential things about logging in the Grid Infrastructure are different. Until now, logging for Grid Infrastructure components was done in $GI_HOME:

oracle@dm01db01(*gridinfra):/home/oracle> cd $ORACLE_HOME/log/`hostname -s`/
oracle@dm01db01(*gridinfra):/u01/app/12.1.0.2/grid/log/dm01db01>

There we have the main alert log for GI and several subdirectories in which the GI binaries write their logging information to a set of rotating logfiles. After upgrading an 11.2.0.4 cluster on Exadata, you will see that this alert log is empty or only contains a couple of lines, e.g. written at the point the GI stack was upgraded:

oracle@dm01db01(*gridinfra):/u01/app/12.1.0.2/grid/log/dm01db01> cat alertdm01db01.log
2014-08-21 11:59:35.455
[client(9458)]CRS-0036:An error occurred while attempting to open file "UNKNOWN".
2014-08-21 11:59:35.455
[client(9458)]CRS-0004:logging terminated for the process. log file: "UNKNOWN"

In 12.1.0.2 the old directories are still there, but they are not being used anymore, so where can we find the logs now? Prior to 12.1.0.2 all GI binaries were also writing directly to the alert.log file. Let's start by finding out where, for example, crsd.bin is writing to, by looking at its open file descriptors in the proc filesystem:

[root@dm01db01 ~]# cd /proc/`ps -C crsd.bin -o pid=`/fd
[root@dm01db01 fd]# ls -la *.log
[root@dm01db01 fd]# ls -la *.trc
lrwx------ 1 root root 64 Aug 21 11:41 1 -> /u01/app/grid/crsdata/dm01db01/output/crsdOUT.trc
l-wx------ 1 root root 64 Aug 21 11:41 15 -> /u01/app/grid/diag/crs/dm01db01/crs/trace/crsd.trc
lrwx------ 1 root root 64 Aug 21 11:41 2 -> /u01/app/grid/crsdata/dm01db01/output/crsdOUT.trc
lrwx------ 1 root root 64 Aug 21 11:41 3 -> /u01/app/grid/crsdata/dm01db01/output/crsdOUT.trc

Crsd.bin is not writing to any logfile anymore; it is writing to a trace file in a new location, $ORACLE_BASE/diag/crs/`hostname -s`/crs/trace/, which is a regular ADR-formatted directory structure. In the old structure all the logfiles were nicely divided into subdirectories; in this new structure everything sits in a single directory. That directory contains a lot of files, this is on a freshly installed cluster node:

[root@dm01db01 trace]# ls -la | wc -l
800
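With everything in one flat directory, it helps to sort by size or modification time to see which traces actually matter; a minimal sketch, assuming the same $ORACLE_BASE/diag path as in the file descriptor listing above:

# largest trace files first - the daemon traces (crsd, ocssd, ...) float to the top
ls -lS /u01/app/grid/diag/crs/dm01db01/crs/trace | head
# most recently modified files - shows what is actively being written right now
ls -lt /u01/app/grid/diag/crs/dm01db01/crs/trace | head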

The majority of the files are trace files from cluster commands like oifcfg: every execution is logged and traced, and the resulting files are named after the command that generated them, with a .trc extension. The trace files of the GI daemons themselves can easily be listed (the example below is from an environment with role separation between GI and RDBMS):

[root@dm01db01 trace]# ls -la | grep [a-z].trc
-rw-rw----  1 grid     oinstall   1695134 Aug 24 15:30 crsd_oraagent_grid.trc
-rw-rw----  1 oracle   oinstall   4555557 Aug 24 15:30 crsd_oraagent_oracle.trc
-rw-rw----  1 root     oinstall   1248672 Aug 24 15:30 crsd_orarootagent_root.trc
-rw-rw----  1 oracle   oinstall   6156053 Aug 24 15:30 crsd_scriptagent_oracle.trc
-rw-rw----  1 root     oinstall   4856950 Aug 24 15:30 crsd.trc
-rw-rw----  1 grid     oinstall  10329905 Aug 24 15:31 diskmon.trc
-rw-rw----  1 grid     oinstall   2825769 Aug 24 15:31 evmd.trc
-rw-rw----  1 grid     oinstall       587 Aug 21 11:19 evmlogger.trc
-rw-rw----  1 grid     oinstall   8994527 Aug 24 15:31 gipcd.trc
-rw-rw----  1 grid     oinstall     12663 Aug 21 11:41 gpnpd.trc
-rw-rw----  1 grid     oinstall     11868 Aug 21 11:36 mdnsd.trc
-rw-rw----  1 grid     oinstall 132725348 Aug 24 15:31 ocssd.trc
-rw-rw----  1 root     oinstall   6321583 Aug 24 15:31 octssd.trc
-rw-rw----  1 root     oinstall     59185 Aug 24 14:02 ohasd_cssdagent_root.trc
-rw-rw----  1 root     oinstall     72961 Aug 24 14:38 ohasd_cssdmonitor_root.trc
-rw-rw----  1 grid     oinstall    804408 Aug 24 15:31 ohasd_oraagent_grid.trc
-rw-rw----  1 root     oinstall   1094709 Aug 24 15:31 ohasd_orarootagent_root.trc
-rw-rw----  1 root     oinstall  10384867 Aug 24 15:30 ohasd.trc
-rw-rw----  1 root     oinstall    169081 Aug 24 15:06 ologgerd.trc
-rw-rw----  1 root     oinstall   5781762 Aug 24 15:31 osysmond.trc
[root@dm01db01 trace]#

If we look at the old, pre-12.1.0.2 cluster alert log, we see that a lot of processes write directly to that file:

[root@dm02db01 ~]# lsof | grep alertdm02db01.log | wc -l
443

That is a lot of processes writing to one file. In the new environment it is a lot less… just one:

[root@dm01db01 trace]# lsof | grep alert.log
java      23914     root  335w      REG              252,2     29668   10142552 /u01/app/grid/diag/crs/dm01db01/crs/trace/alert.log
[root@dm01db01 trace]# ps -p 23914 -o pid,args
  PID COMMAND
23914 /u01/app/12.1.0.2/grid/jdk/jre/bin/java -Xms128m -Xmx512m -classpath
/u01/app/12.1.0.2/grid/tfa/dm01db01/tfa_home/jlib/RATFA.jar:/u01/app/12.1.0.2/grid/tfa/dm01db01/tfa_home/jlib/je-5.0.84.jar:

For now this new logging structure looks a lot less organized than the previous one. It is, however, a lot more like regular database tracing and logging, where there is one general alert log and the different background processes such as pmon, arch and lgwr write to their own trace files.
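Because the new structure is a regular ADR home, you can also browse it with adrci instead of tailing files by hand. A minimal sketch, assuming the ADR base is the grid owner's ORACLE_BASE (/u01/app/grid in the listings above):

# list the ADR homes on this node; the GI home shows up as diag/crs/<hostname>/crs
adrci exec="show homes"

# point adrci at the CRS home and tail the new GI alert log
adrci <<'EOF'
set homepath diag/crs/dm01db01/crs
show alert -tail 20
EOF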

Inserts on HCC tables

There are already a lot of blog posts and presentations about Hybrid Columnar Compression, and I am adding one more to that list. Recently I was doing some small tests on HCC and noticed that an insert into an HCC table did not get compressed, even though I was using direct path loads:

DBA@TEST1> create table hcc_me (text1 varchar2(4000)) compress for archive high;

Table created.

KJJ@TEST1> insert /*+ append */ into hcc_me select dbms_random.string('x',100) from dual;

1 row created.

KJJ@TEST1> commit;

Commit complete.

KJJ@TEST1> select rowid from hcc_me;

ROWID
------------------
AAAWw/AAAAAACXzAAA

DBA@TEST1> @compress
Enter value for schemaname: kjj
Enter value for tablename: hcc_me
Enter value for rowid: AAAWw/AAAAAACXzAAA
old   2:    dbms_compression.get_compression_type(upper('&SchemaName'),upper('&TableName'),'&RowID'),
new   2:    dbms_compression.get_compression_type(upper('kjj'),upper('hcc_me'),'AAAWw/AAAAAACXzAAA'),

COMPRESSION_TYPE
---------------------
COMP_NOCOMPRESS

So my row did not get compressed; let's insert a little bit more data into our test table:

declare
    sql_stmt varchar(200);
begin
for i in 1..1000 loop
    sql_stmt := 'insert /*+ append_values */ into hcc_me select dbms_random.string(''x'',100) from dual';
    execute immediate sql_stmt;
    commit;
end loop;
end;
/

And let's see what we end up with:

select count(*), compression_type from (
select decode(dbms_compression.get_compression_type('KJJ','HCC_ME',rowid),
       1, 'COMP_NOCOMPRESS',
       2, 'COMP_FOR_OLTP',
       4, 'COMP_FOR_QUERY_HIGH',
       8, 'COMP_FOR_QUERY_LOW',
      16, 'COMP_FOR_ARCHIVE_HIGH',
      32, 'COMP_FOR_ARCHIVE_LOW',
      64, 'COMP_BLOCK',
      1000000, 'COMP_RATIO_MINROWS',
      -1, 'COMP_RATIO_ALLROWS') "COMPRESSION_TYPE"
      from hcc_me)
group by compression_type;

So none of my records got compressed:

  COUNT(*) COMPRESSION_TYPE
---------- ---------------------
      1000 COMP_NOCOMPRESS

Maybe it is size dependent; the rows I am inserting into this HCC table are extremely small. Let's re-create the table and make every row one byte bigger than the previous one:

declare

sql_stmt    varchar(200);
v_random1   varchar2(4000);

begin

     execute immediate 'drop table hcc_me';
     execute immediate 'create table hcc_me (text1 varchar2(4000)) compress for archive high';

 for i in 1..1000 loop
     v_random1 := dbms_random.string('x', i);
     sql_stmt := 'insert /*+ append_values */ into hcc_me values (:1)';
     execute immediate sql_stmt using v_random1;
     commit;
   end loop;
end;
/

This gives me a table where row 1 is 1 byte and the last row is 1000 bytes. Re-run our select statement and see if we have HCC-compressed rows now:

  COUNT(*) COMPRESSION_TYPE
---------- ---------------------
       697 COMP_FOR_ARCHIVE_HIGH
       303 COMP_NOCOMPRESS

2 rows selected.

Victory! But now let's see where our records start to compress; let's adapt the query a bit:

select vsize(text1) row_bytes,
       decode(dbms_compression.get_compression_type('KJJ','HCC_ME',rowid),
       1, 'COMP_NOCOMPRESS',
       2, 'COMP_FOR_OLTP',
       4, 'COMP_FOR_QUERY_HIGH',
       8, 'COMP_FOR_QUERY_LOW',
      16, 'COMP_FOR_ARCHIVE_HIGH',
      32, 'COMP_FOR_ARCHIVE_LOW',
      64, 'COMP_BLOCK',
      1000000, 'COMP_RATIO_MINROWS',
      -1, 'COMP_RATIO_ALLROWS') COMPRESSION_TYPE
      from hcc_me;

 ROW_BYTES COMPRESSION_TYPE
---------- ---------------------
         1 COMP_NOCOMPRESS
         2 COMP_NOCOMPRESS
         3 COMP_NOCOMPRESS
         4 COMP_NOCOMPRESS
         5 COMP_NOCOMPRESS
<cut>
       292 COMP_NOCOMPRESS
       293 COMP_NOCOMPRESS
       294 COMP_NOCOMPRESS
       295 COMP_FOR_ARCHIVE_HIGH
       296 COMP_NOCOMPRESS
       297 COMP_NOCOMPRESS
       298 COMP_NOCOMPRESS
       299 COMP_NOCOMPRESS
       300 COMP_FOR_ARCHIVE_HIGH
       301 COMP_NOCOMPRESS
       302 COMP_NOCOMPRESS
       303 COMP_NOCOMPRESS
       304 COMP_FOR_ARCHIVE_HIGH
       305 COMP_FOR_ARCHIVE_HIGH
       306 COMP_FOR_ARCHIVE_HIGH
       307 COMP_FOR_ARCHIVE_HIGH
       308 COMP_FOR_ARCHIVE_HIGH
       309 COMP_FOR_ARCHIVE_HIGH
       310 COMP_FOR_ARCHIVE_HIGH
       311 COMP_FOR_ARCHIVE_HIGH
       312 COMP_FOR_ARCHIVE_HIGH
       313 COMP_FOR_ARCHIVE_HIGH
       314 COMP_NOCOMPRESS
       315 COMP_NOCOMPRESS
       316 COMP_FOR_ARCHIVE_HIGH
       317 COMP_FOR_ARCHIVE_HIGH
       318 COMP_FOR_ARCHIVE_HIGH
       319 COMP_FOR_ARCHIVE_HIGH
<cut>
       996 COMP_FOR_ARCHIVE_HIGH
       997 COMP_FOR_ARCHIVE_HIGH
       998 COMP_FOR_ARCHIVE_HIGH
       999 COMP_FOR_ARCHIVE_HIGH
      1000 COMP_FOR_ARCHIVE_HIGH

1000 rows selected.

Alright, this is unexpected. I ran this test multiple times and ended up with different results: some rows around the 300-byte mark get compressed and some do not. So somewhere around 300 bytes Oracle seemingly decides at random whether or not to compress. Oh wait, randomly… I am using DBMS_RANDOM to fill my one column with data, so let's take the random factor out of the equation and fill our rows with a fixed character:

declare

sql_stmt    varchar(200);

begin

     -- drop the previous test table first, otherwise the create below fails
     execute immediate 'drop table hcc_me';
     execute immediate 'create table hcc_me (text1 varchar2(4000)) compress for archive high';

 for i in 1..1000 loop
     sql_stmt := 'insert /*+ append_values */ into hcc_me select lpad( ''x'','||i||',''x'') from dual';
     execute immediate sql_stmt;
     commit;
   end loop;
end;
/

Now we end up with much more effective compression:

 ROW_BYTES COMPRESSION_TYPE
---------- ---------------------
         1 COMP_NOCOMPRESS
         2 COMP_NOCOMPRESS
         3 COMP_NOCOMPRESS
         4 COMP_NOCOMPRESS
         5 COMP_NOCOMPRESS
<cut>
        65 COMP_NOCOMPRESS
        66 COMP_NOCOMPRESS
        67 COMP_NOCOMPRESS
        68 COMP_FOR_ARCHIVE_HIGH
        69 COMP_FOR_ARCHIVE_HIGH
        70 COMP_FOR_ARCHIVE_HIGH
        71 COMP_FOR_ARCHIVE_HIGH
<cut>
       998 COMP_FOR_ARCHIVE_HIGH
       999 COMP_FOR_ARCHIVE_HIGH
      1000 COMP_FOR_ARCHIVE_HIGH

1000 rows selected.

The moral of this story is to be careful with HCC and small inserts: there is some logic built into the HCC engine that decides whether or not it should compress, based on size and estimated compression ratio. If you end up in a situation where not all rows are compressed (the insert was too small, you forgot to do a direct path load, or whatever other reason), a simple alter table move will compress all rows again.
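A minimal sketch of that cleanup for the kjj.hcc_me test table used above, run from a shell on a database node; the move rewrites the existing rows through the direct path engine so they get evaluated for HCC again:

sqlplus -S / as sysdba <<'EOF'
-- rewrite the existing rows with a direct path operation so HCC is applied again
alter table kjj.hcc_me move compress for archive high;
-- note: any indexes on the table are left unusable by the move and would need a rebuild
EOF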

Cellcli can lie to you…

Yes, that is true, I said it: cellcli can lie to you. There are some special cases where the output of cellcli does not reflect reality and you should double-check its output with your standard OS tools. This is the output of dcli calling cellcli on an Exadata rack from a client:

[root@dm01db01 ~]# dcli -g cell_group -l root cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
dm01cel01: running       running         running
dm01cel02: running       running         running
dm01cel03: stopped       running         running
dm01cel04: running       running         running
dm01cel05: running       running         running
dm01cel06: running       running         running
dm01cel07: running       running         running
dm01cel08: running       running         running
dm01cel09: running       running         running
dm01cel10: running       running         running
dm01cel11: running       running         running
dm01cel12: running       running         running
dm01cel13: running       running         running
dm01cel14: running       running         running
[root@dm01db01 ~]#

It seems that cellsrv on cel03 has stopped; let's zoom in, log on to that cell and verify:

CellCLI> list cell attributes cellsrvStatus,msStatus,rsStatus
         stopped         running         running

Well, that is as expected: the same command gives the same output. But now let's double-check this against what is actually happening on the OS. Let's see what processes are actually running at OS level:

[root@dm01cel03 trace]# ps -ef | grep cellsrv/bin/cell[srv]
root      3143  3087  0 12:00 ?        00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellsrvstat -interval=5 -count=720
root     21040     1 60 Mar26 ?        20:33:37 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellsrv 100 5000 9 5042
root     25662     1  0 Mar26 ?        00:02:08 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrssrm -ms 1 -cellsrv 1
root     25673 25662  0 Mar26 ?        00:00:07 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsbmt -ms 1 -cellsrv 1
root     25674 25662  0 Mar26 ?        00:00:07 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsmmt -ms 1 -cellsrv 1
root     25676 25673  0 Mar26 ?        00:00:01 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsbkm -rs_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsms.state -cellsrv_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsos.state -debug 0
root     25710 25676  0 Mar26 ?        00:00:07 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrssmt -rs_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsms.state -cellsrv_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsos.state -debug 0
[root@dm01cel03 trace]#

That is confusing: cellcli tells me that cellsrv is down, but "ye good ol'" ps is telling me that cellsrv is up and running as it should. It looks like my cell storage is available; let's verify that all 12 disks from that cell are available at ASM level and that we don't have any repair timers counting down:

SYS@+ASM1> select count(*), repair_timer from v$asm_disk where path like '%DATA%dm05cel03' group by repair_timer;

  COUNT(*) REPAIR_TIMER
---------- ------------
        12            0

1 row selected.

All disks have a repair timer of 0, meaning that no disks have failed; if there really were a problem with the disks, we would see the repair_timer counting down.

Now we can confirm that cellsrv is available and that the output of cellcli is just plain wrong here. So what is going on? Let's start by checking the cell alert log in $CELLTRACE:

[RS] monitoring process /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsomt (pid: 0) returned with error: 126
[RS] Monitoring process for service CELLSRV detected a flood of restarts. Disable monitoring process.
Errors in file /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/log/diag/asm/cell/dm05cel03/trace/rstrc_25662_4.trc  (incident=179):
RS-7445 [CELLSRV monitor disabled] [Detected a flood of restarts] [] [] [] [] [] [] [] [] [] []
Incident details in: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/log/diag/asm/cell/dm05cel03/incident/incdir_179/rstrc_25662_4_i179.trc
Sweep [inc][179]: completed

The cellrs processes monitor the cellms and cellsrv processes; however, flood control is built in to prevent a loop of restarts. Such a restart loop could bring down a cell, so to prevent that from happening this flood control was added. When it kicks in, RS stops monitoring the problematic service, cellsrv in this case. This also means that it reports back to cellcli that the process is stopped. Personally I think this built-in flood control is a good thing; however, I would like to see cellcli report it properly. For instance, it would be nice if Oracle let cellcli report the cellsrv status as something like 'intermediate' when RS has stopped monitoring it; it now says 'stopped', which is not true at all. It also means that when you see cellcli reporting that cellsrv is down, you always need to double-check whether this is actually true before you try restarting cellsrv.
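A minimal sketch of such a double-check across all cells, reusing the cell_group file from above; it simply puts what cellcli claims next to whether a cellsrv process actually exists at OS level:

# what cellcli claims per cell
dcli -g cell_group -l root "cellcli -e list cell attributes name,cellsrvStatus"
# how many real cellsrv processes are running per cell (0 means it really is down)
dcli -g cell_group -l root "ps -ef | grep -c 'cellsrv/bin/cellsr[v] '"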

Rolling back a failed rootupgrade.sh CRS upgrade

Recently I was upgrading a half rack Exadata to Grid Infrastructure 11.2.0.4 for a customer who had had one node removed from the cluster, or at least so we thought. During the upgrade we ran rootupgrade.sh on the first two nodes without issues. When running the script on what was supposed to be the third and final node in the cluster, rootupgrade.sh failed with the following error:

CRS-1119: Unable to complete Oracle Clusterware upgrade while nodes dm0104 have not yet upgraded
CRS-1112: Failed to set the Oracle Clusterware operating version 11.2.0.4.0
CRS-4000: Command Set failed, or completed with errors.
/u01/app/11.2.0.4/grid/bin/crsctl set crs activeversion ... failed
Failed to set active version of the Grid Infrastructure at /u01/app/11.2.0.4/grid/crs/install/crsconfig_lib.pm line 9284.
/u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib -I/u01/app/11.2.0.4/grid/crs/install /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl execution failed

So what to do now? The first step should be to find the root cause of the failed upgrade, then fix the problem and re-run rootupgrade.sh if possible; otherwise roll back the upgrade:

[root@dm0101 ~]# dcli -g dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl query crs activeversion
dm0101: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
dm0102: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
dm0103: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[root@dm0101 ~]# dcli -g dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl query crs softwareversion
dm0101: Oracle Clusterware version on node [dm0201] is [11.2.0.4.0]
dm0102: Oracle Clusterware version on node [dm0202] is [11.2.0.4.0]
dm0103: Oracle Clusterware version on node [dm0203] is [11.2.0.4.0]
[root@dm0101 ~]#

Let's run cluvfy to verify the status of CRS on all nodes:

[oracle@dm0101 [] grid]$ mkdir /tmp/cvudbg
[oracle@dm0101 [] grid]$ export CV_TRACELOC=/tmp/cvudbg
[oracle@dm0101 [] grid]$ export SRVM_TRACE=true
[oracle@dm0101 [] grid]$ export SRVM_TRACE_LEVEL=1
[oracle@dm0101 [] grid]$ ./runcluvfy.sh comp crs -n all

Verifying CRS integrity

Checking CRS integrity...

WARNING:
PRVF-4038 : CRS is not installed on nodes:
dm0204
Verification will proceed with nodes:
dm0103,dm0102,dm0101


ERROR:
PRVG-10605 : Release version [11.2.0.4.0] is consistent across nodes but does not match the active version [11.2.0.3.0].
PRVG-10603 : Clusterware version consistency failed
Check failed on nodes:
dm0103,dm0102,dm0101

CRS integrity check failed

Verification of CRS integrity was unsuccessful.
Checks did not pass for the following node(s):
dm0204
[oracle@dm0101 [] grid]$

In this case the error was caused by the fourth node having been only partially removed from the cluster. The fix was to roll back the upgrade, remove the fourth node properly, and then re-run rootupgrade.sh. Rolling back a failed rootupgrade.sh is done by running rootcrs.pl, starting the rollback in reverse order; so in our case I start with node 3, then run rootcrs.pl on node 2, using the following command:

[root@dm0101 ~]# /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -downgrade -oldcrshome /u01/app/11.2.0.3/grid -version 11.2.0.3.0 -force

Node 1 (dm0101) is the last node; there we start the rootcrs.pl script with the parameter -lastnode, which tells rootcrs.pl to look in the $GI_HOME/cdata directory for the OCR backup that rootupgrade.sh made when it was started on the first node:

[root@dm0101 ~]# /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -downgrade -lastnode -oldcrshome /u01/app/11.2.0.3/grid -version 11.2.0.3.0 -force
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dm0101'
CRS-2673: Attempting to stop 'ora.crsd' on 'dm0101'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'dm0101'
CRS-2673: Attempting to stop 'ora.dm0203-bk-vip.vip' on 'dm0101'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'dm0101'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dm0101'
CRS-2673: Attempting to stop 'ora.elpadop.web.svc' on 'dm0101'
CRS-2673: Attempting to stop 'ora.lsfdp.lsfdpdg.svc' on 'dm0101'
CRS-2673: Attempting to stop 'ora.montyp.montypdg.svc' on 'dm0101'
...
..
.
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dm0101'
CRS-2677: Stop of 'ora.gpnpd' on 'dm0101' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dm0101' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully downgraded Oracle Clusterware stack on this node
Run '/u01/app/11.2.0.3/grid/bin/crsctl start crs' on all nodes
[root@dm0101 ~]#

Now, to finalize the rollback, uninstall and install the USM components in the same order as you ran the rootcrs.pl scripts: start with node 3, then node 2, and end at node 1:

[root@dm0101 ~]# /u01/app/11.2.0.4/grid/bin/acfsroot uninstall
[root@dm0101 ~]# /u01/app/11.2.0.3/grid/bin/acfsroot install

At this point we are back at version 11.2.0.3, we can remove the pesky remains of node 4 that are still there, and then we can restart the rootupgrade.sh scripts.
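A minimal sketch of verifying the rollback before touching node 4 again, reusing the dbs_group file from above; run from the old home, both commands should now report 11.2.0.3.0 on every remaining node:

# active and software version should both be back at 11.2.0.3.0
dcli -g dbs_group -l root /u01/app/11.2.0.3/grid/bin/crsctl query crs activeversion
dcli -g dbs_group -l root /u01/app/11.2.0.3/grid/bin/crsctl query crs softwareversion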

Info about Exadata's imageinfo and rpms

Recently we noticed that imageinfo was displaying the following information at a customer site. Imageinfo was not displaying the data I expected, so here is a quick little blog post about what was going on.

[root@dm01db01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "imageinfo | grep 'Image version'"
dm01db01: Image version: 11.2.3.2.1.130109
dm01db02: Image version: 11.2.3.2.1.130109
dm01db03: Image version: 11.2.3.2.1.130109
dm01db04: Image version: 11.2.3.2.1.130109
dm01db05: Image version: 11.2.3.2.0.120713
dm01db06: Image version: 11.2.3.2.0.120713
dm01db07: Image version: 11.2.3.2.0.120713
dm01db08: Image version: 11.2.3.2.0.120713

So what happened here: did those last four nodes not get updated when the latest CellOS QDPFE was installed? A quick investigation shows that all the correct packages are in fact installed:

[root@dm0105 ~]# for i in `yum info | grep Name | grep exadata |cut -d: -f2`; do yum info $i ; done | grep 'Version\|Name'
Name       : exadata-applyconfig
Version    : 11.2.3.2.1.130109
Name       : exadata-asr
Version    : 11.2.3.2.1.130109
Name       : exadata-base
Version    : 11.2.3.2.1.130109
Name       : exadata-commonnode
Version    : 11.2.3.2.1.130109
Name       : exadata-exachk
Version    : 11.2.3.2.1.130109
Name       : exadata-firmware-compute
Version    : 11.2.3.2.1.130109
Name       : exadata-ibdiagtools
Version    : 11.2.3.2.1.130109
Name       : exadata-ipconf
Version    : 11.2.3.2.1.130109
Name       : exadata-onecommand
Version    : 11.2.3.2.1.130109
Name       : exadata-oswatcher
Version    : 11.2.3.2.1.130109
Name       : exadata-sun-computenode
Version    : 11.2.3.2.1.130109
Name       : exadata-validations-compute
Version    : 11.2.3.2.1.130109
[root@dm0101 rpm-extraxt]#

So according to yum everything is installed here, and rpm -qa confirms this, so no mismatch there:

[root@dm0105 ~]# rpm -qa | grep exadata
exadata-asr-11.2.3.2.1.130109-1
exadata-validations-compute-11.2.3.2.1.130109-1
exadata-base-11.2.3.2.1.130109-1
exadata-exachk-11.2.3.2.1.130109-1
exadata-oswatcher-11.2.3.2.1.130109-1
exadata-ipconf-11.2.3.2.1.130109-1
exadata-firmware-compute-11.2.3.2.1.130109-1
exadata-applyconfig-11.2.3.2.1.130109-1
exadata-onecommand-11.2.3.2.1.130109-1
exadata-commonnode-11.2.3.2.1.130109-1
exadata-sun-computenode-11.2.3.2.1.130109-1
exadata-ibdiagtools-11.2.3.2.1.130109-1
[root@dm0101 ~]#

It looks like imageinfo is displaying incorrect information here. How come, and where does it get its information from? Well, imageinfo gets its information from a file called image.id:

[root@dm01db05 ~]# cat /opt/oracle.cellos/image.id
id:1357731445
kernel:2.6.32-400.11.1.el5uek
kernelstock:2.6.32-300.19.1.el5uek
version:11.2.3.2.1.130109
type:production
label:OSS_11.2.3.2.1_LINUX.X64_130109
cellrpmver:OSS_11.2.3.2.1_LINUX.X64_130109
cellswver:OSS_11.2.3.2.1_LINUX.X64_130109
mode:install
created:2013-01-09 03:37:26 -0800
node:COMPUTE
f331a2bfa27cc7371d936009289c23b6 *commonos.tbz
22212b4436b57fba6d53be4d7a6fd975 *dbboot.tbz
c7cf016f2180ab820010a787b77c59fb *dbfw.tbz
fd9d8b4f4a30588ca39f66015d5edc4e *dbrpms.tbz
eaa1a7973efcdcd2fe40b09828ba0e01 *debugos.tbz
290909dec5b3995aed2298f20d345aea *exaos.tbz
cfda22b7599e47bdc686acf41fd5d67c *kernel.tbz
18db43d6e94797710ffa771702be33e8 *ofed.tbz
4831ba742464caaa11c1c0153b823886 *sunutils.tbz
activated:2013-11-24 04:41:03 +0100
status:success
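The trailing lines of image.id are md5 checksums of the .tbz archives in /opt/oracle.cellos/iso/cellbits, so you can verify them by hand; a minimal sketch, with the paths taken from the listing above:

# check the cellbits archives against the checksums recorded in image.id
cd /opt/oracle.cellos/iso/cellbits
grep ' \*' /opt/oracle.cellos/image.id | md5sum -c -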

You can see that this file contains all the information about the current version of your Exadata software stack, as well as the list of md5 checksums of the .tbz files in /opt/oracle.cellos/iso/cellbits. We checked the md5 checksums of these files as well (as sketched above), so we now know 100% that we are on the correct level. So how does this file get updated, given that it seems to be out of date, or at least not in line with our system? It is generated when you patch your compute node to another level, and it is updated by the exadata-sun-computenode rpm. Let's peek a bit into that rpm and what it deploys; first I am going to mount an ISO file as my yum repository:

[root@dm0101 ~]# mount -o loop /u01/app/oracle/stage.2288/112_latest_repo_130109.iso /var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.2288/
[root@dm0101 ~]# cp /var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.2288/x86_64/exadata-sun-computenode-11.2.3.2.1.130302-1.x86_64.rpm 

Next step is to extract the files from the RPM and see what we get:

[root@dm0101 ~]# mkdir /tmp/exadata-sun-computenode-
[root@dm0101 ~]# cd /tmp/exadata-sun-computenode-
[root@dm0101 exadata-sun-computenode-]# rpm2cpio ../exadata-sun-computenode-11.2.3.2.1.130302-1.x86_64.rpm | cpio -idv
./etc/init.d/pciibmap
./etc/init.d/workaround_13083530
./etc/yum.repos.d/Exadata-computenode.repo.sample
./etc/yum/pluginconf.d/installonlyn.conf
./opt/oracle.SupportTools/corecontrol
./opt/oracle.SupportTools/dbserver_backup.sh
./opt/oracle.SupportTools/sundiag.sh
./opt/oracle.cellos/exadata.computenode.post.sh
./opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in
194 blocks
[root@dm0101 exadata-sun-computenode-]#

Aha! There is a file called ./opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in in that RPM, but it does not contain everything yet; it still does not have the md5 checksum information in it:

[root@dm0101 exadata-sun-computenode-]# cat ./opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in
id:1362281283
kernel:2.6.32-400.21.1.el5uek
kernelstock:2.6.32-300.19.1.el5uek
version:11.2.3.2.1.130302
type:production
label:OSS_11.2.3.2.1_LINUX.X64_RELEASE
cellrpmver:OSS_11.2.3.2.1_LINUX.X64_130302
cellswver:OSS_11.2.3.2.1_LINUX.X64_130302
mode:install
created:2013-03-02 19:28:04 -0800
node:COMPUTE
[root@dm0101 exadata-sun-computenode-]#

To find out how that gets filled in, we need to take a look at the scripts that are executed when we install this RPM. The output below is truncated a bit for readability:

[root@dm0101 rpm-extraxt]# rpm --scripts -qp exadata-sun-computenode-11.2.3.2.1.130302-1.x86_64.rpm
.
..
fi
# Don't start post script for the fresh imaging (on zero boot)
# Don't install any files (remove them)
if [ -f /.post.imaging ]; then
  rm -f /opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in
  rm -f /opt/oracle.cellos/exadata.computenode.post.sh
else
  /sbin/service exachkcfg postrpm 11.2.3.2.1.130302 exadata-sun-computenode
fi
..
.

So when we install this RPM on a running system it ends up (among other things not shown here) executing the new version of exadata.computenode.post.sh; only during fresh imaging are the image.id template and the exadata.computenode.post.sh script removed instead. Now exadata.computenode.post.sh is quite a big script: it sets up your compute node properly and finally reboots it. In this huge exadata.computenode.post.sh script there is a function that calculates the md5 checksums of the cellbits directory and appends that information to the image.id file:

md5sum_iso_cellbits ()
{
  local imageid_out=$1

  [ "$imageid_out" == '' ] &amp;&amp; imageid_out=$IMAGE_ID_DEF
  print_log "Please wait. Calculating checksums for $imageid_out ..." INFO
  # Remove any existing md5 sums from the output file
  perl -ne '/[a-f\d]\s+\*.+/ or print $_' -i $imageid_out
  local tmp_lst=/tmp/msd5sum.cellbits.lst
  pushd /opt/oracle.cellos/iso/cellbits
  md5sum -b * > $tmp_lst
  cat $tmp_lst >> $imageid_out
  rm -f $tmp_lst
  popd
}

Finally, the function db_update_imageinfo() in exadata.computenode.post.sh creates a new, up-to-date version of the image.id file, and that makes imageinfo happy again. To make a long blog post short, the solution to our misbehaving imageinfo (and also imagehistory, by the way) was to run

yum --enablerepo=exadata_dbserver_11.2.3.2.1_x86_64_base reinstall exadata-sun-computenode-11.2.3.2.1.130109-1

and let the nodes reboot. This solved our problem with the imageinfo command; unfortunately, we could not determine the root cause of the image.id problem.
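After the reboot, the same dcli check from the start of this section should report a consistent image version on all compute nodes; a quick sketch, reusing the dbs_group file from above:

# all nodes should now report 11.2.3.2.1.130109
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "imageinfo | grep 'Image version'"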

About cellcli calibrate

With cellcli you can run the calibrate command, which measures the performance of your hard disks and flash cards. It eventually reports the throughput and the number of IOPS measured during the calibration, and it also reports which LUNs are not performing up to par.

Calibrate has a few simple options:

CellCLI> help calibrate

  Usage: CALIBRATE [HARDDISK | FLASHDISK | lun1 [, lun2]] [FORCE]
  Purpose: Runs raw performance tests on disks.
           CELLSRV must be shutdown.  User id root is required.

  Options:
    [FLASHDISK]: option to specify that only flash LUNs be calibrated.
    [HARDDISK] : option to specify that only hard disk LUNs be calibrated.
    lun1 [, lun2] : option to specify list of LUNs by name
    [FORCE]    : runs test even while CELLSRV is running.

  Examples:
    CALIBRATE
    CALIBRATE '0_8', '0_10' force

CellCLI>

So let's do a test run of calibrate on one of the hard disks in a cell of this V2 half rack:

CellCLI> calibrate 0_2 force
Calibration will take a few minutes...
LUN 0_2 on drive [20:2] random read throughput: 168.21 MBPS, and 419 IOPS
CALIBRATE results are within an acceptable range.

CellCLI>

Now this does tell us something about the throughput and the IOPS that were measured, but it does not tell you HOW cellcli managed to measure this. Let's have a peek into how cellcli works; to do so we have to increase the log level and start cellcli in debugging mode:

[root@dm01cel01 ~]# cellcli -vvv -d
CellCLI: Release 11.2.3.2.1 - Production on Sun Oct 27 15:19:55 CET 2013

Copyright (c) 2007, 2012, Oracle.  All rights reserved.
Cell Efficiency Ratio: 236

What this does is dump a lot of extra information on your screen about the calls cellcli makes into ossmgmt.jar; cellcli itself is nothing but a Perl script that calls into that jar file. With cellcli running in debug mode and the log level at its highest, we can run our calibrate command again:

CellCLI> calibrate 0_2 force
loop
yyn:0  state:0  yychar:-1
tok: -3 sval: calibrate
yy:calibrate
 next yychar:289
reduce
state 0, reducing 0 by rule 2 ($$1 :)
...
..
.
state 37, reducing 4 by rule 4 (line : $$1 cli $$2 EOL)
CLI command: CALIBRATE(289)
SPOOL arg : NONE(0)
CLI verb: CALIBRATE(289)
CLI object: NONE(0) 0_2
CLI addverb:
attr#1: FORCE(334)
++++Over Value is: 0
ForAll is false
CLI addverb:
attr#1: FORCE(334)
reduce
After reduction, shifting from state 0 to state 2
tok: -1 sval: null
state 2, reading 0 (end-of-file)
Getting the state of CELLSRV services...  running
Calibration will take a few minutes...
LUN 0_2 on drive [20:2] random read throughput: 168.33 MBPS, and 420 IOPS
CALIBRATE results are within an acceptable range.

CellCLI>

I have truncated the output above a little bit to make it readable. It does not tell us a whole lot more, but we can now find some interesting information in the cellcli logfile at /var/log/oracle/deploy/cellcli.lst.0*, where the last part of the name can vary on your system depending on how many times you have started cellcli. When scrolling through that logfile you will find our cellcli calibrate command:

Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.CellCLI readAndProcessLine
INFO: Executing cellcli command: "calibrate 0_2 force"
Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.CellCLI invokeSingleRSCmd
FINE: Executing RS command: 0.0STATE CELLSRV!!!!!!!!!!!!!!.
Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.CellCLI invokeSingleRSCmd
FINE: RS comamnd return status: 2.

You can see that the oracle.ossmgmt.ms.cli.CellCLI class checks whether or not cellsrv is running (I added the force parameter so I could run calibrate without stopping cellsrv).

Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.Calibrate fetchLuns
INFO: disknames.size: 1 listluncmd - list lun "0_2" attributes name, devicename, physicaldrives, disktype, lunsize
Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli cmd: MOD CALIBRATE :list lun "0_2" attributes name, devicename, physicaldrives, disktype, lunsize
Oct 27, 2013 3:22:14 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli result: .	 0_2	 /dev/sdc	 20:2	 HardDisk	 557.861328125G

Next you see that cellcli uses oracle.ossmgmt.ms.cli.Calibrate to translate my LUN name 0_2 to a block device name, in my case /dev/sdc. It then goes on to gather the names of all the physical disks:

Oct 27, 2013 3:22:14 PM oracle.ossmgmt.ms.cli.Calibrate fetchLuns
INFO: listluns: 	 0_2	 /dev/sdc	 20:2	 HardDisk	 557.861328125G

Oct 27, 2013 3:22:14 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli cmd: MOD CALIBRATE :list physicaldisk attributes name, luns, diskMediaGroup
Oct 27, 2013 3:22:20 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli result: .	 20:0	 0_0	 DMG_SYSTEM_HDD1
	 20:1	 0_1	 DMG_SYSTEM_HDD1
	 20:2	 0_2	 DMG_DATA_HDD1
	 20:3	 0_3	 DMG_DATA_HDD1
...
..
.
Oct 27, 2013 3:22:20 PM oracle.ossmgmt.ms.cli.Calibrate fetchLuns
INFO: listpds: 	 20:0	 0_0	 DMG_SYSTEM_HDD1
	 20:1	 0_1	 DMG_SYSTEM_HDD1
	 20:2	 0_2	 DMG_DATA_HDD1
	 20:3	 0_3	 DMG_DATA_HDD1
...
..
.

The output above is shortened for readability. Next, oracle.ossmgmt.ms.cli.Calibrate runs scripts_common.sh, which is also used during cell deployment. 'scripts_common.sh tune_block_io /dev/sdc' sets nr_requests in /sys/block/sdc/queue/nr_requests to 4096:

Oct 27, 2013 3:22:20 PM oracle.ossmgmt.ms.cli.Calibrate calibrate
FINER: Execute: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/scripts/unix/hwadapter/diskadp/scripts_common.sh tune_block_io /dev/sdc

nr_requests sets the number of IO slots; with 4096 slots the deadline IO scheduler has a better chance to merge IO requests. This should already be set to 4096 on all of your cells, but the calibrate command sets it again anyway.
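A quick sketch to check the current settings on a cell (assuming the disks show up as /dev/sd*, as in the ps and calibrate output above):

# show the io scheduler and queue depth per block device
for q in /sys/block/sd*/queue; do
  echo "$q: scheduler=$(cat $q/scheduler) nr_requests=$(cat $q/nr_requests)"
done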

Oct 27, 2013 3:22:21 PM oracle.ossmgmt.ms.cli.CalibrateTest executeOrion
INFO: execute: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/orion -run advanced -type rand -matrix point  -disk_start_offset 0 -duration 30 -num_disks 1 -testname rand_0 -disk_end_offset 285184m -size_large 8192 -num_small 0 -num_large 10
Oct 27, 2013 3:22:51 PM oracle.ossmgmt.ms.cli.CalibrateTest executeOrion
INFO: execute: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/orion -run advanced -type rand -matrix point  -disk_start_offset 0 -duration 30 -num_disks 1 -testname rand_IOPS_0 -disk_end_offset 285184m -size_small 8 -num_large 0 -num_small 20

Finally, calibrate starts the actual calibration by calling Orion, which is located in $OSS_BIN. It does two separate runs: one doing small 8 KB requests and one doing large 8192 KB IOs.
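Since calibrate is just driving Orion, you could in principle repeat such a run by hand; a hedged sketch, reusing the first Orion command line from the log above and assuming Orion's convention of reading the devices to test from a <testname>.lun file (these are read-only random IO tests, but run them only on a cell that is not serving production IO):

cd /tmp
echo /dev/sdc > rand_0.lun
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/orion -run advanced -type rand -matrix point \
  -disk_start_offset 0 -duration 30 -num_disks 1 -testname rand_0 \
  -disk_end_offset 285184m -size_large 8192 -num_small 0 -num_large 10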

Exadata system stats and MBRC

Last year Oracle added new functionality to dbms_stats.gather_system_stats, making it more Exadata-aware and thus letting the optimizer make choices that might result in offloading more often. See this blog post from Kerry Osborne about it; I made a small comment about the MBRC there and I would like to zoom in on that a bit more in this post.

So what happens during the gathering of Exadata system statistics? The easiest way to find out is to run a SQL trace along with the system stats gathering:

SYS@kjj AS SYSDBA> alter session set events 'sql_trace wait=true, bind=true, level=12';

Session altered.

SYS@kjj AS SYSDBA> exec dbms_stats.gather_system_stats('EXADATA');

PL/SQL procedure successfully completed.

If we look in the trace file, we see that Oracle starts with some recursive SQL statements to find out what datafiles there are and then starts doing a lot of IO; these IOs are all recorded under a dedicated wait event:

WAIT #139869656707320: nam='Disk file I/O Calibration' ela= 3255 count=1 p2=182669 p3=2 obj#=-1 tim=1379436371918507

It looks like Oracle is doing some type of IO to find out what this storage can do, so let's take a deeper look at these IOs. You can use the GDB route described by Frits Hoogland here, but for this purpose there is a somewhat easier way. The new trace interface in 11g has an interesting option that lets us trace the OSS calls to the storage:

SYS@kjj AS SYSDBA> oradebug setmypid
Statement processed.
SYS@kjj AS SYSDBA> oradebug unlimit
Statement processed.
SYS@kjj AS SYSDBA> alter session set events 'trace[libcell.client_library.*]';

Session altered.

SYS@kjj AS SYSDBA> alter session set events 'sql_trace wait=true, bind=true, level=12';

Session altered.

SYS@kjj AS SYSDBA> exec dbms_stats.gather_system_stats('EXADATA');

PL/SQL procedure successfully completed.

SYS@kjj AS SYSDBA> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/kjj/kjj/trace/kjj_ora_13915.trc
SYS@kjj AS SYSDBA> oradebug close_trace
Statement processed.
SYS@kjj AS SYSDBA>

So what am I doing there? First I remove the 5 MB tracefile limit for my session, because the steps above will generate a huge trace file. Next I enable tracing of all the communication between my shadow process and the storage layer, so this:

alter session set events 'trace[libcell.client_library.*]';

will trace all of the following:

Components in library LIBCELL:
--------------------------
  Client_Library	       Client Library
    Disk_Layer		       Disk Layer
    Network_Layer	       Network Layer
    IPC_Layer		       IPC Layer

libcell.client_library.* means that I want to trace all calls to libcell.client_library and all three child layers. If we now take a look at the trace file and zoom in on the Disk file I/O Calibration waits, we see that each wait is now preceded by trace data with information about the communication between my shadow process and the storage layer:

oss_open called for o/192.168.100.11/DATA_CD_09_dm01cel07 with guid 56d7b5d4-12a-0-0
connect: box name -192.168.100.11, port - 5042
TCP-connected to: box name -192.168.100.11, ospid - 14950, port - 5042, my box refid: 5, source id: 2575757666
SKGXP-connecting to the box 192.168.100.11 OSS OS Pid 14950
SKGXP-connected to the appliance number -728063998
Sending message 0xdabf2f0 with frag # 0, msgid 27 to box my_box_refid: 5 (my sourceid in box: 2575757666  box inc: 17)
Command with band 1, payload length 138, payload 0xdabf4f0
Command code OPEN, msgid 27, refid 0
Open flags: readwrite - 1, sync - 1, char - 0.
 Device name - DATA_CD_09_dm01cel07
Vrpc completion - cmd 0xdabf330, req 0xdabee50, msgid 27, frag # 1
Vrpc reply 0xdabf3e8 with refid 27 for existing message: 0xdabf2f0
Reply with band 0, payload length 0, payload (nil)
Command code REPLY OPEN, msgid 0, refid 27
Returned open file descriptor  0
opening file number 9
oss_read open file: 9, offset: 38498574336, blocks: 128
Sending message 0xdabf2f0 with frag # 0, msgid 28 to box my_box_refid: 5 (my sourceid in box: 2575757666  box inc: 17)
Command with band 0, payload length 0, payload (nil)
Command code BREAD, msgid 28, refid 0
Block arguments fd 0 offset 38498574336 cnt 128 blksize 512 prefetch 0
oss_read done
oss_wait called for request: 0xdabee50
Vrpc completion - cmd 0xdabf330, req 0xdabee50, msgid 28, frag # 1
Vrpc reply 0xdabf3e8 with refid 28 for existing message: 0xdabf2f0
Reply with band 7, payload length 65536, payload 0x7f35f0e37e00
Command code REPLY READ, msgid 0, refid 28
Number of bytes read 65536
oss_wait done, request 0xdabee50, result 0x7f35f116bfc8
WAIT #139869656707320: nam='Disk file I/O Calibration' ela= 3255 count=1 p2=182669 p3=2 obj#=-1 tim=1379436371918507

The above is an excerpt of the generated trace file, in which we can see how Exadata handles its IO requests. Let me walk through it; this part is not specific to Exadata system statistics. In this case it is an IO call for which Oracle needs to open a new connection to a celldisk; this disk was apparently not needed in earlier IO requests. In the first line you can see the oss_open call saying it wants to open DATA_CD_09_dm01cel07 on cell 192.168.100.11. In the next lines it sets up the connection: it first contacts the cell over TCP and then switches to RDS using SKGXP, sending a message down the wire with msgid 27. The cell answers our msgid 27 with refid 27, and we now have an open file descriptor for this celldisk, which is opened as file number 9. At this level files are not datafiles but chunks of data on a disk, so on this half rack with 7 cells * 12 disks there is a maximum of 84 of them. Now that the file descriptor is open, the oss_read call issues the actual read request: it asks for 128 blocks, and keep in mind that these are blocks at the celldisk level, not database blocks; the 'Block arguments' line shows a block size of 512 bytes. Also note that this read request has msgid 28, so the msgids appear to be sequential for your shadow process. Directly after the oss_read call, Oracle issues an oss_wait and tries to reap the IO request. The 'Number of bytes read 65536' line shows that the 128 blocks resulted in a 64 KB read; the oss_wait call then completes and Oracle records our 'Disk file I/O Calibration' wait.

Now that we know how Exadata handles an IO request, let's zoom in on the IOs behind our Disk file I/O Calibration waits:

[oracle@dm0101 [kjj] trace]$ grep 'oss_read open file' /u01/app/oracle/diag/rdbms/kjj/kjj/trace/kjj_ora_13915.trc
...
..
.
oss_read open file: 19, offset: 37603999744, blocks: 128
oss_read open file: 12, offset: 38469820416, blocks: 16
oss_read open file: 16, offset: 38503866368, blocks: 128
oss_read open file: 43, offset: 38536380416, blocks: 16
oss_read open file: 51, offset: 37821882368, blocks: 128
oss_read open file: 78, offset: 37753462784, blocks: 16
oss_read open file: 44, offset: 38539337728, blocks: 128
oss_read open file: 33, offset: 38455091200, blocks: 16
oss_read open file: 49, offset: 37558116352, blocks: 128
oss_read open file: 30, offset: 38454730752, blocks: 16
oss_read open file: 16, offset: 38439862272, blocks: 128
oss_read open file: 67, offset: 37605408768, blocks: 16
...
..
.
[oracle@dm0101 [kjj] trace]$
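To get a feel for the mix of request sizes, you can simply count them; a minimal sketch against the same trace file (path as shown above):

# count how often the calibration reads ask for 128 celldisk blocks versus 16
grep 'oss_read open file' /u01/app/oracle/diag/rdbms/kjj/kjj/trace/kjj_ora_13915.trc | \
  awk '{print $NF}' | sort -n | uniq -c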

So when Oracle runs dbms_stats.gather_system_stats('EXADATA'), you can see that it alternates between requesting 128 blocks and 16 blocks from different celldisks. Apparently Oracle uses these IOs to decide whether or not it can set the MBRC to 128 (it also takes db_cache_size into account). Then it deletes the old results from aux_stats$:

DELETE FROM SYS.AUX_STATS$ WHERE SNAME = 'SYSSTATS_MAIN'

and eventually it inserts the new values; amongst others, MBRC is set to 128 in this case:

PARSING IN CURSOR #139869655553736 len=71 dep=1 uid=0 oct=2 lid=0 tim=1379436419280111 hv=4016519028 ad='db24c118' sqlid='32kbjpmrqfdvn'
INSERT INTO SYS.AUX_STATS$ VALUES ('SYSSTATS_MAIN', 'MBRC', :B1 , NULL)
END OF STMT
BINDS #139869655553736:
 Bind#0
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=206001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f35f12e4088  bln=22  avl=03  flg=09
  value=128
EXEC #139869655553736:c=0,e=267,p=0,cr=0,cu=3,mis=0,r=1,dep=1,og=1,plh=0,tim=1379436419280370
CLOSE #139869655553736:c=0,e=1,dep=1,type=3,tim=1379436419280396
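After the gather completes you can check what ended up in the system statistics; a small sketch run from the shell on a database node (the MBRC row should show the value 128 that was bound above):

sqlplus -S / as sysdba <<'EOF'
-- system statistics as stored by dbms_stats.gather_system_stats
select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN' order by pname;
EOF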