# Cellcli can lie to you…

Yes, that is true, I said it: cellcli can lie to you. There are some special cases where the output of cellcli is not the reality and you should double-check its output with your standard OS tools. This is the output of dcli calling cellcli on an Exadata rack from a client:

```[root@dm01db01 ~]# dcli -g cell_group -l root cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus
dm01cel01: running       running         running
dm01cel02: running       running         running
dm01cel03: stopped       running         running
dm01cel04: running       running         running
dm01cel05: running       running         running
dm01cel06: running       running         running
dm01cel07: running       running         running
dm01cel08: running       running         running
dm01cel09: running       running         running
dm01cel10: running       running         running
dm01cel11: running       running         running
dm01cel12: running       running         running
dm01cel13: running       running         running
dm01cel14: running       running         running
[root@dm01db01 ~]#
```

It seems that cellsrv on cel03 has stopped. Let's zoom in, log on to that cell and verify:

```CellCLI> list cell attributes cellsrvStatus,msStatus,rsStatus
stopped         running         running
```

Well, that is expected: same command, same output. Now let's double-check this against what is actually happening on the OS and see which processes are actually running at OS level:

```[root@dm01cel03 trace]# ps -ef | grep cellsrv/bin/cell[srv]
root      3143  3087  0 12:00 ?        00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellsrvstat -interval=5 -count=720
root     21040     1 60 Mar26 ?        20:33:37 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellsrv 100 5000 9 5042
root     25662     1  0 Mar26 ?        00:02:08 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrssrm -ms 1 -cellsrv 1
root     25673 25662  0 Mar26 ?        00:00:07 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsbmt -ms 1 -cellsrv 1
root     25674 25662  0 Mar26 ?        00:00:07 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsmmt -ms 1 -cellsrv 1
root     25676 25673  0 Mar26 ?        00:00:01 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsbkm -rs_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsms.state -cellsrv_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsos.state -debug 0
root     25710 25676  0 Mar26 ?        00:00:07 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrssmt -rs_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsms.state -cellsrv_conf /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/deploy/config/cellrsos.state -debug 0
[root@dm01cel03 trace]#
```

That is confusing: cellcli tells me that cellsrv is down, but good old ps is telling me that cellsrv is up and running as it should. It looks like my cell storage is available; let's verify whether all 12 disks from that cell are available at ASM level and that we don't have any repair timers counting down:

```SYS@+ASM1> select count(*), repair_timer from v$asm_disk where path like '%DATA%dm05cel03' group by repair_timer;

COUNT(*) REPAIR_TIMER
---------- ------------
12            0

1 row selected.
```

All disks have a repair timer of 0, meaning that no disks have failed; if there really were a problem with the disks we would see the repair_timer counting down.

Now we can confirm that cellsrv is available and the output of cellcli is just plain wrong here. So what is going on? Let's start by checking the cell alert log in $CELLTRACE:

```[RS] monitoring process /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/cellsrv/bin/cellrsomt (pid: 0) returned with error: 126
[RS] Monitoring process for service CELLSRV detected a flood of restarts. Disable monitoring process.
Errors in file /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/log/diag/asm/cell/dm05cel03/trace/rstrc_25662_4.trc  (incident=179):
RS-7445 [CELLSRV monitor disabled] [Detected a flood of restarts] [] [] [] [] [] [] [] [] [] []
Incident details in: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130912/log/diag/asm/cell/dm05cel03/incident/incdir_179/rstrc_25662_4_i179.trc
Sweep [inc][179]: completed
```

The cellrs processes monitor the cellms and cellsrv processes; however, a flood control mechanism is built in to prevent a loop of restarts. Such a restart loop could bring down a cell, so to prevent that from happening RS stops monitoring the problematic service, cellsrv in this case. This also means that it reports back to cellcli that the process is stopped. Personally I think this built-in flood control is a good thing, but I would like to see cellcli report it properly. For instance, it would be nice if Oracle let cellcli report the cellsrv status as something like 'intermediate' when RS has stopped monitoring it; right now it says 'stopped', which is not true at all. It also means that when cellcli reports that cellsrv is down, you always need to double-check whether that is actually true before you try restarting cellsrv.
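
A minimal sketch of that double check, using standard OS tools and standard CellCLI syntax (which restart you actually need depends on what you find):

```
# On the cell in question: is the cellsrv process really gone?
ps -ef | grep '/cellsrv/bin/cell[srv]'

# Only if cellsrv is genuinely not running:
#   CellCLI> alter cell restart services cellsrv
# If cellsrv is fine and only the RS monitoring was disabled by the flood control,
# restarting RS brings the reported status back in line:
#   CellCLI> alter cell restart services rs
```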

# Rolling back a failed Grid Infrastructure upgrade on Exadata

Recently I was upgrading a half rack Exadata to Grid Infrastructure 11.2.0.4 for a customer who had one node removed from the cluster, or at least so we thought. While doing the upgrade we ran rootupgrade.sh on the first two nodes without issues. When running the script on what was supposed to be the third and final node in the cluster, rootupgrade.sh failed with the following error:

```CRS-1119: Unable to complete Oracle Clusterware upgrade while nodes dm0104 have not yet upgraded
CRS-1112: Failed to set the Oracle Clusterware operating version 11.2.0.4.0
CRS-4000: Command Set failed, or completed with errors.
/u01/app/11.2.0.4/grid/bin/crsctl set crs activeversion ... failed
Failed to set active version of the Grid Infrastructure at /u01/app/11.2.0.4/grid/crs/install/crsconfig_lib.pm line 9284.
/u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib -I/u01/app/11.2.0.4/grid/crs/install /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl execution failed
```

So what to do now? The first step should be to find the root cause of your failed upgrade, then fix the problem and re-run rootupgrade.sh if possible; otherwise roll back your patch:

```[root@dm0101 ~]# dcli -g dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl query crs activeversion
dm0101: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
dm0102: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
dm0103: Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[root@dm0101 ~]# dcli -g dbs_group -l root /u01/app/11.2.0.4/grid/bin/crsctl query crs softwareversion
dm0101: Oracle Clusterware version on node [dm0201] is [11.2.0.4.0]
dm0102: Oracle Clusterware version on node [dm0202] is [11.2.0.4.0]
dm0103: Oracle Clusterware version on node [dm0203] is [11.2.0.4.0]
[root@dm0101 ~]#
```

Let's run cluvfy to verify the status of CRS on all nodes:

```[oracle@dm0101 [] grid]$ mkdir /tmp/cvudbg
[oracle@dm0101 [] grid]$ export CV_TRACELOC=/tmp/cvudbg
[oracle@dm0101 [] grid]$ export SRVM_TRACE=true
[oracle@dm0101 [] grid]$ export SRVM_TRACE_LEVEL=1
[oracle@dm0101 [] grid]$ ./runcluvfy.sh comp crs -n all

Verifying CRS integrity

Checking CRS integrity...

WARNING:
PRVF-4038 : CRS is not installed on nodes:
dm0204
Verification will proceed with nodes:
dm0103,dm0102,dm0101

ERROR:
PRVG-10605 : Release version [11.2.0.4.0] is consistent across nodes but does not match the active version [11.2.0.3.0].
PRVG-10603 : Clusterware version consistency failed
Check failed on nodes:
dm0103,dm0102,dm0101

CRS integrity check failed

Verification of CRS integrity was unsuccessful.
Checks did not pass for the following node(s):
dm0204
[oracle@dm0101 [] grid]$
```

In this case the error was caused by the fourth node having been only partially removed from the cluster. The fix was to roll back the upgrade, remove the fourth node properly, and then re-run rootupgrade.sh. Rolling back a failed rootupgrade.sh is done by running rootcrs.pl, and you perform the rollback in reverse node order: in our case I start with node 3 and then node 2, using the following command:

```[root@dm0101 ~]# /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -downgrade -oldcrshome /u01/app/11.2.0.3/grid -version 11.2.0.3.0 -force
```

Node 1 (dm0101) is the last node; on it we start the rootcrs.pl script with the parameter -lastnode, which tells rootcrs.pl to look in the $GI_HOME/cdata directory for the OCR backup that rootupgrade.sh made when it was started on the first node:

```[root@dm0101 ~]# /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -downgrade -lastnode -oldcrshome /u01/app/11.2.0.3/grid -version 11.2.0.3.0 -force
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dm0101'
CRS-2673: Attempting to stop 'ora.crsd' on 'dm0101'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'dm0101'
CRS-2673: Attempting to stop 'ora.dm0203-bk-vip.vip' on 'dm0101'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'dm0101'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dm0101'
CRS-2673: Attempting to stop 'ora.elpadop.web.svc' on 'dm0101'
CRS-2673: Attempting to stop 'ora.lsfdp.lsfdpdg.svc' on 'dm0101'
CRS-2673: Attempting to stop 'ora.montyp.montypdg.svc' on 'dm0101'
...
..
.
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dm0101'
CRS-2677: Stop of 'ora.gpnpd' on 'dm0101' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dm0101' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully downgraded Oracle Clusterware stack on this node
Run '/u01/app/11.2.0.3/grid/bin/crsctl start crs' on all nodes
[root@dm0101 ~]#
```

Now, to finalize the rollback, uninstall the 11.2.0.4 USM (ACFS) components and install the 11.2.0.3 ones, in the same order as you ran the rootcrs.pl scripts: start with node 3, then node 2, and end at node 1:

```[root@dm0101 ~]# /u01/app/11.2.0.4/grid/bin/acfsroot uninstall
[root@dm0101 ~]# /u01/app/11.2.0.3/grid/bin/acfsroot install
```

At this point we are back at version 11.2.0.3, we can remove those pesky remains of node 4 that are still there, and then we can restart the rootupgrade.sh scripts. See the sketch below for one way to check for and remove the stale node.
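
A minimal sketch of that cleanup, assuming the stale node is dm0104 and the 11.2.0.3 home is the active one again (node names and paths will differ on your system):

```
# Which nodes does the clusterware still know about, and are any of them pinned?
[root@dm0101 ~]# /u01/app/11.2.0.3/grid/bin/olsnodes -s -t

# If the half-removed node still shows up, delete it from the cluster definition
[root@dm0101 ~]# /u01/app/11.2.0.3/grid/bin/crsctl delete node -n dm0104
```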

# Imageinfo displaying the wrong image version

Recently we noticed that imageinfo was displaying the following information at a customer's site. Imageinfo was not showing the data I expected, so here is a quick little blog post about what was going on there.

```[root@dm01db01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "imageinfo | grep 'Image version'"
dm01db01: Image version: 11.2.3.2.1.130109
dm01db02: Image version: 11.2.3.2.1.130109
dm01db03: Image version: 11.2.3.2.1.130109
dm01db04: Image version: 11.2.3.2.1.130109
dm01db05: Image version: 11.2.3.2.0.120713
dm01db06: Image version: 11.2.3.2.0.120713
dm01db07: Image version: 11.2.3.2.0.120713
dm01db08: Image version: 11.2.3.2.0.120713
```

So what happened here? Did those last four nodes not get updated when installing the latest CellOS QDPFE? Well, after a quick investigation it turns out that all the correct packages are installed:

```[root@dm0105 ~]# for i in `yum info | grep Name | grep exadata |cut -d: -f2`; do yum info $i ; done | grep 'Version\|Name'
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
Version    : 11.2.3.2.1.130109
[root@dm0101 rpm-extraxt]#
```

So according to yum everything is installed here, and rpm -qa also confirms this, so there is no mismatch there:

```[root@dm0105 ~]# rpm -qa | grep exadata
[root@dm0101 ~]#
```

It looks like imageinfo is displaying incorrect information here. How come, and where does it get its information from? Well, imageinfo gets its information from a file called image.id:

```[root@dm01db05 ~]# cat /opt/oracle.cellos/image.id
id:1357731445
kernel:2.6.32-400.11.1.el5uek
kernelstock:2.6.32-300.19.1.el5uek
version:11.2.3.2.1.130109
type:production
label:OSS_11.2.3.2.1_LINUX.X64_130109
cellrpmver:OSS_11.2.3.2.1_LINUX.X64_130109
cellswver:OSS_11.2.3.2.1_LINUX.X64_130109
mode:install
created:2013-01-09 03:37:26 -0800
node:COMPUTE
f331a2bfa27cc7371d936009289c23b6 *commonos.tbz
22212b4436b57fba6d53be4d7a6fd975 *dbboot.tbz
c7cf016f2180ab820010a787b77c59fb *dbfw.tbz
fd9d8b4f4a30588ca39f66015d5edc4e *dbrpms.tbz
eaa1a7973efcdcd2fe40b09828ba0e01 *debugos.tbz
290909dec5b3995aed2298f20d345aea *exaos.tbz
cfda22b7599e47bdc686acf41fd5d67c *kernel.tbz
18db43d6e94797710ffa771702be33e8 *ofed.tbz
4831ba742464caaa11c1c0153b823886 *sunutils.tbz
activated:2013-11-24 04:41:03 +0100
status:success
```
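
The second half of that listing is a set of md5 checksums for the cellbits .tbz files; a minimal sketch of cross-checking them against the actual files (assuming the standard /opt/oracle.cellos locations):

```
cd /opt/oracle.cellos/iso/cellbits
# image.id stores the sums in md5sum's binary-mode format (hash *file),
# so md5sum -c can consume them directly
grep '\.tbz$' /opt/oracle.cellos/image.id | md5sum -c -
```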

You can see that this file contains all the information about the current version of your Exadata software stack, as well as a list of md5 checksums of the .tbz files in /opt/oracle.cellos/iso/cellbits. We checked the md5 checksums of these files as well (as sketched above), so we now know 100% that we are on the correct level. So how does this file get updated, since it seems to be out of date, or at least not in line with our system? Well, it is generated when you patch your compute node to another level, and it is updated by the exadata-sun-computenode rpm. Let's peek a bit into that rpm and what it deploys. First I am going to mount an ISO file as my yum repository:

```[root@dm0101 ~]# mount -o loop /u01/app/oracle/stage.2288/112_latest_repo_130109.iso /var/www/html/yum/unknown/EXADATA/dbserver/11.2/latest.2288/
```

The next step is to extract the files from the RPM (with rpm2cpio and cpio; the rpm is assumed to have been copied into the working directory) and see what we get:

```[root@dm0101 ~]# mkdir /tmp/exadata-sun-computenode-
[root@dm0101 ~]# cd /tmp/exadata-sun-computenode-
[root@dm0101 exadata-sun-computenode-]# rpm2cpio exadata-sun-computenode-11.2.3.2.1.130302-1.x86_64.rpm | cpio -idmv
./etc/init.d/pciibmap
./etc/init.d/workaround_13083530
./etc/yum/pluginconf.d/installonlyn.conf
./opt/oracle.SupportTools/corecontrol
./opt/oracle.SupportTools/dbserver_backup.sh
./opt/oracle.SupportTools/sundiag.sh
./opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in
194 blocks
```

Aha! There is a file called ./opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in in that RPM, but it does not contain everything; it still doesn't have the md5 checksum information in it:

```[root@dm0101 exadata-sun-computenode-]# cat ./opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in
id:1362281283
kernel:2.6.32-400.21.1.el5uek
kernelstock:2.6.32-300.19.1.el5uek
version:11.2.3.2.1.130302
type:production
label:OSS_11.2.3.2.1_LINUX.X64_RELEASE
cellrpmver:OSS_11.2.3.2.1_LINUX.X64_130302
cellswver:OSS_11.2.3.2.1_LINUX.X64_130302
mode:install
created:2013-03-02 19:28:04 -0800
node:COMPUTE
```

To find out how that gets filled in, we need to look at the script that gets executed when we install this RPM. The output below is truncated a bit for readability:

```[root@dm0101 rpm-extraxt]# rpm --scripts -qp exadata-sun-computenode-11.2.3.2.1.130302-1.x86_64.rpm
.
..
fi
# Don't start post script for the fresh imaging (on zero boot)
# Don't install any files (remove them)
if [ -f /.post.imaging ]; then
rm -f /opt/oracle.cellos/tmpl/image.id.11.2.3.2.1.130302.in
else
fi
..
.
```

So when we install this RPM it (among other things not shown here) removes my image.id file and a shell script called exadata.computenode.post.sh, and it then executes a new version of exadata.computenode.post.sh. Now, exadata.computenode.post.sh is quite a big script: it sets up your compute node properly and then finally reboots it. In this huge exadata.computenode.post.sh script there is a function that calculates the md5 checksums of the files in my cellbits directory and appends that info to the image.id file (whose name is passed in via the $imageid_out variable):

```md5sum_iso_cellbits ()
{
local imageid_out=$1

[ "$imageid_out" == '' ] && imageid_out=$IMAGE_ID_DEF
print_log "Please wait. Calculating checksums for $imageid_out ..." INFO
# Remove any existing md5 sums from the output file
perl -ne '/[a-f\d]\s+\*.+/ or print $_' -i $imageid_out
local tmp_lst=/tmp/msd5sum.cellbits.lst
pushd /opt/oracle.cellos/iso/cellbits
md5sum -b * > $tmp_lst
cat $tmp_lst >> $imageid_out
rm -f $tmp_lst
popd
}
```

Finally, the function db_update_imageinfo() in exadata.computenode.post.sh creates a new, up-to-date version of my image.id file, which makes imageinfo happy again. To make a long blog post short, the solution to our misbehaving imageinfo (and also imagehistory, by the way) was to run

```yum --enablerepo=exadata_dbserver_11.2.3.2.1_x86_64_base reinstall exadata-sun-computenode-11.2.3.2.1.130109-1
```

and let the nodes reboot. This solved our problem with the imageinfo command; unfortunately we could not determine the root cause of the image.id problem.
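
After the reinstall and reboot you will want to confirm that all nodes report the same version again; the same kind of dcli one-liner used earlier will do (output omitted here):

```
# imageinfo should now match on every compute node...
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "imageinfo | grep 'Image version'"
# ...and imagehistory should list the reinstall as the most recent entry
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "imagehistory | tail -5"
```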

# A look inside the cellcli calibrate command

With cellcli you can run the calibrate command, which measures the performance of your hard disks and flash cards. It eventually reports the throughput and the number of IOPS measured during the calibration, and it also reports which LUNs are not performing up to par.

Calibrate has a few simple options:

```CellCLI> help calibrate

Usage: CALIBRATE [HARDDISK | FLASHDISK | lun1 [, lun2]] [FORCE]
Purpose: Runs raw performance tests on disks.
CELLSRV must be shutdown.  User id root is required.

Options:
[FLASHDISK]: option to specify that only flash LUNs be calibrated.
[HARDDISK] : option to specify that only hard disk LUNs be calibrated.
lun1 [, lun2] : option to specify list of LUNs by name
[FORCE]    : runs test even while CELLSRV is running.

Examples:
CALIBRATE
CALIBRATE '0_8', '0_10' force

CellCLI>
```

So let's do a test run of calibrate on one of the hard disks in a cell of this V2 half rack:

```CellCLI> calibrate 0_2 force
Calibration will take a few minutes...
LUN 0_2 on drive [20:2] random read throughput: 168.21 MBPS, and 419 IOPS
CALIBRATE results are within an acceptable range.

CellCLI>
```

Now this does tell us something about the throughput and the IOPS that were measured, but it does not tell you HOW cellcli managed to measure this. Let's have a peek into how cellcli works; in order to see that, we have to increase the log level and start cellcli in debug mode:

```[root@dm01cel01 ~]# cellcli -vvv -d
CellCLI: Release 11.2.3.2.1 - Production on Sun Oct 27 15:19:55 CET 2013

Cell Efficiency Ratio: 236
```

What this does is dump a lot of extra information on your screen about the calls that cellcli makes to ossmgmt.jar; cellcli itself is nothing but a perl script that calls that jar file. With cellcli now running in debug mode and the log level set to the highest value, we can run our calibrate command again:

```CellCLI> calibrate 0_2 force
loop
yyn:0  state:0  yychar:-1
tok: -3 sval: calibrate
yy:calibrate
next yychar:289
reduce
state 0, reducing 0 by rule 2 ($$1 :)
...
..
.
state 37, reducing 4 by rule 4 (line : $$1 cli $$2 EOL)
CLI command: CALIBRATE(289)
SPOOL arg : NONE(0)
CLI verb: CALIBRATE(289)
CLI object: NONE(0) 0_2
attr#1: FORCE(334)
++++Over Value is: 0
ForAll is false
attr#1: FORCE(334)
reduce
After reduction, shifting from state 0 to state 2
tok: -1 sval: null
Getting the state of CELLSRV services...  running
Calibration will take a few minutes...
LUN 0_2 on drive [20:2] random read throughput: 168.33 MBPS, and 420 IOPS
CALIBRATE results are within an acceptable range.

CellCLI>
```

I have truncated the output above a little to keep it readable. It does not tell us a whole lot more, but we can now see some interesting info in the cellcli logfile in /var/log/oracle/deploy/cellcli.lst.0* (the exact suffix varies depending on how often cellcli has been started on your system). When scrolling through that logfile you will find our calibrate command:

```Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.CellCLI readAndProcessLine
INFO: Executing cellcli command: "calibrate 0_2 force"
Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.CellCLI invokeSingleRSCmd
FINE: Executing RS command: 0.0STATE CELLSRV!!!!!!!!!!!!!!.
Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.CellCLI invokeSingleRSCmd
FINE: RS comamnd return status: 2.
```

You can see that the oracle.ossmgmt.ms.cli.CellCLI class checks whether or not cellsrv is running (I added the force parameter so I can run calibrate without stopping cellsrv).

```Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.Calibrate fetchLuns
INFO: disknames.size: 1 listluncmd - list lun "0_2" attributes name, devicename, physicaldrives, disktype, lunsize
Oct 27, 2013 3:21:33 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli cmd: MOD CALIBRATE :list lun "0_2" attributes name, devicename, physicaldrives, disktype, lunsize
Oct 27, 2013 3:22:14 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli result: .	 0_2	 /dev/sdc	 20:2	 HardDisk	 557.861328125G
```

Next you see that cellcli uses oracle.ossmgmt.ms.cli.Calibrate to translate my LUN name 0_2 into a block device name, in my case /dev/sdc. It then goes on to gather the names of all the LUNs:

```Oct 27, 2013 3:22:14 PM oracle.ossmgmt.ms.cli.Calibrate fetchLuns
INFO: listluns: 	 0_2	 /dev/sdc	 20:2	 HardDisk	 557.861328125G

Oct 27, 2013 3:22:14 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli cmd: MOD CALIBRATE :list physicaldisk attributes name, luns, diskMediaGroup
Oct 27, 2013 3:22:20 PM oracle.ossmgmt.ms.cli.Calibrate fetchCliResults
FINER: fetchCli result: .	 20:0	 0_0	 DMG_SYSTEM_HDD1
20:1	 0_1	 DMG_SYSTEM_HDD1
20:2	 0_2	 DMG_DATA_HDD1
20:3	 0_3	 DMG_DATA_HDD1
...
..
.
Oct 27, 2013 3:22:20 PM oracle.ossmgmt.ms.cli.Calibrate fetchLuns
INFO: listpds: 	 20:0	 0_0	 DMG_SYSTEM_HDD1
20:1	 0_1	 DMG_SYSTEM_HDD1
20:2	 0_2	 DMG_DATA_HDD1
20:3	 0_3	 DMG_DATA_HDD1
...
..
.
```

The output above is shortened for readability. Next, oracle.ossmgmt.ms.cli.Calibrate runs scripts_common.sh, which is also run during cell deployment; scripts_common.sh tune_block_io /dev/sdc sets nr_requests in /sys/block/sdc/queue/nr_requests to 4096:

```Oct 27, 2013 3:22:20 PM oracle.ossmgmt.ms.cli.Calibrate calibrate
```

nr_requests sets the number of I/O slots; at 4096 the deadline I/O scheduler has a better chance of merging I/O requests. This should already be set to 4096 on all of your cells, but the calibrate command sets it again anyway.
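
A quick way to check (and, if needed, set) this yourself on a cell; a minimal sketch assuming the same /dev/sdc device used above:

```
# Current queue depth for this device
cat /sys/block/sdc/queue/nr_requests

# This is effectively what scripts_common.sh tune_block_io does for the device
echo 4096 > /sys/block/sdc/queue/nr_requests
```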

```Oct 27, 2013 3:22:21 PM oracle.ossmgmt.ms.cli.CalibrateTest executeOrion
INFO: execute: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/orion -run advanced -type rand -matrix point  -disk_start_offset 0 -duration 30 -num_disks 1 -testname rand_0 -disk_end_offset 285184m -size_large 8192 -num_small 0 -num_large 10
Oct 27, 2013 3:22:51 PM oracle.ossmgmt.ms.cli.CalibrateTest executeOrion
INFO: execute: /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/orion -run advanced -type rand -matrix point  -disk_start_offset 0 -duration 30 -num_disks 1 -testname rand_IOPS_0 -disk_end_offset 285184m -size_small 8 -num_large 0 -num_small 20
```

Finally, calibrate starts the actual calibration by calling Orion, which is located in $OSS_BIN. It performs two separate runs: one doing large 8192KB I/Os to measure throughput, and one doing small 8KB requests to measure IOPS.

# Exadata system stats and MBRC

Last year Oracle added new functionality to dbms_stats.gather_system_stats, making it more Exadata aware and thus letting the optimizer make choices that may result in offloading more often. See this blog post from Kerry Osborne about it; I made a small comment about the MBRC in it and I would like to zoom in on that a bit more in this post.

So what happens during the gathering of the Exadata system statistics? The easiest way to find out is to run an SQL trace along with the system stats gathering:

```SYS@kjj AS SYSDBA> alter session set events 'sql_trace wait=true, bind=true, level=12';

Session altered.

SYS@kjj AS SYSDBA> exec dbms_stats.gather_system_stats('EXADATA')

PL/SQL procedure successfully completed.
```

If we look in the trace file we see that Oracle starts with some recursive SQL statements to find out what datafiles there are, and then starts doing a lot of IO; these are all recorded with a unique wait event:

```WAIT #139869656707320: nam='Disk file I/O Calibration' ela= 3255 count=1 p2=182669 p3=2 obj#=-1 tim=1379436371918507
```

It looks like Oracle is doing some type of IO to find out what this storage can do, so let's take a deeper look at these IO's. You can use the GDB route described by Frits Hoogland here, but for this purpose there is a somewhat easier way: the new trace interface in 11g has an interesting option that lets us trace the OSS calls to the storage:

```SYS@kjj AS SYSDBA> oradebug setmypid
Statement processed.
SYS@kjj AS SYSDBA> oradebug unlimit
Statement processed.
SYS@kjj AS SYSDBA> alter session set events 'trace[libcell.client_library.*]';

Session altered.

SYS@kjj AS SYSDBA> alter session set events 'sql_trace wait=true, bind=true, level=12';

Session altered.

SYS@kjj AS SYSDBA> exec dbms_stats.gather_system_stats('EXADATA')

PL/SQL procedure successfully completed.

SYS@kjj AS SYSDBA> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/kjj/kjj/trace/kjj_ora_13915.trc
Statement processed.
SYS@kjj AS SYSDBA>
```

So what am I doing there? First I remove the 5MB tracefile limit for my session (oradebug unlimit), because the steps above will generate a huge trace file. Next we enable tracing of all the communication between my shadow process and the storage layer, so this:

```alter session set events 'trace[libcell.client_library.*]';
```

Will trace all of the following

```Components in library LIBCELL:
--------------------------
Client_Library	       Client Library
Disk_Layer		       Disk Layer
Network_Layer	       Network Layer
IPC_Layer		       IPC Layer
```

libcell.client_library.* means that I want to trace all calls to libcell.client_library and all three of its child layers. If we now look at the trace file and zoom in on the Disk file I/O Calibration waits, we see that each wait is now preceded by trace data describing the communication between my shadow process and the storage layer:

```oss_open called for o/192.168.100.11/DATA_CD_09_dm01cel07 with guid 56d7b5d4-12a-0-0
connect: box name -192.168.100.11, port - 5042
TCP-connected to: box name -192.168.100.11, ospid - 14950, port - 5042, my box refid: 5, source id: 2575757666
SKGXP-connecting to the box 192.168.100.11 OSS OS Pid 14950
SKGXP-connected to the appliance number -728063998
Sending message 0xdabf2f0 with frag # 0, msgid 27 to box my_box_refid: 5 (my sourceid in box: 2575757666  box inc: 17)
Command code OPEN, msgid 27, refid 0
Open flags: readwrite - 1, sync - 1, char - 0.
Device name - DATA_CD_09_dm01cel07
Vrpc completion - cmd 0xdabf330, req 0xdabee50, msgid 27, frag # 1
Vrpc reply 0xdabf3e8 with refid 27 for existing message: 0xdabf2f0
Command code REPLY OPEN, msgid 0, refid 27
Returned open file descriptor  0
opening file number 9
oss_read open file: 9, offset: 38498574336, blocks: 128
Sending message 0xdabf2f0 with frag # 0, msgid 28 to box my_box_refid: 5 (my sourceid in box: 2575757666  box inc: 17)
Command code BREAD, msgid 28, refid 0
Block arguments fd 0 offset 38498574336 cnt 128 blksize 512 prefetch 0
oss_wait called for request: 0xdabee50
Vrpc completion - cmd 0xdabf330, req 0xdabee50, msgid 28, frag # 1
Vrpc reply 0xdabf3e8 with refid 28 for existing message: 0xdabf2f0
oss_wait done, request 0xdabee50, result 0x7f35f116bfc8
WAIT #139869656707320: nam='Disk file I/O Calibration' ela= 3255 count=1 p2=182669 p3=2 obj#=-1 tim=1379436371918507
```

So now that we know how Exadata handles an IO request, let's zoom in on the IO's behind our Disk file I/O Calibration waits:

```[oracle@dm0101 [kjj] trace]$ grep 'oss_read open file' /u01/app/oracle/diag/rdbms/kjj/kjj/trace/kjj_ora_13915.trc
...
..
.
oss_read open file: 19, offset: 37603999744, blocks: 128
oss_read open file: 12, offset: 38469820416, blocks: 16
oss_read open file: 16, offset: 38503866368, blocks: 128
oss_read open file: 43, offset: 38536380416, blocks: 16
oss_read open file: 51, offset: 37821882368, blocks: 128
oss_read open file: 78, offset: 37753462784, blocks: 16
oss_read open file: 44, offset: 38539337728, blocks: 128
oss_read open file: 33, offset: 38455091200, blocks: 16
oss_read open file: 49, offset: 37558116352, blocks: 128
oss_read open file: 30, offset: 38454730752, blocks: 16
oss_read open file: 16, offset: 38439862272, blocks: 128
oss_read open file: 67, offset: 37605408768, blocks: 16
...
..
.
[oracle@dm0101 [kjj] trace]$
```

So when Oracle fires dbms_stats.gather_system_stats('EXADATA') you see that it alternates between requesting 128 blocks and 16 blocks from different celldisks. Apparently Oracle uses these IO's to decide whether or not it can set the MBRC to 128 (it also takes db_cache_size into account). Then it deletes the old results in aux_stats$:

```DELETE FROM SYS.AUX_STATS$ WHERE SNAME = 'SYSSTATS_MAIN'
```

and eventually it updates the values of, amongst others, MBRC to 128 in this case:

```PARSING IN CURSOR #139869655553736 len=71 dep=1 uid=0 oct=2 lid=0 tim=1379436419280111 hv=4016519028 ad='db24c118' sqlid='32kbjpmrqfdvn'
INSERT INTO SYS.AUX_STATS$ VALUES ('SYSSTATS_MAIN', 'MBRC', :B1 , NULL)
END OF STMT
BINDS #139869655553736:
Bind#0
oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
oacflg=13 fl2=206001 frm=00 csi=00 siz=24 off=0
kxsbbbfp=7f35f12e4088  bln=22  avl=03  flg=09
value=128
EXEC #139869655553736:c=0,e=267,p=0,cr=0,cu=3,mis=0,r=1,dep=1,og=1,plh=0,tim=1379436419280370
CLOSE #139869655553736:c=0,e=1,dep=1,type=3,tim=1379436419280396
```
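
If you want to check what actually ended up there after the gather, a simple query against aux_stats$ shows the new MBRC value (a minimal sketch):

```
-- What did the Exadata-aware gather record as the multiblock read count?
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN'
AND    pname = 'MBRC';
```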

# Analyzing page allocation failures on Exadata

I have seen several clients struggling to decode page allocation failures on Exadata, so in this post I will try to explain how to read the backtrace. The following is an anonymized client case where page allocation failures led up to a node reboot.

```Jan 1 11:58:02 dm01db01 kernel: oracle: page allocation failure. order:1, mode:0x20
Jan 1 11:58:02 dm01db01 kernel: Pid: 80047, comm: oracle Tainted: P           2.6.32-400.11.1.el5uek #1
Jan 1 11:58:02 dm01db01 kernel: Call Trace:
Jan 1 11:58:02 dm01db01 kernel:  <IRQ>  [<ffffffff810ddf74>] __alloc_pages_nodemask+0x524/0x595
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8110da3f>] kmem_getpages+0x4f/0xf4
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8110dc3c>] fallback_alloc+0x158/0x1ce
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8110ddd3>] ____cache_alloc_node+0x121/0x134
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8110e3f3>] kmem_cache_alloc_node_notrace+0x84/0xb9
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8110e46e>] __kmalloc_node+0x46/0x73
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff813b9aa8>] ? __alloc_skb+0x72/0x13d
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff813b9aa8>] __alloc_skb+0x72/0x13d
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff813b9bdb>] alloc_skb+0x13/0x15
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff813b9f11>] dev_alloc_skb+0x1b/0x38
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02a3722>] ipoib_cm_alloc_rx_skb+0x31/0x1de [ib_ipoib]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02a4d04>] ipoib_cm_handle_rx_wc+0x3a1/0x5b8 [ib_ipoib]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa0198bdc>] ? mlx4_ib_free_srq_wqe+0x27/0x54 [mlx4_ib]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa01904d4>] ? mlx4_ib_poll_cq+0x620/0x65e [mlx4_ib]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa029fe97>] ipoib_poll+0x87/0x128 [ib_ipoib]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff813c4b69>] net_rx_action+0xc6/0x1cd
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8105e8cd>] __do_softirq+0xd7/0x19e
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff810aefdc>] ? handle_IRQ_event+0x66/0x120
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff81012eec>] call_softirq+0x1c/0x30
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff81014695>] do_softirq+0x46/0x89
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8105e752>] irq_exit+0x3b/0x7a
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff8145bea1>] do_IRQ+0x99/0xb0
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffff81012713>] ret_from_intr+0x0/0x11
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02bc2df>] ? kcalloc+0x35/0x3d [rds]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02bc2df>] ? kcalloc+0x35/0x3d [rds]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02bc724>] ? __rds_rdma_map+0x16c/0x32c [rds]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02bca09>] ? rds_get_mr+0x42/0x4f [rds]
Jan 1 11:58:02 dm01db01 kernel:  [<ffffffffa02b67b2>] ? rds_setsockopt+0xae/0x14f [rds]
Jan 1 11:58:02 dm01db01 kernel:  <EOI>  [<ffffffff81458045>] ? _spin_lock+0x21/0x25
```

Let's take a look at that call trace. The first line tells us that it is the oracle binary that is having the page allocation failure. The Linux kernel allocates pages in powers of 2, so 'order:1' means that Linux failed to allocate $2^1$ pages, i.e. 2 pages in total. We can see how much memory that is if we look up the page size on Exadata:

```[root@dm0101 ~]# getconf PAGESIZE
4096
```

so this means we wanted to allocate $2^1 \times 4096 = 8192$ bytes.
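
The same arithmetic from the shell, in case you want to plug in a different order from your own trace (the order value below is just the one from this example):

```
# bytes = 2^order * PAGESIZE; order 1 on a 4k-page system gives 8192
order=1
echo $(( (1 << order) * $(getconf PAGESIZE) ))
```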

So in this case the system could not allocate 8k of memory. If we walk through the trace we can see where it went wrong: in lines 10, 11 and 12 you can see that Linux tries to allocate socket kernel buffers (alloc_skb) and fails. Walking further down the stack we can see it is IP over InfiniBand that tries to allocate these buffers (ipoib_cm_alloc_rx_skb).

The mode:0x20 part is the GFP (Get Free Pages) flag that was sent with the request; you can look up which flag it is in gfp.h:

```[root@dm0101 ~]# grep 0x20 /usr/src/kernels/`uname -r`/include/linux/gfp.h
#define __GFP_HIGH	((__force gfp_t)0x20u)	/* Should access emergency pools? */
```

So if we look further down the messages file we can see the following part:

```Jun 14 11:58:02 dm04dbadm02 kernel: Node 0 DMA free:15800kB min:4kB low:4kB high:4kB active_anon:0kB inactive_anon:0kB
active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15252kB mlocked:0kB dirty:0kB
writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB
bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
```

As we can see we have only 4kB of low memory left and absolutely no room left in the kernel slabs. The rds and ip allocations, and thus also ipoib, come from slab-allocated pages (you can monitor the slabs via /proc/slabinfo):

```[root@dm0101 ~]# cat /proc/slabinfo | grep "^ip\|rds"
rds_ib_frag        23600  23600   8192    1    4 : tunables    8    4    0 : slabdata  23600  23600      0
rds_ib_incoming    23570  23570   8192    1    4 : tunables    8    4    0 : slabdata  23570  23570      0
rds_iw_frag            0      0     40   92    1 : tunables  120   60    8 : slabdata      0      0      0
rds_iw_incoming        0      0    120   32    1 : tunables  120   60    8 : slabdata      0      0      0
rds_connection        18     77    688   11    2 : tunables   54   27    8 : slabdata      7      7      0
ip6_dst_cache         10     24    320   12    1 : tunables   54   27    8 : slabdata      2      2      0
ip6_mrt_cache          0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0
ip_mrt_cache           0      0    128   30    1 : tunables  120   60    8 : slabdata      0      0      0
ip_fib_alias           0      0     32  112    1 : tunables  120   60    8 : slabdata      0      0      0
ip_fib_hash           28    106     72   53    1 : tunables  120   60    8 : slabdata      2      2      0
ip_dst_cache         486    530    384   10    1 : tunables   54   27    8 : slabdata     53     53      0
[root@dm0101 ~]#
```

Furthermore, ipoib uses an MTU of 64k (65520 bytes), which is 16 pages of 4096 bytes, so a single buffer is at least an order-4 allocation of slab-managed pages:

```[root@dm0101 ~]# ifconfig bond0 | grep MTU
UP BROADCAST RUNNING MASTER MULTICAST  MTU:65520  Metric:1
```

With RDS running on our interconnect this will eventually lead to a node eviction: if RDS messages cannot be written into these pages, interconnect traffic won't arrive at this host (remember that InfiniBand is a lossless fabric).

# Exadata V2 and CPU scaling

A while ago I was doing some bandwidth and latency testing of InfiniBand interfaces on different Exadata configurations and ran into something that might be interesting for Exadata V2 owners out there. You can run performance tests on your IB stack using a set of tools whose names start with ib_read_*, ib_send_* and ib_write_*. These tools work in a client/server setting: you start a server on one side of your path, start a client on the other side, and start sending and reading data (a short sketch of starting both ends follows after the output below). When running these tests on an Exadata X2 or higher your output will look somewhat like this:

```[root@dm0102 ~]# ib_read_bw dm01db01-priv -a
------------------------------------------------------------------
Connection type : RC
local address:  LID 0x0e, QPN 0x40005f, PSN 0x1f908d RKey 0x30003153 VAddr 0x007f368c713000
remote address: LID 0x0c, QPN 0x28005d, PSN 0x6bbc6e, RKey 0x400428d1 VAddr 0x007f691a135000
Mtu : 2048
------------------------------------------------------------------
#bytes #iterations    BW peak[MB/sec]    BW average[MB/sec]
2        1000               7.22                  2.25
4        1000              14.94                 13.86
8        1000              28.81                 26.53
16        1000              59.03                 54.09
32        1000             113.56                104.37
64        1000             223.18                197.08
128        1000             113.06                112.79
256        1000             226.70                226.16
512        1000             452.23                451.57
1024        1000             902.15                900.82
2048        1000            1805.63               1801.26
4096        1000            2435.64               2434.48
8192        1000            2916.02               2915.82
16384        1000            2937.44               2937.33
32768        1000            2934.50               2931.79
65536        1000            2941.48               2940.98
131072        1000            2941.67               2941.60
262144        1000            2943.20               2941.26
524288        1000            2940.64               2938.26
1048576        1000            2939.29               2939.29
2097152        1000            2942.21               2941.77
4194304        1000            2941.54               2941.00
8388608        1000            2939.31               2938.80
------------------------------------------------------------------
[root@dm0102 ~]#
```
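
For completeness: the output above comes from the client side, and the server side must already be listening on the node you point it at. A minimal sketch of both ends, assuming ib_read_bw from the standard perftest tools and the -priv InfiniBand hostnames (adjust to your own naming):

```
# server end (the node the client will connect to): start it listening first
ib_read_bw -a

# client end: connect to the server's IB address and sweep all message sizes
ib_read_bw dm01db01-priv -a
```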

Try running this same test on an Exadata V2 and you will see this:

```Connection type : RC
local address:  LID 0x0e, QPN 0x34004f, PSN 0xc3db36 RKey 0xf8015f71 VAddr 0x007f2a75f2d000
remote address: LID 0x0c, QPN 0x2c0065, PSN 0xdd9285, RKey 0x60003b19 VAddr 0x007f2100646000
Mtu : 2048
------------------------------------------------------------------
#bytes #iterations    BW peak[MB/sec]    BW average[MB/sec]
Conflicting CPU frequency values detected: 1600.000000 != 2534.000000
2        1000               0.00                  0.00
Conflicting CPU frequency values detected: 1600.000000 != 2534.000000
4        1000               0.00                  0.00
Conflicting CPU frequency values detected: 1600.000000 != 2534.000000
8        1000               0.00                  0.00
^C
[root@dm0102 ~]#
```

It looks like this tool is telling me that my CPUs are running at 1.6GHz but were expected to run at 2.53GHz. And the tool is right; on this Exadata my CPUs are running at that speed:

```[root@dm0102 ~]# cat /proc/cpuinfo | grep "cpu MHz" | uniq
cpu MHz		: 1600.000
```

The tool was also right about the expected speed: my V2 system uses Intel Xeon E5540 CPUs running at 2.53GHz:

```[root@dm0102 ~]# cat /proc/cpuinfo | grep "model name" | uniq
model name	: Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz
[root@dm0102 ~]#
```

So your CPUs are not running at full speed on an Exadata V2. On my test V2 there is nothing running that generates load on the CPUs; if you put some load on the system, some of the CPUs kick into full throttle:

```[root@dm0102 ~]# cat /proc/cpuinfo | grep "cpu MHz" | uniq
cpu MHz		: 1600.000
cpu MHz		: 2534.000
cpu MHz		: 1600.000
```

What is causing this behavior? Well, it is the Linux kernel's implementation of CPU frequency scaling. The scaling of the CPU speed is controlled by the CPU scaling governor, and you can see its status by looking in the /sys filesystem:

```[root@dm0102 ~]# for i in {0..12}; do cat /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor; done
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
ondemand
```

The governor for all my CPU cores is 'ondemand', which means that Linux will dynamically scale my CPU speed when needed. We can control this behavior with a daemon simply called cpuspeed:

```[root@dm0102 ~]# /etc/init.d/cpuspeed status
Frequency scaling enabled using ondemand governor
```

We can stop this daemon to disable the CPU Scaling Governor:

```[root@dm0102 ~]# /etc/init.d/cpuspeed stop
Disabling ondemand cpu frequency scaling:                  [  OK  ]
[root@dm0102 ~]# for i in {0..12}; do cat /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor; done
userspace
userspace
userspace
userspace
userspace
userspace
userspace
userspace
userspace
userspace
userspace
userspace
userspace
```

If we now look at our CPU speed we can see that the cores are running at full speed:

```[root@dm0102 ~]# cat /proc/cpuinfo | grep "cpu MHz" | uniq
cpu MHz		: 2534.000
```

Interestingly enough, Oracle apparently decided not to use CPU frequency scaling on the X2 and X3 models.
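
Stopping cpuspeed only lasts until the next reboot. If you want a V2 to behave like the X2/X3 in this respect, a minimal sketch would be the following (this is my own suggestion, not an Oracle-documented procedure, so check your support guidelines before changing an Exadata default):

```
# keep the cpuspeed service from starting at boot
chkconfig cpuspeed off

# and pin every core to the 'performance' governor for the current boot
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done
```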