Adding an Exadata V2 as a target in Enterprise Manager 12c

Although Oracle says that Enterprise Manager 12c “provides the tools to effectively and efficiently manage your Oracle Exadata Database Machine”, it is a bit of a challenge to get it all working correctly on an Exadata V2. It looks like the Exadata plugin for Enterprise Manager 12c was developed and tested on an X2 only; getting a V2 as a target into Enterprise Manager does not work out of the box. To get Enterprise Manager 12c to discover your Exadata V2 you need to perform some extra steps.

Exadata discovery is done using the first compute node in your Exadata rack (e.g. dm01db01). The agent uses a file called databasemachine.xml which is located in your One Command directory:

[oracle@dm01db01 [+ASM1] ~]$ ls -la /opt/oracle.SupportTools/onecommand/database*
-rw-r--r-- 1 root root 15790 May 10 22:07 /opt/oracle.SupportTools/onecommand/databasemachine.xml
[oracle@dm01db01 [+ASM1] ~]$

This file is generated by dbm_configurator.xls in the One Command directory. Unfortunately for V2 owners, early One Command versions did not generate this file, so you have to generate it yourself. Obviously you need Excel on a Windows PC to use dbm_configurator.xls, as it uses VBA (Visual Basic for Applications) to generate the One Command files.

  • From the first node in the rack, copy (scp) the following 2 files from /opt/oracle.SupportTools/onecommand to your Windows host (see the scp example below):
    1. config.dat
    2. onecommand.params
  • Download OneCommand: Patch 13612149
  • Unzip the file p13612149_112242_Generic.zip on your Windows host
  • Extract the tarball onecmd.tar
  • Open dbm_configurator.xls in Excel
  • Enable macros within Excel
  • Click on the import button in the top left and locate the onecommand.params file (make sure that config.dat is in the same directory)
  • Check if the imported data is still correct
  • Click the generate button
  • Click the create config files button

After this, upload at least databasemachine.xml to /opt/oracle.SupportTools/onecommand on the first node in your rack.
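
A minimal sketch of the file transfers, assuming you have an scp client on (or reachable from) the Windows host and root access to the first compute node; adjust hostnames and paths to your own environment:

# copy the existing configuration down so dbm_configurator.xls can import it
scp root@dm01db01:/opt/oracle.SupportTools/onecommand/config.dat .
scp root@dm01db01:/opt/oracle.SupportTools/onecommand/onecommand.params .

# after generating the config files, push databasemachine.xml back to the first node
scp databasemachine.xml root@dm01db01:/opt/oracle.SupportTools/onecommand/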

The next step is to correct the InfiniBand node descriptions of the compute node HCAs; right now on a V2 these look as follows:

[root@dm01db01 mlx4_0]# ibnodes | grep dm01db
Ca     : 0x00212800013f1242 ports 2 "dm01db04 HCA-1"
Ca     : 0x00212800013f12da ports 2 "dm01db02 HCA-1"
Ca     : 0x00212800013f111e ports 2 "dm01db03 HCA-1"
Ca     : 0x00212800013f2672 ports 2 "dm01db01 HCA-1"

Unfortunately the agent discovery process is looking for a naming convention of the form ‘hostname S ip-address HCA-1’. Fortunately Oracle provides a script to correct this: /opt/oracle.cellos/ib_set_node_desc.sh. However, when you run this script on a V2 not much happens; it is broken on a V2 system. The problem is in the InfiniBand bond naming:

[root@dm01db01 ~]# grep IFCFG_BONDIB /opt/oracle.cellos/ib_set_node_desc.sh
  local IFCFG_BONDIB=/etc/sysconfig/network-scripts/ifcfg-bondib
        local addr=`awk -F= 'BEGIN {IGNORECASE=1} /^IPADDR=[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ {print $2}' $IFCFG_BONDIB$id 2>/dev/null`
[root@dm01db01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bondib
cat: /etc/sysconfig/network-scripts/ifcfg-bondib: No such file or directory
[root@dm01db01 ~]# 

So the Exadata V2 IB bond has a different name: it is actually called bond0 instead of bondib:

[root@dm01db01 ~]# ifconfig bond0
bond0     Link encap:InfiniBand  HWaddr 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::221:2800:13f:2673/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:65520  Metric:1
          RX packets:55048256 errors:0 dropped:0 overruns:0 frame:0
          TX packets:56638365 errors:0 dropped:21 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:12207158878 (11.3 GiB)  TX bytes:18646886557 (17.3 GiB)

[root@dm01db01 ~]# 

So instead of using the broken ib_set_node_desc.sh script, set the node descriptions manually. Note that the backticks are escaped so that the hostname and bond0 IP address are resolved on each node, rather than once on the node you run dcli from:

[root@dm01db01 ~]# dcli -g dbs_group -l root "echo -n \`hostname -s\` S \`ifconfig bond0 | grep 'inet addr' | cut -f2 -d: | cut -f1 -d' '\` HCA-1 > /sys/class/infiniband/mlx4_0/node_desc"
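
To double-check what each node wrote into its node description, you can for example cat the sysfs file on all nodes with dcli, using the same group file as above:

[root@dm01db01 ~]# dcli -g dbs_group -l root "cat /sys/class/infiniband/mlx4_0/node_desc"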

If all went well, each node should now report its own hostname and InfiniBand IP address, along these lines:

[root@dm01db01 ~]# ibnodes | grep dm01db
Ca     : 0x00212800013f1242 ports 2 "dm01db04 S 192.168.100.4 HCA-1"
Ca     : 0x00212800013f12da ports 2 "dm01db02 S 192.168.100.2 HCA-1"
Ca     : 0x00212800013f111e ports 2 "dm01db03 S 192.168.100.3 HCA-1"
Ca     : 0x00212800013f2672 ports 2 "dm01db01 S 192.168.100.1 HCA-1"

After these changes the guided discovery of your Exadata should run as described in the Cloud Control manual.

Golden Gate monitoring in OEM

I have migrated several databases to Exadata using Golden Gate and I really liked the Golden Gate tool. The only thing I never really understood is why there is no simple Enterprise Manager plug-in to monitor the Golden Gate status. So I decided to create a small script that can be used as a ‘User Defined Metric’ (UDM) in OEM. For monitoring purposes I am only interested in knowing whether a process has abended (status STOPPED means that I have deliberately stopped it, so I don’t need to know that) and what the lag time is. So I created a small script called ogg_status.sh, which I placed in my $GG_HOME directory:

#!/bin/bash
# set -x
########################################
#
# Usage ./ogg_status.sh -n GG_PROCESS_NAME -t status|lagtime
# Klaas-Jan Jongsma 2011
#
# v.01
#
########################################
#Function to get info all from GGSCI
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export LD_LIBRARY_PATH=/u01/app/oracle/product/10.2.0/db_1/lib
infoall() {
cd /u01/app/oracle/product/ogg
(
./ggsci << eof
info all
exit
eof
)}
########################################
# Commandline options:
while getopts n:t: opt
do
  case "$opt" in
    n) NAME="$OPTARG";;
    t) TYPE="$OPTARG";;
  esac
done
#########################################
# Check status of Golden Gate processes
status()
{
# infoall |grep "$NAME" | awk '{print $2}'
cSTATUS="`infoall |grep "$NAME" | awk '{print $2}'`"
if [ "${cSTATUS}" = "ABENDED" ]
  then
  echo "em_result=ABENDED"
  echo "em_message=Golden Gate process ${NAME} status: ${cSTATUS}"
else
    if [ "${cSTATUS}" = "STOPPED" ]
      then
      echo "em_result=STOPPED"
    else echo "em_result=RUNNING"
    fi
    unset cSTATUS
fi
}
#########################################
# Check lagtime of Golden Gate processes
lagtime()
{
cLAGTIME="`infoall | grep $NAME | awk '{print $4}'`"
# echo $cLAGTIME
cLAGTIME_HOURS=`echo $cLAGTIME | awk -F: '{print $1}'`
cLAGTIME_MINUTES=`echo $cLAGTIME | awk -F: '{print $2}'`
cLAGTIME_SECONDS=`echo $cLAGTIME | awk -F: '{print $3}'`
# force base 10 so values with a leading zero (e.g. 08) are not treated as octal
cLAGTIME_SEC_TOTAL=$((10#$cLAGTIME_HOURS*3600+10#$cLAGTIME_MINUTES*60+10#$cLAGTIME_SECONDS))
echo "em_result=${cLAGTIME_SEC_TOTAL}"
echo "em_message=Golden Gate process ${NAME} lagtime is: ${cLAGTIME} (${cLAGTIME_SEC_TOTAL} seconds), check Golden Gate infrastructure."
unset cLAGTIME cLAGTIME_HOURS cLAGTIME_MINUTES cLAGTIME_SECONDS cLAGTIME_SEC_TOTAL
}
#########################################
# MAIN
case "$TYPE" in
  status)
     status
     ;;
  lagtime)
     lagtime
     ;;
esac
# set +x
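
Before wiring this into OEM it is worth a quick test from the command line. A minimal sketch, assuming a hypothetical extract process called EXT1 that is currently running without lag; the em_result/em_message lines are exactly what the UDM framework will parse:

$ ./ogg_status.sh -n EXT1 -t status
em_result=RUNNING
$ ./ogg_status.sh -n EXT1 -t lagtime
em_result=0
em_message=Golden Gate process EXT1 lagtime is: 00:00:00 (0 seconds), check Golden Gate infrastructure.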

Now that we have a script that tells us something about our Golden Gate status, we can create a UDM in OEM; we do this at the host target level. Go to the top right corner and click Create to make a new UDM:

If you want to create a UDM that monitors the lag of a process, fill it in similar to the example below. It will raise an alert whenever the extract/datapump process gets a lag bigger than 5 seconds:
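
As a sketch of the values I mean (exact field labels may differ between OEM versions, and EXT1 is just a hypothetical process name):

Metric Type:          Number
Command:              /u01/app/oracle/product/ogg/ogg_status.sh -n EXT1 -t lagtime
Comparison Operator:  >
Critical Threshold:   5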

For monitoring the status of an extract or manager process, create the following. It monitors whether a process gets the status ABENDED:
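
Again as a sketch under the same assumptions, this time with a string metric that goes critical on the ABENDED result:

Metric Type:          String
Command:              /u01/app/oracle/product/ogg/ogg_status.sh -n EXT1 -t status
Comparison Operator:  CONTAINS
Critical Threshold:   ABENDED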

We now have our UDM in place; the next step is to create a notification rule. For this, go to your OEM Preferences, located in the top right corner of your OEM screen. In the Preferences screen click on Rules (located under Notification):

This brings up the notification rules window; add a new rule here and add your host as a target for the rule:

Next go to the Metrics tab and add two metrics, one UDM string metric and one UDM number metric. We need the string UDM to monitor the process status; the number UDM is needed for lag monitoring:

Next check the critical severity status:

We are done on the Metrics tab; the final step is to tell OEM to send us an e-mail. Go to the last tab, Actions, and check ‘Send me an e-mail’:

If you have OEM alerting set up properly, you will now get an e-mail if a process abends or if the lag gets bigger than 5 seconds.