Golden Gate Activity Logging Tracing

A lot of my customers who are starting with Oracle Golden Gate run into issues when it comes to troubleshooting it. The most common question I get is: “I have set up Golden Gate, it was working fine but now process X abandoned… now what do I do?”. The OGG error log is a good starting point but does not always show the real issue. There are a lot of different ways to obtain more information on why you are running into an issue. In this blogpost I want to introduce a maybe not so well known feature called Golden Gate Activity Logging Tracing, and yes, that is what this feature is really called.

The most obvious way to enable tracing is with the TRACE/TRACE2 parameters, which are fairly well documented in the OGG documentation. You can either send the command directly to a running process like this:

GGSCI (rac1) 3> send extract EXT1 trace /tmp/trace_me.trc

Sending trace request to EXTRACT EXT1 ...
Trace file /tmp/trace_me.trc opened.

If we then do an insert into a table that is being monitored by this Extract, the process will write the following piece of data into /tmp/trace_me.trc:

12:20:26.259 (397290) * --- entering DataSource::readLCR() --- *
12:20:28.161 (399192) Extract pointer info reused [0], [file 0][src 0][tgt 0][tab PDB1.PDB1ADMIN.T1]. File [PDB1.PDB1ADMIN.T1], targ_idx [4294967295], object type [2], num_extracts [1]
12:20:28.163 (399194) exited DataSource::readLCR (stat=0, seqno=0, rba=0 io_type=5)
12:20:28.163 (399194) processing record for PDB1.PDB1ADMIN.T1
12:20:28.163 (399194) Writing seqno [21], rba [1465], table [PDB1.PDB1ADMIN.T1]
12:20:28.164 (399195) * --- entering DataSource::readLCR() --- *

The TRACE parameter shows what the Extract is reading from the database and writing into the OGG trail file. This gives us some useful information we can use for troubleshooting OGG. To disable tracing we send the Extract the TRACE OFF command:

GGSCI (rac1) 5> send extract EXT1 trace off

Sending trace request to EXTRACT EXT1 ...
Closing all trace files..

Keep in mind though that on a busy system with lots of DML, tracing can generate a huge amount of data. Besides sending the TRACE/TRACE2 command to a process, you can add it to a parameter file directly to start tracing as soon as the Extract or Replicat has finished parsing the parameter file. The actual tracing output from TRACE is limited; its main purpose is to show where Golden Gate is spending most of its time:


General statistics:
0.00% Checking messages (includes checkpointing)
0.00% Checking periodic tasks
0.00% Waiting for more data
0.00% Converting ASCII data to internal
10.45% Reading input records
0.00% Writing output records (replicate_io)
0.00% Mapping columns
0.00% Outputting data records
0.00% Performing SQL statements
0.00% Executing BATCHSQL statements
0.00% Executing SQL statements
0.00% Preparing SQL statements
0.00% Commit operations
0.00% Rollback operations
0.00% Checkpointing
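Adding TRACE to a parameter file, rather than sending it via GGSCI, could look like this minimal Extract parameter sketch (the group name, credential alias and trail name here are assumptions for illustration, not from the original setup):

```
EXTRACT EXT1
USERIDALIAS ogg_admin
TRACE /tmp/ext1.trc
EXTTRAIL ./dirdat/aa
TABLE PDB1.PDB1ADMIN.*;
```

With this in place, tracing starts as soon as the process has parsed its parameter file.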

The difference between TRACE and TRACE2 is minimal: TRACE2 also shows you how long your process is spending in the different code segments:

17:00:07.961 (2814715) * --- entering DataSource::readLCR() --- *
17:00:07.961 (2814715) Reading input records 10.357% (execute=291.531,total=2814.713,count=15230)
17:00:07.961 (2814715) Checking messages (includes checkpointing) 0.005% (execute=0.143,total=2814.713,count=15230)

If you want more tracing in Golden Gate you have to look elsewhere. In the past Golden Gate had features such as tltrace and traceini, but they are deprecated or being deprecated. Instead, Golden Gate has introduced Activity Logging Tracing. This feature has been out for a long while but is only documented in Doc ID 1204284.1, and it can be very handy at times. Activity Logging Tracing is built into the OGG kernel by default but must be activated, which can be done in two ways. First, Activity Logging Tracing can generate a dump file as soon as one of your OGG binaries abends. The other way to enable Activity Logging Tracing is by placing an XML file in $OGG_HOME that follows this naming syntax: gglog-<process_name>.xml, where process_name can be the name of the group you want to trace or the name of the binary. Doc ID 1204284.1 is not very clear about this, but you can actually trace almost every OGG binary, including ggsci and server. Some examples to clarify how to name your Activity Logging Tracing XML file:

  • If you want to trace your extract EXT1 name the file gglog-EXT1.xml
  • If you want to trace a replicat called REP1 call it gglog-REP1.xml
  • For tracing every extract name it gglog-extract.xml
  • Similar for tracing ggsci call it gglog-ggsci.xml
  • If you want to trace everything name the file gglog.xml however this will generate a huge, huge, huge amount of data.

As soon as you place your XML file in $OGG_HOME, Golden Gate will start tracing, and depending on your XML file contents these files can become very big very quickly. Activity Logging Tracing will generate a file named gglog-<name>_<program>.log, so tracing extract EXT1 will generate the file gglog-EXT1_extract.log and tracing ggsci will generate gglog-ggsci_ggsci.log. This naming behaviour can be changed in the XML file if you want the trace files to be generated on e.g. a different filesystem:

<param name="File" value="gglog-%I_%A.log"/>

The default logging format in OGG is log4j, and the documentation gives some clue on the general working of this XML file. When a process sees the XML file it starts processing whatever is configured in it. Doc ID 1204284.1 gives you some hints on what can be used for tracing. The example file gglog-full.xml shows that by only adding this:

<level value="all"/>

Golden Gate will immediately start tracing everything, but be warned: this will output a lot of trace information. Personally I tend to use gglog-debug.xml as a starting point for Golden Gate tracing as it already gives some more fine-grained control. As an example, the values below give you fine-grained control over the redo generation tracing:

<logger name="er.redo.ora">
  <level value="all"/>
</logger>
<logger name="">
  <level value="all"/>
</logger>
<logger name="er.redo.ora.rtc">
  <level value="off"/>
</logger>
<logger name="er.redo.ora.rtcfm">
  <level value="off"/>
</logger>
<logger name="er.redo.ora.thread">
  <level value="all"/>
</logger>
<logger name="er.redo.ora.thread.checkpoint">
  <level value="off"/>
</logger>
<logger name="er.redo.ora.thread.blocks">
  <level value="off"/>
</logger>
<logger name="er.redo.ora.thread.logswitch">
  <level value="all"/>
</logger>
<logger name="er.redo.ora.thread.unpack">
  <level value="all"/>
</logger>

Enabling all of them lets you trace redo information in real time. It dumps data in an output similar to this:

IXAsyncTrans 2907 oracle/ltwtcapture.c | >>>>>> lcrcnt=1031 received lcr type Internal ROW
IXAsyncTrans 1254 oracle/ltwtcapture.c | kngl_flag 0x00 kngl_xflag 0x0000 krvx_dflag 0x0002
IXAsyncTrans 1079 oracle/ltwtcapture.c | lcrvsn: [5] unpacked scn: [0x0000000000477cee]
IXAsyncTrans 1848 oracle/ltwtcapture.c | Ignore opcode 1 (orig op: 7)
IXAsyncTrans 2950 oracle/ltwtcapture.c | received commit, return
IXAsyncTrans 455 oracle/IXAsyncTrans.cpp | Queueing committed record with tid '2.30.1940'
IXAsyncTrans 580 oracle/IXAsyncTrans.cpp | Queueing result (DataBuffer,452,0x0000000004456b80)
IXAsyncTrans 2115 oracle/redooraix.c | Try to get record from logmining server.
IXAsyncTrans 2968 oracle/ltwtcapture.c | end of lcr buffer, return nodata
IXAsyncTrans 2973 oracle/ltwtcapture.c | Release lcr buffer
IXAsyncTrans 2151 oracle/redooraix.c | no lcr returned
IXAsyncTrans 2115 oracle/redooraix.c | Try to get record from logmining server.
main 325 oracle/IXAsyncTrans.cpp | Data buffer 0x0000000004456b80 returned for 0x1c4 bytes
main 10848 oracle/redoora.c | Processing next transaction with xid '2.30.1940'
main 10626 oracle/redoora.c | Processing first record for committed transaction 2.30.1940, scn 4685038
main 4530 oracle/redooraix.c | Try to get committed lcr from COM 0x00000000031f5750 with TID 0002.01e.00000794
main 1705 oracle/redooraix.c | maintain_rc_count(-1), xid=0x0002.01e.00000794 op=INSERT lcr_flag=0x8 xflag=0x0 row_cnt=0 chunk_cnt=0 txn_flag=0x0 groupRBA=0 groupthread=0
main 11032 oracle/redooraix.c | retrieved item# 1 with op 2 (INSERT), still have 0 row, 0 chunk in COM, source COM
main 5006 oracle/redooraix.c | Retrieved item# 1 from COM with TID 0x0002.01e.00000794, skip 0, more_record 0, tind 3
main 5015 oracle/redooraix.c | Exhausted COM, no more items, delete tran now.
main 10751 oracle/redoora.c | first record from commit txn with scn 4685038, seqno 53, rba 32348688, thread_id 1, jts_time_stamp 212351410734000000
main 7753 oracle/redoora.c | FORMAT XOUT LCR:
main 1638 oracle/redooraix.c | === NEW column list (num_columns=1) ===
main 1650 oracle/redooraix.c | Col[1](id,dty,len,uid,sid,iid,flag,flg,flg2,flg3,type): 0 0 1 0 1 1 0 0 0 0 1
main 1664 oracle/redooraix.c | 000000: 58 |X |
main 1638 oracle/redooraix.c | === OLD column list (num_columns=0) ===
main 6837 oracle/redooraix.c | XOUT_INSERT: format insert
main 5566 oracle/redooraix.c | Format col[1] NEW (id,dty,len,key,flag,flg3,type): 0 1 1 1 0x0 0x0 1
main 6396 oracle/redooraix.c | AFC/CHR after format value: 000000: 58 |X |
main 7010 oracle/redooraix.c | After format lcr, len 9
main 7012 oracle/redooraix.c | 000000: 00 00 05 00 00 00 01 00 58 |........X |
main 5388 oracle/redooraix.c | After format record at 53 - 32346988, get trail record with len 9
main 5389 oracle/redooraix.c | 000000: 00 00 05 00 00 00 01 00 58 |........X |
main 349 oracle/IXAsyncTrans.cpp | Data buffer 0x0000000004456b80 released
main 567 oracle/IXAsyncTrans.cpp | Queueing free buffer 0x0000000004456b80
IXAsyncTrans 2863 oracle/ltwtcapture.c | Receive 336 bytes from LCRCaptureReader
IXAsyncTrans 2907 oracle/ltwtcapture.c | >>>>>> lcrcnt=1032 received lcr type Internal ROW

There are more loggers than mentioned by Oracle in Doc ID 1204284.1, and as far as I know they are not publicly documented. For some of the loggers it is quite easy to figure out what they do, like tracing DDL in an Extract process for example:

<logger name="gglog.ddl.std">
  <level value="all"/>
</logger>

But mostly it is just guesswork and/or a lot of testing to see what they actually do (and what they are). In general you don’t really need them; in most cases the example files with the loggers as they are, are enough to troubleshoot most issues.

Creating a RAC cluster using Ansible (part 2)

In my previous two blogposts I explained how to set up a Vagrantfile that deploys two VirtualBox machines that can be used as the basis for a RAC cluster, and how you can use Ansible to deploy RAC on those VMs. In this post I want to dive a bit deeper into how I set up Ansible and why; keep in mind that this is just one way of doing this.

The Github repository containing all the files can be found here

I am not going to go over the contents of the common role as it probably speaks for itself. After the common role we first need to set up networking, which we need for the interconnects. Because we have added extra network devices, we need to make sure that the device we configured as the interconnect interface always remains the interconnect; we have configured VirtualBox so that these interfaces are on their own network. To keep the device naming persistent we configure udev. This works because in our Vagrantfile we have set the MAC addresses of the interfaces to fixed values:

- name: Setup udev for network devices
  replace: dest=/etc/udev/rules.d/70-persistent-net.rules regexp='ATTR.*{{ item.device }}' replace='ATTR{address}=="{{ item.mac|lower }}", ATTR{type}=="1", KERNEL=="eth*", NAME="{{ item.device }}"'
  with_items: "{{ network_ether_interfaces }}"
  when: network_ether_interfaces is defined
  register: udev_net

The array being referenced on the with_items line is a host-specific setting, so Ansible will take its value from the host's file in the host_vars/ directory. This means that the task above will create a udev rule for every network interface on the host that Ansible is running on.
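A host_vars file for such a host could look like this (a sketch: the hostname, addresses and MACs are made up, but the keys match those the tasks and the ethernet.j2 template read):

```yaml
# host_vars/rac1 (hypothetical values)
network_ether_interfaces:
  - device: eth1
    bootproto: static
    address: 10.10.10.51
    netmask: 255.255.255.0
    onboot: "yes"
    mac: "080027AAAA11"
  - device: eth2
    bootproto: static
    address: 192.168.78.51
    netmask: 255.255.255.0
    onboot: "yes"
    mac: "080027AAAA12"
```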

Ansible uses Jinja2 as the format to dynamically generate files, and we use this format for our network configuration files. With the Ansible template command, Ansible will read the ethernet.j2 file and the Jinja engine will use the template to create the correct file. In the case of the action below, ethernet.j2 will be uploaded as an ifcfg-<device> file, and it will do so on every host as long as it can find data for that host in the host_vars directory:

- name: Create the network configuration for ethernet devices
  template: src=ethernet.j2 dest=/etc/sysconfig/network-scripts/ifcfg-{{ item.device }}
  with_items: "{{ network_ether_interfaces }}"
  when: network_ether_interfaces is defined
  register: ether_result

The Jinja2 code is quite simple to read; the ethernet.j2 file looks like this:

# {{ ansible_managed }}
{% if item.bootproto == 'static' %}
DEVICE={{ item.device }}
{% if item.address is defined %}
IPADDR={{ item.address }}
{% endif %}
{% if item.onboot is defined %}
ONBOOT={{ item.onboot }}
{% endif %}
{% if item.peerdns is defined %}
PEERDNS={{ item.peerdns }}
{% endif %}
{% if item.defroute is defined %}
DEFROUTE={{ item.defroute }}
{% endif %}
{% if item.netmask is defined %}
NETMASK={{ item.netmask }}
{% endif %}
{% if item.gateway is defined %}
GATEWAY={{ item.gateway }}
{% endif %}
{% if item.mac is defined %}
HWADDR={{ item.mac }}
{% endif %}
{% endif %}

{% if item.bootproto == 'dhcp' %}
DEVICE={{ item.device }}
{% endif %}

It is basically divided into two parts: the first part tells Jinja2 what to do when there is a static device configuration, the other handles DHCP-enabled devices. Jinja2 will create a line for every item found in the host_vars. The results of this action are registered by Ansible with the line “register: ether_result”. We use these results in the next action:

- name: bring up network devices
  shell: ifdown {{ item.item.device }}; ifup {{ item.item.device }}
  with_items: "{{ ether_result.results }}"
  when: ether_result is defined and item.changed

Here we only restart those interfaces which are registered in the ether_result action and have changed. A more complex use of Ansible is in the template for the hosts file. The hosts file is the basis for DNSMasq, which is used as a simpler alternative to Bind DNS. The template for the hosts file is created with information from the Ansible facts. These facts are gathered automatically by Ansible as soon as a playbook begins, but we have already changed things in this run, so right after bringing up the network interfaces we gather the facts again; with these updated facts we can now build our hosts file:

localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_eth2']['ipv4']['address'] }} {{ hostvars[host]['ansible_hostname'] }}
{{ hostvars[host]['ansible_eth2']['ipv4']['address'] | regex_replace('(^.*\.).*$', '\\1') }}{{ hostvars[host]['ansible_eth2']['ipv4']['address'].split('.')[3] | int + 10 }} {{ hostvars[host]['ansible_hostname'] }}-vip
{{ hostvars[host]['ansible_eth1']['ipv4']['address'] }} {{ hostvars[host]['ansible_hostname'] }}-priv
{% endfor %}
{% for i in range(1,4) %}
{{ hostvars[inventory_hostname]['ansible_eth2']['ipv4']['address'] | regex_replace('(^.*\.).*$', '\\1') }}{{ i + 250 }} rac-cluster-scan
{% endfor %}

The for-loop goes through the facts for all hosts, gets the IPv4 address and generates the correct hosts file entry for every node in the cluster. It then takes the IPv4 address and adds 10 to the last octet to create the address for the VIP interface. The eth1 address is used for the interconnect in our cluster. The last part of the file is a loop that generates three additional IPv4 addresses based on the address of eth2, adding 250 plus the integer of the loop to the last octet; these are the SCAN addresses of our cluster. Now that we have the hosts file set up we can install DNSMasq and have DNS ready. We ping the interfaces just to make sure they are up; if the ping fails, Ansible stops the playbook.
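For two nodes named rac1 and rac2, the generated hosts file would look something like this (the addresses are made up for illustration; only the +10 VIP and +250 SCAN arithmetic comes from the template):

```
192.168.78.51  rac1
192.168.78.61  rac1-vip
10.10.10.51    rac1-priv
192.168.78.52  rac2
192.168.78.62  rac2-vip
10.10.10.52    rac2-priv
192.168.78.251 rac-cluster-scan
192.168.78.252 rac-cluster-scan
192.168.78.253 rac-cluster-scan
```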

Our network is now set up as we want it to be and we can go on to configure storage. Vagrant has already created four shared disks for us, which are presented to both virtual machines. We now have to make sure that we have persistent device naming, which we can do with either ASMLib or udev. For both methods there needs to be a partition on each disk; I am using sfdisk as an easy way to create partitions on all four disks:

- name: Create disk partitions
  shell: echo "0," | sfdisk -q {{item.1}}
  with_indexed_items: "{{ device_list.stdout_lines }}"
  when: item.0 > 0
  become: true
  run_once: true
  register: sfdisk_output

The with_indexed_items loop gives me an index number, which is used to make sure we are not destroying the partitions on /dev/sda, where the OS is installed. Because the OS is installed on the first disk, we can start sfdisk at index 1. When Ansible needs to install ASMLib, it installs the needed RPMs and uploads and runs a shell script. The reason for this script is that configuring ASMLib needs some interactivity; this can be solved in regular shell scripting with a here document, but as far as I know there is no such thing in Ansible. For udev we can more or less copy what we did to make our network interfaces persistent.
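For the udev variant, a rule for persistent ASM disk naming could look roughly like this (a sketch only: the serial value is a placeholder, and the exact match keys and ownership depend on your OS version and the role's actual template):

```
# /etc/udev/rules.d/99-oracle-asmdisks.rules (hypothetical example)
KERNEL=="sd?1", SUBSYSTEM=="block", ENV{ID_SERIAL}=="<disk_serial>", SYMLINK+="ASMDISK1", OWNER="oracle", GROUP="dba", MODE="0660"
```

The SYMLINK gives you a stable /dev/ASMDISK* name regardless of the order in which the kernel discovers the disks.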

After we have installed a toolset we can start with the installation of the Grid Infrastructure. The ASM diskstring and prefix depend on whether we configured the disks earlier with udev or with ASMLib. In order to use the correct values I am adding them to the Ansible facts as custom facts. Depending on which role it finds, Ansible will load these values into the facts:

- set_fact: asm_diskstring="/dev/oracleasm/disks"
  when: "'configure_asmlib' in role_names"

- set_fact: asm_diskstring="/dev/ASMDISK*"
  when: "'configure_udev' in role_names"
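The custom fact can then be referenced wherever the diskstring is needed, for example in a Grid Infrastructure response file template (a sketch: oracle.install.asm.diskGroup.diskDiscoveryString is a standard GI response file key, but the surrounding template is an assumption):

```
oracle.install.asm.diskGroup.diskDiscoveryString={{ asm_diskstring }}
```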

Now we can create our SSH keys on both nodes and download the public keys back to our VirtualBox host. After we have uploaded the public keys to all the hosts we can add them to our known_hosts using ssh-keyscan.

We upload the Grid Infrastructure zipfiles, extract them and install the cvuqdisk RPM on all nodes. For the installation of the Grid Infrastructure software itself we use the network_ether_interfaces array to set the client and interconnect interfaces correctly. The response file is made from a Jinja2 template so we can easily customise some settings depending on our needs. Because we are doing a silent install from a response file, we need to run configToolAllCommands, which is normally an interactive step when you use the OUI. Finally we create the FRA diskgroup and clean up everything we uploaded and no longer need.

The installation of the RDBMS software with Ansible is now very straightforward: it is just a case of adjusting limits.conf, creating the response file from the Jinja2 template and installing the software like a regular silent install. The same goes for the actual creation of the database: there are the checks, the response file creation and the database creation itself. For database creation I currently use the response file method; as soon as I have some time I will switch this to database creation using a template, to have more fine-grained control over the created database.

If you take just the Vagrantfile and the Ansible files and start installing from scratch, it will take you a couple of hours (about 3) to download the Vagrant box and the software, create the machines and do the provisioning with Ansible.

  • The Github repository can be found here
  • The blogpost about Vagrant and the vagrantfile can be found here
  • Creating a RAC cluster using Ansible (part 1) can be found here

Creating a RAC cluster using Ansible (part 1)

In my previous blog post I explained how Vagrant can be used to create a 2-node VM environment for deploying a RAC cluster. In this post I want to dive into how to actually do the RAC install using Ansible, so you can easily create a RAC test environment on e.g. your laptop.

I have created a git repo with the files that I am using for this blog post; this git repo can be found here

I am not going over the basics of Ansible in this blogpost; I am assuming you have a basic understanding of Ansible and the YAML markup language. If you need a quick refresher on how Ansible works you can take a look here or here, for example.

When trying to automate a RAC deployment you need to think about which steps to take and in what order. Also, out-of-the-box Vagrant will do a serial deployment of your cluster, which for obvious reasons will not work; the solution for this is in my previous blogpost, which explains how to set up Vagrant so Ansible can do a “parallel” deployment. I will use a slightly modified version of the Vagrantfile I created in that blogpost.

If you go through the new Vagrantfile you will see a few changes. The most obvious one is all the new parameters at the top; they can be used to easily tweak the RAC setup to your liking. You can change things like CDB or not, memory settings, domain name etc. These parameters are all passed over to Ansible:

ansible.extra_vars = {
devmanager: "#{devmanager}",
db_create_cdb: "#{db_create_cdb}",
db_pdb_amount: "#{db_pdb_amount}",
db_name: "#{db_name}",
db_total_mem: "#{db_total_mem}",
gi_first_node: "#{host_prefix}1",
gi_last_node: "#{host_prefix}#{servers}",
domain_name: "#{domain_name}"
}

The final change I made to my Vagrantfile is in the network section: I am providing the VMs with fixed MAC addresses so I can refer to them later on when using udev as the device manager. I will get back to the MAC addresses later (in the next post) when we get to the part that configures device persistency:

vb.customize ['modifyvm', :id, '--macaddress2', "080027AAAA#{rac_id}1"]
vb.customize ['modifyvm', :id, '--macaddress3', "080027AAAA#{rac_id}2"]

Vagrant is now able to create the two VMs that we can use for our RAC cluster. As you can imagine, building a 2-node RAC cluster with Ansible would result in a rather large playbook that is a bit cumbersome to work with. Also, you might want to re-use certain parts later on for different deployments. To solve these issues, Ansible has the ability to create roles, where every role is a defined set of actions with its own set of files, templates and variables. For this RAC playbook I have created a set of roles to handle specific parts of the installation:

  • common – sets up some common Linux settings we want to have no matter what
  • configure_network – sets up Linux networking and configures things like DNS
  • configure_udev – If selected in the vagrantfile this will configure the system for udev
  • configure_asmlib – If selected in the vagrantfile this will configure asmlib
  • tools_install – installs a toolset with several scripts, install rlwrap etc.
  • rac_install_gi – installs and configures Oracle Grid Infrastructure
  • rac_install_db – installs and configures the Oracle Database home
  • rac_create_db – Creates the RAC database

The reason to use roles is to keep a better overview of your Ansible scripts, but even more so that you can reuse roles in other playbooks. These roles are called in the main Ansible file that is defined in the Vagrantfile at line 88:

ansible.playbook   = "ansible/rac_gi_db.yml"

This YAML file kicks off the entire installation of our RAC cluster. If you look in rac_gi_db.yml you will notice that there are only a few lines there; this file just references the different roles. If Ansible sees the role common, it expects a subdirectory called roles, which should contain a subdirectory with the role name, with another nested subdirectory tasks in it. In that subdirectory Ansible will look for a file called main.yml and runs that. So to visualize this, Ansible expects the following directory structure as a minimum for a role:

roles/
  common/
    tasks/
      main.yml

The Ansible documentation itself has a more in-depth explanation of this. Another thing you might notice are the roles for configuring either ASMLib or udev:

- { role: configure_udev, when: devmanager == "udev" }
- { role: configure_asmlib, when: devmanager == "asmlib" }
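Putting the roles together, a minimal version of such a playbook could look like this (a sketch based on the role names listed earlier; the hosts pattern and become setting are assumptions, not taken from the repo):

```yaml
# ansible/rac_gi_db.yml (sketch)
- hosts: all
  become: true
  roles:
    - common
    - configure_network
    - { role: configure_udev, when: devmanager == "udev" }
    - { role: configure_asmlib, when: devmanager == "asmlib" }
    - tools_install
    - rac_install_gi
    - rac_install_db
    - rac_create_db
```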

As mentioned above, a role is a set of subdirectories that holds the information needed to perform the tasks of that specific role. Directories must have a specific name in order for Ansible to recognise them. Roles can have the following subdirectories, depending on your needs:

  • defaults/ – where variables local to this role are stored
  • files/ – needed if you want to upload files with Ansible to a host, e.g. zipfiles for an Oracle GI install
  • tasks/ – contains at least a file named main.yml, which holds all the tasks that this role has to execute
  • templates/ – as the name suggests, contains templates to be converted into files by Ansible, for example a template for a response file that needs to be created with the correct database name and settings, or a hosts file with the correct host names. These files are in the Jinja2 format.

If you look at line 11 in the Vagrantfile you can see that there is a variable devmanager defined, which is passed into Ansible at line 90. So depending on the value of the variable devmanager, Ansible will either configure udev or ASMLib. Global variables are furthermore set in the group_vars/all.yml file; these variables are passed to all roles in the group ‘all’. In this Ansible setup, variables are set at different levels; I made it so that the most important variables can be set at the top of the Vagrantfile in lines 11 to 16, and this can of course be expanded if needed. Besides passing variables directly to Ansible, specific roles can have their own variables; these are set in roles/<role>/defaults/main.yml. These variables are local to the execution of that specific role and can’t be referenced outside it. The final group of variables we need are host-specific variables like IP addresses and network configuration in general. For these settings every host has its own file in the host_vars subdirectory.
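As an illustration, the group-wide variable file could hold values like these (the variable names oracle_user, oracle_group and ora_stage appear later in the copy task; the values here are assumptions):

```yaml
# group_vars/all.yml (hypothetical values)
oracle_user: oracle
oracle_group: oinstall
ora_stage: /u01/stage
```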

Now that we know how Ansible gets its parameters and what the roles are doing, we can actually start deploying. As soon as you start Vagrant and it is done creating the initial VMs (covered in my previous blogpost), it will start Ansible, go through the roles and run all the YAML files in the tasks directories. At some point you will find yourself needing configuration files during the install that depend on e.g. IP addresses, hostnames, database name etc. For example, you can use this task to create a dnsmasq config based on variables in an array called ether_result.results (which we created somewhat earlier in the tasks):

- name: Create dnsmasq configuration
  template: src=dnsmasq.conf.j2 dest=/etc/dnsmasq.conf
  with_items: "{{ ether_result.results }}"

To get these dynamic configurations we write the files in a templating language called Jinja2. You can take your normal configuration file and replace the variables with something that Jinja2 can work with. As an example, let’s look at the dnsmasq.conf file in Jinja form:

# {{ ansible_managed }}

domain={{ domain_name }}
local=/{{ domain_name }}/


{% for msg in ether_result.results %}interface={{ msg.item.device }}
{% endfor %}

As you can see it is almost your regular dnsmasq.conf file, but with some changes. At the top we have a variable {{ ansible_managed }}, which will be replaced with a string containing the source file name, the creation date and the host it was created from. The {{ domain_name }} variable is passed to Ansible from Vagrant and is actually set in the Vagrantfile. The last lines of the file are a Jinja2 loop going through whatever is in ether_result.results, reading msg.item.device for every msg, which contains the names of the network devices I have configured, and placing each one after the string “interface=”. After Ansible is done with the task above you will end up with a file that looks something like this:

# Ansible managed: /my/path/t0/dnsmasq.conf.j2 modified on 2016-08-09 12:24:40 by klaasjan on exabook.local

domain=<your domain_name>
local=/<your domain_name>/

interface=eth1
interface=eth2

If everything is configured correctly, installation of the software can begin. We need to move some zip files from your host to the guest VM; Ansible does this with the copy command:

- name: Copy RDBMS zipfiles
  copy: src={{ item }} dest={{ ora_stage }} owner={{ oracle_user }} group={{ oracle_group }} mode=0644
  with_items: "{{ rdbms_zips }}"
  when: inventory_hostname == gi_first_node and rdbms_check.stdout != rdbms_home

The command above copies the files in the variable {{ item }}, which is read from the array {{ rdbms_zips }} on the with_items line; it copies these files to whatever location is set in {{ ora_stage }}, with the correct user and group permissions. The final line means that it will only copy these files to the first node (we do not need them twice), and only when there is not already an Oracle home installed.

I mentioned above that Ansible will look for the files in rdbms_zips in the ./files subdirectory, so before installing, make sure that all the needed files are there. To make this a bit easier for myself I have created a small script which will download these files for you (you need to have an MOS account). It does so using Maris Elsins’ excellent getMOSPatch tool, which is now a jar file and no longer a shell script. After downloading, the script will do what its name suggests and remove and rebuild your Vagrant RAC environment. Use at your own risk; if it breaks a network config for you in VirtualBox or something in Vagrant… I have warned you.

In the next blogpost I will explain in somewhat more detail what the different steps are doing.

Vagrant for your RAC test environment

Creating your own test VM environment with RAC is a fun exercise, but after you have rebuilt your environment a couple of times it becomes very tiresome and you will want to start automating your deployments. People have already written several blogposts about the GI and RDBMS orchestration and the several tools that are available for this. Within the Oracle community, Ansible seems to be a very popular choice for getting your test environment up and running. But what about getting your VMs up and running: a very repetitive and not very interesting task, to say the least.

One of your options would be to script the creation of your VMs and the installation of Linux afterwards. If you are using VirtualBox you could do this by writing a script around the VirtualBox command line tool VBoxManage and create everything from there. For the Linux deployment, PXE boot seems a logical choice to me, but it still involves running a DHCP and TFTP server locally (hint: dnsmasq), getting the ISO, bootloaders etc. A fun exercise, but still quite a lot of work to automate or keep up and running.

So this is where Vagrant comes in: it can easily configure and deploy virtual machines for you. Basically it is “nothing more” than a Ruby shell around several (VM) providers like VirtualBox, AWS, VMware, Hyper-V and Docker. Vagrant works with so-called boxes, which are nothing more than compressed VMs that get modified for your needs at the moment you spin them up. You can choose to let Vagrant download a box from the Vagrant cloud or you can make your own box if you want. Running this:

vagrant init hashicorp/precise64

followed by:

vagrant up

This will give you a VirtualBox VM running Ubuntu, downloaded from the Vagrant cloud. Out of the box, Vagrant assumes you have VirtualBox installed. You can then ssh into the box with “vagrant ssh” or, in a multiple-host scenario, with “vagrant ssh nodename”. Stopping and starting your Vagrant boxes is done with “vagrant halt” and “vagrant up” respectively. If you are done and want to remove the VM, run “vagrant destroy”.

What if you want to do something more interesting, like deploying a RAC cluster, which needs shared storage and multiple network interfaces? You need to create a file called Vagrantfile in your working directory. This file contains the code that modifies your boxes. A very basic Vagrantfile looks something like this:

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
end

Let’s assume we want to create 2 VMs with 4 disks of storage shared between them, and for networking we want a management interface, a public interface and an interconnect network. We will end up with this file: Here on GitHub Gist

Let’s break this file down so you get an understanding of what is going on. First of all, this file is, just like the rest of Vagrant, written in Ruby, so all Ruby syntax will work in this file as well.

At the top I have defined some variables, such as the number of servers I want to generate, hardware dimensions, shared disks etc. The API version is needed so Vagrant knows what syntax it can expect.

ASM_LOC     = "/pathto/vagrant/rac/asmdisk"
num_disks   = 4
servers     = 2
mem         = 4096
cpu         = 2

The first step is to tell Vagrant how you want to set up your environment for this Vagrantfile. I am telling Vagrant I want to use a box called oel68 (which is a custom Vagrant box I made) and I want X11 forwarding to be enabled for ease of use, in case I need to use DBCA or something similar:

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "oel68"
config.ssh.forward_x11 = true
config.ssh.forward_agent = true

Now for the interesting stuff: creating multiple VMs for our RAC cluster. I didn’t want to copy and paste several server configurations and make small adjustments to each of them; instead I wanted to make it a bit more flexible, so I use an each iterator that loops until it reaches the “servers” variable. I am creating VMs called rac1 up to racN.

(1..servers).each do |rac_id|
config.vm.define "rac#{rac_id}" do |config|
config.vm.hostname = "rac#{rac_id}"

The next step is to create a block that does the VirtualBox configuration. I am adding 2 NICs: NIC 1 is already in the box by default and is an interface connected to my host via a NAT network, NIC 2 is for the interconnects and NIC 3 is my public interface. Furthermore I am setting all the NICs to an Intel PRO/1000 MT Server card, changing the CPU and memory settings and updating the SATA port count to 5 so we can add the shared storage later on.

# Do Virtualbox configuration
config.vm.provider :virtualbox do |vb|
	vb.customize ['modifyvm', :id, '--nic2', 'intnet', '--intnet2', 'rac-priv']
	vb.customize ['modifyvm', :id, '--nic3', 'hostonly', '--hostonlyadapter3', 'vboxnet0']

	# Change NIC type
	vb.customize ['modifyvm', :id, '--nictype1', '82545EM']
	vb.customize ['modifyvm', :id, '--nictype2', '82545EM']
	vb.customize ['modifyvm', :id, '--nictype3', '82545EM']  

	# Change RAC node specific settings
	vb.customize ['modifyvm', :id, '--cpus', cpu]
	vb.customize ['modifyvm', :id, '--memory', mem]  

	# Increase SATA port count
	vb.customize ['storagectl', :id, '--name', 'SATA', '--portcount', 5]

We can now create the shared storage for our RAC cluster. We want to create 4 disks, so we can use the same trick as for the server creation: an each iterator. We do need to take care of a few things here: we don’t want to overwrite an existing disk, and we only want to create and attach a disk once, when we give the “vagrant up” command. To be more precise, I only need one VM to create the disks with VBoxManage createmedium, but I need all VMs to attach these disks. The if statement below makes sure that only the first node creates the disks and every other node only attaches the storage.

(1..num_disks).each do |disk|
	if ARGV[0] == "up" && ! File.exist?(ASM_LOC + "#{disk}.vdi")
		if rac_id == 1
			vb.customize ['createmedium',
						'--filename', ASM_LOC + "#{disk}.vdi",
						'--format', 'VDI',
						'--variant', 'Fixed',
						'--size', 5 * 1024]
			vb.customize ['modifyhd',
						 ASM_LOC + "#{disk}.vdi",
						'--type', 'shareable']
		end # End createmedium on rac1

		vb.customize ['storageattach', :id,
				'--storagectl', 'SATA',
				'--port', "#{disk}",
				'--device', 0,
				'--type', 'hdd',
				'--medium', ASM_LOC + "#{disk}.vdi"]
	end  # End if exist
end    # End of EACH iterator for disks

The code below is a workaround for a nasty bug with my CPU which I hit both with VMware Fusion and VirtualBox. It is well documented by Laurent Leturgez and Danny Bryant.

# Workaound for Perl bug with segmentation fault,
# see this blogpost from Danny Bryant
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/Leaf", "0x4"]
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/SubLeaf", "0x4"]
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/eax", "0"]
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/ebx", "0"]
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/ecx", "0"]
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/edx", "0"]
vb.customize ['setextradata', :id, "VBoxInternal/CPUM/HostCPUID/Cache/SubLeafMask", "0xffffffff"]

We now have our VMs ready and we can start provisioning them. If we just add a provisioning block like the one below, Vagrant will run the provisioning in series: create VM rac1, provision it, create rac2, provision it, and so on:

# Create disk partitions
if rac_id == 1
	config.vm.provision "shell", inline: <<-SHELL
		if [ -f /etc/SFDISK_CREATE_DATE ]; then
			echo "Partition creation already done."
			exit 0
		fi
		for i in `ls /dev/sd* | grep -v sda`; do echo \\; | sudo sfdisk -q $i; done
		date > /etc/SFDISK_CREATE_DATE
	SHELL
end # End create disk partitions
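
The /etc/SFDISK_CREATE_DATE marker file is what makes this shell provisioner safe to rerun: a second `vagrant provision` sees the marker and exits before touching the disks again. The pattern in isolation, with a temp file standing in for the real marker path, looks like this:

```shell
# Marker-file idempotency: do the work once, skip on every rerun.
# MARKER is a hypothetical stand-in, not the real /etc/SFDISK_CREATE_DATE.
MARKER=$(mktemp -u)

partition_disks() {
  if [ -f "$MARKER" ]; then
    echo "Partition creation already done."
    return 0
  fi
  echo "partitioning disks"
  date > "$MARKER"   # record that this step has run
}

partition_disks   # first run does the work
partition_disks   # second run skips
rm -f "$MARKER"
```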

In most cases you want to start the provisioning when all VMs are ready. Vagrant supports several provisioning methods like Ansible, shell scripting, Puppet, Chef etc. If we are installing a $GI_HOME we need both nodes to be up, with all the interfaces up and IPs assigned, etc.

if rac_id == servers
	# Start Ansible provisioning
	config.vm.provision "ansible" do |ansible|
		#ansible.verbose = "-v"
		ansible.limit = "all"
		ansible.playbook = "ansible/rac_gi_db.yml"
	end # End of Ansible provisioning
end # End of rac_id == servers

Above, I only start the provision block once rac_id equals the servers variable, meaning once I have created all my RAC nodes. Ansible can then provision my servers in parallel because the Ansible limit variable is set to all. Vagrant generates an Ansible host file with all the hosts, which you can use for the provisioning. The provisioning of the RAC cluster itself is outside the scope of this blog post. If you want to give Vagrant a go, you can download it from the Vagrant website.
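
For reference, the host file Vagrant generates for Ansible ends up under .vagrant/provisioners/ansible/inventory/ and looks roughly like this (the addresses, ports and exact layout below are illustrative and depend on your Vagrant version):

```
# Generated by Vagrant
rac1 ansible_ssh_host= ansible_ssh_port=2222 ansible_ssh_private_key_file=.vagrant/machines/rac1/virtualbox/private_key
rac2 ansible_ssh_host= ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/rac2/virtualbox/private_key
```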

The link to the full Vagrantfile on Gist is here

Patching the Big Data Appliance

I have been patching engineered systems since the launch of the Exadata V2 and recently I had the opportunity to patch the BDA we have in house. As far as comparisons go, this is where the similarities between Exadata and Big Data Appliance (BDA) patching stop.
Our BDA is a so-called starter rack consisting of 6 nodes running a Hadoop cluster; for more information about this, read my First Impressions blog post. On Exadata, patching consists of a whole set of different patches and tools like patchmgr; on the Big Data Appliance we patch with a tool called Mammoth. In this blog post we will use the upgrade from BDA software release 4.0 to 4.1 as an example to describe the patching process. You can download the BDA Mammoth bundle patch through DocID 1485745.1 (Oracle Big Data Appliance Patch Set Master Note). This patchset consists of 2 zipfiles which you should upload to your primary node in the cluster (usually node number 1 or 7). If you are not sure which node to use, bdacli can tell you:

[root@bda1node01 BDAMammoth-ol6-4.1.0]# bdacli getinfo cluster_primary_host

After determining which node to use you can upload the 2 zipfiles from MOS and unzip them somewhere on that specific node:

[root@bda1node01 patch]# for i in `ls *.zip`; do unzip $i; done
  inflating: README.txt
   creating: BDAMammoth-ol6-4.1.0/
  inflating: BDAMammoth-ol6-4.1.0/
   creating: BDABaseImage-ol6-4.1.0_RELEASE/
  inflating: BDABaseImage-ol6-4.1.0_RELEASE/json-select
[root@bda1node01 patch]#

After unzipping these files you end up with a huge .run file: a shell script with a uuencoded binary payload appended to it:

[root@bda1node01 BDAMammoth-ol6-4.1.0]# ll
total 3780132
-rwxrwxrwx 1 root root 3870848281 Jan 16 10:17
[root@bda1node01 BDAMammoth-ol6-4.1.0]# 
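
The .run file is a self-extracting shell archive: a script header, a marker line, then the raw payload. The general technique can be demonstrated with a toy example (this shows the idea only, not Oracle's actual script):

```shell
# Build and run a minimal self-extracting archive, the same technique
# a .run file uses: shell header, marker line, then the binary payload.
cd "$(mktemp -d)"

echo "hello payload" > payload.txt
tar czf payload.tar.gz payload.txt

cat > installer.run <<'EOF'
#!/bin/sh
# find the line after the marker and untar everything from there
LINE=$(awk '/^__PAYLOAD_BELOW__$/{print NR + 1; exit}' "$0")
tail -n +"$LINE" "$0" | tar xzf -
echo "extracted"
exit 0
__PAYLOAD_BELOW__
EOF
cat payload.tar.gz >> installer.run
chmod +x installer.run

rm payload.txt payload.tar.gz
./installer.run   # prints: extracted
cat payload.txt   # prints: hello payload
```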

This .run file does a couple of checks, like determining which version of Linux we are running (this BDA is on OEL6), installs a new version of BDAMammoth and moves the previous version to /opt/oracle/BDAMammoth/previous-BDAMammoth/. It also updates the Cloudera parcels and yum repository in /opt/oracle/BDAMammoth/bdarepo and places a new base image ISO file on this node, which is needed if you are adding new nodes to the cluster or need to reimage them. Finally it updates all the Puppet files in /opt/oracle/BDAMammoth/puppet/manifests. Oracle uses Puppet a lot for deploying software on the BDA, which means that patching the BDA is also done by a collection of Puppet scripts. When this is done we can start the patch process by running mammoth -p, so here we go:

[root@bda1node01 BDAMammoth]# ./mammoth -p
INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20150130103200.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20150130103200.trc
INFO: all_nodes not generated yet, skipping check for password-less ssh
ERROR: Big Data SQL is not supported on BDA v4.1.0
ERROR: Please uninstall Big Data SQL before upgrade cluster
ERROR: Cannot continue with operation
INFO: Running bdadiagcluster...
INFO: Starting Big Data Appliance diagnose cluster at Fri Jan 30 10:32:12 2015
INFO: Logging results to /tmp/bda_diagcluster_1422610330.log
SUCCESS: Created BDA diagcluster zipfile on node bda1node01
SUCCESS: Created BDA diagcluster zipfile on node bda1node02
SUCCESS: Created BDA diagcluster zipfile on node bda1node03
SUCCESS: Created BDA diagcluster zipfile on node bda1node04
SUCCESS: Created BDA diagcluster zipfile on node bda1node05
SUCCESS: Created BDA diagcluster zipfile on node bda1node06
SUCCESS: created
INFO: Big Data Appliance diagnose cluster complete at Fri Jan 30 10:32:54 2015
INFO: Please get the Big Data Appliance cluster diagnostic bundle at /tmp/
[root@bda1node01 BDAMammoth]# 

This error is obvious: we have Big Data SQL installed on our cluster (it was introduced with the BDA 4.0 software) but the version we are running is not supported on BDA 4.1. Unfortunately we are running BDSQL 1.0, which is the only version of BDSQL there is at this point; this is also a known bug described in MOS Doc ID 1964471.1. So we have 2 options: wait for a new release of BDSQL and postpone the upgrade, or remove BDSQL and continue with the upgrade. We decided to continue with the upgrade and deinstall BDSQL for now; it was not working on the Exadata side anyway due to conflicting patches with the latest RDBMS patch. Removing BDSQL can be done with bdacli or mammoth-reconfig; bdacli calls mammoth-reconfig, so as far as I know it doesn’t really matter which one you use. So let’s give that a try:

[root@bda1node01 BDAMammoth]# ./mammoth-reconfig remove big_data_sql
INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20150130114222.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20150130114222.trc
INFO: This is the install of the primary rack
ERROR: Version mismatch between mammoth and params file
ERROR: Mammoth version: 4.1.0, Params file version: 4.0.0
ERROR: Cannot continue with install #Step -1#
INFO: Running bdadiagcluster...
INFO: Starting Big Data Appliance diagnose cluster at Fri Jan 30 11:42:25 2015
INFO: Logging results to /tmp/bda_diagcluster_1422614543.log
SUCCESS: Created BDA diagcluster zipfile on node bda1node01
SUCCESS: Created BDA diagcluster zipfile on node bda1node02
SUCCESS: Created BDA diagcluster zipfile on node bda1node03
SUCCESS: Created BDA diagcluster zipfile on node bda1node04
SUCCESS: Created BDA diagcluster zipfile on node bda1node05
SUCCESS: Created BDA diagcluster zipfile on node bda1node06
SUCCESS: created
INFO: Big Data Appliance diagnose cluster complete at Fri Jan 30 11:43:03 2015
INFO: Please get the Big Data Appliance cluster diagnostic bundle at /tmp/
[root@bda1node01 BDAMammoth]#

Because we already extracted the new version of Mammoth when we executed the .run file, we now have a version mismatch between the existing Mammoth software and the params file /opt/oracle/BDAMammoth/bdaconfig/VERSION. We know that the old Mammoth software was backed up to /opt/oracle/BDAMammoth/previous-BDAMammoth/. Our first thought was to replace the new version with the previous version:

[root@bda1node01 oracle]# mv BDAMammoth
[root@bda1node01 oracle]# cp -R ./BDA
BDABaseImage/           BDABaseImage-ol6-4.0.0/ BDABaseImage-ol6-4.1.0/
[root@bda1node01 oracle]# cp -R ./ ./BDAMammoth

Running it again resulted in lots more errors. I am just pasting parts of the output here, to give you an idea:

[root@bda1node01 BDAMammoth]# ./mammoth-reconfig remove big_data_sql
INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20150130113539.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20150130113539.trc
INFO: This is the install of the primary rack
INFO: Checking if password-less ssh is set up
json-select: no valid input found for "@CIDR@"

*** json-select *** Out of branches near "json object <- json value
     OR json array <- json value" called from "json select command line"
*** json-select *** Syntax error on line 213:
ERROR: Puppet agent run on node bda1node01 had errors. List of errors follows

Error [6928]: Report processor failed: Permission denied - /opt/oracle/BDAMammoth/puppet/reports/

INFO: Also check the log file in /opt/oracle/BDAMammoth/bdaconfig/tmp/pagent-bda1node01-20150130113621.log

So after fiddling around with this we found out that the solution was actually extremely simple: there was no need to move the old 4.0 Mammoth software back into place. The solution was to simply run the mammoth-reconfig script directly from the backup directory /opt/oracle/BDAMammoth/previous-BDAMammoth/, and we finally have BDSQL (mainly the cellsrv software) disabled on the cluster:

[root@bda1node01 BDAMammoth]# bdacli status big_data_sql_cluster
ERROR: Service big_data_sql_cluster is disabled. Please enable the service first to run this command.
[root@bda1node01 BDAMammoth]#

With BDSQL disabled we can give mammoth -p another shot, try to upgrade the Hadoop cluster and wait for all the Puppet scripts to finish. The output of the mammoth script is long, so for readability reasons I won’t post it here. All the logging from the mammoth script and the Puppet agents is written to /opt/oracle/BDAMammoth/bdaconfig/tmp/ during the patch process, and eventually it is all nicely put into a single zipfile in /tmp/ when the patching is done. The patch process on our starter rack took about 2 hours in total, and about 40 minutes into the patching process we get this:

WARNING: The OS kernel was updated on some nodes - so those nodes need to be rebooted
INFO: Nodes to be rebooted: bda1node01,bda1node02,bda1node03,bda1node04,bda1node05,bda1node06
Proceed with reboot? [y/n]: y

Broadcast message from
	(unknown) at 13:09 ...

The system is going down for reboot NOW!
INFO: Reboot done.
INFO: Please wait until all reboots are complete before continuing
[root@bda1node01 BDAMammoth]# Connection to bda1node01 closed by remote host.

Don’t be fooled into thinking that the reboot is the end of the patch process; the interesting part is just about to start. After the system is back online you can see that we are still on the old BDA software version:

[root@bda1node01 ~]# bdacli getinfo cluster_version

This is not really documented clearly by Oracle: there is no special command to continue the patch process (like -c on Exadata), just simply enter mammoth -p again and Mammoth will pick up the patching process where it left off. In this second part of the patching process Mammoth will actually update the Cloudera stack and all its components like Hadoop, Hive, Impala etc. This takes about 90 minutes to finish, with a whole bunch of test runs of MapReduce jobs, Oozie jobs, teragen sorts etc. (in my case it reported that I had a Flume agent in concerning health after the upgrade). After all this is done the mammoth script finishes and we can verify whether we have an upgraded cluster or not:

[root@bda1node01 BDAMammoth]# bdacli getinfo cluster_version

[root@bda1node01 BDAMammoth]# bdacli getinfo cluster_cdh_version
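
Mammoth picking up where it left off is a classic step-checkpoint pattern: record the last completed step and, on a rerun, skip everything up to that point. Stripped to its essence (a toy sketch, not Oracle's actual implementation), it looks like this:

```shell
# Resume-from-checkpoint sketch: every completed step is recorded in a
# state file, and a rerun silently skips the steps already done.
CHECKPOINT=$(mktemp -u)

run_step() {
  n=$1
  last=0
  [ -f "$CHECKPOINT" ] && last=$(cat "$CHECKPOINT")
  if [ "$n" -le "$last" ]; then
    return 0                  # already completed in an earlier run
  fi
  echo "running step $n"
  echo "$n" > "$CHECKPOINT"   # checkpoint after the step succeeds
}

run_step 1; run_step 2               # first run, interrupted after step 2
run_step 1; run_step 2; run_step 3   # rerun resumes at step 3
rm -f "$CHECKPOINT"
```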

Apart from the issues surrounding BDSQL, patching the BDA worked like a charm, although Oracle could have documented a bit better how to continue after the OS upgrade and reboot. Personally I really like the fact that Oracle uses Puppet for orchestration on the BDA. The next step is waiting for the update of BDSQL so we can do some proper testing.

Oracle Big Data Appliance X4-2: First impressions

For the last 2 weeks we have been lucky enough to have the Big Data Appliance (BDA) from Oracle in our lab/demo environment at VX Company and iRent. In this blog post I am trying to share my first experiences and some general observations. I am coming from an Oracle (Exadata) RDBMS background, so I will probably reflect some of that experience on the BDA. The BDA we got here is a starter rack which consists of 6 stock X4-2 servers, each with 2 sockets of 8-core Intel Xeon E5-2650 processors:

[root@bda1node01 bin]# cat /proc/cpuinfo | grep "model name" | uniq
model name	: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
[root@bda1node01 bin]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Stepping:              4
CPU MHz:               2593.864
BogoMIPS:              5186.76
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31
[root@bda1node01 bin]#

Half of the memory banks are filled with 8GB DIMMs, giving the BDA nodes a total of 64GB per node:

[root@bda1node01 ~]# dmidecode  --type memory | grep Size
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
	Size: 8192 MB
	Size: No Module Installed
[root@bda1node01 ~]#

The HDFS filesystem writes to the 12 disks in each server, all mounted at /u[1-12]:

[root@bda1node01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              459G   40G  395G  10% /
tmpfs                  32G   12K   32G   1% /dev/shm
/dev/md0              184M  116M   60M  67% /boot
/dev/sda4             3.1T  214M  3.1T   1% /u01
/dev/sdb4             3.1T  203M  3.1T   1% /u02
/dev/sdc1             3.6T  4.3G  3.6T   1% /u03
/dev/sdd1             3.6T  4.1G  3.6T   1% /u04
/dev/sde1             3.6T  4.2G  3.6T   1% /u05
/dev/sdf1             3.6T  3.8G  3.6T   1% /u06
/dev/sdg1             3.6T  3.4G  3.6T   1% /u07
/dev/sdh1             3.6T  3.1G  3.6T   1% /u08
/dev/sdi1             3.6T  3.9G  3.6T   1% /u09
/dev/sdj1             3.6T  3.4G  3.6T   1% /u10
/dev/sdk1             3.6T  3.2G  3.6T   1% /u11
/dev/sdl1             3.6T  3.8G  3.6T   1% /u12
cm_processes           32G  8.9M   32G   1% /var/run/cloudera-scm-agent/process
[root@bda1node01 bin]# hdparm -tT /dev/sda

 Timing cached reads:   18338 MB in  2.00 seconds = 9183.19 MB/sec
 Timing buffered disk reads:  492 MB in  3.03 seconds = 162.44 MB/sec
[root@bda1node01 bin]#

Deploying Cloudera CDH on the BDA is done by mammoth, the BDA’s alternative to Exadata’s onecommand. Mammoth is used not only for deploying your rack but also for extending it with additional nodes:

[root@bda1node01 bin]# mammoth -l
INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20141116155709.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20141116155709.trc
The steps in order are...
Step  1 = PreinstallChecks
Step  2 = SetupPuppet
Step  3 = PatchFactoryImage
Step  4 = CopyLicenseFiles
Step  5 = CopySoftwareSource
Step  6 = CreateLogicalVolumes
Step  7 = CreateUsers
Step  8 = SetupMountPoints
Step  9 = SetupMySQL
Step 10 = InstallHadoop
Step 11 = StartHadoopServices
Step 12 = InstallBDASoftware
Step 13 = HadoopDataEncryption
Step 14 = SetupKerberos
Step 15 = SetupEMAgent
Step 16 = SetupASR
Step 17 = CleanupInstall
Step 18 = CleanupSSHroot (Optional)
[root@bda1node01 bin]#

Interestingly, Oracle is using Puppet to deploy CDH and configure the BDA nodes. Deploying a starter rack from scratch is quick: within a few hours we had our BDA installed and running. As an Exadata ‘fanboy’ I also have to say some words about cellsrv running on the BDA:

[root@bda1node01 bin]# ps -ef | grep [c]ell
oracle    8665     1  0 Nov05 ?        00:01:38 /opt/oracle/cell/cellsrv/bin/bdsqlrssrm -ms 1 -cellsrv 1
oracle    8672  8665  0 Nov05 ?        00:02:11 /opt/oracle/cell/cellsrv/bin/bdsqlrsomt -rs_conf /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsms.state -cellsrv_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsos.state -debug 0
oracle    8673  8665  0 Nov05 ?        00:00:47 /opt/oracle/cell/cellsrv/bin/bdsqlrsbmt -rs_conf /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsms.state -cellsrv_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsos.state -debug 0
oracle    8674  8665  0 Nov05 ?        00:00:30 /opt/oracle/cell/cellsrv/bin/bdsqlrsmmt -rs_conf /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsms.state -cellsrv_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsos.state -debug 0
oracle    8675  8673  0 Nov05 ?        00:00:08 /opt/oracle/cell/cellsrv/bin/bdsqlrsbkm -rs_conf /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsms.state -cellsrv_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsos.state -debug 0
oracle    8678  8674  0 Nov05 ?        00:10:57 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:-UseLargePages -Djava.library.path=/opt/oracle/cell/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell/cellsrv/deploy/log/ms.err
oracle    8707  8675  0 Nov05 ?        00:00:48 /opt/oracle/cell/cellsrv/bin/bdsqlrssmt -rs_conf /opt/oracle/cell/cellsrv/deploy/config/cellinit.ora -ms_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsms.state -cellsrv_conf /opt/oracle/cell/cellsrv/deploy/config/bdsqlrsos.state -debug 0
oracle    8710  8672 16 Nov05 ?        1-21:15:17 /opt/oracle/cell/cellsrv/bin/bdsqlsrv 100 5000 9 5042
oracle    8966     1  0 Nov05 ?        02:18:06 /opt/oracle/bd_cell/bd_cellofl- -startup 1 0 1 5042 8710 SYS_1212099_140907 cell
[root@bda1node01 bin]#

Of course this is Oracle’s new product Big Data SQL, which is cellsrv ported so it can run on top of Hadoop. Unfortunately we could not get Big Data SQL running yet because the mandatory patch that needs to be installed on top of the RDBMS/GI homes on the Exadata side conflicts with Oracle ( is mandatory for Big Data SQL), so we are now waiting on development to fix that issue. Strangely enough BD SQL also says that we have flashcache in write-through mode on the BDA, and a whole lot of other abnormal readings:

[root@bda1node01 bin]# bdscli -e list bdsql detail
	 name:              	 bda1node01
	 bdsVersion:        	 OSS_PT.EXADOOP2_LINUX.X64_140907.2307
	 cpuCount:          	 0
	 diagHistoryDays:   	 7
	 fanCount:          	 0/0
	 fanStatus:         	 normal
	 flashCacheMode:    	 WriteThrough
	 interconnectCount: 	 0
	 kernelVersion:     	 2.6.39-400.215.9.el6uek.x86_64
	 locatorLEDStatus:  	 unknown
	 memoryGB:          	 0
	 metricHistoryDays: 	 7
	 offloadEfficiency: 	 1.0
	 powerCount:        	 0/0
	 powerStatus:       	 normal
	 releaseTrackingBug:	 17885582
	 status:            	 online
	 temperatureReading:	 0.0
	 temperatureStatus: 	 normal
	 upTime:            	 0 days, 0:00
	 bdsqlsrvStatus:    	 running
	 bdsqlmsStatus:     	 running
	 bdsqlrsStatus:     	 running
[root@bda1node01 bin]#

I am sure we actually have some CPUs in here, that this machine is powered on, that we have some actual memory in here and that the temperature is more than zero degrees. Apart from the issues with Big Data SQL (which I am sure will be resolved soon) I am impressed with the set of tools that Oracle delivers with the BDA.

Grid logging in

Most of the talk about Oracle’s latest release is about the InMemory feature, but more things have changed; for example, some essential things about logging in the Grid Infrastructure. In previous releases of Oracle Grid Infrastructure, logging for Grid components was done in $GI_HOME:

oracle@dm01db01(*gridinfra):/home/oracle> cd $ORACLE_HOME/log/`hostname -s`/

There we have the main alert log for GI and several subdirectories for the GI binaries, where they write their logging information in a set of rotating logfiles. After upgrading a cluster on Exadata, you will see that this alert log is empty, or just filled with a couple of lines written at the point of upgrading the GI stack:

oracle@dm01db01(*gridinfra):/u01/app/> cat alertdm01db01.log
2014-08-21 11:59:35.455
[client(9458)]CRS-0036:An error occurred while attempting to open file "UNKNOWN".
2014-08-21 11:59:35.455
[client(9458)]CRS-0004:logging terminated for the process. log file: "UNKNOWN"

So the old directories are still there, but they are not being used anymore; where can we find the logs now? Prior to this release, all GI binaries were writing directly to the alert.log file as well. Let’s start by finding out where, for example, the crsd.bin process is writing to, by looking at the open file descriptors in the proc filesystem:

[root@dm01db01 ~]# cd /proc/`ps -C crsd.bin -o pid=`/fd
[root@dm01db01 fd]# ls -la *.log
[root@dm01db01 fd]# ls -la *.trc
lrwx------ 1 root root 64 Aug 21 11:41 1 -> /u01/app/grid/crsdata/dm01db01/output/crsdOUT.trc
l-wx------ 1 root root 64 Aug 21 11:41 15 -> /u01/app/grid/diag/crs/dm01db01/crs/trace/crsd.trc
lrwx------ 1 root root 64 Aug 21 11:41 2 -> /u01/app/grid/crsdata/dm01db01/output/crsdOUT.trc
lrwx------ 1 root root 64 Aug 21 11:41 3 -> /u01/app/grid/crsdata/dm01db01/output/crsdOUT.trc

Crsd.bin is not writing to any logfile anymore; it is writing to a trace file in a new location, which can be found at $ORACLE_BASE/diag/crs/`hostname -s`/crs/trace/ and is now in regular ADR-formatted directories. In the old structure we had all the logfiles nicely divided into subdirectories; in the new structure everything is in a single directory. This directory can contain a lot of files; this is on a freshly installed cluster node:

[root@dm01db01 trace]# ls -la | wc -l

The majority of the files are trace files from cluster commands like oifcfg; every execution is logged and traced. These files follow a -.trc naming format. All GI processes can easily be listed (the example below is from an environment with role separation between GI and RDBMS):

[root@dm01db01 trace]# ls -la | grep [a-z].trc
-rw-rw----  1 grid     oinstall   1695134 Aug 24 15:30 crsd_oraagent_grid.trc
-rw-rw----  1 oracle   oinstall   4555557 Aug 24 15:30 crsd_oraagent_oracle.trc
-rw-rw----  1 root     oinstall   1248672 Aug 24 15:30 crsd_orarootagent_root.trc
-rw-rw----  1 oracle   oinstall   6156053 Aug 24 15:30 crsd_scriptagent_oracle.trc
-rw-rw----  1 root     oinstall   4856950 Aug 24 15:30 crsd.trc
-rw-rw----  1 grid     oinstall  10329905 Aug 24 15:31 diskmon.trc
-rw-rw----  1 grid     oinstall   2825769 Aug 24 15:31 evmd.trc
-rw-rw----  1 grid     oinstall       587 Aug 21 11:19 evmlogger.trc
-rw-rw----  1 grid     oinstall   8994527 Aug 24 15:31 gipcd.trc
-rw-rw----  1 grid     oinstall     12663 Aug 21 11:41 gpnpd.trc
-rw-rw----  1 grid     oinstall     11868 Aug 21 11:36 mdnsd.trc
-rw-rw----  1 grid     oinstall 132725348 Aug 24 15:31 ocssd.trc
-rw-rw----  1 root     oinstall   6321583 Aug 24 15:31 octssd.trc
-rw-rw----  1 root     oinstall     59185 Aug 24 14:02 ohasd_cssdagent_root.trc
-rw-rw----  1 root     oinstall     72961 Aug 24 14:38 ohasd_cssdmonitor_root.trc
-rw-rw----  1 grid     oinstall    804408 Aug 24 15:31 ohasd_oraagent_grid.trc
-rw-rw----  1 root     oinstall   1094709 Aug 24 15:31 ohasd_orarootagent_root.trc
-rw-rw----  1 root     oinstall  10384867 Aug 24 15:30 ohasd.trc
-rw-rw----  1 root     oinstall    169081 Aug 24 15:06 ologgerd.trc
-rw-rw----  1 root     oinstall   5781762 Aug 24 15:31 osysmond.trc
[root@dm01db01 trace]#

So if we look at the old pre-upgrade cluster alert log, you will see that there are a lot of processes writing directly to this file:

[root@dm02db01 ~]# lsof | grep alertdm02db01.log | wc -l

That is a lot of processes writing to one file; in the new environment it is a lot less… just one:

[root@dm01db01 trace]# lsof | grep alert.log
java      23914     root  335w      REG              252,2     29668   10142552 /u01/app/grid/diag/crs/dm01db01/crs/trace/alert.log
[root@dm01db01 trace]# ps -p 23914 -o pid,args
23914 /u01/app/ -Xms128m -Xmx512m -classpath

For now, this new logging structure is a lot less organized than the previous structure. It is, however, a lot more like regular database tracing and logging, where there is one general alert log and the different background processes like pmon, arch, lgwr etc. write to their own trace files.