Exadata versus IPv6

Recently one of my customers got a complaint from their DNS administrators: our Exadatas were doing 40,000 DNS requests per minute. We like our DNS admins, so we had a look at these requests and what was causing them. I started by firing up tcpdump on one of the bonded client interfaces on a random compute node:

[root@dm01db01 ~]# tcpdump -i bondeth0 -s 0 port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bondeth0, link-type EN10MB (Ethernet), capture size 65535 bytes
15:41:04.937009 IP dm0101.domain.local.59868 > dnsserver01.domain.local:  53563+ AAAA? dm0101-vip.domain.local. (41)
15:41:04.937287 IP dm0101.domain.local.46672 > dnsserver01.domain.local:  44056+ PTR? 8.18.68.10.in-addr.arpa. (41)
15:41:04.938409 IP dnsserver01.domain.local > dm0101.domain.local.59868:  53563* 0/1/0 (116)
15:41:04.938457 IP dm0101.domain.local.56576 > dnsserver01.domain.local:  45733+ AAAA? dm0101-vip.domain.local.domain.local. (54)
15:41:04.939547 IP dnsserver01.domain.local > dm0101.domain.local.46672:  44056* 1/1/1 PTR dnsserver01.domain.local. (120)
15:41:04.940204 IP dnsserver01.domain.local > dm0101.domain.local.56576:  45733 NXDomain* 0/1/0 (129)
15:41:04.940237 IP dm0101.domain.local.9618 > dnsserver01.domain.local:  64639+ A? dm0101-vip.domain.local. (41)
15:41:04.941912 IP dnsserver01.domain.local > dm0101.domain.local.9618:  64639* 1/1/1 A dm0101-vip.domain.local (114)

So what are we seeing here? There are a bunch of AAAA requests to the DNS server and only one A record request. The weirdest thing, of course, is the requests with the doubled domain-name extensions. Let's zoom in on those AAAA requests. Here is the request:

15:41:04.937009 IP dm0101.domain.local.59868 > dnsserver01.domain.local:  53563+ AAAA? dm0101-vip.domain.local. (41)

And here is our answer:

15:41:04.938409 IP dnsserver01.domain.local > dm0101.domain.local.59868:  53563* 0/1/0 (116)

The interesting part is in the answer of the DNS server: with 0/1/0 it tells me that for this lookup it found 0 answer resource records, 1 authority resource record, and 0 additional resource records. So it could not resolve my VIP name to an AAAA record. Now if we look at the A record request:

15:41:04.945697 IP dm0101.domain.local.10401 > dnsserver01.domain.local:  37808+ A? dm0101-vip.domain.local. (41)

and the answer:

15:41:04.947249 IP dnsserver01.domain.local > dm0101.domain.local.10401:  37808* 1/1/1 A dm0101-vip.domain.local (114)

Looking at the answer, 1/1/1, we can see that I got 1 answer record in return (the first 1), so the DNS server does know the IP for dm0101-vip.domain.local when an A record is requested. What is going on here? The answer is simple: AAAA records are IPv6 DNS requests, and our DNS servers are not configured for IPv6 name resolution, so they cannot answer these requests. So what about those weird double domain names like dm0101-vip.domain.local.domain.local? When Linux requests a DNS record the following happens:

1. Linux issues a DNS request for dm0101-vip.domain.local; because IPv6 is enabled, it issues an AAAA request.
2. The DNS server is not configured for IPv6 requests and discards the request.
3. Linux retries the request, looks at the search domain in resolv.conf and appends it, so we now have dm0101-vip.domain.local.domain.local (see the resolv.conf sketch below).
4. Once again, the DNS server discards this request.
5. Linux once again retries the AAAA request and appends the domain name again: dm0101-vip.domain.local.domain.local.domain.local
6. The DNS server discards this AAAA request as well.
7. Linux now falls back to IPv4 and issues an A request for dm0101-vip.domain.local.
8. The DNS server understands this and replies.
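
The retry-with-search-domain behavior in steps 3 and 5 comes from the resolver configuration. A minimal sketch of what /etc/resolv.conf typically looks like on such a node (the values below are illustrative, not taken from this system):

# /etc/resolv.conf (illustrative values)
search domain.local          # appended to names that do not resolve as given
nameserver 10.1.1.53         # hypothetical DNS server address

Because dm0101-vip.domain.local is not written with a trailing dot, the resolver may also try it with the search domain appended, which is where the doubled names in the capture come from.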

This happens because Exadata comes with IPv6 enabled on both the InfiniBand and Ethernet interfaces:

[root@dm0101 ~]# ifconfig bond0;ifconfig bond1
bond0     Link encap:InfiniBand  HWaddr 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::221:2800:13f:2673/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:65520  Metric:1
          RX packets:226096104 errors:0 dropped:0 overruns:0 frame:0
          TX packets:217747947 errors:0 dropped:55409 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:320173078389 (298.1 GiB)  TX bytes:176752381042 (164.6 GiB)

bond1     Link encap:Ethernet  HWaddr 00:21:28:84:16:49  
          inet addr:10.18.1.10  Bcast:10.18.1.255  Mask:255.255.255.0
          inet6 addr: fe80::221:28ff:fe84:1649/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:14132063 errors:2 dropped:0 overruns:0 frame:2
          TX packets:7334898 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2420637835 (2.2 GiB)  TX bytes:3838537234 (3.5 GiB)

[root@dm0101 ~]# 

Let’s disable IPv6; my client is not using IPv6 on its internal network anyway (like most companies, I assume). You can edit /etc/modprobe.conf to prevent the module from being loaded at boot time by adding the following two lines to modprobe.conf:

alias ipv6 off
install ipv6 /bin/true

Then add the entry below to /etc/sysconfig/network:

IPV6INIT=no
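
Depending on the Oracle Linux release, the global switch NETWORKING_IPV6=no is often used alongside this. A sketch of what the relevant part of /etc/sysconfig/network could then look like (an illustration, not the Exadata factory default):

NETWORKING=yes
NETWORKING_IPV6=no    # global IPv6 switch on RHEL/OL 5 style systems
IPV6INIT=no           # the entry added above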

Reboot the host and let's look at what we see after the host is up again:

[root@dm0103 ~]# cat /proc/net/if_inet6
00000000000000000000000000000001 01 80 10 80       lo
fe8000000000000002212800013f111f 08 40 20 80    bond0
fe80000000000000022128fffe8e5f6a 02 40 20 80     eth0
fe80000000000000022128fffe8e5f6b 09 40 20 80    bond1
[root@dm0103 ~]# ifconfig bond0;ifconfig bond1
bond0     Link encap:InfiniBand  HWaddr 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  
          inet addr:192.168.100.3  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::221:2800:13f:111f/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:65520  Metric:1
          RX packets:318265 errors:0 dropped:0 overruns:0 frame:0
          TX packets:268072 errors:0 dropped:16 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:433056862 (412.9 MiB)  TX bytes:190905039 (182.0 MiB)

bond1     Link encap:Ethernet  HWaddr 00:21:28:8E:5F:6B  
          inet addr:10.18.1.12  Bcast:10.18.1.255  Mask:255.255.255.0
          inet6 addr: fe80::221:28ff:fe8e:5f6b/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:10256 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1559169 (1.4 MiB)  TX bytes:1350653 (1.2 MiB)

[root@dm0103 ~]# 

So disabling the IPv6 module through modprobe.conf did not do the trick. Let's see what brought up the IPv6 stack:

[root@dm0103 ~]# lsmod | grep ipv6
ipv6 291277 449 bonding,ib_ipoib,ib_addr,cnic

The InfiniBand stack brought up IPv6. Instead, we can disable IPv6 at the kernel level:

[root@dm0103 ~]# sysctl -a | grep net.ipv6.conf.all.disable_ipv6 
net.ipv6.conf.all.disable_ipv6 = 0
[root@dm0103 ~]# echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
[root@dm0103 ~]# sysctl -a | grep net.ipv6.conf.all.disable_ipv6 
net.ipv6.conf.all.disable_ipv6 = 1
[root@dm0103 ~]# cat /proc/net/if_inet6
[root@dm0103 ~]# 
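
For completeness: if you did want to make this setting survive a reboot, the usual way would be an entry in /etc/sysctl.conf, along these lines (but keep reading before doing this on an Exadata):

[root@dm0103 ~]# echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
[root@dm0103 ~]# sysctl -p    # reload the settings from /etc/sysctl.conf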

Now that we are running this Exadata compute node without IPv6, let's check whether we still have InfiniBand connectivity. On a cell, use ibstat to get the port GUIDs and start an ibping server:

[root@dm01cel01 ~]# ibstat -p
0x00212800013ea3bf
0x00212800013ea3c0
[root@dm01cel01 ~]# ibping -S

On our IPv6-disabled host, start ibping against one of the port GUIDs we just found:

[root@dm0103 ~]# ibping -c 4 -v -G 0x00212800013ea3bf
ibwarn: [14476] ibping: Ping..
Pong from dm01cel01.oracle.vxcompany.local.(none) (Lid 6): time 0.148 ms
ibwarn: [14476] ibping: Ping..
Pong from dm01cel01.oracle.vxcompany.local.(none) (Lid 6): time 0.205 ms
ibwarn: [14476] ibping: Ping..
Pong from dm01cel01.oracle.vxcompany.local.(none) (Lid 6): time 0.247 ms
ibwarn: [14476] ibping: Ping..
Pong from dm01cel01.oracle.vxcompany.local.(none) (Lid 6): time 0.139 ms
ibwarn: [14476] report: out due signal 0

--- dm01cel01.oracle.vxcompany.local.(none) (Lid 6) ibping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 4001 ms
rtt min/avg/max = 0.139/0.184/0.247 ms
[root@dm0103 ~]# 

So we have InfiniBand connectivity. Let's see how Oracle clusterware reacts:

[root@dm0103 ~]# crsctl stat res -t

And now we play the waiting game… well, basically it never comes back. If we look with strace, it is trying to read from two network sockets and hangs at:

[pid 15917] poll([{fd=3, events=POLLIN|POLLRDNORM}, {fd=4, events=POLLIN|POLLRDNORM}], 2, -1

This points to two file descriptors it cannot read from:

[root@dm0103 ~]# ls -altr /proc/15917/fd
total 0
dr-xr-xr-x 7 root root  0 Feb  3 18:37 ..
lrwx------ 1 root root 64 Feb  3 18:37 4 -> socket:[3447070]
lrwx------ 1 root root 64 Feb  3 18:37 3 -> socket:[3447069]
lrwx------ 1 root root 64 Feb  3 18:37 2 -> /dev/pts/0
lrwx------ 1 root root 64 Feb  3 18:37 1 -> /dev/pts/0
lrwx------ 1 root root 64 Feb  3 18:37 0 -> /dev/pts/0
dr-x------ 2 root root  0 Feb  3 18:37 .
[root@dm0103 ~]# 
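
If you want to dig a little further into what those sockets are, the inode numbers in the symlinks can be matched against the kernel's socket tables. Two illustrative ways to do that (using the PID and inode from the listing above):

[root@dm0103 ~]# lsof -p 15917 | grep 3447069
[root@dm0103 ~]# grep -l 3447069 /proc/net/tcp /proc/net/tcp6 /proc/net/udp /proc/net/unix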

There is a dependency between IPv6 and CRS on Exadata: disabling IPv6 will cripple your clusterware. There is no real solution for this problem; because we need IPv6 on an Exadata, we cannot disable it. However, we can easily reduce the number of IPv6 DNS lookups by extending /etc/hosts, adding the hostnames, VIP names, etc. of all the hosts in our cluster to every host file on the compute nodes (a sketch follows below). Unfortunately we cannot do this on the cell servers, because Oracle does not want us to go ‘messing’ with them, so you will have to live with those lookups for now.
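
A minimal sketch of such an /etc/hosts extension on a compute node; the addresses and the second node are made up for illustration, in reality you would list the client hostnames and VIP names of every node in the cluster:

# client hostnames and VIPs of all cluster nodes (illustrative addresses)
10.18.1.10   dm0101.domain.local       dm0101
10.18.1.11   dm0102.domain.local       dm0102
10.18.1.20   dm0101-vip.domain.local   dm0101-vip
10.18.1.21   dm0102-vip.domain.local   dm0102-vip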

Using HCC on ZFS Storage Appliances

Hybrid Columnar Compression (HCC) started out as an Exadata feature, but lately Oracle has been extending it to other Oracle storage such as the ZFS Storage Appliance and the Pillar Axiom series. We recently got a ZFS Storage Appliance (ZFSSA) at VX Company, so we are now able to use HCC from the Oracle Database Appliance (ODA). To use HCC we need to create a tablespace with datafiles on a ZFS Storage Appliance, and in order to do so we are going to hook up our ODA using Direct NFS (dNFS). I am not going into the details of dNFS in this blog post; there are enough blogs around with excellent background information on it:

http://www.pythian.com/news/34425/oracle-direct-nfs-how-to-start/
https://blogs.oracle.com/XPSONHA/entry/using_dnfs_for_test_purposes

So let's set up dNFS for our HCC table on my ODA. First we create a mountpoint (on all the RAC nodes):

[root@vxoda12 ~]# mkdir /mnt/vxzfs
[root@vxoda12 ~]# chown oracle:oinstall /mnt/vxzfs

Add the NFS export to fstab (on all the RAC nodes); this should of course be an NFS share on our ZFSSA:

[root@vxoda11 ~]# cat /etc/fstab|grep nfs
vxzfs.oracle.vxcompany.local:/export/odatestdrive  /mnt/vxzfs  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
[root@vxoda11 ~]# mount /mnt/vxzfs/
[root@vxoda11 ~]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
vxzfs.oracle.vxcompany.local:/export/odatestdrive on /mnt/vxzfs type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,nfsvers=3,timeo=600,addr=10.12.0.211)

Next create a file called oranfstab; if you are on a RAC instance, don't forget to change the local IP address on the other nodes:

[oracle@vxoda11 [odadb011] trace]$ vi $ORACLE_HOME/dbs/oranfstab
server: vxzfs.oracle.vxcompany.local
path:  10.12.0.211
local:  10.12.0.202
export: /export/odatestdrive mount: /mnt/vxzfs

Change the Oracle ODM library to the Oracle ODM NFS library (on all the RAC nodes):

[oracle@vxoda11 [] ~]$ cd $ORACLE_HOME/lib
[oracle@vxoda11 [] lib]$ ls -al libodm11.so
lrwxrwxrwx 1 oracle oinstall 12 Nov  8 13:15 libodm11.so -> libodmd11.so
[oracle@vxoda11 [] lib]$ ln -sf libnfsodm11.so libodm11.so
[oracle@vxoda11 [] lib]$ ls -al libodm11.so
lrwxrwxrwx 1 oracle oinstall 14 Nov 13 12:54 libodm11.so -> libnfsodm11.so
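
Should you ever want to switch back to the default ODM library, the original target is visible in the listing above, so restoring it is simply the reverse operation (again on all the RAC nodes):

[oracle@vxoda11 [] lib]$ ln -sf libodmd11.so libodm11.so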

Restart the database:

[oracle@vxoda11 [odadb011] lib]$ srvctl stop database -d odadb01 -o immediate
[oracle@vxoda11 [odadb011] lib]$ srvctl start database -d odadb01
[oracle@vxoda11 [odadb011] lib]$ srvctl status database -d odadb01
Instance odadb011 is running on node vxoda11
Instance odadb012 is running on node vxoda12

Check the alert log; you should now see:

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0

Create a tablespace on the NFS share:

SYS@odadb011 AS SYSDBA> create bigfile tablespace dnfs datafile '/mnt/vxzfs/dnfs.dbf' size 500G extent management local autoallocate;
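
A quick sanity check that the datafile really landed on the NFS mount (using the tablespace and path from the statement above):

SYS@odadb011 AS SYSDBA> select file_name, round(bytes/1024/1024/1024) gb from dba_data_files where tablespace_name = 'DNFS';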

Now you should see some entries in v$dnfs_servers:

SYS@odadb012 AS SYSDBA> col dirname form a50
SYS@odadb012 AS SYSDBA> col svrname form a50
SYS@odadb012 AS SYSDBA> select * from v$dnfs_servers;

   INST_ID        ID SVRNAME                               DIRNAME                                MNTPORT       NFSPORT      WTMAX       RTMAX
---------- ---------- -------------------------------------------------- -------------------------------------------------- ---------- ---------- ---------- ----------
      1         1 vxzfs.oracle.vxcompany.local                /export/odatestdrive                          59286          2049    1048576     1048576


SYS@odadb012 AS SYSDBA> 

If we then try to create an HCC table in our dNFS tablespace, we see this:

SYS@kjj2 AS SYSDBA> create table t3 compress for archive low tablespace dnfs_kjj as select * from dba_objects;
create table t3 compress for archive low tablespace dnfs_kjj as select * from dba_objects
                                                                              *
ERROR at line 1:
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

Apparently Oracle does not know that we are storing our data on a ZFS Storage Appliance. To see what is going on, we can take a look at the traffic between my ODA and the ZFSSA using tcpdump. When looking at the dump in Wireshark we see these packets going between the ODA and the ZFSSA:

[Wireshark screenshot: SNMP requests going from the ODA to the ZFSSA]

So Oracle tries to do SNMP calls to the ZFSSA. Let's see what happens if we do a get on the OID displayed in the frame above:

[root@vxoda11 ~]# snmpget -v1 -c public 10.12.0.211 1.3.6.1.4.1.42.2.225.1.4.2.0
Timeout: No Response from 10.12.0.211.

We need to enable SNMP on the ZFSSA (needless to say, replace the e-mail address with a proper one):

user@localhost:~$ ssh admin@vxzfs.oracle.vxcompany.local
Password:
Last login: Sun Nov 18 18:20:32 2012 from 10.12.0.252
Waiting for the appliance shell to start ... 
vxzfs:> configuration services snmp
vxzfs:configuration services snmp> set network=10.12.0.0/24
                       network = 10.12.0.0/24 (uncommitted)
vxzfs:configuration services snmp> set syscontact=<someusername>@vxcompany.com
                    syscontact = <someusername>@vxcompany.com (uncommitted)
vxzfs:configuration services snmp> enable
vxzfs:configuration services snmp> commit
vxzfs:configuration services snmp> show
Properties:
                      <status> = online
                     community = public
                       network = 10.12.0.0/24
                    syscontact = <someusername>@vxcompany.com
                     trapsinks = 127.0.0.1

vxzfs:configuration services snmp>

Our snmpget should now work:

[root@vxoda11 ~]# snmpget -v1 -c public 10.12.0.211 1.3.6.1.4.1.42.2.225.1.4.2.0
SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: "Sun ZFS Storage 7120"

Now, due to unpublished bug 12979161, we need to create a symlink so that dNFS can find the correct SNMP library, libnetsnmp.so; apparently this is fixed in 11.2.0.4:

[oracle@vxoda11 [odadb011] ~]$ locate libnetsnmp.so
/usr/lib64/libnetsnmp.so.10
/usr/lib64/libnetsnmp.so.10.0.3
[root@vxoda11 ~]# cd /usr/lib64/
[root@vxoda11 lib64]# ln -s libnetsnmp.so.10.0.3 libnetsnmp.so

The last step is to restart our database so it picks up the symlinked SNMP library:

[oracle@vxoda11 [kjj1] ~]$ srvctl stop database -d kjj -o immediate
[oracle@vxoda11 [kjj1] ~]$ srvctl start database -d kjj

Now we are able to create our HCC table on our ZFSSA tablespace:

SYS@odadb011 AS SYSDBA> create table t1 compress for archive low tablespace dnfs as select * from dba_objects;

Table created.

SYS@kjj2 AS SYSDBA> select compression, compress_for from dba_tables where table_name='T2';

COMPRESS COMPRESS_FOR
-------- ------------
ENABLED  ARCHIVE LOW
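
To get a feel for how much space ARCHIVE LOW actually saves, one simple, illustrative check is to create an uncompressed copy of the same data and compare segment sizes; T1 is the HCC table created above, T1_NOCOMP is a hypothetical uncompressed copy in an ordinary tablespace:

SYS@odadb011 AS SYSDBA> create table t1_nocomp tablespace users as select * from dba_objects;
SYS@odadb011 AS SYSDBA> select segment_name, round(bytes/1024/1024) mb from dba_segments where segment_name in ('T1','T1_NOCOMP');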

Happy compressing!

Peeking at your Exadata InfiniBand traffic

As a DBA you are probably very curious about what is going on on your system. So if you have a shiny Exadata, you have probably had a look at the InfiniBand fabric that connects the compute nodes and storage nodes together. When you want to see what kind of traffic is going from the compute nodes to the storage nodes, or over the RAC interconnect, you can use tcpdump (if it is not installed you can do a ‘yum install tcpdump’):

[root@dm01db02 ~]# tcpdump -i bond0 -s 0 -w /tmp/tcpdump.pcap
tcpdump: WARNING: arptype 32 not supported by libpcap - falling back to cooked socket
tcpdump: listening on bond0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
2073 packets captured
2073 packets received by filter
0 packets dropped by kernel
[root@dm01db02 ~]#

This will give you a dump file (/tmp/tcpdump.pcap) which you can analyze with your favorite network analyzer (probably Wireshark). If you are new to this, you can download and install Wireshark here: http://www.wireshark.org/download.html
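
If you would rather stay on the command line, Wireshark's CLI companion tshark can read the same capture file. Two illustrative invocations (assuming tshark is available on the machine where you analyze the file):

tshark -r /tmp/tcpdump.pcap | head             # quick look at the first decoded packets
tshark -r /tmp/tcpdump.pcap -q -z io,phs       # protocol hierarchy summary of the capture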

Using tcpdump you can sniff all the IPoIB traffic (IP over InfiniBand), but can you take a peek at the other traffic that is going over the InfiniBand wire? Yes, there is a way: you can use Mellanox's ibdump. This tool is not installed by default on your compute nodes, so you need to download and install it on the node of your choice (as a reminder: don't install anything on your cell servers!):

[root@dm01db02 ~]# wget http://www.mellanox.com/downloads/tools/ibdump-1.0.5-4-rpms.tgz
--2012-02-11 15:13:27--  http://www.mellanox.com/downloads/tools/ibdump-1.0.5-4-rpms.tgz
Resolving www.mellanox.com... 98.129.157.233
Connecting to www.mellanox.com|98.129.157.233|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 486054 (475K) [application/x-gzip]
Saving to: `ibdump-1.0.5-4-rpms.tgz'

100%[==========================================================================================================================================>] 486,054      290K/s   in 1.6s

2012-02-11 15:13:29 (290 KB/s) - `ibdump-1.0.5-4-rpms.tgz' saved [486054/486054]
[root@dm01db02 ~]

Extract the tarball:

[root@dm01db02 ~]# tar -xvf ibdump-1.0.5-4-rpms.tgz
ibdump-1.0.5-4-rpms/
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i386-rhel5.4.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-rhel5.4.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i386-rhel5.5.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-rhel5.5.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i386-rhel5.6.rpm
ibdump-1.0.5-4-rpms/ibdump_release_notes.txt
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-rhel5.4.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-rhel5.5.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-rhel5.6.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-rhel5.6.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i686-rhel6.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-rhel6.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-rhel6.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i586-sles10sp3.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-sles10sp3.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-sles10sp3.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i586-sles11.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-sles11.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.i586-sles11sp1.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.ppc64-sles11sp1.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-sles11sp1.rpm
ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-sles11.rpm
[root@dm01db02 ~]#

Next step: install it. The binary will be placed in /usr/bin:

[root@dm01db02 ~]# rpm -i ./ibdump-1.0.5-4-rpms/ibdump-1.0.5-4.x86_64-rhel`lsb_release -r|awk '{print $2}'`.rpm
[root@dm01db02 ~]# ls -la /usr/bin/ibdump
-rwxr-xr-x 1 root root 41336 Dec 19  2010 /usr/bin/ibdump
[root@dm01db02 ~]#

Now you are ready to play with ibdump. Running it without parameters makes ibdump sniff interface mlx4_0 (which is ib0) and write the frames to a file called sniffer.pcap in your working directory. Some parameters can be added, such as the dump file location:

[root@dm01db02 ~]# ibdump -o /tmp/ibdump.pcap
 ------------------------------------------------
 IB device                      : "mlx4_0"
 IB port                        : 1
 Dump file                      : /tmp/ibdump.pcap
 Sniffer WQEs (max burst size)  : 4096
 ------------------------------------------------

Initiating resources ...
searching for IB devices in host
Port active_mtu=2048
MR was registered with addr=0x1bc58590, lkey=0x8001c34e, rkey=0x8001c34e, flags=0x1
QP was created, QP number=0x60005b

Ready to capture (Press ^c to stop):
Captured:     11711 packets, 10978982 bytes

Interrupted (signal 2) - exiting ...

[root@dm01db02 ~]#

There are some drawbacks to ibdump though:

  • ibdump may encounter packet drops upon a burst of more than 4096 (or 2^max-burst) packets.
  • Packet loss is not reported by ibdump.
  • Outbound retransmitted and multicast packets may not be collected correctly.
  • ibdump may stop capturing packets when run on the same port as the Subnet Manager (e.g. opensm). It is advised not to run the SM and ibdump on the same port.

Be aware of the issues above; besides that, have fun peeking around at your Exadata InfiniBand fabric!