Exploring the iTerm2 Python API

A few months ago, iTerm2 released a new version that included a Python API. At first I didn’t pay much attention to it, until I recently tried to configure sessions using Profiles and realised this would be much easier if I could simply script all of it. That reminded me that there is a Python API built into iTerm2 that can do just that.

The Python API is not enabled by default: you have to enable it under Preferences > Magic by checking “Enable Python API” and agreeing to the warning about what this API can do.

Once the Python API is enabled, you can use the menu items under the Scripts menu. The first time you use the API, iTerm2 downloads a Python runtime environment for you. When you start building your own scripts, the iTerm2 documentation includes a bunch of example scripts for inspiration. iTerm2 runs the scripts in a virtualenv to keep them separated, which also means that any non-default libraries have to be added to that virtualenv.

So let’s get going. I often find myself in the same situation: open a new session, cd to a working directory, start up Vagrant, ssh into the box, open a second session and ssh into the same box as a different user. I want to automate that process so I don’t have to repeat those steps every time. Here is the complete script; below it we will break down the most important parts:

#!/usr/bin/env python3
import iterm2
import os, subprocess

# Variables
vagrantDir = '/my/path/to/my/vagrantfile'

# extend path to find vagrant and vbox binaries
my_path = os.environ.get('PATH')
my_path = '/usr/local/bin/:' + str(my_path)

# Check Vagrant status
def vagrant_stat():
    p = subprocess.check_output(['vagrant', 'status', '--machine-readable'], cwd=vagrantDir, env={"PATH": my_path})
    p1 = p.decode("utf-8")
    return p1

async def main(connection):
    app = await iterm2.async_get_app(connection)

    # Get the current Terminal window
    window = app.current_terminal_window

    # Make sure that we are in an iTerm2 window
    if window is not None:

        # Create a new tab and split it into two vertical sessions
        tab = await window.async_create_tab()
        split = await tab.current_session.async_split_pane(vertical=True)

        # Start up the vagrant box if it is not running
        vagrant_action = vagrant_stat()
        if vagrant_action.find('poweroff') != -1:
            subprocess.call(['vagrant', 'up', '--machine-readable'], cwd=vagrantDir, env={"PATH": my_path})

        # Change to the directory containing the Vagrantfile and ssh into the box
        for session in tab.sessions:
            await session.async_send_text(f"cd {vagrantDir}\n")
            await session.async_send_text("vagrant ssh\n")

        # Sudo to oracle in the left session:
        await tab.sessions[0].async_send_text("sudo su - oracle\n")

        # Sudo to root in the right session:
        await tab.sessions[1].async_send_text("sudo -i\n")

    else:
        # You can view this message in the script console.
        print("No current window")

iterm2.run_until_complete(main)

The first few lines import some libraries, with iterm2 (available on PyPI) being the obvious one, set some variables and extend PATH so that Python can find tools like vagrant and vboxmanage. There is also a small helper function so we can check later on whether the vagrant box is up and running.

async def main(connection):
    app = await iterm2.async_get_app(connection)

    # Get the current Terminal window
    window = app.current_terminal_window

Here we begin our main function; as you can see it uses Python’s asyncio machinery. Most of iTerm2’s functions are async, which is handy because we can keep going even while we are still waiting for other steps to finish.
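
Because everything here is a coroutine, independent calls can also be awaited concurrently instead of one after the other. As a small sketch (this helper is my own addition, not part of the script above), asyncio.gather can fan the same text out to every session in a tab:

import asyncio

# send the same text to all sessions in a tab at once instead of one by one
async def send_to_all(tab, text):
    await asyncio.gather(*(session.async_send_text(text) for session in tab.sessions))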

# Create a new tab and split it into two vertical sessions
tab = await window.async_create_tab()
split = await tab.current_session.async_split_pane(vertical=True)

After we have a handle on the current iTerm2 window we can call async_create_tab() to create a new tab in that window. We then split the tab into two vertical sessions with async_split_pane().

# Start up the vagrant box if it is not running
vagrant_action = vagrant_stat()
if vagrant_action.find('poweroff') != -1:
    subprocess.call(['vagrant', 'up', '--machine-readable'], cwd=vagrantDir, env={"PATH": my_path})

We then check whether the vagrant box is actually up and running; if it is not, we issue a subprocess.call() to start it. I am using subprocess.call() here to make sure that Python waits until the vagrant command has finished.
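
subprocess.call() simply blocks until the command exits. If you also want the script to stop when vagrant up fails, a subprocess.run() variant with check=True would be an alternative (a sketch, not what the script above uses):

# raises CalledProcessError if 'vagrant up' exits non-zero, instead of silently continuing
subprocess.run(['vagrant', 'up', '--machine-readable'],
               cwd=vagrantDir, env={"PATH": my_path}, check=True)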

# Change to the directory containing the Vagrantfile and ssh into the box
for session in tab.sessions:
    await session.async_send_text(f"cd {vagrantDir}\n")
    await session.async_send_text("vagrant ssh\n")

Vagrant is now up and we can ssh into the box. By looping over all the sessions in the tab we can send text to each of them, just as if a user were typing into the iTerm2 window.
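
The object returned by async_split_pane() earlier (stored in split) is itself a Session, so a single pane can also be targeted directly instead of going through tab.sessions. For example (not part of the script above):

# address the newly created pane directly
await split.async_send_text("whoami\n")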

# Sudo to oracle in the left session:
await tab.sessions[0].async_send_text("sudo su - oracle\n")

# Sudo to root in the right session:
await tab.sessions[1].async_send_text("sudo -i\n")

Finally, in one session inside the vagrant box I sudo to oracle, and in the other session to root.

iterm2.run_until_complete(main)

Now that our main function is defined, we can hand it to iterm2.run_until_complete() and let iTerm2 configure our tab, start the vagrant box, set the working directory, ssh into the box and sudo to oracle and root.

Of course, this is a very simple example of what is possible with the Python API.

Creating a RAC cluster using Ansible (part 2)

In my previous two blog posts I explained how to set up a Vagrantfile that deploys two VirtualBox machines that can be used as the basis for a RAC cluster, and how you can use Ansible to deploy RAC on those VMs. In this post I want to dive a bit deeper into how I set up Ansible and why; keep in mind that this is just one way of doing it.

The Github repository containing all the files can be found here

I am not going to go over the contents of the common role, as it probably speaks for itself. After the common role we first need to set up networking, which we need for the interconnects. Because we have added extra network devices, we have to make sure that the device we configured as the interconnect interface always stays the interconnect; we have configured VirtualBox so that these interfaces are on their own network. To keep the device naming persistent we configure udev. This works because in our Vagrantfile we have set the MAC addresses of the interfaces to fixed values:

- name: Setup udev for network devices
  replace: dest=/etc/udev/rules.d/70-persistent-net.rules regexp='ATTR.*{{ item.device }}' replace='ATTR{address}=="{{ item.mac|lower }}", ATTR{type}=="1", KERNEL=="eth*", NAME="{{ item.device }}"'
  with_items: "{{ network_ether_interfaces }}"
  when: network_ether_interfaces is defined
  register: udev_net

The array referenced in the with_items line is a host-specific setting, so its value comes from the file in host_vars/ for that host. This means that the task above creates a udev rule for every network interface defined for the host Ansible is running against.
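
To make the with_items loops easier to follow, this is roughly what such a host_vars entry looks like; the addresses and MAC addresses below are made up, the real files are in the repository:

network_ether_interfaces:
  - device: eth1
    bootproto: static
    address: 192.168.100.11
    netmask: 255.255.255.0
    onboot: yes
    mac: "08:00:27:00:00:11"
  - device: eth2
    bootproto: static
    address: 192.168.56.11
    netmask: 255.255.255.0
    onboot: yes
    mac: "08:00:27:00:00:21"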

Ansible uses Jinja2 as its templating format, and we use it for our network configuration files. With the Ansible template module, Ansible reads the ethernet.j2 file and the Jinja2 engine uses the template to create the correct file. In the task below, ethernet.j2 is uploaded as an ifcfg- file, and this happens on every host for which data can be found in the host_vars directory:

- name: Create the network configuration for ethernet devices
  template: src=ethernet.j2 dest=/etc/sysconfig/network-scripts/ifcfg-{{ item.device }}
  with_items: "{{ network_ether_interfaces }}"
  when: network_ether_interfaces is defined
  register: ether_result

The Jinja2 code is quite simple to read; the ethernet.j2 file looks like this:

# {{ ansible_managed }}
{% if item.bootproto == 'static' %}
DEVICE={{ item.device }}
BOOTPROTO=static
{% if item.address is defined %}
IPADDR={{ item.address }}
{% endif %}
{% if item.onboot is defined %}
ONBOOT={{ item.onboot }}
{% endif %}
{% if item.peerdns is defined %}
PEERDNS={{ item.peerdns }}
{% endif %}
{% if item.defroute is defined %}
DEFROUTE={{ item.defroute }}
{% endif %}
{% if item.netmask is defined %}
NETMASK={{ item.netmask }}
{% endif %}
{% if item.gateway is defined %}
GATEWAY={{ item.gateway }}
{% endif %}
{% if item.mac is defined %}
HWADDR={{ item.mac }}
{% endif %}
{% endif %}

{% if item.bootproto == 'dhcp' %}
DEVICE={{ item.device }}
BOOTPROTO=dhcp
{% endif %}

It is basically divided into two parts: the first part tells Jinja2 what to do for a statically configured device, the second handles DHCP-enabled devices. Jinja2 creates a line for every item it finds in the host_vars. The result of this task is registered by Ansible through the line “register: ether_result”, and we use that result in the next task:

- name: bring up network devices
  shell: ifdown {{ item.item.device }}; ifup {{ item.item.device }}
  with_items: "{{ ether_result.results }}"
  when: ether_result is defined and item.changed

Here we only restart the interfaces that are registered in ether_result and have actually changed. A more complex use of Ansible is the template for the hosts file. The hosts file is the basis for dnsmasq, which we use as a simpler alternative to BIND for DNS. The template for the hosts file is built from the Ansible facts. These facts are gathered automatically as soon as a playbook begins, but we have changed things during this run, so right after bringing up the network interfaces we gather the facts again; with these updated facts we can now build our hosts file.

127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_eth2']['ipv4']['address'] }} {{ hostvars[host]['ansible_hostname'] }}
{{ hostvars[host]['ansible_eth2']['ipv4']['address'] | regex_replace('(^.*\.).*$', '\\1') }}{{ hostvars[host]['ansible_eth2']['ipv4']['address'].split('.')[3] | int + 10 }} {{ hostvars[host]['ansible_hostname'] }}-vip
{{ hostvars[host]['ansible_eth1']['ipv4']['address'] }} {{ hostvars[host]['ansible_hostname'] }}-priv
{% endfor %}
{% for i in range(1,4) %}
{{ hostvars[inventory_hostname]['ansible_eth2']['ipv4']['address'] | regex_replace('(^.*\.).*$', '\\1') }}{{ i + 250 }} rac-cluster-scan
{% endfor %}

The for-loop goes through the facts of all hosts, takes the IPv4 address of eth2 and generates the correct hosts file entry for every node in the cluster. It then takes the same address and adds 10 to the last octet to create the address for the VIP interface. The eth1 address is used for the interconnect in our cluster. The last part of the file is a loop that generates three additional IPv4 addresses based on the eth2 address, adding 250 plus the loop counter to the last octet; these are the SCAN addresses of our cluster. Now that the hosts file is in place we can install dnsmasq and have DNS ready. We also ping the interfaces just to make sure they are up; if the ping fails, Ansible stops the playbook.
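
Re-gathering the facts after the interfaces come up is just a call to the setup module; a minimal sketch (the task name is mine, the repository may do it slightly differently):

- name: Re-gather facts after bringing up the network interfaces
  setup: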

Our network is now set up the way we want it, so we can move on to configuring storage. Vagrant has already created four shared disks for us which are presented to both virtual machines. We now have to make sure that we have persistent device naming for the storage, which we can do with either ASMLib or udev. For both methods the disks need to be partitioned; I am using sfdisk as an easy way to create partitions on all four disks:

- name: Create disk partitions
  shell: echo "0," | sfdisk -q {{item.1}}
  with_indexed_items: "{{ device_list.stdout_lines }}"
  when: "{{ item.0 }} > 0"
  become: true
  run_once: true
  register: sfdisk_output

The with_indexed_items loop gives me an index number, which is used to make sure we do not destroy the partition table on /dev/sda, where the OS is installed. Because the OS is on the first disk, we can start sfdisk at index 1. When Ansible needs to install ASMLib, it installs the required RPMs and then uploads and runs a shell script. The reason for this script is that configuring ASMLib requires some interactivity. That is easy to solve in regular shell scripting with a here document; as far as I know there is no equivalent in Ansible itself. For udev we can more or less copy what we did to make our network interfaces persistent.
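
The uploaded script essentially answers oracleasm’s interactive questions with a here document, roughly like this (the user, group and answers are illustrative; the real script is in the repository):

/usr/sbin/oracleasm configure -i <<EOF
oracle
dba
y
y
EOF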

After we have installed a toolset we can start with the installation of the Grid Infrastructure. The ASM diskstring and disk prefix depend on whether we configured the disks earlier with udev or with ASMLib. In order to use the correct values I am adding them to the Ansible facts as custom facts. Depending on which role it finds, Ansible will load the right values into the facts:

- set_fact: asm_diskstring="/dev/oracleasm/disks"
  when: "'configure_asmlib' in role_names"

- set_fact: asm_diskstring="/dev/ASMDISK*"
  when: "'configure_udev' in role_names"
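
The response file template for the Grid Infrastructure install can then simply reference this fact, along these lines (the parameter name comes from the grid install response file; the exact template is in the repository):

oracle.install.asm.diskGroup.diskDiscoveryString={{ asm_diskstring }}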

Now we can create SSH keys on both nodes and download the public keys back to our VirtualBox host. After we have uploaded the public keys to all the hosts, we add the hosts to our known_hosts using ssh-keyscan.
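
A sketch of what such an ssh-keyscan task could look like (task name, user and path are illustrative, not taken verbatim from the repository):

- name: Add all cluster nodes to known_hosts
  shell: ssh-keyscan {{ item }} >> /home/oracle/.ssh/known_hosts
  with_items: "{{ groups['all'] }}"
  become: true
  become_user: oracle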

We upload the Grid Infrastructure zip files, extract them and install the cvuqdisk RPM on all nodes. For the installation of the Grid Infrastructure software itself we use the network_ether_interfaces array to set the client and interconnect interfaces correctly. The response file is generated from a Jinja2 template so we can easily customise some settings depending on our needs. Because we are doing a silent install from a response file, we need to run configToolAllCommands, which is normally an interactive step when you use the OUI. Finally we create the FRA diskgroup and clean up everything we uploaded and no longer need.

The installation of the RDBMS software with Ansible is now very straightforward: adjust limits.conf, create the response file from its Jinja2 template and install the software like a regular silent install. The same goes for the actual creation of the database: the checks, the response file creation and the database creation itself. For database creation I currently use the response file method; as soon as I have some time I will switch this to creating the database from a template, to have more fine-grained control over the created database.

If you take just the Vagrantfile and the Ansible files and start installing from scratch, it will take a couple of hours (about three) to download the Vagrant box and the software, create the machines and run the provisioning with Ansible.

  • The Github repository can be found here
  • The blogpost about Vagrant and the vagrantfile can be found here
  • Creating a RAC cluster using Ansible (part 1) can be found here