Running Ansible in a Vagrant-Controlled Virtual Machine
There are several reasons to run Ansible in a Vagrant-controlled virtual machine instead of directly on your workstation:
- While Mac OSX is almost Unix, there might be slight discrepancies between Linux and OSX behavior that would result in hard-to-troubleshoot failures.
- I managed to completely mess up my Python environment while trying to work with multiple Ansible versions simultaneously. Doing that on my workstation would be a disaster. Recovering the virtual machine took just a few minutes.
- If you’re running Windows you have no options anyway – Ansible doesn’t run on Windows.
However, Ansible running in a virtual machine has to be able to access other virtual machines (network devices) via an SSH connection.
VMware Desktop Virtualization
When you’re using VMware Fusion or Workstation as your virtualization environment, Vagrant uses a vmnet interface to connect to the individual virtual machines. Each virtual machine thus has a fixed IP address reachable from other virtual machines (like an out-of-band management network).
Sample VMware Fusion Environment
In the Ansible for Networking Engineers webinar I demonstrated scripts that work on Cisco IOS, Nexus OS and Junos. Cisco IOS and Nexus OS were running in a VIRL VM, while Ansible and Junos vSRX were running in Vagrant-controlled VMs. The Vagrantfile I used is in my Network Automation Workshop GitHub repository.
vagrant ssh-config shows both Vagrant-controlled VMs connected to the vmnet network (IP prefix 192.168.178.0/24 on my laptop) and reachable on port 22:
$ vagrant ssh-config | ack 'Host|Port'
Host nms
  HostName 192.168.178.141
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
Host srx
  HostName 192.168.178.142
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
Conclusion: NMS VM can reach vSRX VM directly by connecting to port 22 on IP address 192.168.178.142.
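If you want to double-check that claim, here’s a minimal Python sketch (my own illustration, assuming the vmnet addresses shown above) you could run on the NMS VM. An SSH server announces itself with a banner as soon as the TCP session is established, so a successful read proves direct reachability:

# Minimal reachability check, assuming the vmnet addresses shown above.
# Run on the NMS VM: if the connection succeeds, the vSRX SSH banner is printed.
import socket

VSRX = ("192.168.178.142", 22)   # vSRX management address and SSH port

with socket.create_connection(VSRX, timeout=5) as conn:
    banner = conn.recv(64).decode(errors="replace")
    print("Connected to vSRX, banner:", banner.strip())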
VirtualBox Desktop Virtualization
Vagrant implements access to VMs running in a VirtualBox environment through a dynamically configured NAT interface. The VMs started by Vagrant are not connected to an internal management network (like in the VMware case) and thus cannot communicate with each other directly.
It is, however, still possible to reach other VMs by going through the host IP stack and using SSH port numbers assigned to individual VMs by Vagrant. This concept is best illustrated with an example.
Sample VirtualBox Environment
One of the VirtualBox environments you’ll find in my Network Automation GitHub repository is a leaf-and-spine topology using Arista vEOS (explore the topologies directory for more details).
That topology has four vEOS switches and an Ansible VM. While I added a dedicated management network to the topology, it’s still possible to reach the vEOS switches from the Ansible VM by going through the host TCP/IP stack.
vagrant ssh-config executed in a VirtualBox environment doesn’t display VM-specific IP addresses. VMs are reachable through the local loopback interface (127.0.0.1) on ports mapped by Vagrant:
$ vagrant ssh-config | ack 'Host|Port'
Host spine-1
  HostName 127.0.0.1
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
Host spine-2
  HostName 127.0.0.1
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
Host leaf-1
  HostName 127.0.0.1
  Port 2201
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
Host leaf-2
  HostName 127.0.0.1
  Port 2202
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
Host nms
  HostName 127.0.0.1
  Port 2203
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
When logging into a VM from the host, the SSH session seems to be coming from a weird (hidden) IP address:
vagrant@nms:~$ who
vagrant  pts/0  2016-12-18 15:00 (10.0.2.2)
The Ethernet interface of the Vagrant-controlled VM has another weird IP address:
vagrant@nms:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:7e:84:45
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe7e:8445/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1336 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1161 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:160029 (160.0 KB)  TX bytes:107337 (107.3 KB)
Using the IP address of the host TCP/IP stack (10.0.2.2) and the TCP port number mapped to the SSH port of another VM, it’s possible to open an SSH session from the Ansible VM to a virtual networking device. For example, to reach the leaf-1 vEOS node, I’d run ssh -p 2201 10.0.2.2:
vagrant@nms:~$ ssh -p 2201 10.0.2.2
Password:
Last login: Sun Dec 18 15:01:07 2016 from 10.0.2.2
leaf-1#
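The same connection can be scripted. Here’s a hedged sketch using the paramiko library (my own illustration, not part of the original toolchain; it assumes the password authentication shown above):

# Sketch of the same connection done programmatically with paramiko
# (pip install paramiko); illustration only, not part of the original setup.
import paramiko

client = paramiko.SSHClient()
# Mirrors "StrictHostKeyChecking no" from the Vagrant SSH configuration
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.2.2", port=2201, username="vagrant", password="vagrant")
stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())
client.close()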
All you have to do to access virtual networking devices from Ansible playbooks is to build an inventory file that specifies ansible_host and ansible_port for every managed device. In my case that inventory file would look like this:
spine-1 ansible_host=10.0.2.2 ansible_port=2222 ansible_user=vagrant ansible_ssh_pass=vagrant
spine-2 ansible_host=10.0.2.2 ansible_port=2200 ansible_user=vagrant ansible_ssh_pass=vagrant
leaf-1 ansible_host=10.0.2.2 ansible_port=2201 ansible_user=vagrant ansible_ssh_pass=vagrant
leaf-2 ansible_host=10.0.2.2 ansible_port=2202 ansible_user=vagrant ansible_ssh_pass=vagrant
To build that inventory file, use the Vagrant2Inventory.py tool described at http://automation.ipspace.net/Example:Creating_Ansible_Inventory_from_Vagrant_SSH_Configuration.
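If you’re curious what such a tool has to do, here’s a minimal sketch (my own illustration, not the actual Vagrant2Inventory.py code) that turns vagrant ssh-config output into inventory lines like the ones above:

# Minimal sketch of converting "vagrant ssh-config" output into Ansible
# inventory lines; illustration only, not the actual Vagrant2Inventory.py code.
# HOST_IP is the host TCP/IP stack address as seen from the VMs
# (10.0.2.2 in the VirtualBox NAT setup described above).
import subprocess

HOST_IP = "10.0.2.2"

def vagrant_inventory():
    output = subprocess.check_output(["vagrant", "ssh-config"], text=True)
    host, lines = None, []
    for line in output.splitlines():
        words = line.split()
        if words[:1] == ["Host"]:
            host = words[1]
        elif words[:1] == ["Port"] and host:
            lines.append(
                f"{host} ansible_host={HOST_IP} ansible_port={words[1]} "
                "ansible_user=vagrant ansible_ssh_pass=vagrant")
    return "\n".join(lines)

if __name__ == "__main__":
    print(vagrant_inventory())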