Wednesday, January 24, 2018

Eve-NG supported images

Dynamips (Cisco IOS emulation)

  • c7200-adventerprisek9-mz.152-4.S6 (supported cards: PA-FE-TX, PA-4E, PA-8E)
  • c3725-adventerprisek9-mz.124-15.T14 (supported cards: NM-1FE-TX, NM-16ESW)
  • c1710-bk9no3r2sy-mz.124-23
  • Other images from the above series should work too

IOL (IOS on Linux also known as IOU)

  • L2-ADVENTERPRISE-M-15.1-20131216.bin
  • L2-ADVENTERPRISE-M-15.1-20140814.bin
  • L2-IPBASEK9-M-15.1-20130726.bin
  • L3-ADVENTERPRISEK9-M-15.4-1T.bin
  • L3-ADVENTERPRISEK9-M-15.4-2T.bin
  • L3-ADVIPSERVICES-M-15.1-2.9S.bin
  • And others

Qemu

  • Cisco ACS 5.6, 5.8.1.4
  • Cisco AMP Private cloud
  • Cisco ISE 1.2, 1.4
  • Cisco ISE 2.1
  • Cisco ISE 2.2, 2.3
  • Cisco ASA 8.0.2 (Single and Multi Context)
  • Cisco ASA 8.4 Multicontext Support
  • Cisco ASA 9.1.5 Multicontext Support
  • Cisco ASAv 9.6.2, 9.7.1 or later
  • Cisco IPS 7.1
  • Cisco Firepower 6.1, 6.2 Management centre (FMC)
  • Cisco Firepower 6.1, 6.2 Threat Defence (FTD)
  • Cisco Firepower 6.1, 6.2 NGIPSv
  • Cisco Firepower 5.4 (NGIPS, FMC)
  • Cisco CSR 3.16, 3.17
  • Cisco CSR 16.03 Denali
  • Cisco CSR 16.04 Everest
  • Cisco vIOS L3
  • Cisco vIOS L2
  • Cisco ESA 9.7, 9.8 Email Security Appliance
  • Cisco WSA 8.6, 9.2, 10.0 Web Security Appliance
  • Cisco CDA 1.0 Context Directory Agent
  • Cisco NXOS Titanium 7.1.0.3
  • Cisco NXOSv 9K (requires 2 vCPUs and 8 GB RAM per node)
  • Cisco Prime Infra 3.X
  • Cisco XRv 5.2.2, 5.3.2, 6.0.1, 6.0.2
  • Cisco XRv 9000 6.0.1, 6.1.2 (requires 4 vCPUs and 16 GB RAM per node)
  • Cisco vWLC 7.4, 8.0.100, 8.2.X, 8.3.X
  • Cisco vNAM Virtual Network Analysis Module 6.2.x
  • Cisco vWAAS 200.5.5, 6.2.3
  • Cisco CUCM 11.5.1.11900-26
  • Juniper vSRX 12.1.47D
  • Juniper Olive M series
  • Juniper vSRX NG 15.1x49-D40.6, D70, D100
  • Juniper vSRX NG 17.3
  • Juniper vMX 14.1.4R8
  • Juniper vMX 16.1R3.10 VCP (control plane node)
  • Juniper vMX 16.1R3.10 VFP (forwarding plane node)
  • Juniper vMX 17.1, 17.2, 17.3 VCP (control plane node)
  • Juniper vMX 17.1, 17.2, 17.3 VFP (forwarding plane node)
  • Juniper vQFX 10K VRE 15.1X53 (routing engine)
  • Juniper vQFX 10K VFE 15.1X53 (forwarding engine)
  • Juniper VRR
  • Junos Space 16.1
  • Alcatel 7750SR: 13.0.R3
  • A10, vThunder 2.7.1
  • Apple OSX ( https://github.com/kholia/OSX-KVM )
  • Aruba: Clearpass 6.4.X
  • Aruba Virtual Mobility Controller 8.X
  • Arista vEOS 4.17.2F and later versions
  • Barracuda NGFW
  • Brocade vADX 3.01.1
  • Checkpoint FW: R77-20, R77-30
  • Citrix Netscaler 11.0.62
  • Dell SonicWall 11.3.0
  • CumulusVX 2.5.3
  • ExtremeOS 21.1.14
  • F5 BIG-IP 12.0.0 Supports LTM, GTM HA
  • Fortinet Manager v5
  • Fortinet Mail 5.3, 5.4
  • Fortinet FGT v5.X
  • Fortinet 5.2.3, 5.6
  • Huawei USG6000v v5.1.6
  • HP VSR 1000 7.10
  • Mikrotik 6.30.2, 6.40
  • PaloAlto FW 7.0, 7.1, 8.0
  • pfSense FW 2.3
  • Radware Alteon
  • S-Terra FW, Gate 4.1
  • S-Terra CSP-VPN gate 3.1
  • VMWare ESXi 6.5
  • VMWare vCenter 6.5
  • VyOS 1.1.6
  • Windows XP
  • Windows 7
  • Windows 8.1
  • Windows 10
  • Windows Server 2003
  • Windows Server 2008R2
  • Windows Server 2012 R2
  • Windows Server 2016
  • Linux TinyCore 6.4
  • Linux Slax 7.08
  • Linux Mint 18
  • Linux Kali x64 Full
  • Linux Kali x86 Light
  • Linux Ubuntu Desktop 16.04
  • Linux DSL 4.4.10
  • Linux Ubuntu Server 16.04 (Webmin)
  • Linux NETem: bandwidth limitation, delay, jitter and packet loss emulation
  • Ostinato traffic generator 0.7, 0.8

Eve-NG Qemu image Naming

This table shows the correct folder name for QEMU images used under EVE, as well as the correct qcow2 image file name.
The QEMU image location is /opt/unetlab/addons/qemu/
Make sure your image folder name starts as shown in the table; after the "-" you can append the version of your image.
Folder name examples:
firepower6-FTD-6.2.1
acs-5.8.1.4
Inside the folder, the image must be placed with the correct file name, for example:
hda.qcow2 or virtioa.qcow2
Example: /opt/unetlab/addons/qemu/acs-5.8.1.4/hda.qcow2
Qemu folder name (EVE)   Vendor                                  Qemu image .qcow2 name(s)
a10-                     A10 vThunder                            hda
acs-                     Cisco ACS                               hda
asa-                     Cisco ASA (ported)                      hda
asav-                    Cisco ASAv                              virtioa
ampcloud-                Cisco AMP Private Cloud                 hda, hdb, hdc
barracuda-               Barracuda FW                            hda
bigip-                   F5 BIG-IP                               hda, hdb
brocadevadx-             Brocade vADX                            hda
cda-                     Cisco CDA                               hda
cips-                    Cisco IPS                               hda, hdb
clearpass-               Aruba ClearPass                         hda, hdb
aruba-                   Aruba Virtual Mobility Controller       hda, hdb
coeus-                   Cisco WSA (coeus)                       virtioa
phoebe-                  Cisco ESA (phoebe)                      virtioa
cpsg-                    Checkpoint                              hda
csr1000v-                Cisco CSR 1000v                         virtioa
csr1000vng-              Cisco CSR 1000v Denali & Everest        virtioa
prime-                   Cisco Prime Infrastructure              virtioa
cucm-                    Cisco CUCM                              virtioa
cumulus-                 Cumulus VX                              hda
extremexos-              ExtremeXOS                              hda
firepower-               Cisco Firepower 5.4 NGIPS               scsia
firepower-               Cisco Firepower 5.4 FMC                 scsia
firepower6-              Cisco Firepower 6.x NGIPS               scsia
firepower6-              Cisco Firepower 6.x FMC                 hda
firepower6-              Cisco Firepower 6.x FTD                 hda
fortinet-                Fortinet FW                             virtioa
fortinet-                Fortinet FGT                            virtioa
fortinet-                Fortinet Mail                           virtioa, virtiob
fortinet-                Fortinet Manager                        virtioa
hpvsr-                   HP virtual router (VSR)                 hda
huaweiusg6kv-            Huawei USG6000v                         hda
ise-                     Cisco ISE 1.x                           hda
ise-                     Cisco ISE 2.x                           virtioa
jspace-                  Junos Space                             hda
junipervrr-              Juniper vRR                             virtioa
linux-                   Any Linux                               hda
mikrotik-                MikroTik router                         hda
nsvpx-                   Citrix NetScaler                        virtioa
nxosv9k-                 Cisco Nexus 9000v (SATA best perf)      sataa
olive-                   Juniper Olive                           hda
ostinato-                Ostinato traffic generator              hda
osx-                     Apple OSX                               hda + kernel.img
paloalto-                Palo Alto FW                            virtioa
pfsense-                 pfSense FW                              hda
riverbed-                vRiverbed                               virtioa, virtiob
sonicwall-               Dell SonicWall FW                       hda
sourcefire-              Sourcefire NGIPS                        scsia
sterra-                  S-Terra VPN                             hda
sterra-                  S-Terra Gate                            virtioa
timos-                   Alcatel-Lucent TiMOS                    hda
titanium-                Cisco NX-OS Titanium                    virtioa
vcenter-                 VMware vCenter                          sataa (12G), satab (1.8G), satac (15G), satad (25G), satae (25G), sataf (10G), satag (10G), satah (15G), satai (10G), sataj (1.0G), satak (10G), satal (10G), satam (100G)
veos-                    Arista vEOS                             hda, cdrom.iso
vios-                    Cisco vIOS L3 router                    virtioa
viosl2-                  Cisco vIOS L2 switch                    virtioa
vmx-                     Juniper vMX router                      hda
vmxvcp-                  Juniper vMX VCP                         hda, hdb, hdc
vmxvfp-                  Juniper vMX VFP                         hda
vnam-                    Cisco vNAM                              hda
vqfxpfe-                 Juniper vQFX PFE                        hda
vqfxre-                  Juniper vQFX RE                         hda
vsrx-                    Juniper vSRX 12.1 FW/router             virtioa
vsrxng-                  Juniper vSRX 15.x FW/router             hda
vwaas-                   Cisco vWAAS                             virtioa, virtiob, virtioc
vwlc-                    Cisco vWLC WiFi controller              megasasa
vyos-                    VyOS                                    virtioa
win-                     Windows hosts (not Server editions)     hda or virtioa (using driver)
winserver-               Windows Server editions                 hda or virtioa (using driver)
xrv-                     Cisco XRv router                        hda
xrv9k-                   Cisco XRv 9000 router                   virtioa
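
For example, adding a Cisco ASAv image would look roughly like this (the folder and file names below are illustrative; adjust them to your image version, and let EVE fix permissions afterwards):
mkdir -p /opt/unetlab/addons/qemu/asav-9.6.2
cd /opt/unetlab/addons/qemu/asav-9.6.2
# upload your qcow2 here via SCP/SFTP, then rename it to the file name expected per the table above
mv asav962.qcow2 virtioa.qcow2
# fix ownership/permissions on the new image folder
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions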

How to create own custom Windows 7 host for EVE:

For this you will need a real Windows installation CD ISO image.
We are using Windows7SP1Ultimate_64_Bit.iso. Be sure that the ISO file name contains no spaces! The procedure is the same for any Windows host installation.
1. Create new image directory:
mkdir /opt/unetlab/addons/qemu/win-7test/
2. Use WinSCP or FileZilla (SFTP or SCP, port 22) to copy the ISO image into the newly created directory /opt/unetlab/addons/qemu/win-7test/
3. From the CLI, change to that directory:
cd /opt/unetlab/addons/qemu/win-7test/
4. Rename the ISO to cdrom.iso:
mv Windows7SP1Ultimate_64_Bit.iso cdrom.iso
5. Create a new virtioa.qcow2 disk image:
/opt/qemu/bin/qemu-img create -f qcow2 virtioa.qcow2 30G
6. Create a new lab and add the newly created win-7test node.
7. Edit the node settings: set the QEMU version to 2.2.0 and the NIC type to e1000.
8. Connect the node to your home LAN cloud/Internet; it needs Internet access to download updates.
9. Start the node in the lab and install Windows, customizing it as you like. Because the node is connected to your home LAN and the Internet, this works like a normal Windows installation.
10. IMPORTANT: When the Windows installer asks on which HDD Windows will be installed, choose Load driver, click Browse, and select FDD B:/storage/2003R2/AMD64 (or x86, depending on whether you are installing 64-bit or 32-bit Windows), then click Next; a Red Hat VirtIO SCSI HDD will now be listed.
11. Choose this HDD and continue installing Windows as usual.
12. Optional: if you want to use this image with RDP in EVE, allow RDP on the Windows machine and set a password for the user (in our case user/Test123). Make sure the Remote Desktop inbound rules in Windows Firewall are allowed for the Public profile.
13. Finish the installation and shut the VM down properly from inside the guest OS (Start > Shut down).
14. On the EVE lab web UI, choose "Lab Details" in the left sidebar to get your lab UUID; in our case: UUID: 3491e0a7-25f8-46e1-b697-ccb4fc4088a2
IMPORTANT: Convert your installed image into the default image for further use in EVE-NG:
qemu-img convert -c -O qcow2 /opt/unetlab/tmp/0/3491e0a7-25f8-46e1-b697-ccb4fc4088a2/1/virtioa.qcow2  /opt/unetlab/addons/qemu/win-7test/virtioa.qcow2
(0 is the user's POD number; the main admin user's POD number is 0.)
15. Remove cdrom.iso from /opt/unetlab/addons/qemu/win-7test/:
cd /opt/unetlab/addons/qemu/win-7test/
rm -f cdrom.iso
DONE
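Optionally, you can confirm that the new base image is in place and readable, and let EVE fix its permissions (standard qemu-img and EVE wrapper commands, shown here only as a sanity check):
qemu-img info /opt/unetlab/addons/qemu/win-7test/virtioa.qcow2
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions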
Advanced option: how to make your default image smaller.
1. After you have completed all the steps above and the default image is created, you can compress its HDD to make the image smaller.
IMPORTANT: to compress an image, your EVE must have free disk space matching the HDD size used for the install; in our case that is 30 GB, so you need at least 35 GB of free disk space on your EVE!
2. From the CLI, go to your Windows image directory:
cd /opt/unetlab/addons/qemu/win-7test
and run the sparsify command:
virt-sparsify --compress virtioa.qcow2 compressedvirtioa.qcow2
3. It will take some time; a compressed copy of the image will be created in the same directory (win-7test).
4. Now rename your original virtioa.qcow2 file to orig.qcow2:
mv virtioa.qcow2 orig.qcow2
5. Rename the compressed image to virtioa.qcow2:
mv compressedvirtioa.qcow2 virtioa.qcow2
6. Now test the compressed image in a lab to confirm everything works: just wipe the node and start it.
7. If the compressed node works fine, you can delete the original source image (optional verification commands are shown after this procedure):
rm -f orig.qcow2
DONE
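Optional verification: before deleting orig.qcow2 you can check free disk space, compare the image sizes, and verify the compressed file for consistency (plain df, ls and qemu-img commands):
df -h /
ls -lh orig.qcow2 virtioa.qcow2
qemu-img check virtioa.qcow2
qemu-img info virtioa.qcow2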

Wednesday, January 17, 2018

Cumulus - Create a Two-Leaf, Two-Spine Topology


The following sections describe how to configure network interfaces and FRRouting for a two-leaf/two-spine Cumulus VX network topology:
  • Two spine VMs that represent two spine (aggregation layer) switches on the network.
  • Two leaf VMs that represent two leaf (access layer) switches on the network.

     
These instructions assume that you have installed the relevant images and hypervisors, created four VMs with appropriate names, and that the VMs are running. Refer to the Getting Started chapter of this guide for more information on setting up the VMs.


Configure CumulusVX-leaf1

You can configure each of the VMs using the Network Command Line Utility (NCLU) or by editing the /etc/network/interfaces and /etc/frr/frr.conf files as the sudo user.
The following configuration uses unnumbered IP addressing, where you use the same /32 IPv4 address on multiple ports. OSPF unnumbered doesn't have an equivalent to RFC-5549, so you need to use an IPv4 address to bring up the adjacent OSPF neighbors, allowing you to reuse the same IP address. You can see some example unnumbered OSPF configurations in the knowledge base.
To configure CumulusVX-leaf1:
  1. Log into the CumulusVX-leaf1 VM using the default credentials:
    • username: cumulus
    • password: CumulusLinux!
  2. As the sudo user, edit the /etc/frr/daemons file in a text editor. Set zebra, bgpd, and ospfd to yes, and save the file.
    zebra=yes
    bgpd=yes
    ospfd=yes
    ...
  3. Run the following commands to configure the switch:
    cumulus@switch:~$ net add loopback lo ip address 10.2.1.1/32
    cumulus@switch:~$ net add interface swp1 ip address 10.2.1.1/32
    cumulus@switch:~$ net add interface swp2 ip address 10.2.1.1/32
    cumulus@switch:~$ net add interface swp3 ip address 10.4.1.1/24
    cumulus@switch:~$ net add interface swp1 ospf network point-to-point
    cumulus@switch:~$ net add interface swp2 ospf network point-to-point
    cumulus@switch:~$ net add ospf router-id 10.2.1.1
    cumulus@switch:~$ net add ospf network 10.2.1.1/32 area 0.0.0.0
    cumulus@switch:~$ net add ospf network 10.4.1.0/24 area 0.0.0.0
    cumulus@switch:~$ net pending
    cumulus@switch:~$ net commit
    These commands configure both /etc/network/interfaces and /etc/frr/frr.conf. The output of each file is shown below.
    To edit the configuration files directly as the sudo user, copy the configurations below.
    /etc/network/interfaces
    # The loopback network interface
    auto lo
      iface lo inet loopback
      address 10.2.1.1/32
     
    # The primary network interface
    auto eth0
      iface eth0 inet dhcp
     
    auto swp1
      iface swp1
      address 10.2.1.1/32
     
    auto swp2
      iface swp2
      address 10.2.1.1/32
     
    auto swp3
      iface swp3
      address 10.4.1.1/24
    /etc/frr/frr.conf
    service integrated-vtysh-config
     
    interface swp1
      ip ospf network point-to-point
     
    interface swp2
      ip ospf network point-to-point
     
    router-id 10.2.1.1
     
    router ospf
      ospf router-id 10.2.1.1
      network 10.2.1.1/32 area 0.0.0.0
      network 10.4.1.0/24 area 0.0.0.0
  4. Restart the networking service:
    cumulus@switch:~$ sudo systemctl restart networking
  5. Restart FRRouting:
    cumulus@switch:~$ sudo systemctl restart frr.service
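Before moving on, you can verify that the interfaces and the OSPF process came up as expected (NCLU and FRR vtysh show commands available on Cumulus VX; neighbors only appear once the other VMs are configured and cabled):
    cumulus@switch:~$ net show interface
    cumulus@switch:~$ sudo vtysh -c "show ip ospf neighbor"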

Configure the Remaining VMs

The configuration steps for CumulusVX-leaf2, CumulusVX-spine1, and CumulusVX-spine2 are the same as CumulusVX-leaf1; however, the file configurations are different. Listed below are the configurations for each additional VM:
  • CumulusVX-leaf2:
    /etc/network/interfaces
    # The loopback network interface
    auto lo
      iface lo inet loopback
      address 10.2.1.2/32
     
    # The primary network interface
    auto eth0
      iface eth0 inet dhcp
     
    auto swp1
      iface swp1
      address 10.2.1.2/32
     
    auto swp2
      iface swp2
      address 10.2.1.2/32
     
    auto swp3
      iface swp3
      address 10.4.2.1/24
    /etc/frr/frr.conf
    service integrated-vtysh-config
     
    interface swp1
      ip ospf network point-to-point
     
    interface swp2
      ip ospf network point-to-point
     
    router-id 10.2.1.2
     
    router ospf
      ospf router-id 10.2.1.2                                                          
      network 10.2.1.2/32 area 0.0.0.0 
      network 10.4.2.0/24 area 0.0.0.0
  • CumulusVX-spine1:
    /etc/network/interfaces
    # The loopback network interface
    auto lo
      iface lo inet loopback
      address 10.2.1.3/32
     
    # The primary network interface
    auto eth0
      iface eth0 inet dhcp
     
    auto swp1
      iface swp1
      address 10.2.1.3/32
     
    auto swp2
      iface swp2
      address 10.2.1.3/32
     
    auto swp3
      iface swp3
    /etc/frr/frr.conf
    service integrated-vtysh-config
     
    interface swp1
      ip ospf network point-to-point
     
    interface swp2
      ip ospf network point-to-point
     
    router-id 10.2.1.3
     
    router ospf
      ospf router-id 10.2.1.3
      network 10.2.1.3/32 area 0.0.0.0
  • CumulusVX-spine2:
    /etc/network/interfaces
    # The loopback network interface
    auto lo
      iface lo inet loopback
      address 10.2.1.4/32
     
    # The primary network interface
    auto eth0
      iface eth0 inet dhcp
     
    auto swp1
      iface swp1
      address 10.2.1.4/32
     
    auto swp2
      iface swp2
      address 10.2.1.4/32
     
    auto swp3
      iface swp3
    /etc/frr/frr.conf
    service integrated-vtysh-config
     
    interface swp1
      ip ospf network point-to-point
     
    interface swp2
      ip ospf network point-to-point
     
    router-id 10.2.1.4
     
    router ospf
      ospf router-id 10.2.1.4
      network 10.2.1.4/32 area 0.0.0.0
Restart the networking and FRRouting services on all VMs before continuing.
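For reference, these are the same restart commands used on CumulusVX-leaf1:
    cumulus@switch:~$ sudo systemctl restart networking
    cumulus@switch:~$ sudo systemctl restart frr.service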

Create Point-to-Point Connections Between VMs

To use the two-leaf/two-spine Cumulus VX network topology you configured above, you need to configure the network adapter settings for each VM to create point-to-point connections. The following example shows how to create point-to-point connections between each VM in VirtualBox. If you are not using VirtualBox, refer to your hypervisor documentation to configure network adapter settings.
Follow these steps for each of the four VMs.
Make sure that the VM is powered off.
  1. In the VirtualBox Manager window, select the VM.
  2. Click Settings, then click Network.
  3. Click Adapter 2.
  4. Click the Enable Network Adapter check box.
  5. From the Attached to list, select Internal Network.
  6. In the Name field, type a name for the internal network, then click OK.
    The internal network name must match the internal network name on the corresponding network adapter on the VM to be connected to this VM. For example, in the two-leaf/two-spine Cumulus VX network topology, Adapter 2 (swp1) on CumulusVX-leaf1 is connected to Adapter 2 (swp1) on CumulusVX-spine1; the name must be the same for Adapter 2 on both VMs. Use the internal network names and the connections shown in the illustration and table below.
  7. Click Adapter 3 and repeat steps 4 through 6. Use the internal network names and the connections shown in the illustration and table below (a VBoxManage command-line alternative is sketched after the table).

VM                 Switch port   Adapter                Attached to   Internal network name
CumulusVX-leaf1    -             Adapter 1              NAT           -
                   swp1          Adapter 2              Internal      Intnet-1
                   swp2          Adapter 3              Internal      Intnet-3
                   swp3          Adapter 4              Internal      Intnet-5
CumulusVX-leaf2    -             Adapter 1              NAT           -
                   swp1          Adapter 2              Internal      Intnet-2
                   swp2          Adapter 3              Internal      Intnet-4
                   swp3          Adapter 4              Internal      Intnet-6
CumulusVX-spine1   -             Adapter 1              NAT           -
                   swp1          Adapter 2              Internal      Intnet-1
                   swp2          Adapter 3              Internal      Intnet-2
                   swp3          Adapter 4 (disabled)   -             -
CumulusVX-spine2   -             Adapter 1              NAT           -
                   swp1          Adapter 2              Internal      Intnet-3
                   swp2          Adapter 3              Internal      Intnet-4
                   swp3          Adapter 4 (disabled)   -             -
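
If you prefer to script these settings, the same wiring can be done with VBoxManage while the VMs are powered off (a sketch for CumulusVX-leaf1 only, using the standard --nicN/--intnetN options; repeat with the network names from the table for the other VMs):
    VBoxManage modifyvm "CumulusVX-leaf1" --nic2 intnet --intnet2 Intnet-1
    VBoxManage modifyvm "CumulusVX-leaf1" --nic3 intnet --intnet3 Intnet-3
    VBoxManage modifyvm "CumulusVX-leaf1" --nic4 intnet --intnet4 Intnet-5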

Test the Network Topology Connections

After you restart the VMs, ping across VMs to test the connections:
  1. Run the following commands from CumulusVX-leaf1:
    • Ping CumulusVX-leaf2:
      cumulus@CumulusVX-leaf1:~$ ping 10.2.1.2
    • Ping CumulusVX-spine1:
      cumulus@CumulusVX-leaf1:~$ ping 10.2.1.3
    • Ping CumulusVX-spine2:
      cumulus@CumulusVX-leaf1:~$ ping 10.2.1.4
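If a ping fails, it is usually because the OSPF adjacencies have not formed yet; you can first check which loopback routes have been learned (standard NCLU and FRR vtysh show commands):
    cumulus@CumulusVX-leaf1:~$ net show route
    cumulus@CumulusVX-leaf1:~$ sudo vtysh -c "show ip route ospf"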

Cumulus - Libvirt and KVM - QEMU


Source: https://docs.cumulusnetworks.com/display/VX/Libvirt+and+KVM+-+QEMU

The following sections describe how to set up a two-leaf/two-spine Cumulus VX topology with QEMU/KVM on a Linux server.

These sections assume a basic level of Linux and KVM experience. For detailed instructions, refer to the QEMU and KVM documentation. 


Overview

Performing virtualization in Linux requires three components:
  • Libvirt provides an abstraction language to define a VM. It uses XML to represent and define the VM.
  • KVM works exclusively with QEMU and performs hardware acceleration for x86 VMs with Intel and AMD CPUs. The pair is often called KVM/QEMU or just KVM. 
  • QEMU is a machine emulator that allows the host machine to emulate the CPU architecture of the guest machine. Because QEMU does not provide hardware acceleration, it works well with KVM.

Install libvirt

  1. Review the Linux version of the host:

    This guide is validated and verified for Ubuntu Trusty 14.04.5 LTS starting from a clean install.
    local@host:~$ uname -a
    Linux ubuntu-trusty-64 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  2. Run the following commands to install libvirt:
    local@host:~$ sudo add-apt-repository ppa:linuxsimba/libvirt-udp-tunnel
    local@host:~$ sudo apt-get update -y
    local@host:~$ sudo apt-get install libvirt-bin libvirt-dev qemu-utils qemu
    local@host:~$ sudo /etc/init.d/libvirt-bin restart

    The linuxsimba/libvirt-udp-tunnel package repository provides an updated libvirtd version that includes enhancements required to launch Cumulus VX. The example below shows the installation output:
    local@host:~/$ sudo apt-get install libvirt-bin libvirt-dev qemu-utils qemu
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    tree is already the newest version.
    git is already the newest version.
    qemu-utils is already the newest version.
    qemu-utils set to manually installed.
    The following packages were automatically installed and are no longer required:
      bsdtar libarchive13 liblzo2-2 libnettle4 linux-headers-4.2.0-34
      linux-headers-4.2.0-34-generic linux-image-4.2.0-34-generic
      linux-image-extra-4.2.0-34-generic ruby-childprocess ruby-erubis ruby-ffi
      ruby-i18n ruby-log4r ruby-net-scp ruby-net-ssh
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      libvirt0 python-paramiko python-support qemu-system qemu-system-arm
      qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-sparc
      qemu-user sshpass
    Suggested packages:
      radvd lvm2 qemu-user-static samba vde2 openbios-ppc openhackware qemu-slof
    The following NEW packages will be installed:
      htop python-paramiko python-support qemu qemu-system qemu-system-arm
      qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-sparc
      qemu-user sshpass
    The following packages will be upgraded:
      ansible libvirt-bin libvirt-dev libvirt0
    4 upgraded, 12 newly installed, 0 to remove and 18 not upgraded.
    Need to get 31.1 MB of archives.
    After this operation, 166 MB of additional disk space will be used.
    Do you want to continue? [Y/n] Y
  3. After the installation process is complete, log out, then log back in to verify the libvirt version.

    In this guide, libvirt 1.2.16 was verified.
    local@host:~# libvirtd --version
    libvirtd (libvirt) 1.2.16
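
It is also worth confirming that hardware acceleration is available to KVM before creating VMs (a generic check: the CPU must expose the vmx or svm flag and /dev/kvm must exist):
    local@host:~$ egrep -c '(vmx|svm)' /proc/cpuinfo
    local@host:~$ ls -l /dev/kvm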

Configure Cumulus VX VMs with QEMU/KVM


This section assumes that you have installed QEMU/KVM and the Cumulus VX disk image for KVM. For download locations and steps, refer to the Getting Started page.

This configuration is tested on a server running Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux with 3.2.0-4-amd64 #1 SMP processors.
After you follow the steps below, the interfaces will be connected as follows:
  • leaf1:swp1--->spine1:swp1
  • leaf1:swp2--->spine2:swp1
  • leaf2:swp1--->spine1:swp2
  • leaf2:swp2--->spine2:swp2
  1. Copy the qcow2 image onto a Linux server four times to create the four VMs, then name them as follows:
    • leaf1.qcow2
    • leaf2.qcow2
    • spine1.qcow2
    • spine2.qcow2
  2. Power on leaf1.qcow2 and configure it as follows:
    sudo /usr/bin/kvm   -curses                             \
                        -name leaf1                       \
                        -pidfile leaf1.pid                \
                        -smp 1                              \
                        -m 256                              \
                        -net nic,vlan=10,macaddr=00:01:00:00:01:00,model=virtio \
                        -net user,vlan=10,net=192.168.0.0/24,hostfwd=tcp::1401-:22 \
                        -netdev socket,udp=127.0.0.1:1602,localaddr=127.0.0.1:1601,id=dev0 \
                        -device virtio-net-pci,mac=00:02:00:00:00:01,addr=6.0,multifunction=on,netdev=dev0,id=swp1 \
                        -netdev socket,udp=127.0.0.1:1606,localaddr=127.0.0.1:1605,id=dev1 \
                        -device virtio-net-pci,mac=00:02:00:00:00:02,addr=6.1,multifunction=off,netdev=dev1,id=swp2 \
                        leaf1.qcow2
  3. Power on leaf2.qcow2 and configure it as follows:
    sudo /usr/bin/kvm   -curses                             \
                        -name leaf2                       \
                        -pidfile leaf2.pid                \
                        -smp 1                              \
                        -m 256                              \
                        -net nic,vlan=10,macaddr=00:01:00:00:02:00,model=virtio \
                        -net user,vlan=10,net=192.168.0.0/24,hostfwd=tcp::1402-:22 \
                        -netdev socket,udp=127.0.0.1:1604,localaddr=127.0.0.1:1603,id=dev0 \
                        -device virtio-net-pci,mac=00:02:00:00:00:03,addr=6.0,multifunction=on,netdev=dev0,id=swp1 \
                        -netdev socket,udp=127.0.0.1:1608,localaddr=127.0.0.1:1607,id=dev1 \
                        -device virtio-net-pci,mac=00:02:00:00:00:04,addr=6.1,multifunction=off,netdev=dev1,id=swp2 \
                        leaf2.qcow2
  4. Power on spine1.qcow2 and configure it as follows:
    sudo /usr/bin/kvm   -curses                             \
                    -name spine1                       \
                    -pidfile spine1.pid                \
                    -smp 1                              \
                    -m 256                              \
                    -net nic,vlan=10,macaddr=00:01:00:00:03:00,model=virtio \
                    -net user,vlan=10,net=192.168.0.0/24,hostfwd=tcp::1403-:22 \
                    -netdev socket,udp=127.0.0.1:1601,localaddr=127.0.0.1:1602,id=dev0 \
                    -device virtio-net-pci,mac=00:02:00:00:00:05,addr=6.0,multifunction=on,netdev=dev0,id=swp1 \
                    -netdev socket,udp=127.0.0.1:1603,localaddr=127.0.0.1:1604,id=dev1 \
                    -device virtio-net-pci,mac=00:02:00:00:00:06,addr=6.1,multifunction=off,netdev=dev1,id=swp2 \
                    spine1.qcow2
  5. Power on spine2 and configure it as follows:
    sudo /usr/bin/kvm   -curses                             \
                    -name spine2                       \
                    -pidfile spine2.pid                \
                    -smp 1                              \
                    -m 256                              \
                    -net nic,vlan=10,macaddr=00:01:00:00:04:00,model=virtio \
                    -net user,vlan=10,net=192.168.0.0/24,hostfwd=tcp::1404-:22 \
                    -netdev socket,udp=127.0.0.1:1605,localaddr=127.0.0.1:1606,id=dev0 \
                    -device virtio-net-pci,mac=00:02:00:00:00:07,addr=6.0,multifunction=on,netdev=dev0,id=swp1 \
                    -netdev socket,udp=127.0.0.1:1607,localaddr=127.0.0.1:1608,id=dev1 \
                    -device virtio-net-pci,mac=00:02:00:00:00:08,addr=6.1,multifunction=off,netdev=dev1,id=swp2 \
                    spine2.qcow2

    The QEMU/KVM commands used here are minimal. You can add more parameters, such as -enable-kvm, -serial or -monitor, as needed.
    Bridging Switch Port Interfaces

    If you intend to bridge the switch ports in the VM, place each switch port in the bridge in its own virtual network on the host. Otherwise, you might see this error:
    br0: received packet on swp1 with own address as source address
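
    Note how the UDP socket netdevs pair up: each point-to-point link uses mirrored udp/localaddr ports, for example leaf1:swp1 (localaddr 1601, remote 1602) connects to spine1:swp1 (localaddr 1602, remote 1601). The hostfwd entries also give you SSH access to each VM from the host, using the default Cumulus VX credentials shown earlier, for example:
    local@host:~$ ssh -p 1401 cumulus@127.0.0.1   # leaf1
    local@host:~$ ssh -p 1403 cumulus@127.0.0.1   # spine1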

Next Steps


This section assumes that you are configuring a two-leaf/two-spine network topology and that you have completed the steps in Configure Cumulus VX VMs with QEMU/KVM above, so that you now have four VMs (leaf1, leaf2, spine1, and spine2).
After you create all four VMs, follow the steps in Create a Two-Leaf, Two-Spine Topology to configure the network interfaces and routing.