OpenStack-Multa Part 2: configuring your infrastructure VMs using Ansible

In the first part of this blog series, I covered why you might deploy OpenStack on top of OpenStack using Heat and Ansible.  If you followed part one, you’ll have deployed 15 VMs using Heat, ready to be prepped for your OpenStack installation…

In this post, I’ll cover the roles of various playbooks and explain how they prepare the VMs for the installation of OpenStack.

Requirements

We’ll be using Ansible to perform the VM configuration.  I recommend installing the latest version of Ansible – this was initially developed with Ansible 2.4.1.0 running on macOS High Sierra.

Github Repo

Clone the following repo to your deployment machine

https://github.com/geoffhigginbottom/openstack-multa

Authentication

Firstly, you’ll need to configure your deployment machine so Ansible can authenticate to your cloud.  Ansible uses clouds.yaml to store this connection information – you’ll find a ‘clouds_example.yaml’ file which can be used to create your own clouds.yaml with appropriate account settings. 
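
For reference, a minimal clouds.yaml follows the standard os-client-config layout shown below. The cloud name, auth URL, region and credentials are placeholders for your own account details (some providers, including Rackspace, may need slightly different auth settings):

clouds:
  mycloud:
    auth:
      auth_url: https://identity.example.com:5000/v3
      username: my-user
      password: my-password
      project_name: my-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne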

You’ll also need to set up group_vars/general/vault.yml by using the vault_example.yml file as a template.  This file stores your chosen password and allocates it to a user account, which is created once OpenStack is configured.  It also contains settings to enable mail alerting which, due to the long-running nature of the deployment, is useful for monitoring progress.
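
As an illustration only (the authoritative variable names are in vault_example.yml), the file ends up holding a handful of values along these lines, and can then be protected by running ansible-vault encrypt group_vars/general/vault.yml:

# Illustrative variable names only – use vault_example.yml as the template
vault_user_password: "SuperSecretPassword"
vault_notify_email: "you@example.com"
vault_smtp_server: "smtp.example.com"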

Notes:

  • refer to the README.md on GitHub for more details about configuring and using clouds.yaml and vault.yml
  • whilst these files should be prevented from syncing back to GitHub by their inclusion in .gitignore, take care not to expose them if you create your own repo

Initially deploy it in phases

The site.yml file calls roles in the correct order, and in theory can be run in one hit using the following command:

ansible-playbook site.yml

However, it’s recommended you break the deployment down to enable testing of each phase. Once you’re satisfied that each role works as expected, run them all via site.yml.
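
For reference, site.yml is essentially a chain of the individual playbooks covered below, run in order. The following sketch shows the idea; check the file in the repo for the authoritative list and ordering:

- import_playbook: gw.yml
- import_playbook: volume-group.yml
- import_playbook: ntp-server.yml
- import_playbook: ntp-client.yml
- import_playbook: generate-key.yml
- import_playbook: fetch-key.yml
- import_playbook: distribute-key.yml
- import_playbook: swift-disks.yml
- import_playbook: osa-prep.yml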

Gateway VM

The gateway role sets the VM up as a simple Linux-based router, which’ll provide the final hop for Neutron networks within the OpenStack deployment. This will ensure the VMs have outbound internet access and prevent public cloud network restrictions from interfering with traffic.

The second function of this – and the reason it’s the first VM we deploy – is that it also acts as a proxy server for the underlying host VMs.  A large amount of code needs to be pulled and installed on every VM – using a caching proxy can significantly reduce the time it takes to install the various components.

To deploy the gateway role, run the following command from the host which has Ansible installed and your clone of the Git repo, updated as per the instructions above:

ansible-playbook gw.yml
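
To give a flavour of what the gateway role does, the following is a minimal sketch of the routing and caching proxy setup. The interface name eth0 and the absence of any firewall hardening are assumptions for illustration, not the actual role:

# Sketch of the kind of tasks the gateway role performs
- name: Enable IPv4 forwarding so the VM can route traffic
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present

- name: Masquerade traffic leaving via the public interface (assumed eth0)
  iptables:
    table: nat
    chain: POSTROUTING
    out_interface: eth0
    jump: MASQUERADE

- name: Install the Squid caching proxy
  apt:
    name: squid
    state: present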

Volume groups

OpenStack-Ansible expects to find volume groups on the controller and compute nodes, so this role sets up the appropriate volume groups on each of those nodes.

Setting up volume groups entails running the following command from your host:

ansible-playbook volume-group.yml
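
Under the hood this boils down to Ansible’s lvg module. The sketch below assumes a second disk at /dev/xvdb and a group called cinder-volumes purely for illustration; the real device and group names for each node type live in the repo:

- name: Create the volume group expected by OpenStack-Ansible
  lvg:
    vg: cinder-volumes    # illustrative name – see the role for the real ones
    pvs: /dev/xvdb        # assumed device name
    state: present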

NTP server / client

We need to ensure all host VMs are using a common time source and are in sync.  Whilst the underlying public cloud infrastructure should take care of this, I believe it’s best practice to actively manage it.  By default, controller1 is used as the NTP server – all other nodes use it as their time source.

To configure the NTP server and clients, run the following commands from your deployment host:

ansible-playbook ntp-server.yml

ansible-playbook ntp-client.yml
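
The idea is straightforward: install ntpd on controller1 and point every other node at it. The following is a minimal sketch; the package choice and host patterns are mine, not necessarily what the playbooks use:

- hosts: controller1
  become: true
  tasks:
    - name: Install ntpd on the NTP server node
      apt:
        name: ntp
        state: present

- hosts: "all:!controller1"
  become: true
  tasks:
    - name: Install ntpd on the client nodes
      apt:
        name: ntp
        state: present

    - name: Point the clients at controller1 as their time source
      lineinfile:
        path: /etc/ntp.conf
        line: "server controller1 iburst"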

Generate / fetch / distribute key

We need an SSH key on the deployment host, which by default is controller1.  Controller1 uses this key to connect to the other nodes when running OpenStack-Ansible.  We first generate a new key pair on controller1, then fetch the public key before distributing it to all the other nodes.

To create and distribute the SSH keys, run the following commands from your deployment host:

ansible-playbook generate-key.yml

ansible-playbook fetch-key.yml

ansible-playbook distribute-key.yml
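
The pattern behind these three playbooks can be sketched as follows. The module choices and file paths are my assumptions, not necessarily those used in the repo:

- hosts: controller1
  become: true
  tasks:
    - name: Generate an SSH key pair for the root user
      user:
        name: root
        generate_ssh_key: yes
        ssh_key_bits: 4096

    - name: Fetch the public key back to the deployment machine
      fetch:
        src: /root/.ssh/id_rsa.pub
        dest: keys/
        flat: yes

- hosts: all
  become: true
  tasks:
    - name: Add the fetched public key to every node
      authorized_key:
        user: root
        key: "{{ lookup('file', 'keys/id_rsa.pub') }}"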

Swift disks

Each Swift Node has five volumes associated with it, which you need to partition and mount within the host OS before OpenStack-Ansible can use them when deploying Swift.

To prepare the volumes on all swift nodes, run the following command from your deployment host:

ansible-playbook swift-disks.yml 
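
For illustration, the tasks involved look something like the sketch below. The device names and mount points are assumptions (Swift storage is normally backed by XFS), and in practice the role loops over all five volumes per node:

- name: Create an XFS filesystem on each Swift volume
  filesystem:
    fstype: xfs
    dev: "{{ item }}"
  with_items:
    - /dev/xvdb   # assumed device names – check the role for the real list
    - /dev/xvdc

- name: Mount each volume for use by Swift
  mount:
    path: "/srv/{{ item | basename }}"
    src: "{{ item }}"
    fstype: xfs
    state: mounted
  with_items:
    - /dev/xvdb
    - /dev/xvdc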

OSA prep

The OSA prep role prepares each node for the installation of OpenStack.  After installing the many required packages, it gathers the various IP addresses allocated to each VM by the public cloud, then configures the network bridges – rebooting only if required and waiting for all nodes to come back online.

This play relies on the various interfaces files which can be found in the /files folder. These files are used to configure networking for each node type depending on its role. Various update emails are sent during this process as it can take some time to complete.

To prepare all the nodes for the installation of OpenStack run the following command from your deployment host:

ansible-playbook osa-prep.yml
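
The ‘reboot only if required, then wait’ part of the play follows a common Ansible pattern, sketched below with illustrative timings (Ubuntu creates /var/run/reboot-required when a reboot is needed):

- name: Check whether a reboot is required
  stat:
    path: /var/run/reboot-required
  register: reboot_required

- name: Reboot the node if required
  shell: sleep 2 && shutdown -r now "Applying network configuration"
  async: 1
  poll: 0
  when: reboot_required.stat.exists

- name: Wait for the node to come back online
  wait_for_connection:
    delay: 30
    timeout: 600
  when: reboot_required.stat.exists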

Summary

You now have 15 nodes which’ve been prepared and are ready for the installation of OpenStack.  Part three of this blog series will dive deeper into the process of deploying OpenStack onto these nodes.

OpenStack-Multa Part 1: deploying on VMs using Heat

In an ideal world, every OpenStack deployment should have a ‘staging’ version, so production critical workloads can be tested against the next release.  You should also have an R&D environment for more disruptive testing and staff training purposes. But we know this isn’t always possible, so what’s the alternative?

The cost of implementing a new environment for testing is often prohibitive, from a time and money perspective.  So, how do you go about training new staff preparing for their Certified OpenStack Administrator (COA) certification?  How do you test new OpenStack projects without impacting your current production system?

One option is to deploy OpenStack on OpenStack, giving you the option of running multiple versions on the same hardware.  Now, it’s worth pointing out at this early stage that performance will be significantly down on a production system running on dedicated hardware, so this is only aimed at functional testing, and certainly can’t be used to run production workloads.

So assuming you want to deploy this type of testing environment, how do you go about it and what do you need?

In the first part of this blog series, I’m going to cover how to deploy the infrastructure on virtual machines (VMs) using Heat. My upcoming articles will continue to explore the implementation process in detail:

  • Part 2: Configuring the infrastructure VMs using Ansible
  • Part 3: Installing OpenStack using OpenStack-Ansible
  • Part 4: Setting up OpenVPN and accessing your VMs

DEPLOYING USING HEAT

The aim of this deployment is to mirror a production system where possible, whilst allowing for the limitations of running with nested virtualisation.  As an OpenStack Specialist Architect, I have access to the Rackspace Public Cloud powered by OpenStack, so I leverage this for my own testing. My approach can easily be adapted to deploy onto your own OpenStack Private Cloud, or even an alternative public cloud.

You’ll need to prepare your environment by setting up networking and deploying a set of VMs which’ll simulate your physical servers in a real deployment.  The easiest way to achieve this is by using Heat, the OpenStack orchestration service.

NETWORKING

My Heat template will deploy the following networks, which are described below:


PublicNet and ServiceNet are Rackspace public cloud specific networks created by default, but all other networks are created by the Heat template.  The PublicNet network allocates a public IP address to every VM providing access to the Internet, whilst the ServiceNet network provides backend services such as monitoring and backups. I’ll be leveraging the PublicNet network, but ignoring the ServiceNet network in this instance.

The following networks are created to mirror a physical installation of OpenStack deployed using OpenStack-Ansible:

  • Management: used by all hosts and by the containers which run on the controllers; this is the central OpenStack management network.
  • Storage: provides access to Swift object storage via Swift proxies running on controllers, and access to Ceph storage nodes.
  • Storage replication: handles Swift and Ceph storage replication between nodes.
  • Flat: enables provider networks without VLAN tagging, keeping it public cloud friendly.
  • VXLAN: used by OpenStack when creating private networks using VXLAN, as VLAN tagged networks on public cloud aren’t easy to leverage.

In hindsight, a more suitable name for the VXLAN network could’ve been ‘tenant’ or ‘private’ as the network isn’t using VXLAN tech natively at the public cloud layer. However, the virtual hypervisors use VXLAN when creating their own private networks which run across this network.
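
To show the shape of these definitions, here is a generic sketch of how one such network could be declared in a Heat template using standard Neutron resource types. The template in the repo is authoritative and may use provider-specific types and parameters; the CIDR shown matches the management addressing used later in this series:

resources:
  mgmt_net:
    type: OS::Neutron::Net
    properties:
      name: management

  mgmt_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: mgmt_net }
      cidr: 172.29.236.0/22
      enable_dhcp: true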

This is where it gets interesting. For a physical install, the above networks would be used with bridges and possibly bonds to provide High Availability (HA).  If you have full control of the underlying OpenStack installation you’re deploying on, you could create suitable networks which don’t have security layers – such as IP and MAC address filtering – to enable a seamless deployment.  However, if running this deployment on a public cloud, you’ll need to implement an overlay network to bypass security restrictions. You can learn more about advanced networking on public clouds here.

The trick is to deploy what I call the ‘encapsulation network’, followed by modifying the configuration of all VMs to enable VXLAN encapsulation of any network where containers are required to communicate with other containers or hosts. The flat, management, storage and VXLAN networks are encapsulated with a VXLAN tunnel – preventing the public cloud IP and MAC filtering from blocking traffic to/from containers. The configuration of the VXLAN tunnel is covered in the second part of this blog series.

VIRTUAL MACHINES

The following VMs are deployed by the Heat template:

  • ceph1-3: three Ceph storage nodes to provide block storage and Glance backend storage
  • compute1-3: three compute nodes which’ll run QEMU
  • console: admin VM to enable direct access to VMs – can implement a desktop environment to provide local browser
  • controller1-3: three controller nodes mirroring a production configuration
  • gw: Linux router and Squid proxy enabling provider network functionality and improved deployment speeds
  • lb1: HAProxy load balancer to front the three controller nodes
  • swift1-3: three Swift storage nodes to provide Object storage

VOLUMES

To simulate a physical installation of OpenStack, some VMs have additional volumes assigned to them. The controller and compute nodes utilise volume groups, which OpenStack-Ansible expects to find on nodes with these roles.  The Swift nodes have five volumes assigned, which OpenStack-Ansible expects to see mounted as individual partitions – keep an eye out for part two of this blog series, where I’ll be covering the configuration of these partitions. Finally, the Ceph nodes also have five additional volumes assigned, but these do not require pre-configuration as the OpenStack-Ansible tooling handles this for you.

DEPLOYMENT

To deploy the environment, you simply need to use the Heat template “openstack-osa-framework.yaml” to create all of the infrastructure on your OpenStack cloud.
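
Assuming your OpenStack CLI credentials are already configured, launching it is a single command along these lines (the stack name is arbitrary, and the template may require additional parameters):

openstack stack create -t openstack-osa-framework.yaml openstack-multa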

NEXT STEPS

Once you have the networks, VMs and volumes deployed, proceed to part two of this blog series where I’ll cover how to prepare the VMs for OpenStack installation. In the meantime, check out my recent blog on how to implement advanced networking on public clouds.

Implementing advanced networking on public clouds

How do you deploy complex applications which require VLANs or multiple network segments into the public cloud?  How can you make containers communicate with other containers running on different virtual machines (VMs), when the Public Cloud Network Security prevents this type of promiscuous traffic?

If any of the above scenarios sound familiar, then read on…

Whilst attempting to set up an OpenStack test environment recently, where I needed containers to run and communicate on many different VMs, I realised the Public Cloud Networking Security was getting in my way.  The security is there for a good reason: to protect my data from other users. I needed a way to get around these barriers without compromising security.

I had several ‘host’ VMs which could communicate directly via different layer 2 networks.  On these, I had a number of Linux Containers (LXC) which needed to communicate with all other hosts and containers. Host-to-host traffic wasn’t a problem and containers could communicate with the host they were running on but couldn’t break out to other hosts or containers.

The solution was quite simple – VXLAN. So what is VXLAN, and how did it help?

Virtual Extensible Local Area Network (VXLAN) has been around since 2011 and is a method of encapsulating layer 2 network traffic over an existing layer 3 network. The solution was to connect all the physical hosts with an ‘Encapsulation Network’, then encapsulate all the container-related traffic within this network. So, how did we achieve this?

My VMs were running Ubuntu 16.04 which has native support for VXLAN, so it was simply a case of updating the network configurations to enable VXLAN and encapsulate the required traffic. As my environment is used to run test installations of OpenStack using OpenStack-Ansible (OSA), I have many different networks configured on my Public Cloud deployment.

Advanced Cloud Networking

PublicNet and ServiceNet are default networks on our Rackspace Public Cloud. The Encapsulation Network is created specifically to carry all traffic encapsulated by VXLAN.  The remaining networks are specific to my OpenStack test installation and will be more thoroughly covered in a follow-on article.

Within my VMs, there are bridges created for each of the ‘Flat’, ‘Management’, ‘Storage’, ‘Replication’, and ‘VXLAN’ networks. Note, the VXLAN network is used within OpenStack and shouldn’t be confused with the ‘Encapsulation Network’, which also uses VXLAN technology.

By following a standard configuration approach, I’ve mirrored a standard deployment as closely as possible, but then added the VXLAN encapsulation as an extra layer. Each VM’s configuration needs to be updated to enable the Encapsulation Network, followed by the creation of an encapsulated interface for every network, before finally mapping a bridge to each one.

This is the configuration of the Encapsulation Network Interface. Whilst it was initially allocated via DHCP by the Rackspace Public Cloud, it now needs a Static IP – so I used Ansible automation to set it within the VM.

auto eth2
iface eth2 inet static
    address 10.240.0.2
    netmask 255.255.252.0

Then we created an interface to encapsulate the ‘management’ traffic, as follows:

auto encap-mgmt
iface encap-mgmt inet manual
    pre-up ip link add encap-mgmt type vxlan id 236 group 239.0.0.236 dev eth2 ttl 5 dstport 4789 || true
    up ip link set $IFACE up
    down ip link set $IFACE down
    post-down ip link del encap-mgmt || true

The key VXLAN settings are:

ip link add encap-mgmt type vxlan id 236 group 239.0.0.236 dev eth2 ttl 5 dstport 4789
  • id: the Segment ID or VXLAN Logical Network ID (similar to a VLAN Tag)
  • group: the Multicast Group, which uses a class D address (I set the last octet to match the third octet, but any address in the range 224.0.0.0 to 239.255.255.255 can be used)
  • dev: the device to use, in this case eth2
  • ttl: needs to be set >1 for multicast and large enough to allow traversal of the DC networking
  • dstport: 4789 is the default port for VXLAN traffic, as assigned by IANA

And finally, we create a bridge which uses the ‘encap-mgmt’ interface: 

auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports encap-mgmt
    offload-sg off
    address 172.29.236.2
    netmask 255.255.252.0

All the configuration was handled with Ansible, which took the IPs assigned by the Public Cloud and converted them into static IPs. The interfaces would need to be taken down and brought back up for the settings to take effect; however, as I was applying the configuration – along with a raft of updates and other changes – a full reboot of each VM was triggered, ensuring the new configuration was activated.
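
As a rough illustration of that step (the task and template names here are hypothetical, not taken from my playbooks), the DHCP-assigned address can be read from Ansible’s gathered facts and rendered into a static interfaces fragment:

- name: Render a static configuration for the encapsulation interface
  template:
    src: encap-interface.j2          # hypothetical template name
    dest: /etc/network/interfaces.d/eth2.cfg
  vars:
    encap_address: "{{ ansible_eth2.ipv4.address }}"
    encap_netmask: "{{ ansible_eth2.ipv4.netmask }}"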

Once back online the Encapsulation Network began carrying all the traffic for the various networks, wrapped in a VXLAN tunnel. The tunnel enables the traffic to get past cloud security restrictions whilst keeping your data safe from other users.

Keep an eye out for my upcoming blogs; the first of the series will be homing in on deploying the infrastructure on VMs using Heat…