OpenStack Nova Compute

April 26th, 2016 | SpinDance | Hosting, Systems

In this portion of the OpenStack series we will be focusing on Nova Compute.

The Nova Compute project is responsible for creating virtual machines on the hypervisor. Nova is made up of the following main components:

  • Nova API – The entry point through which clients and all of the other services interact with Nova.
  • Nova Scheduler – In charge of figuring out which compute node can host the virtual machine being created.
  • Nova Conductor – In charge of mediating communication between Nova Compute and the database.
  • Messaging – The backbone for all inter-service communication in OpenStack.
  • SQL Database – Stores details about the OpenStack infrastructure for Nova, such as VM state, IPs, and configuration.
  • Nova Compute – The piece that makes the magic of spinning up VMs happen on the hypervisor.

For further in-depth details about each component and other pulled-in dependencies, check out the Nova documentation.
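The end-to-end flow can be exercised with the nova CLI. A minimal sketch, assuming you are already authenticated and that a "CentOS7" image and an "m1.small" flavor exist in your deployment (both names are illustrative):

```shell
# Ask Nova to boot a VM; the scheduler picks a compute node,
# the conductor writes state to the database, and nova-compute
# on the chosen node talks to the hypervisor.
nova boot --image CentOS7 --flavor m1.small demo-vm

# Watch the VM state change from BUILD to ACTIVE
nova list

# As an admin, see which compute node the scheduler chose
nova show demo-vm | grep hypervisor_hostname
```

These commands require a running OpenStack deployment and valid credentials in your environment.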

Nova is a hypervisor?

When I first heard about OpenStack, all I knew was that it was an open source hypervisor.  Well, that isn’t quite what is going on here.  The Nova project specifically isn’t a hypervisor; it’s just Python code that interacts with an underlying hypervisor.  This means that, in theory, you could use a range of different hypervisor technologies on the host.  …

Read more

OpenStack Glance

April 21st, 2016 | SpinDance | Hosting


Anytime you spin up a server on a cloud provider, you are able to select from their predefined configurations, each consisting of a specific memory and disk size.  During this stage you can also select which operating system you want your server to use.  The service that handles operating system images in the OpenStack world is called Glance.


Like the other OpenStack projects, Glance can be interacted with in one of three ways: the API, the GUI, or the CLI.  In the examples below we will use the CLI client.  It is important to note that Glance supports a variety of disk formats; for an exhaustive list and further information about Glance, you can consult the documentation.  If you don’t define a storage location for your images, Glance will store them locally by default at /var/lib/glance/images.  The recommended storage platform to use is Swift.

# Create an image
glance image-create --name "CentOS7" --is-public true --disk-format qcow2 --container-format bare --file /PATH/TO/IMAGE.img

# View images
glance image-list

# Delete an image
glance image-delete <uuid>

If you have used cloud providers in the past, you might have noticed that, magically, your root user has a password you can log in with.  In some cases you are able to inject your public key so you can SSH in with it.  …

Read more

OpenStack Swift

April 19th, 2016 | SpinDance | Hosting, Systems


Swift addresses the use case of storing static data. Swift lets you upload a file, which will be available when you need it again.  On the surface this sounds very simple, but let’s take a look at what is actually going on here.

Anything you put into Swift is an object. All objects exist in containers. Containers themselves belong to an account.  It is important to point out that Swift does not allow nesting of containers, so you will have to find other forms of organization.  When you put an object into Swift, you can set different metadata on the object to help with later retrieval or organization.  For example, one important piece of metadata is expiration.  This allows you to have your Swift object automatically deleted after a certain amount of time.
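These ideas map directly onto the swift CLI. A minimal sketch, assuming you are already authenticated (container and file names are illustrative):

```shell
# Upload a file as an object; the container is created if it doesn't exist
swift upload my-container report.pdf

# Set expiration metadata: delete the object automatically after one day
swift post my-container report.pdf -H "X-Delete-After: 86400"

# List the objects in the container
swift list my-container
```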

The rest of this post will go into a little more detail about what happens behind the scenes with Swift.

Swift Components

Swift is made up of multiple components (micro-services) which communicate with each other in order to make the Swift magic happen.  Further details can be found in the Swift documentation.

  • Object Service – Saves and deletes objects
  • Container Service – Lists the objects in a container
  • Account Service – Lists the containers in an account
  • Proxy Service – Exposes the public API and routes requests

Just like the other OpenStack components,…

Read more

OpenStack Cinder

April 14th, 2016 | Erik Kranzusch | Hosting, Systems


Cinder is the block storage component of OpenStack.  It’s built using LVM and iSCSI, but also supports a wide variety of vendor-specific plugins.  It runs on port 8776.  Like everything else in OpenStack, volumes are owned by the project you are scoped to.  Nova and Cinder work together to host and present storage to tenants.  This means some commands will use the cinder client, and some will use the nova client.

Cinder’s components

  • cinder-api (used by Horizon and client)
  • relational database (MySQL/MariaDB/Percona)
  • cinder-scheduler (responsible for allocating block storage)
  • cinder-volume (reads health of the volume group)
  • open-iscsi (manages iSCSI connections with the Nova/compute nodes)
  • tgt/tgtd (the iSCSI target daemon on Ubuntu/CentOS respectively)
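The split between the two clients looks like this in practice. A minimal sketch, assuming a running VM named my-vm (names and the volume UUID are illustrative, and flag names vary slightly between Cinder API versions):

```shell
# Create a 10 GB volume with the cinder client
cinder create --display-name my-volume 10

# Attach it to a VM with the nova client, exposing it as /dev/vdb
nova volume-attach my-vm <volume-uuid> /dev/vdb
```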

OpenStack’s storage terminology

Ephemeral Storage

Ephemeral is considered non-persistent storage. It will be lost as soon as the VM is terminated. It is normally used to hold the Operating System.

Persistent Storage

There are currently three types of persistent storage in OpenStack.

  1. Block (Cinder)
    • Used for local persistent storage, which can be easily moved to another VM in the event of failure.
    • Example usage: A database
  2. Object (Swift)
    • Used for Write Once Read Many (WORM) storage.  API driven.
    • Example usage: OS Golden Image
  3. Shared File System (Manila)
    • Used like block storage,

Read more

OpenStack Neutron

April 12th, 2016 | Erik Kranzusch | Hosting, Systems


Neutron is the networking piece of OpenStack.  It actually works a bit differently than one would expect, because of the use of an abstraction layer.  This makes it possible to do some very interesting things that would break many of the normally accepted rules of networking.  Neutron was formerly known as Quantum; the name was changed for legal reasons.  Like all other pieces of OpenStack, it’s based on existing Linux utilities.  Specifically, Neutron uses the iproute2 suite (the ip command), iptables, and bridge-utils.  The Neutron project runs on port 9696.

Neutron’s components

  • Neutron Server (API)
  • Relational DB (MySQL/MariaDB/Percona)
  • Message Queue (AMQP)
  • Plugin Agent
  • DHCP Agent
  • L3 Agent
  • Modular Layer 2 (ML2) Plugin framework
    • Allows you to use more than one plugin at the same time

Networking has quite a bit of jargon associated with it.  OpenStack has a bit of its own as well.  Let’s define some of the terms:

  • Provider network – Provides access to an external network resource
  • Tenant network – Local network for spinning up VMs
  • Flat network – A network where all hosts share the same broadcast domain.  No segmentation (e.g. VLAN tagging) is applied.
  • Local network – A network that is only available on a single host.  Useful for proof-of-concept work or troubleshooting only.
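Some of these terms map directly onto neutron CLI commands. A minimal sketch (names are illustrative; the provider commands require admin rights, and the physical network name is defined by your deployment):

```shell
# Create a tenant network and a subnet on it
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet

# Create a flat provider network mapped to a physical interface
neutron net-create public-net --provider:network_type flat \
  --provider:physical_network physnet1
```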

Read more

OpenStack Keystone

April 6th, 2016 | Erik Kranzusch | Hosting, Systems


Keystone is a very important piece of OpenStack, since it handles authentication for all services.  It’s also the only service that needs to use two ports: Keystone uses port 5000 for public queries, and 35357 for administration.  It is also the only component of OpenStack that does not use eventlet; instead, it uses Apache to host the API.

Getting an unscoped token from Keystone
To authenticate, you need three things:

  • Username
  • Password
  • Authorization URL

If you successfully pass these three items, you will get an unscoped token back, along with a list of the projects your user belongs to.
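Against the Keystone v3 API this is a single POST. A minimal sketch, assuming a hypothetical endpoint keystone.example.com and illustrative demo credentials:

```shell
# Request an unscoped token: username + password + auth URL, no project
curl -s -i http://keystone.example.com:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
        "password": {"user": {"name": "demo",
          "domain": {"name": "Default"},
          "password": "secret"}}}}}'
# The token is returned in the X-Subject-Token response header
```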

The Service Catalog

This brings up one of Keystone’s important functions.  The “Service Catalog” is exactly what you would expect it to be.  It’s a list of the projects, services, and endpoints available to your user in OpenStack.  Since OpenStack at its core is very modular, the Service Catalog is a way of listing which pieces are available for use.  It’s basically a service map.

Getting a properly scoped token from Keystone

You can’t do much with an unscoped token.  In OpenStack, practically everything is owned by a project.  This means that, since the unscoped token is not tied to a project, there’s not a lot you can do with it.  To get a scoped token you need the following:

  • Username
  • Password
  • Authorization URL
  • Project
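The request itself is the same password authentication as before, with a "scope" block added that names the project. A sketch against the same hypothetical endpoint and illustrative credentials:

```shell
# Request a token scoped to a specific project
curl -s -i http://keystone.example.com:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {
        "identity": {"methods": ["password"],
          "password": {"user": {"name": "demo",
            "domain": {"name": "Default"},
            "password": "secret"}}},
        "scope": {"project": {"name": "demo-project",
          "domain": {"name": "Default"}}}}}'
```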


Each project / component of OpenStack will have a /etc/$PROJECT_NAME/policy.json file.  …

Read more

OpenStack Overview

April 1st, 2016 | Erik Kranzusch | Hosting, Systems

This is the start of a blog series about OpenStack. During this series we will cover the different components that create OpenStack from a high-level point of view.

OpenStack is a collection of open source projects, modeled after Amazon’s AWS suite. It is written in Python, and the source code is freely available.

The full documentation for OpenStack is available online.

The following blog posts are co-authored by both Erik Kranzusch and David Rodriguez.

Why OpenStack?

OpenStack was developed to address the industry’s need for an open source cloud.  Something similar to Amazon’s AWS was needed for companies that are not able to leverage that particular cloud.

The main advantages of OpenStack are:

  • Can run on commodity hardware
  • Built with “Cloudy” mentality
  • Supports a self-service model
  • Doesn’t have the added licensing costs of enterprise solutions
  • Can be introduced into a traditional IT environment smoothly

Paradigm Shift

Old cloud:
The old cloud way of thinking is also referred to as “Mode 1”. This would be similar to a VMware-style deployment. It is traditionally GUI driven and ticket based, and hosts usually scale vertically. This type of cloud is often expensive and proprietary, and requires smart hardware to handle fault tolerance.…

Read more