Instances

Instances are the running virtual machines within an OpenStack cloud. This section deals with how to work with them and their underlying images, their network properties, and how they are represented in the database.

 Starting Instances

To launch an instance, you need to select an image, a flavor, and a name. The name need not be unique, but your life will be simpler if it is, because many tools will accept the name in place of the UUID as long as the name is unambiguous. You can start an instance from the dashboard with the Launch Instance button on the Instances page, or by selecting the Launch action next to an image or snapshot on the Images & Snapshots page.

On the command line, do this:

$ nova boot --flavor <flavor> --image <image> <name>

There are a number of optional items that can be specified. You should read the rest of this section before trying to start an instance, but this is the base command that later details are layered upon.
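As a sketch of how those optional items layer onto the base command, the following combines a key name and a security group with a flavor, image, and instance name; every value here is a placeholder, and your own flavors, images, and key names will differ:

```shell
# Hypothetical example; the flavor, image, key, and instance names are
# all placeholders, not values that exist in any particular cloud.
nova boot --flavor m1.small \
          --image ubuntu-cloudimage \
          --key_name mykey \
          --security-groups default \
          test-instance
```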

To delete instances from the dashboard, select the Terminate instance action next to the instance on the Instances page. From the command line, do this:

$ nova delete <instance-uuid>

It is important to note that powering off an instance does not terminate it in the OpenStack sense.
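The distinction is visible on the command line. A sketch, using a placeholder instance name, of powering off versus terminating:

```shell
# Power the instance off; it moves to the SHUTOFF state but still exists,
# keeps its disk and addresses, and still counts against your quota.
nova stop test-instance

# Terminate the instance in the OpenStack sense, freeing its resources.
nova delete test-instance
```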

 Instance Boot Failures

If an instance fails to start and immediately moves to an error state, there are a few different ways to track down what has gone wrong. Some of these can be done with normal user access, while others require access to your log server or compute nodes.

The simplest reasons for an instance to fail to launch are quota violations or the scheduler being unable to find a suitable compute node on which to run it. In these cases, the error is apparent in the output of nova show on the faulted instance:

$ nova show test-instance
+------------------------+-----------------------------------------------------+
| Property               | Value                                               |
+------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig      | MANUAL                                              |
| OS-EXT-STS:power_state | 0                                                   |
| OS-EXT-STS:task_state  | None                                                |
| OS-EXT-STS:vm_state    | error                                               |
| accessIPv4             |                                                     |
| accessIPv6             |                                                     |
| config_drive           |                                                     |
| created                | 2013-03-01T19:28:24Z                                |
| fault                  | {u'message': u'NoValidHost', u'code': 500, ...}     |
| flavor                 | xxl.super (11)                                      |
| hostId                 |                                                     |
| id                     | 940f3b2f-bd74-45ad-bee7-eb0a7318aa84                |
| image                  | quantal-test (65b4f432-7375-42b6-a9b8-7f654a1e676e) |
| key_name               | None                                                |
| metadata               | {}                                                  |
| name                   | test-instance                                       |
| security_groups        | [{u'name': u'default'}]                             |
| status                 | ERROR                                               |
| tenant_id              | 98333a1a28e746fa8c629c83a818ad57                    |
| updated                | 2013-03-01T19:28:26Z                                |
| user_id                | a1ef823458d24a68955fec6f3d390019                    |
+------------------------+-----------------------------------------------------+

In this case, looking at the fault message shows NoValidHost, indicating that the scheduler was unable to match the instance requirements.

If nova show does not sufficiently explain the failure, searching for the instance UUID in the nova-compute.log on the compute node it was scheduled on or the nova-scheduler.log on your scheduler hosts is a good place to start looking for lower-level problems.
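A plain grep for the UUID is usually the quickest way in. The sketch below builds a throwaway fake log line so it is self-contained; against a real deployment you would point grep at /var/log/nova/nova-compute.log and /var/log/nova/nova-scheduler.log instead (exact paths vary by distribution):

```shell
# Self-contained illustration: this log file and its single line are
# fabricated stand-ins for a real nova-compute.log, which typically lives
# under /var/log/nova/ on the compute node.
UUID=940f3b2f-bd74-45ad-bee7-eb0a7318aa84
LOG=$(mktemp)
printf 'ERROR nova.scheduler [instance: %s] NoValidHost\n' "$UUID" > "$LOG"

# The actual technique: print every log line mentioning the instance UUID.
grep "$UUID" "$LOG"
```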

Using nova show as an admin user will show the compute node the instance was scheduled on as hostId. If the instance failed during scheduling, this field is blank.

 Using Instance-Specific Data

There are two main types of instance-specific data: metadata and user data.

 Instance metadata

For Compute, instance metadata is a collection of key-value pairs associated with an instance. Compute reads and writes these key-value pairs at any time during the instance lifetime, from inside and outside the instance, when the end user uses the Compute API to do so. However, you cannot query the instance-associated key-value pairs through the metadata service that is compatible with the Amazon EC2 metadata service.
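From inside a running instance, that EC2-compatible metadata service answers on the well-known link-local address; it exposes EC2-style items such as the instance ID, not these Compute key-value pairs. A sketch (it only works from within an instance):

```shell
# Run from inside an instance. The first request lists the EC2-style
# metadata categories; the second fetches one of them. Compute's arbitrary
# key-value metadata is not served here.
curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/instance-id
```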

As an example of instance metadata, users can generate and register SSH keys using the nova command:

$ nova keypair-add mykey > mykey.pem

This creates a key named mykey, which you can associate with instances. The file mykey.pem is the private key, which should be saved to a secure location because it allows root access to instances the mykey key is associated with.
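A common precaution, sketched below with a placeholder user name and address, is to lock down the file's permissions right away; ssh will refuse an identity file that other users can read:

```shell
# Tighten permissions so only the owner can read the private key.
chmod 600 mykey.pem

# Use it to log in to an instance launched with this keypair
# (the user name and IP address here are placeholders).
ssh -i mykey.pem ubuntu@10.0.0.2
```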

Use this command to register an existing key with OpenStack:

$ nova keypair-add --pub-key mykey.pub mykey

Note: You must have the matching private key to access instances associated with this key.

To associate a key with an instance on boot, add --key_name mykey to your command line. For example:

$ nova boot --image ubuntu-cloudimage --flavor 2 --key_name mykey myimage

When booting a server, you can also add arbitrary metadata so that you can more easily identify it among other running instances. Use the --meta option with a key-value pair, where you can make up the string for both the key and the value. For example, you could add a description of the server:

$ nova boot --image=test-image --flavor=1 \
  --meta description='Small test image' smallimage

When viewing the server information, you can see the metadata included on the metadata line:

$ nova show smallimage
+------------------------+-----------------------------------------+
|     Property           |                   Value                 |
+------------------------+-----------------------------------------+
|   OS-DCF:diskConfig    |               MANUAL                    |
| OS-EXT-STS:power_state |                 1                       |
| OS-EXT-STS:task_state  |                None                     |
|  OS-EXT-STS:vm_state   |               active                    |
|    accessIPv4          |                                         |
|    accessIPv6          |                                         |
|      config_drive      |                                         |
|     created            |            2012-05-16T20:48:23Z         |
|      flavor            |              m1.small                   |
|      hostId            |             de0...487                   |
|        id              |             8ec...f915                  |
|      image             |             natty-image                 |
|     key_name           |                                         |
|     metadata           | {u'description': u'Small test image'}   |
|       name             |             smallimage                  |
|    private network     |            172.16.101.11                |
|     progress           |                 0                       |
|     public network     |             10.4.113.11                 |
|      status            |               ACTIVE                    |
|    tenant_id           |             e83...482                   |
|     updated            |            2012-05-16T20:48:35Z         |
|     user_id            |          de3...0a9                      |
+------------------------+-----------------------------------------+