Warning: This document describes an old release.
On this page, you will again test the system remotely with the cloud client. This time, however, a virtual machine will actually be started for you.
Back to the cloud client
Several pages ago, we set up and tested the cloud client against the service. Now we are ready to revisit that client installation and try again, but with a real VM. First off, run the status and list commands to ensure that the IaaS and repository services are still running correctly.
$ cd nimbus-cloud-client/
$ ./bin/cloud-client.sh --status

Querying for ALL instances.
There's nothing running on this cloud that you own.

$ ./bin/cloud-client.sh --list

No files.
If these commands succeeded, we can move on and attempt to upload a real VM image to the repository.
It is recommended to start with the nimbus-z2c image that you used earlier to test the VMM. If you don't still have it, fetch it again.
$ wget http://www.nimbusproject.org/downloads/nimbus-z2c.gz
$ gunzip nimbus-z2c.gz
Now use the cloud-client to upload the VM image to the Cumulus repository.
$ ./bin/cloud-client.sh --transfer --sourcefile nimbus-z2c

Transferring
  - Source: nimbus-z2c
  - Destination: cumulus://Repo/VMS/04NjWi75iz1TzNf4Y3zvU/nimbus-z2c
Now the list operation should show the new image.
$ ./bin/cloud-client.sh --list

[Image] 'nimbus-z2c'
    Read/write
    Modified: Jun 7 2010 @ 12:43
    Size: 288358400 bytes (~275 MB)
Now we are ready to boot this image and see if we can ssh into it.
$ ./bin/cloud-client.sh --run --name nimbus-z2c --hours 1

Launching workspace.

Workspace Factory Service:
    https://nimbus.example.edu:8445/wsrf/services/WorkspaceFactoryService

Creating workspace "vm-001"... done.

IP address: 188.8.131.52
Hostname: x001.nimbus.example.edu
Start time: Mon Jul 05 21:27:20 CDT 2010
Shutdown time: Mon Jul 05 22:27:20 CDT 2010
Termination time: Mon Jul 05 22:37:20 CDT 2010

Waiting for updates.
If all goes well, a VM should be booting and its IP should be printed to the screen. The client will not exit until the image is distributed and the VM is booting. When this happens, you can attempt to SSH into the new VM.
$ ssh root@x001.nimbus.example.edu
Notice also that a "handle" was printed out to the screen for this launch. In the example above, this handle is "vm-001". When you are ready to destroy a running VM, you do so using this handle.
$ ./bin/cloud-client.sh --terminate --handle vm-001
The cloud client has many other options and features. For details, check --help and the client quickstart.
Once more, with contextualization
The cloud client also supports launching VMs with contextualization. The VM image has an agent installed that securely contacts a broker and exchanges information about itself and other nodes. This allows launching groups of nodes that are contextualized into clusters.
This test will fail if you have not configured the metadata server. The VM will launch but get stuck in the phase "Waiting for context broker updates".
The nimbus-z2c image supports simple contextualization. Each VM's SSH public key is retrieved by the cloud client and installed into your ~/.ssh/known_hosts file, so you can connect without a host-key warning. If you launch multiple nimbus-z2c VMs, they will be contextualized together and configured for host-based authentication among themselves.
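As a quick sanity check, you can confirm that an entry for the new VM landed in your known_hosts file. A minimal sketch, assuming the hostname from the example launch above (the file path and key material here are stand-ins for demonstration; on a real client the entry is appended to ~/.ssh/known_hosts):

```shell
# Sketch only: simulate the entry the cloud client appends, then confirm
# it is present the way ssh would find it. The key below is a placeholder.
KNOWN_HOSTS=./known_hosts.demo    # use ~/.ssh/known_hosts on a real client
echo "x001.nimbus.example.edu ssh-rsa AAAAB3placeholderkey" >> "$KNOWN_HOSTS"

# ssh consults this file on connect; grep confirms the host is listed:
grep -c "x001.nimbus.example.edu" "$KNOWN_HOSTS"
```

On a real client, substitute the hostname printed by your own launch.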
Clusters are launched using a cluster document which describes the instances and their relationships. An example for nimbus-z2c is available here.
$ wget http://www.nimbusproject.org/downloads/nimbus-z2c.xml
<cluster>
  <workspace>
    <name>node1</name>
    <image>nimbus-z2c</image>

    <!-- How many to launch -->
    <quantity>1</quantity>

    <nic wantlogin="true">public</nic>
    <ctx>
      <provides>
        <identity />
        <role>testrole1</role>
      </provides>
      <requires>
        <identity />
        <role name="testrole2" hostname="true" pubkey="true" />
        <role name="testrole3" />
      </requires>
    </ctx>
  </workspace>
</cluster>
Run the cloud client again, specifying this file with the --cluster argument.
$ ./bin/cloud-client.sh --run --cluster nimbus-z2c.xml --hours 1

Requesting cluster.
  - node1: image 'nimbus-z2c', 1 instance

Context Broker:
    https://nimbus.example.edu:8445/wsrf/services/NimbusContextBroker

Created new context with broker.

Workspace Factory Service:
    https://nimbus.example.edu:8445/wsrf/services/WorkspaceFactoryService

Creating workspace "node1"... done.
  - 184.108.40.206 [ x001.nimbus.example.edu ]

Launching cluster-001... done.

Waiting for launch updates.
  - cluster-001: all members are Running

Waiting for context broker updates.
  - cluster-001: contextualized
  SSH trusts new key for x001.nimbus.example.edu

[[ node1 ]]
If all went well, you should see output similar to the example. The VM was booted and its SSH key was sent back to the client via the Context Broker. If you want to try booting multiple VMs in a cluster, edit the XML file, adjust the quantity element, and try again. You can also check out the cluster guide for more information.
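For instance, a hypothetical edit of nimbus-z2c.xml that launches two copies of the node (only the quantity element changes; the ctx section stays exactly as in the example document above):

```xml
<cluster>
  <workspace>
    <name>node1</name>
    <image>nimbus-z2c</image>

    <!-- Launch two copies of this node instead of one -->
    <quantity>2</quantity>

    <nic wantlogin="true">public</nic>
    <!-- ctx section unchanged from the example above -->
  </workspace>
</cluster>
```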
If the VM successfully launches but the cloud client gets stuck with the message "Waiting for context broker updates", try logging into the VM with SSH and examining the context agent log file /opt/nimbus/ctxlog.txt. There may be a problem with the context broker or metadata server.
Once all of these tests succeed, your cloud is up and running. At this point you probably want to configure more VMM nodes and add them to the resource pool. You may also want to check out the reference page sections for a list of other configuration options and additional information.
You can also now start enabling remote users. Familiarize yourself with the user management tools and the web application available for securely distributing credentials, and look into configuring per-user rights and allocations.
Thanks for trying out Nimbus! If you have any questions or comments please contact our support lists.