One Click Clusters

Quickstart and conceptual overview for launching one-click, auto-configuring clusters.

If you are already running on one of the science clouds, you can launch and use a cluster right now. Or you can run your own cloud (the software is free and open source).



Features (#)

  • Launch heterogeneous clusters of auto-configuring VMs with one command

  • No private keys need to be stored on images in the repository, so you can collaborate openly and share them freely

  • Configuration tasks that depend on (changing) network and cryptographic identities are handled automatically at boot

  • A shared, secure information context is created for all launch members

  • Data may be sent to the cluster's context by a remote client for consumption by the nodes

  • Inside the VM, you can obtain context information without knowing anything about the messaging code/protocols

  • Information from the VMs is available to the remote client via a secure path (for example, generated SSHd host keys)

  • Common tasks are implemented (and optional), like populating /etc/hosts with all cluster member addresses and setting up SSH host-based authentication across all accounts


Background (#)

Using the "--cluster" option, the cloud client can direct the workspace service to launch any number (and any type) of workspaces simultaneously. Something called the context broker will work behind the scenes to make sure each node has the information they need to play their various roles.

You can bring up the exact cluster you need whenever necessary. It's portable across clouds, too. If you need to make updates, you can re-launch from a new template image (one you have already tested at small scale). You can also customize files on a per-launch basis to give the cluster different policies or behaviors. You can even direct the same image file to take on different roles depending on what the context broker tells it.

No private keys need to live on the images in your cluster before they are booted. This means you can freely collaborate on useful setups and distribute them openly over the internet to end users.


Example (#)

If you have access to a science cloud and see the 'base-cluster-01.gz' image in your personal repository directory, you can launch the cluster yourself. You will need cloud client 009 or later.

There are four steps to the example:

  1. Edit the access policy to install on the cluster (dictate the grid-mapfile contents)
  2. Run one command to launch the cluster
  3. Run one scp command to get the cluster's auto-created credential so local job submission tools can trust it
  4. Export an environment variable that will point tools to the credential directory

Now you can submit remote work: you have a Torque PBS cluster fronted by GridFTP and GRAM.

Steps #1, #3, and #4 are particular to the example cluster; they configure security settings for GridFTP and GRAM.

SSH trust is already set up for you as part of #2. The nodes have been configured to trust your SSH key for logins, and you trust each node's generated SSH host keys.


Super-quick start (#)

The walkthrough further down this page contains an inline explanation of the commands and what's happening. But if you'd just like to get going with the sample, here is what you need to do:

  1. Edit samples/base-cluster.xml to include your DN in the gridmap field
  2. Run ./bin/cloud-client.sh --run --hours 1 --cluster samples/base-cluster.xml
  3. Wait a few minutes
  4. At exit, note the hostname printed for "[[ head-node ]]"
  5. Configure local tools to trust the cluster:
    scp -r [email protected]:certs/*  lib/certs/
    export X509_CERT_DIR=`pwd`/lib/certs

There is a full sample of commands and output here.

Now you can send work. You have your very own, working:

  • Torque installation for distributing work across the cluster
  • GRAM4 (backed by Torque) installation at "https://HEADNODE_HOSTNAME:8443/wsrf/services/ManagedJobFactoryService"
  • GridFTP server on standard port 2811 of the headnode
  • RFT service at "https://HEADNODE_HOSTNAME:8443/wsrf/services/ReliableFileTransferFactoryService"
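
As a quick smoke test, you could submit a simple job through the GRAM4 service with globusrun-ws. This is only a sketch: the factory resource type "PBS" and the /bin/hostname test command are assumptions for illustration, and it assumes you have already completed the trust setup in step 5.

$ globusrun-ws -submit -s \
    -Ft PBS \
    -F https://HEADNODE_HOSTNAME:8443/wsrf/services/ManagedJobFactoryService \
    -c /bin/hostname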

Some notes:

  • Any user account (including root) can freely SSH across the cluster to the corresponding account
  • The "/home" directory is mounted from the headnode to each compute node over NFS
  • You should map your DN to the 'jobrun' account (like the sample value in base-cluster.xml)

To make your image(s) auto-configure your own software choices, see How do I make images of my own do this?. While this will take more than "one click" to set up, our alpha users have had success in short order. You can run and auto-configure pretty much any type of software/application that will run on a normal cluster.


Example walkthrough (#)

We assume that, as with the single-VM launches you've done, you have run grid-proxy-init. See the main quickstart page for more information.

We've put a full sample of commands and output here for reference. What follows is a walkthrough of everything that happens.

* Configure the grid-mapfile contents you want: (#)


Edit samples/base-cluster.xml, find the "<data name="gridmap">" tag and add your DN. This cluster has a generic account called "jobrun" you can map to (follow the example).
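
For example, a grid-mapfile style entry that maps a DN to the generic "jobrun" account looks like the line below (the DN shown is only a placeholder; substitute your own, and follow the sample entry already in the file):

"/DC=org/DC=ExampleCA/OU=People/CN=Your Name" jobrun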

Notice we are requesting one head node and two compute nodes. This particular cluster can launch with only one head node but any number of compute nodes.

* Launch the cluster: (#)


Now you can launch:

$ ./bin/cloud-client.sh --run --hours 1 --cluster samples/base-cluster.xml

Some information is printed (including the assigned IP addresses), just like with single launches. We quickly move to this output:

Launching cluster-001... done.

Waiting for launch updates.

This is the first of two waiting periods; this is when the image files make their way to the hypervisors, and it will take a few minutes. The next waiting period is much shorter.

While you're waiting, make a mental note of the handle. This will look something like "cluster-001". Just like "vm-001" for non-cluster launches, you can use it to manage things, for example to terminate ("./bin/cloud-client.sh --terminate --handle cluster-001").

* Launched: (#)


Information starts to come in:

  - cluster-001: all members are Running
  - wrote reports to '/tmp/cloud-client/history/cluster-001/reports-vm'

Everything is now "Running", which really means the VMs are all at least booting. If there were a problem, an error notice would print to the screen, and the "reports-vm" directory listed here would have all the details on any errors.

Information is archived in that directory for successes, too, but you typically only need the hostname to log in to or submit jobs to. That hostname is printed to the screen and is also always available in your history/cluster-001/run-log.txt file.
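
For example, to pull that information back up later (this assumes the default /tmp/cloud-client history location shown in the reports message):

$ cat /tmp/cloud-client/history/cluster-001/run-log.txt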

* Contextualization status: (#)


Next up, we wait for contextualization. Shown here is the waiting message along with the result. It takes much less time than the previous wait.

Waiting for context broker updates.
  - cluster-001: contextualized
  - wrote ctx summary to '/tmp/cloud-client/history/cluster-001/reports-ctx/CTX-OK.txt'
  - wrote reports to '/tmp/cloud-client/history/cluster-001/reports-ctx'

What "contextualized" means is that every node has:

  • gotten in contact with the context broker
  • provided data to the context broker, e.g. SSH public key
  • gotten all necessary information from the context broker
  • successfully called the just-in-time configuration scripts
  • reported back to the context broker that no error occurred (ready to go)

Contextualization reports are available in the listed reports directory, but like the launch reports, you don't really need to pay attention to them unless there was an error. If there is an error, the error reports are written to the reports directory and the cluster launch is backed out. These error reports include the full logs of each context agent (all output leading up to the problem, non-errors included too).

* No security gap with SSH logins: (#)


The last thing you will see before the client exits:

SSH trusts new key for hostname1.cloudurl.edu  [[ head-node ]]
SSH trusts new key for hostname2.cloudurl.edu  [[ compute-nodes #0 ]]
SSH trusts new key for hostname3.cloudurl.edu  [[ compute-nodes #1 ]]

That final SSH message means that the client retrieved the SSH public keys from the context broker; they are already installed into your local known_hosts file. This means you can take an address and log in as root:
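
$ ssh [email protected]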

... and you do not get "The authenticity of host 'xyz (1.2.3.4)' can't be established" messages anymore.

This relieves you of that pesky 'y' keystroke to accept the key, and there are no more "WARNING KEY HAS CHANGED" messages when you've been given an IP that's been seen before. More importantly, it closes the gap on possible man-in-the-middle attacks. Through channels that are secure end to end, the client learns the public key that each instance generated at boot.

You can turn off this auto-configuration of your known_hosts file by removing the "ssh.hostsfile" configuration from the conf/cloud.properties file. Also, to support large virtual clusters, there is an option to apply this SSH adjustment only for specific nodes you care about logging in to directly. For example, you may have 100 compute nodes on a NAT'd network behind the edge nodes. You wouldn't care about known_hosts adjustments for those, so you can turn them off on a case-by-case basis.
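
For reference, the relevant line in conf/cloud.properties looks something like the following (the exact value shown here is an assumption); comment it out or remove it to disable the known_hosts updates:

ssh.hostsfile=~/.ssh/known_hosts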

* Configure grid tools trust: (#)


Before submitting to the cluster, we need the grid tools to trust the middleware on the headnode. The head node has been configured to create a self-signed host certificate. Grab it and add it to the embedded trusted cert directory like this:

$ scp -r [email protected]:certs/*  lib/certs/

Make grid tools trust that host certificate:

$ export X509_CERT_DIR=`pwd`/lib/certs

* Result: (#)


You now have your own, working:

  • Torque installation for distributing work across the cluster
  • GRAM4 (backed by Torque) installation at "https://HEADNODE_HOSTNAME:8443/wsrf/services/ManagedJobFactoryService"
  • GridFTP server on standard port 2811 of the headnode
  • RFT service at "https://HEADNODE_HOSTNAME:8443/wsrf/services/ReliableFileTransferFactoryService"
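
For example, you could stage an input file into the shared /home directory over GridFTP with globus-url-copy (the local file path and the "jobrun" home directory target are illustrative assumptions):

$ globus-url-copy file:///tmp/input.dat gsiftp://HEADNODE_HOSTNAME:2811/home/jobrun/input.dat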