Cluster Guide

Quickstart and conceptual overview for launching one-click, auto-configuring clusters.

If you are already running on one of the science clouds, you could launch and use a cluster right this minute. Or you could run your own cloud (the software is free and open source).


Cluster Quickstart:

Cluster Guide:
  (you are here)

How does it happen? (#)

A lightweight agent on each VM -- its only dependencies are Python and the ubiquitous curl program -- securely contacts the context broker using a secret key. This key was created on the fly and seeded inside the instance. This agent gets information concerning the cluster from the context broker and then causes last minute changes inside the image to adapt to the environment. This is called contextualization.

You, as client, specify the types of nodes you want using a simple role-based annotation. You specify the source image of each type as well as how many instances should be launched.


Let's say you have a head node, file server and a pool of homogenous compute nodes. The head node needs to know what nodes to send work to for computing, the slave nodes need to know what its head node and file server contacts are, etc.

To adapt to new network identities as well as new cryptographic identities, configuration files and access policies need to be adjusted. You may also want to pass different data files in on different launches: in the example (see quickstart), an access policy for external usage was installed (grid-mapfile).

Sorting this out on the fly (and securely) is the responsibility of the context agent on each VM. The context agent takes all the generic security and message handling steps out of the equation for you as the cluster builder. You're left with just the task of taking the information you need (identities, new data files, etc) and turning that into the configuration change that will make your software work.

* Basic process: (#)


The numbered steps below correspond to the numbers in the diagram.

You make a cloud-client request. Instead of using "--name" to specify the image to use, you use "--cluster" and specify a cluster definition file. This is a simple XML file that defines the layout you want (we'll look at this later).

The client notices you want contextualization and creates a new context for you from the context broker. The broker provides the information that each instance needs to talk to the context broker: a security credential and the broker contact information.

The client contacts the cloud service with your request, securely passing along the broker contact and security information.

The cloud service launches the instances you asked for, seeding each image with the security credential and the broker contact information. You can launch different images at once, for example two "abc" instances plus three "xyz" instances etc. You can also make a whole cluster launch from a single image file -- each instance can take on a different personality after it boots, guided by what you request in the cluster definition file. That is how the base-cluster sample works (see quickstart).

Inside the VM is a lightweight program we will call the context agent. Its only dependencies are Python and the curl program, so it should be able to launch in all but the most stripped down VMs. This program interprets the bootstrap information (from step #2) and talks with the context broker over HTTPS.

We will go through the concepts of provided roles and required roles later in the cluster definition sections below, but the basic idea is that the broker tells the agent inside the VM all about the other nodes that it is "required" to know about. The agent turns around and looks for scripts to call that bear the role name(s) in question.

If you are making a cluster yourself, these scripts are what you need to look at and change around to get the right behavior at boot time. Because only particular scripts are called based on what the context broker tells the agent, this allows you to have one binary VM image that ends up being able to play multiple roles. If a script is present but is not invoked because of the way you launched that particular instance, then the configuration will just not happen.

Remote clients can query the context broker for information. One important thing they can retrieve is the SSHd public key generated by every cluster member. By default, as you saw above, the cloud-client will install these to the known_hosts file for you. This feature is only available when using contextualization (you can launch workspaces with "--cluster" flag and NOT use contextualization for some or all of them).

* Cluster definition without contextualization: (#)


The cluster definition file drives the actual request that is made by the cloud-client in step #1 above. Here is an example that excludes the contextualization related element:

<cluster xmlns="https://www.globus.org/2008/06/workspace/metadata/logistics">

  <workspace>
    <name>head-node</name>
    
    <quantity>1</quantity>
    <nic wantlogin="true">public</nic>
  </workspace>
  
  <workspace>
    <name>compute-nodes</name>
    
    <quantity>2</quantity>
    <nic>public</nic>
  </workspace>
  
</cluster>

Each <workspace> element is like a group (of one to N) identical requests that will only differ by the network identity each instance gets assigned. You can have unlimited <workspace> sections to make any arbitrary cluster layout.

<name> is for local console printing only. This is helpful for quickly ascertaining which IP address you're interested in. It's an optional element.

The 
    <quantity>1</quantity>
    <nic wantlogin="true">public</nic>
    
    <ctx>
      <provides>
          <identity />
          <role>torquemaster</role>
      </provides>
      
      <requires>
          <identity />
          <role name="torqueslave" hostname="true" pubkey="true" />
      </requires>
    </ctx>
    
  </workspace>
  
  <workspace>
    <name>compute-nodes</name>
    
    <quantity>2</quantity>
    <nic>public</nic>
    
    <ctx>
      <provides>
          <identity />
          <role>torqueslave</role>
      </provides>
      
      <requires>
          <identity />
          <role name="torquemaster" hostname="true" pubkey="true" />
      </requires>
    </ctx>
  </workspace>
  
</cluster>

Besides seeing examples of the provides and requires syntax, two extra things were introduced in this example:

  1. The <identity /> tags. Keep these, they signal that each member requires all identities in the cluster. This is used for configuring each node's local /etc/hosts with every member in the context and it's very likely you will not want to disable this behavior.

  2. The hostname and pubkey attributes. This signals that when a node is informed of this required role, it needs to have hostname and SSHd pubkey in the information, otherwise it will not be considered a complete answer. Responses to context agents concerning this role will be held off until the nodes in question report their SSHd public keys to the context broker.

Notice that providing contextualization requirements on a per launch basis allows you to use the same image file for different <workspace> sections. With these Torque roles, you could have the same image file with both sets of configuration scripts. When the VMs are booting and the context agent retrieves role information from the broker, only the appropriate scripts are called. That is how the base-cluster sample works (see quickstart).

This should be enough to get you going, be sure to look at the script samples for comments.

If you are not sure about what configuration strategy to take for a particular piece of software, one thing you might try is asking on the [email protected] list for ideas since other cluster authors are lurking there (see the contact page for instruction on how to subscribe).