Install DHCPd and Configure Networking
On this page, you will install a central DHCPd daemon (or integrate with an existing one) and configure the networking addresses you want to give out to virtual machines when they boot.
When a VM is started using Nimbus, it is provided a network IP address via DHCP. You must configure a pool of available networks and addresses in the service. You must provide at least as many addresses as VMs you want to run simultaneously.
In addition to configuring available addresses in the Nimbus service, you must also set up a DHCP server to actually hand out these addresses. This will be discussed in depth shortly.
Network pools
Take a look at the $NIMBUS_HOME/services/etc/nimbus/workspace-service/network-pools/ directory on your service node. Each file represents a single network which can be provided to virtual machines. Each network can be bridged to different (or the same) physical networks and a single VM can be connected to multiple networks. When a VM is launched, one or more networks are requested by the client.
Traditionally, Nimbus installations are configured with two networks, public and private. Start with the public file as it contains a lot of helpful documentation. For each network, you must configure a DNS server as well as a list of available network slots.
# DNS server IP address (or 'none')
192.168.0.1

# hostname ipaddress gateway broadcast subnetmask [MAC]
pub02 192.168.0.2 192.168.0.1 none none
pub03 192.168.0.3 192.168.0.1 192.168.0.255 255.255.255.0
pub04 192.168.0.4 192.168.0.1 192.168.0.255 255.255.255.0
The only strictly required fields are the hostname and IP address. The gateway, broadcast, and subnet mask may be specified as 'none' (though you usually should not do so). You can also optionally specify a hardware MAC address as the last field. If you do not specify a MAC, the Nimbus service will generate one for you.
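For example, an entry that pins a specific MAC address would look like the following (the hostname, addresses, and MAC shown here are illustrative, not part of the stock configuration):

```
pub05 192.168.0.5 192.168.0.1 192.168.0.255 255.255.255.0 A2:AA:BB:8E:84:97
```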
While VM addresses are configured and selected within the Nimbus service, they are actually handed out at boot time by an external DHCP service. There are two ways of arranging the DHCP configuration.
- Centralized -- a new or existing DHCP service that you configure with Nimbus-specific MAC to IP mappings. This is generally simpler to set up and is covered in this guide.
- Local -- a DHCP server is installed on every VMM node and automatically configured with the appropriate addresses just before a VM boots. This is more complicated to set up initially but can be preferable in certain scenarios. It is, however, out of scope for this guide. For details, see this section of the reference guide.
To proceed, you need a DHCP server listening on the network(s) that will be available to virtual machines. You can use an existing site server or install your own. DHCP servers are well packaged on Linux distributions: on Debian/Ubuntu the package is dhcp3-server, while on Red Hat it is dhcp.
You should be very careful when installing DHCP and ensure that you have permission from your network administrators to do so. Multiple DHCP services do not play nicely on the same subnet.
When a VM is created using Nimbus, the service selects a network pool entry and sends the IP and hostname back to the client. It is the job of your DHCP server to ensure that the VM leases the correct IP address, otherwise the VM will not be able to access the network. You must add the correct MAC to IP mappings to your DHCP server configuration. To facilitate this, the Nimbus service produces several files for you.
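For reference, a MAC to IP mapping in ISC dhcpd configuration is a host declaration like the one below. The hostname, MAC, and IP shown are illustrative; the generated dhcpd.entries file (described next) provides real entries for you, and its exact output format may differ.

```
host pub02 {
  hardware ethernet A2:AA:BB:8E:84:97;
  fixed-address 192.168.0.2;
  option host-name "pub02";
}
```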
After configuring the network pool entries as described in the previous section, restart the service.
$ nimbusctl services restart
Now take a look in $NIMBUS_HOME/services/var/nimbus/. Among several other files, there should be these three:
- dhcpd.entries - a generated list of host entries that can be included in your DHCP configuration.
- ip_macs.txt - a whitespace-separated list of IP address to MAC address pairings. If you have an unusual DHCP system, you can parse this file with a script to generate your configuration.
- control.netsample.txt - a sample network pool entry, useful for testing out new VMM nodes. You will use this file in the next section.
These files will only change when you alter your network-pools configuration and restart the service. So you can safely copy them to another system or use a script to generate your config. You only need to ensure that you keep things in sync whenever you change network pool entries.
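If your DHCP system does not accept ISC-style entries, the ip_macs.txt file is easy to transform with a short script. The sketch below is a hypothetical example (not part of Nimbus) that converts IP/MAC pairs into dnsmasq dhcp-host lines; the sample addresses are illustrative, and it assumes ip_macs.txt holds one whitespace-separated "IP MAC" pair per line, as described above.

```python
#!/usr/bin/env python
# Hypothetical helper: convert Nimbus ip_macs.txt pairings into dnsmasq
# "dhcp-host=MAC,IP" lines. Adjust the output format for your DHCP system.

def ip_macs_to_dnsmasq(text):
    """Turn 'IP MAC' lines into dnsmasq 'dhcp-host=MAC,IP' lines."""
    out = []
    for raw in text.splitlines():
        raw = raw.strip()
        if not raw or raw.startswith("#"):
            continue  # skip blank lines and comments
        ip, mac = raw.split()[:2]
        out.append("dhcp-host=%s,%s" % (mac, ip))
    return "\n".join(out)

if __name__ == "__main__":
    # Illustrative sample data; in practice read $NIMBUS_HOME/services/var/nimbus/ip_macs.txt
    sample = "192.168.0.2 A2:AA:BB:8E:84:97\n192.168.0.3 A2:AA:BB:8E:84:98\n"
    print(ip_macs_to_dnsmasq(sample))
```

Because the generated files only change when you edit the network pools and restart, such a script can simply be re-run as part of that same step.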
The recommended configuration is to include the dhcpd.entries file directly in your dhcpd.conf using the include directive.
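A minimal example of the include directive follows; the path is illustrative, so point it at wherever you placed the dhcpd.entries file:

```
include "/etc/dhcp3/dhcpd.entries";
```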
On some systems, such as Ubuntu, you may need to copy the dhcpd.entries file to the DHCP configuration directory (/etc/dhcp3) as the daemon will not read from other locations.
Alternatively you can paste the contents of that file directly inline, or use a script to generate your needed configuration from ip_macs.txt. Once the configuration is in place, restart the DHCP server.
Metadata server
VMs can optionally query a metadata server for user-provided data as well as information about the deployment. This service is disabled by default, but will be required if you want to use the Nimbus Context Broker. Configuration is in $NIMBUS_HOME/services/etc/nimbus/workspace-service/metadata.conf. You must choose an IP and port for the service to listen on. This address must be accessible by the VMs. You should also set listen=true. If you have non-standard network names, adjust these as well.
listen=true
contact.socket=22.214.171.124:80
public.networks=public
local.networks=private
After you make these changes, restart the Nimbus service.
$ nimbusctl services restart
You can verify that the metadata server is running by checking the log.
$ grep Metadata $NIMBUS_HOME/var/services.log
2010-07-30 14:50:33,769 INFO defaults.HTTPListener [main,initServer:84] Metadata server URL:
Now you are ready to configure real nodes. Proceed to the next page to Install VMM Software.