News

2022

  • Dec 01, 2022
    We have another position open for a research software engineer at the University of Chicago to join the Nimbus team and work on our exciting new projects in reproducibility and edge computing. For more details and to apply, please visit the job posting.
  • Jun 28, 2022
    At the upcoming ACM Practice & Experience in Advanced Research Computing (PEARC) 2022 conference, Chameleon will receive the Best Paper Award for our single sign-on migration paper! See this article published by the University of Chicago computer science department about the achievement.

2020

  • Sep 22, 2020

    Update: this position has been filled.

    We are seeking a product lead at University of Chicago to be part of the Nimbus team and work on managing the growth of the Chameleon project. For more details and to apply please visit the job posting.
  • Sep 22, 2020

    Nimbus Infrastructure was one of the first open source implementations of the concept of Infrastructure-as-a-Service (IaaS), with the production release of its first component, the Workspace Service, in mid-2005. Over time, the system grew to add an implementation of a scalable quota-based storage cloud, contextualization tools allowing users to configure “one click” virtual clusters, and a variety of tools for creating and managing configurations distributed over multiple clouds and adapting them to the needs of the scientific community. Nimbus software was used to configure multiple research clouds (FutureGrid being the most prominent example) as well as to enable a variety of scientific applications.

    While pioneering and committed to a quality implementation, Nimbus remained primarily a research project. In the early 2010s, the OpenStack IaaS cloud implementation emerged as a viable alternative to Nimbus, with strong support from the open source community. To better leverage the momentum behind cloud-related development, the Nimbus team transitioned to become an OpenStack contributor, consistently advocating for the needs of the scientific community and contributing significantly to OpenStack services, such as Blazar, that reflect our community’s requirements.

    Today, the Nimbus team leads the operation of the Chameleon research cloud, an OpenStack-based testbed for computer science systems research. While the Nimbus project is no longer under active development, the Nimbus team continues to drive science-related features into cloud computing development via contributions in research, development, and operations. Aside from operating the Chameleon testbed, the Nimbus team actively contributes to exploring topics such as auto-scaling, preemptible workloads, and the use of clouds to advance reproducibility in science. To further advance the study of cloud computing, we make available traces from the Chameleon research cloud as well as tools that can be used to obtain similar traces in any OpenStack cloud at our Science Clouds site.

    Since the Nimbus project itself is no longer active, we have archived the code and documentation; they remain preserved and accessible on the Nimbus GitHub organization, and our papers are available here.

2017

  • Mar 29, 2017

    We are looking for summer students to work on three different projects. The work location is Argonne National Laboratory, near Chicago, Illinois. If you are interested in working with us on any of these, please contact us by email.

    Investigating Hadoop dynamic scaling

    We are creating a platform for running geospatial analysis operations in a scalable manner using cloud computing resources. In order to support a large number of users with varying workloads, the platform must dynamically manage deployments of compute resources.

    Hadoop is heavily used by applications running on this platform. The purpose of this project is to study the scalability patterns of a geospatial analysis application, UrbanFlow, in order to derive scaling policies that allow the number of Hadoop workers in the system to be varied dynamically while maintaining good response time. Since data locality in particular is crucial to Hadoop, this project will evaluate how data placement patterns can help or hinder dynamic scaling.

    The objectives of this project are:

    • Study data access and computing patterns of UrbanFlow
    • Propose scaling policies using these patterns that will optimize response time for various workloads
    • Develop a dynamic scaling engine that can enact such policies
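    As an illustration, a simple response-time-driven policy of the kind this project would derive might look like the following sketch (the function, metric names, and thresholds are hypothetical, not part of UrbanFlow):

```python
# Hypothetical sketch of a threshold-based scaling policy.
# All names and thresholds are illustrative only.

def plan_worker_count(current_workers, avg_response_time_s,
                      target_response_time_s=30.0,
                      min_workers=2, max_workers=64):
    """Return the number of Hadoop workers to run next."""
    if avg_response_time_s > target_response_time_s:
        # Behind target: scale out proportionally to the overshoot.
        factor = avg_response_time_s / target_response_time_s
        desired = round(current_workers * factor)
    elif avg_response_time_s < 0.5 * target_response_time_s:
        # Comfortable headroom: release one worker to save resources.
        desired = current_workers - 1
    else:
        desired = current_workers
    return max(min_workers, min(max_workers, desired))
```

    A real policy would also have to account for data placement, since removing a worker that holds hot HDFS blocks can cost more than the released capacity is worth.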

    Cloud workload trace archive

    Traces from existing parallel and distributed computing systems are a useful resource for researchers who want to replicate real-life workloads in their experiments. However, little such material is available from cloud computing systems. We propose to develop a trace archive that will provide traces from various cloud systems, combined with tools to replay them. This effort initially focuses on OpenStack clouds but would eventually include other cloud technologies.

    The objectives of this project are:

    • Define a cloud workload trace format after reviewing existing trace formats. This format should be flexible enough to support other cloud technologies in the future.
    • Develop tools to extract workloads from OpenStack systems, converting them into the chosen trace format.
    • Develop tools to replay traces on OpenStack deployments for experimental purposes. We will use the Chameleon testbed as a platform for deploying OpenStack.
    • Create a platform (potentially reusing existing software) for hosting traces and allowing others to contribute.
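    To make the idea concrete, a single record in such a trace might carry fields like the following (the schema is a hypothetical sketch, not a published format):

```python
import json

# Hypothetical example of one cloud workload trace record; the field
# names are illustrative, not the format this project would define.
trace_record = {
    "instance_id": "inst-0001",
    "submit_time": 0,        # seconds since the start of the trace
    "start_time": 12,        # when the instance actually booted
    "end_time": 3600,        # when it terminated
    "vcpus": 2,
    "memory_mb": 4096,
    "cloud_type": "openstack",  # leaves room for other technologies
}

print(json.dumps(trace_record))
```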

    HPC/Cloud resource balancer

    We are working on a platform that seeks to combine two types of scheduling: batch/best-effort scheduling typically used in HPC datacenters and on-demand scheduling available in commercial clouds. This project is developing a meta-scheduler that switches between these different modes of scheduling to ensure meeting both user satisfaction goals (in terms of resource availability) and provider satisfaction (in terms of utilization). The overall objective of this project is to use an existing implementation and a set of traces from on-demand and batch jobs and explore different usage scenarios in this context.

    The relevant tasks are as follows:

    • Evaluate and potentially enhance the existing implementation to add additional features
    • Define and run experiments evaluating features of the resulting platform
    • Contrast and compare the work with existing platforms such as Mesos
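    A minimal sketch of the mode-switching decision at the heart of such a meta-scheduler might look like this (the thresholds and names are hypothetical):

```python
# Hypothetical sketch of a meta-scheduler mode switch: grant on-demand
# leases while there is spare capacity, fall back to batch/best-effort
# scheduling once the cluster fills up. Thresholds are illustrative.

def choose_mode(utilization, pending_on_demand,
                high_utilization=0.85, max_pending=10):
    """Pick the scheduling mode for the next scheduling cycle."""
    if utilization < high_utilization and pending_on_demand <= max_pending:
        return "on-demand"  # user satisfaction: resources right away
    return "batch"          # provider satisfaction: keep nodes busy
```

    Replaying on-demand and batch traces through a rule like this is one way to explore the trade-off between user and provider satisfaction.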

2016

  • Nov 14, 2016

    We are seeking a cloud computing software developer at University of Chicago to be part of the Nimbus team. Most of our development is in Python and we use OpenStack on several projects.

    For more details and to apply please visit the job posting.

    About us:

    The Nimbus team is a pioneer in infrastructure cloud computing. We work closely with scientists across many disciplines to understand how new technology can improve and transform science, develop and integrate innovative solutions in cloud computing, and support their practical use. Our previous work includes developing the first open source IaaS platform (since 2005, www.nimbusproject.org), enabling many early cloud computing projects across a range of sciences, and developing a national experimental testbed for cloud computing research (since 2014, www.chameleoncloud.org). Current challenges focus particularly on cloud computing platforms supporting High Performance Computing and Big Data applications and systems. The Nimbus team consists of scientists, developers, and students and provides a friendly, challenge-oriented environment.

    About the job:

    We work with innovative technologies, which requires our team to keep track of new approaches and quickly master new technologies. We are learners: self-motivated, eager to try new things, but with a strong appreciation of quality development and spirit of teamwork.

    We are looking for a new team member with these characteristics:

    • Wants to work on important, cutting edge problems in R&D and see their work make a positive change to the world by contributing to the advancement of science
    • Loves working with system code and is good at it (see the link below for specific skills)
    • Enjoys working independently, taking on new challenges, and creating new initiatives that will shape the direction of our existing and future projects
    • Will thrive as part of a creative team, where your contributions are valued and your initiatives are welcomed

    Key requirements for the job:

    • Bachelor's or Master's degree in computer science or related field
    • The more relevant programming experience the better, preferably demonstrated via contributions to open source software
    • Programming experience with Python preferred
    • Knowledge of Unix/Linux, IaaS cloud systems (OpenStack, AWS), virtualization technologies/containers, and other relevant technologies
    • Experience with system administration and DevOps tools, such as Chef and Puppet, preferred
    • Excellent verbal and written communication skills
    • Ability to prioritize, work both independently and in a team environment, and a keen sense of humor

    For more details and to apply please visit the job posting.

  • Sep 26, 2016

    We are seeking a cloud computing software developer at University of Chicago to be part of the Nimbus team. Most of our development is in Python and we use OpenStack on several projects.

    To apply please visit the job posting.

    About us:

    The Nimbus team is a pioneer in infrastructure cloud computing, having developed what is now recognized as the first open source Infrastructure-as-a-Service implementation. We work closely with scientific application communities and develop innovative solutions in cloud computing infrastructures and platforms, with a particular focus on cloud computing platforms supporting High Performance Computing and Big Data applications and systems. To facilitate cloud computing research on a national scale, we also operate an experimental testbed supporting cloud computing research. Our overall mission is to develop innovative technical solutions enabling new methods that create unprecedented opportunities in science. The Nimbus team provides a friendly, challenge-oriented environment.

    About the job:

    The job involves participation in two Nimbus projects. First, it will involve contributing to building and operating the Chameleon experimental infrastructure for cloud computing (https://www.chameleoncloud.org). Specific tasks might involve working with OpenStack to provide additional features or troubleshoot problems, helping operate the testbed in close collaboration with our system administrators, and responding to user requests. The second project involves participating in the development of infrastructure that combines cloud computing and HPC capabilities for resource management and container optimization. Specific tasks will involve enhancing or developing infrastructure-as-a-service systems (e.g., OpenStack), exploring or orchestrating their interaction with HPC tools (e.g., batch schedulers), and performance evaluation.

    Why join us:

    • You truly belong to the team; your contributions are valued and your initiatives are welcomed
    • You participate in shaping the directions of our existing and future projects
    • You make a positive change to the world by contributing to the advancement of science

    Key requirements for the job:

    • Bachelor's or Master's degree in computer science or another relevant computer-related field
    • The more relevant programming experience the better (preferably demonstrated via contributions to open source software)
    • Programming experience with Python preferred
    • Knowledge of Unix/Linux, IaaS cloud systems (OpenStack, AWS), virtualization technologies/containers, and other relevant technologies
    • Experience with system administration and DevOps tools, such as Chef and Puppet, preferred
    • Excellent verbal and written communication skills
    • Ability to prioritize, work both independently and in a team environment, and a keen sense of humor

    For more details and to apply, please visit the job posting.

2015

    • Sep 28, 2015
      The Nimbus Project is recruiting a software developer at University of Chicago.

      To apply please visit https://jobopportunities.uchicago.edu/applicants/Central?quickFind=229408

      About us: The Nimbus team is a globally recognized pioneer in infrastructure cloud computing. We created the first ever open source Infrastructure-as-a-Service implementation, and are constantly evolving and thriving on the leading edge of cloud services technologies. We work closely with scientific application communities to develop innovative solutions in cloud computing infrastructures and platforms, with particular focus on High Performance Computing and Big Data systems. To facilitate cloud computing research on a national scale, we also operate an experimental testbed supporting cloud computing research. Our overall mission is to develop innovative technical solutions that create new opportunities in science. The Nimbus team provides a friendly, collegial environment where you will be challenged to help create these groundbreaking new cloud technologies.

      About the job: In this position you will be making critical contributions to two Nimbus projects. First, you will be helping to build and operate the Chameleon experimental infrastructure for cloud computing. Specific tasks will include working with OpenStack to provide additional features or find creative solutions to problems, helping operate the testbed in close collaboration with our system administrators, and working with our users. For the second project, you will be participating in the development of infrastructure that elastically adapts to the service demands of processing dynamic data streams obtained from social and sensor networks. Specific tasks will involve working with technologies such as OpenStack, Hadoop, and Pig, adapting them to solve new problems, as well as original development focused on innovative capabilities.

      Key requirements for the job:

      • Bachelor's or Master's degree in computer science or another relevant computer-related field
      • The more relevant programming experience the better (preferably demonstrated via contributions to open source software)
      • Knowledge of Unix/Linux, IaaS cloud systems (OpenStack, AWS), virtualization technologies/containers, and other relevant technologies
      • Excellent verbal and written communication skills
      • Ability to prioritize, work both independently and in a team environment, and a keen sense of humor

      For more detail and to apply please visit https://jobopportunities.uchicago.edu/applicants/Central?quickFind=229408

2014

    • Sep 15, 2014

      This announcement concerns users of the FutureGrid Hotel resources.

      Early this year, we announced a move towards shutting down the Nimbus-Xen cloud on Hotel and the creation of Nimbus-KVM in addition to the OpenStack-KVM cloud already operated on this resource. However, since most of our users chose to move to the OpenStack-KVM cloud as a result of this change, we decided to operate the Nimbus clouds for a while longer, allowing the community to fully transition to OpenStack.

      As most of the active Nimbus users have now moved to OpenStack, we plan to shut down the Nimbus clouds on Hotel (both Xen and KVM versions) by Friday, September 19, 2014 to facilitate the reconfiguration of the physical infrastructure to support OpenStack. These resources will continue to be operated as an OpenStack cloud (Chameleon Cloud). Please contact us if this action will create any issues with your current or planned use of Nimbus Hotel on FutureGrid.

    • Jul 15, 2014

      We are happy to announce "phantomize", a Phantom feature that will automatically install and run the tcollector sensor agent on the first boot of your virtual machines, instrumenting your VMs to provide sensor measurements without any manual setup.

      Phantom offers autoscaling based on sensor measurements from a variety of sources, including users' virtual machines. To collect these measurements, Phantom relies on the tcollector sensor agent running on each of those virtual machines. Until now, users had to manually install tcollector in their virtual machines or use an image provided by us with tcollector already installed. The former requires extra effort and the latter restricts the user to the types of images we provide.

      The phantomize feature addresses this problem. To use it, all the user needs to do is pick the "phantomize" contextualization type in their launch configuration settings. The only requirement is that the user's virtual machine image is capable of downloading and executing the user-data script on boot.

      Phantomize has been tested successfully with Debian, Fedora, and Ubuntu virtual machines on FutureGrid clouds running Nimbus and OpenStack.

    • Jun 04, 2014

      We are happy to announce the alpha release of our newest tool for FutureGrid users: a multi-cloud VM image generator.

      FutureGrid offers access to multiple clouds based on several different technologies (Nimbus, OpenStack, and Eucalyptus) using different hypervisors (Xen or KVM). Users can also supplement the use of FutureGrid resources by bursting out to commercial clouds such as Amazon EC2. While this allows users to use multiple clouds, such access is often hard to leverage as VM images are generally not portable across different formats and cloud providers.

      This presents users with a few problems. First, moving from one cloud to another means creating a new image; this is time-consuming and error-prone. Second, users typically want the VM images to represent a consistent environment independently of what type of cloud the image is deployed on; this is hard to achieve using a manual configuration process as even small differences in configuration can have significant consequences. Third, even if the user does produce a set of images that are initially consistent, as images subsequently evolve it is hard to keep track of which changes were applied to which image. In short, the problem is the lack of traceability and repeatability of VM image customizations.

      Our image generator aims to solve these problems by providing an interface to specify a customization script that can be used to generate consistent images for many clouds. The service starts out with a set of consistent base images uploaded to several clouds, applies the customization script to those images, and creates a new VM image on each cloud.

      We invite you to try our image generator by following our online tutorial, and please report any issues or requests to us.

    • Mar 06, 2014

      The Nimbus Project is recruiting for a postdoc on cloud computing at Argonne National Laboratory (Illinois, USA).

      The successful applicant will work with the Nimbus team at Argonne National Laboratory and University of Chicago to develop innovative resource management technology allowing the use of high-performance computing (HPC) resources with a cloud model. The challenges span the definition of resource leases suitable in the HPC context, developing topology-aware solutions, as well as defining policies that will ensure good utilization of HPC resources while providing on-demand leases for the end-user. The work will also involve adaptive use of a federation of resources providing those capabilities for a range of scientific applications.

      More information about the position is available online: https://www.anl.gov/careers/apply-job/postdoctoral-applicants (search for requisition number 321312)

      Please don't hesitate to contact us for further information.

2013

    • Oct 22, 2013

      Hello Phantom Users!

      As the days are getting shorter, and the air is getting cooler, we’d like to take one last look back on the summer and highlight the work we’ve done since we announced the Phantom HTTP API in early July. Some of you might have already noticed them in the interface and maybe even started using them – here’s a quick summary of the changes that already happened and how they fit into our strategy going forward.

      Support for Appliances

      We added support for appliances in Phantom. An appliance represents an environment that can support the execution of an application (e.g., Ubuntu Linux with MPI installed) and is typically implemented as a virtual machine image. However, since clouds use different virtualization technologies (such as KVM, Xen, or VMware) as well as various cloud-specific adaptations (e.g., a Xen image executing on EC2 may differ from a Xen image executing on a Nimbus cloud), an appliance typically maps to multiple VM images, each guaranteed to work with a different cloud. Working with appliances is simpler than having to remember which VM image works with which cloud and finding those images on all the clouds you want to run on.

      It used to be that in the Phantom Launch Configuration tab you had to specify a VM image for each cloud. Now, you can instead specify an appliance, and Phantom will automatically find which VM image needs to be used for any particular cloud you might run on. The Phantom installation on FutureGrid offers several pre-configured public appliances. Of course, you can also use the old method and specify an image per cloud.

      The details of the pre-configured public appliances are described at https://scienceclouds.org/appliances/. You can find out what operating system version they support, what tools and libraries are installed, and what applications, or types of applications, they were designed to support.

      As an example, we have created a chef-server appliance. This appliance allows you to easily deploy a Chef server, which can be helpful for configuring contextualization services.

      For now, only Phantom administrators can create appliances, but we are working on making the creation and sharing of appliances easier. In the meantime, if you have an appliance you'd like to share with the community, please email the list and we will be happy to publish the appliance under your name.

      Contextualization

      Noting that many of our users rely on contextualization tools, we added support for contextualizing instances using Opscode Chef. Contextualization via a server is a more flexible and powerful method than sending data using user-data. Phantom does the hard work of configuring each instance with a Chef server and running the Chef client. You simply have to provide a list of Chef recipes and a Chef configuration in JSON format, which will be applied to each instance.

      This feature requires the use of a Chef server. You can either run your own with the Chef Server Appliance mentioned above, or use Enterprise Chef, which is free for up to 5 nodes. Once you have a Chef server available, configure your Phantom profile with your Chef credentials, which will enable Phantom to interact with the Chef server on your behalf.
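      For example, the recipe list and JSON configuration you supply might look like the following (the recipe and attribute names are purely illustrative, not part of Phantom itself):

```json
{
  "run_list": ["recipe[apache2]", "recipe[myapp::workers]"],
  "myapp": {
    "worker_count": 4
  }
}
```

      Each listed recipe would then be run on every instance, with the attributes made available to the Chef client.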


      In the future we are planning to make Chef server deployment even easier, to provide pre-baked deployment templates for popular constructs such as virtual clusters, and to provide generic mechanisms for contextualizing groups of virtual machines.

      SSH key upload

      Every cloud user needs an SSH key pair to connect to cloud instances: while the private key stays secure on your own machine, the public key is imported inside your instances. In order to do this, each cloud must know your public key. In the past, unless you were using FutureGrid Nimbus clouds, you had to upload keys manually to each cloud using command line tools. Now you can do it directly from the Phantom web interface, which is much easier!

      How to add an SSH key

      We hope you enjoy these new features and, please, let us know what you think! We would like to take this opportunity to acknowledge and thank the users who shared exceptional insight on Phantom features in the last quarter, Jan Balewski and Pradeep Mantha, whose comments will shape the service going forward. Stay tuned, we have lots of exciting new things coming up in the fall!

    • Feb 27, 2013

      We are happy to announce Nimbus Infrastructure 2.10.1 bugfix release!

      This is a minor release that fixes a few bugs. Specifically, it updates junixsocket to 1.14. A Java security update necessitated this change. The qcow2 support is also more resilient to hanging qemu_nbd processes.

      Users should note that this release will ONLY work with Java 1.6.0 Update 41 and later, which everyone is strongly encouraged to upgrade to. If for some reason upgrading is impossible, Nimbus Infrastructure 2.10 should continue to work with the older Java release.

      Check out the changelog for full details.

      We would like to particularly acknowledge the contributions to this release of our partners in the open source community: Michael Paterson and Adam Brust. The features in this release were supported by the Ocean Observatory Initiative and FutureGrid projects.

      The Nimbus 2.10.1 release is available on the downloads page.

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.10.1/

2012

    • Dec 21, 2012

      Just in time for the Holidays -- we released cloudinit.d 1.2!

      This release primarily offers more customization options to the user: it provides support for custom SSH options, provides customization and management features for the working directory on the deployed VMs, allows users to customize timeouts, and run local commands from cloudinit.d launch plans. In addition, we expanded and clarified the documentation as well as fixed bugs.

      The latest documentation for cloudinit.d can be found at: https://www.nimbusproject.org/doc/cloudinitd/latest/

      The features in this release were supported by the Ocean Observatory Initiative and FutureGrid projects.

      Happy upgrades!

    • Oct 30, 2012

      Do you want to learn about how cloud computing is changing science?

      Join us for a cloud computing tutorial at SC12 on Monday, November 12, starting at 8:30 AM, to learn why scientific applications are increasingly moving to clouds, understand the fundamentals of the approach, and find out how your research group can leverage this new development.

      You will get the chance to interact with real production clouds and launch your own virtual machines and applications, and also discover open challenges of cloud computing for science.

      More details are available on the SC12 schedule.

      Update: the slides are now available online.

    • Sep 12, 2012

      Happy Nimbus Infrastructure 2.10 final release!

      The main addition in the 2.10 release is support for copy-on-write based on the qcow2 format; when used in coordination with the image cache, this feature decreases the time needed to start virtual machines. It is now also possible to set the networks associated with an instance type in the same way CPU and memory are set and to select a kernel using the EC2 interface.

      In addition, the release also includes bugfixes and additions to documentation. Check out the changelog for full details.

      We would like to particularly acknowledge the contributions to this release of our partners in the open source community: Michael Paterson, Hsin Shao, Brett Wu, and Feng Zheng. The features in this release were supported by the Ocean Observatory Initiative and FutureGrid projects.

      The Nimbus 2.10 final release is available on the downloads page.

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.10/

      Happy upgrades!

    • Aug 31, 2012

      Nimbus Infrastructure 2.10 RC2 is coming out today!

      The main addition in the 2.10 release is support for copy-on-write based on the qcow2 format; when used in coordination with the image cache, this feature decreases the time needed to start virtual machines. It is now also possible to set the networks associated with an instance type in the same way CPU and memory are set, and to select a kernel using the EC2 interface. The main changes from RC1 are new functionality allowing administrators to clean up corrupted instances from the service, as well as the usual set of bug fixes.

      In addition, the release also includes bugfixes and additions to documentation. Check out the changelog for full details.

      Some of the effort for this release came from our open source community. We would like to particularly acknowledge the contributions of Michael Paterson, Hsin Shao, Brett Wu, and Feng Zheng. The features in this release were supported by the Ocean Observatory Initiative and FutureGrid projects.

      The Nimbus 2.10 RC2 is available on the downloads page.

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.10/

      We appreciate help from all who are willing to test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

      Everybody have a great Labor Day weekend!

    • Jul 20, 2012

      We are happy to announce that the Nimbus Infrastructure 2.10 release candidate is coming out today!

      The main addition in this release is support for copy-on-write based on the qcow2 format; when used in coordination with the image cache, this feature decreases the time needed to start virtual machines. It is now also possible to set the network associated with an instance type in the same way CPU and memory are set and to select a kernel using the EC2 interface.

      In addition, the release also includes bugfixes and additions to documentation. Check out the changelog for full details.

      Some of the effort for this release came from our open source community. We would like to particularly acknowledge the contributions of Michael Paterson, Hsin Shao, Brett Wu, and Feng Zheng. The features in this release were supported by the Ocean Observatory Initiative and FutureGrid projects.

      The Nimbus 2.10 release candidate is available for download at: https://www.nimbusproject.org/downloads/

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.10/

      We appreciate help from all who are willing to test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

    • Apr 16, 2012

      We are proud to announce the release of Nimbus cloud-client-021.

      Along with some minor improvements and bug fixes the major contributions of this release are:

      • Users can now associate metadata describing a virtual machine image alongside their image in the Cumulus repository.
      • Users can query the cloud for detailed information about the hardware on which their VM is running. Determining the specific location of VMs has enabled FutureGrid researchers to study scientific computation in infrastructure clouds. To use this feature, the infrastructure cloud administrator must first enable it.

      The new release can be found on the Nimbus downloads page.

    • Jan 27, 2012

      It's finally here -- the final release of Nimbus Infrastructure 2.9!

      The major additions in this release are support for availability zones, configurable EC2 multi-core instances, more robust support for LANTorrent, and new administration tools which allow administrators to easily control VMs running on their cloud. Administrators can also choose to give more information to the user, e.g., allow them to inspect on which physical machines their virtual machines are running.

      In addition, the release also includes bugfixes and additions to documentation. Check out the changelog for full details.

      As always, we want to express our gratitude to our open source community for their contributions to this release. We would like to particularly acknowledge the work of Rob Rusnak who contributed the administrative tools as part of the Google Summer of Code project and Shao, Hsin (Jeff) who helped with LANTorrent testing. The features in this release were supported by the GSoC, OOI, and FutureGrid projects.

      The Nimbus Infrastructure 2.9 release is available for download at: https://www.nimbusproject.org/downloads/

      Documentation is available at: https://www.nimbusproject.org/docs/2.9/

    • Jan 09, 2012

      Happy New Year: Nimbus Infrastructure 2.9 RC2 is ready to come out!

      The major additions in this release are support for availability zones, configurable EC2 multi-core instances, and new administrative tools which allow administrators to easily control VMs running on their cloud. Administrators can also choose to give more information to the user, e.g., allow them to inspect on which physical machines their virtual machines are running.

      In addition, the release also includes bugfixes and additions to documentation. Check out the changelog for full details.

      Much of the effort for this release came from our open source community. We would like to particularly acknowledge the contributions of Rob Rusnak who contributed the administrative tools as part of the Google Summer of Code project. The features in this release were supported by the GSoC, OOI, and FutureGrid projects.

      The Nimbus 2.9 RC2 is available for download at: https://www.nimbusproject.org/downloads/

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.9/

      We appreciate help from all who are willing to test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

    2011

    • Dec 05, 2011

      Just in time for Christmas: we have a new Nimbus Infrastructure 2.9 release candidate coming out.

      The major additions in this release are support for availability zones, configurable EC2 multi-core instances, and new administrative tools which allow administrators to easily control VMs running on their cloud. Administrators can also choose to give more information to the user, e.g., allow them to inspect on which physical machines their virtual machines are running.

      In addition, the release also includes bugfixes and additions to documentation. Check out the changelog for full details.

      Much of the effort for this release came from our open source community. We would like to particularly acknowledge the contributions of Rob Rusnak who contributed the administrative tools as part of the Google Summer of Code project. The features in this release were supported by the GSoC, OOI, and FutureGrid projects.

      The Nimbus 2.9 release candidate is available for download at: https://www.nimbusproject.org/downloads/

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.9/

      We appreciate help from all who are willing to test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

    • Oct 31, 2011
      • For all our fans in Europe, Kate Keahey will give a talk at the CloudViews conference on 11/04/11
      • Come and join us at the SC11 tutorial on cloud computing for science in Seattle, WA on 11/13/11
      • And if you are at SC, Kate will talk about Nimbus on 11/15/11 at 10:30 AM in the ANL booth
      • And finally, if you'd like to find out more about cloudinit.d and how it can be used to structure computer science experiments, we will have a talk at the Support for Experimental Computer Science Workshop on 11/18/11
    • Oct 10, 2011

      At Supercomputing 2011 the Nimbus team will be presenting the tutorial "Using and Building Infrastructure Clouds for Science".

      If you are attending SC11 and are interested in the cloud's role, potential, and challenges for the scientific community, please attend our tutorial. You will gain hands-on experience using the FutureGrid clouds and administering your own cloud, as well as learning about the surrounding tools and ecosystem.

      The tutorial will be held on Sunday November 13th at 8:30AM.

      For more information see the SC page.

    • Aug 22, 2011

      We are happy to announce the final release of Nimbus Infrastructure 2.8 as well as the first release of Nimbus Platform!

      Nimbus Infrastructure 2.8 contains new features that significantly improve VM deployment performance, administration flexibility, and compatibility with EC2. In addition, it also contains minor enhancements and bugfixes; see the changelog.

      While Nimbus Infrastructure contains tools that allow providers to build clouds, Nimbus Platform focuses on tools that leverage them: provision resources across different infrastructure cloud providers and contextualize them. To achieve this, Nimbus Platform tools are compatible with leading infrastructure cloud implementations. The current release contains two such tools:

      • Nimbus Platform Context Broker 2.8: a repackaging of the Nimbus Context Broker that facilitates its independent use.
      • Nimbus Platform cloudinit.d 1.0: a tool for coordinating, monitoring, and repairing complex launches over infrastructure cloud providers.

      The release is available for download at: https://www.nimbusproject.org/downloads/

      Documentation is available:

      In addition to the committers, we would like to acknowledge the contributions from Jamie Chen, David Foster, Kyle Fransham, Adam Smith, and Dan Yocum.

    • Jun 24, 2011

      We are happy to announce the release candidate of a new Nimbus release!

      The star of this release is unquestionably Nimbus Platform: a set of tools that make infrastructure clouds from many providers easier to use. While our focus has always been on providing capabilities for infrastructure outsourcing across the stack, in the last few years Nimbus was best known for its IaaS implementation (Nimbus Infrastructure). With this release we are emphasizing our focus on the “platform layer”: tools that make management of resources across different infrastructure cloud providers easier. In addition to the already familiar context broker, this release contains cloudinit.d – a tool that enables repeatable and coordinated deployment of multiple inter-dependent VMs over many infrastructure clouds. Future Nimbus Platform releases this year will contain tools that facilitate cloud use for automatic scaling and high availability. In addition, this release also contains significant improvements to the Nimbus Infrastructure (see below).

      To allow users to flexibly access the best tools for their needs, we also decided to make changes to our packaging and release structure: Nimbus Infrastructure as well as all individual Nimbus Platform tools will be packaged and released separately, inheriting the versioning from their last release. Consequently, this release contains the following downloads:

      • Nimbus Infrastructure 2.8, containing several features that improve VM deployment performance and flexibility as well as numerous bugfixes.
      • Nimbus Platform Context Broker 2.8
      • Nimbus Platform cloudinit.d 1.0

      In addition to the committers who all contributed to this release we would like to acknowledge the help from Jamie Chen, David Foster, Kyle Fransham, Adam Smith, and Dan Yocum.

      The release is available for download at: https://www.nimbusproject.org/downloads/

      Documentation is available at:

    • May 13, 2011

      A new release of the Nimbus Cloud Client is now available. This is primarily a bugfix release and is compatible with Nimbus clouds of version 2.2 or later. This release includes support for downloading public images from Cumulus, better handling of configuration errors, and numerous bugfixes.

    • Mar 28, 2011

      Nimbus is excited to be participating in Google Summer of Code (GSoC) again, under the Globus organization. GSoC is a program in which Google sponsors students to work on open source projects for the summer. We’ve had many excellent students and successful projects in past years and are looking forward to this summer.

      For more information, check out our ideas page. Please feel free to contact mentors or our mailing lists if you have any questions.

    • Feb 15, 2011

      Happy (belated) Valentine's Day! From Nimbus, with love -- the final release of 2.7!

      The most important new functionality in this release is support for preemptible instances: "spot instances" as offered by Amazon EC2, and "backfill" instances, a simplified version which we think may be more useful in a scientific setting. To learn more about this exciting new functionality, see our blog.

      In addition, we also extended our support for various EC2-compatible features, made improvements to several existing components of Nimbus, and included bugfixes. Simultaneously, we are also releasing a new cloud client #18 with bugfixes and small enhancements. Check out the changelog for full details.

      The effort for this release came almost entirely from our open source community. In particular, one of the main contributions (spot instances) was sponsored by Google Summer of Code. We would like to particularly acknowledge the contributions of Paulo Ricardo Motta Gomes, Paul Marshall, Patrick Armstrong, Pierre Riteau and Joe Bester.

      The Nimbus 2.7 release is available for download here.

      Documentation is available here.

    • Jan 06, 2011

      Happy New Year! We’ve got some belated fireworks to get it off to a good start—all contained in Nimbus 2.7 RC1 which we are hereby releasing.

      The most exciting new functionality is support for spot instances (similar to that offered by Amazon’s EC2) and “backfill” VMs. Both are preemptible instances which run on resources unoccupied by on-demand VMs; an on-demand request may terminate any of those instances in order to acquire resources to run. The main difference between “backfill” and spot instances is that “backfill” preemptible instances are configured and arbitrated by the cloud administrator whereas spot instances are arbitrated by users based on an auction. These features not only give the user an instance with new availability characteristics (“spot” as opposed to on-demand); these instances also allow the administrators to improve the utilization of their clouds.
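
      The preemption order described above can be sketched as a toy arbiter (this is an illustrative model with made-up names, not the actual Nimbus scheduler):

```python
# Toy model of preemptible-instance arbitration (illustrative only, not
# the actual Nimbus scheduler). An on-demand request evicts backfill VMs
# first, then the lowest-bidding spot VMs, until enough slots are free.

def place_on_demand(request_slots, capacity, on_demand_slots, backfill, spot):
    """Admit an on-demand request, returning the ids of evicted VMs.

    on_demand_slots: slots already used by on-demand VMs
    backfill:        list of backfill VM ids (preempted first)
    spot:            list of (vm_id, bid) tuples (lowest bids preempted first)
    """
    free = capacity - on_demand_slots - len(backfill) - len(spot)
    evicted = []
    # Backfill VMs carry no user bid and are configured by the
    # administrator, so they are reclaimed first.
    while free < request_slots and backfill:
        evicted.append(backfill.pop())
        free += 1
    # Spot VMs are arbitrated by user bids: highest bids survive longest.
    spot.sort(key=lambda vm: vm[1], reverse=True)
    while free < request_slots and spot:
        evicted.append(spot.pop()[0])
        free += 1
    if free < request_slots:
        raise RuntimeError("insufficient capacity even after preemption")
    return evicted
```

      An incoming on-demand request thus first reclaims administrator-configured backfill slots, and only then evicts user-bid spot instances, lowest bid first.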

      In addition, we also extended our support for various EC2 features, made improvements to several existing components of Nimbus, and included bugfixes. Check out the changelog for full details.

      The effort for this release came almost entirely from our open source community. In particular, one of the main contributions (spot instances) was sponsored by Google Summer of Code. We would like to particularly acknowledge the contributions of Paulo Gomes, Paul Marshall, Patrick Armstrong and Pierre Riteau.

      The Nimbus 2.7 release candidate is available for download at: https://www.nimbusproject.org/downloads/

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.7/

      We appreciate help from all who are willing to test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

    2010

    • Dec 16, 2010

      An updated release of the Nimbus Context Agent is now available. The context agent is a small package that is bundled inside virtual machine images and facilitates the contextualization process.

      The new release is primarily bugfix-oriented. We improved overall robustness and added retry logic for some requests. The code was also refactored to make development easier.

      Grab the new release from the Nimbus download page.

    • Nov 10, 2010

      We will be at SC10 in New Orleans this month. If you're attending, swing by and talk to us at the Argonne booth.

      Events:

      • Tuesday, 11/16 @ 2:30 PM: Overview of Nimbus talk at SC 2010 in the Argonne Booth (Booth 2513)
      • Tuesday, 11/16 @ 5:15 PM: John Bresnahan will present a poster describing the Cumulus architecture and performance evaluation at the poster reception in the main lobby

      [Updated to correct the date of the poster reception.]

    • Nov 08, 2010

      We are happy to announce the Nimbus 2.6 release!

      This release introduces three new features. The first is fast propagation with LANTorrent, a multicast file distribution tool that dramatically reduces the time it takes to distribute VM images to nodes. The second is dynamic node management, which allows administrators to add or remove resources from a Nimbus cloud on the fly -- without the need to take the service down. This feature is accompanied by several new upgrade tools that make managing Nimbus clouds easier than ever. Finally, the Nimbus Context Broker got an overhaul -- it received a new HTTP/REST-based interface.
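
      A back-of-the-envelope model shows why pipelined propagation wins over naive unicast (the numbers below are illustrative assumptions, not LANTorrent measurements):

```python
# Rough comparison of one-by-one unicast versus a pipelined transfer
# chain -- the idea behind LANTorrent-style propagation. All numbers
# here are illustrative assumptions, not measurements.

def unicast_time(nodes, image_gb, link_gbps):
    """Send the full image to each node in turn from a single source."""
    return nodes * (image_gb * 8) / link_gbps            # seconds

def pipeline_time(nodes, image_gb, link_gbps, chunk_mb=64):
    """Stream the image through a chain of nodes: each node forwards
    chunks as soon as it receives them, so all links work in parallel."""
    transfer = (image_gb * 8) / link_gbps                # one full pass
    fill = (nodes - 1) * (chunk_mb * 8 / 1000) / link_gbps
    return transfer + fill                               # seconds

# For 30 nodes, a 10 GB image, and 1 Gbps links, unicast costs 30 full
# transfers while the pipeline costs barely more than one.
```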

      In addition, this release also contains numerous helper programs, small enhancements, and bugfixes. The details can be found in the changelog.

      As always, we are indebted to our open source community for their contributions, feedback, and testing. We thank all who volunteered their effort to develop, test, and patch. In particular, we'd like to thank Patrick Armstrong and Pierre Riteau for making this release possible!

      The Nimbus 2.6 release is available for download at:
      https://www.nimbusproject.org/downloads/

      Documentation is available at:
      https://www.nimbusproject.org/docs/2.6/

      This release was supported by the NSF SDCI "Missing Links" project, by the NSF FutureGrid project, and partially the NSF OOI project.

    • Oct 25, 2010
      • In the Cloud versus Cloud keynote at the 2nd International ICST Conference on Cloud Computing (CloudComp 2010) in Barcelona on 10/28, Kate will discuss the newest features of Nimbus as well as the rationale for our upcoming releases and future research development.
      • The Cloud Computing for Science talk on 11/03 at the Nanoinformatics workshop in Arlington, VA will provide an overview of Nimbus.
      • Come and see us for another overview of Nimbus talk at SC 2010 in the Argonne Booth on 11/16 at 2:30 PM.
      • At the SC 2010 poster reception John Bresnahan will also present a poster with a performance evaluation of Cumulus: the Nimbus storage cloud.
      • Kate will give an INRIA seminar at Rennes, France on Monday November 29th.
      • Join us at CloudCom 2010 in Indianapolis, IN: we will present a Nimbus tutorial on 12/03 and a poster discussing features to be released in 2011 during the conference poster reception.
    • Oct 15, 2010

      We are happy to announce the first release candidate of Nimbus 2.6!

      This release introduces three major features:

      • Fast propagation with LANTorrent, a multicast file distribution protocol designed to saturate all the links in a switch.
      • Dynamic node management: use the new 'nimbus-nodes' program to add and remove VMM resources on the fly.
      • The Context Broker has alternate HTTP/REST protocol support to aid in wider integrations.

      In addition, this release also contains numerous helper programs, small enhancements, and bugfixes. The details can be found in the changelog.

      This release would not have been the same without active involvement of the Nimbus open source community!

      The Nimbus 2.6 release candidate is available for download at:

      https://www.nimbusproject.org/downloads/

      Documentation is available at:

      https://www.nimbusproject.org/docs/2.6/

    • Sep 24, 2010

      Kate Keahey will give a plenary talk at the CHEP 2010 conference on October 20. Along with general discussion of cloud computing and its applications to science, she will talk about some exciting new features coming down the pipe in Nimbus.

      Title: Cloud versus Cloud: the Blessings and Challenges of Cloud Computing for Science
      When: October 20, 2010, 9:00 - 10:30am
      Where: CHEP 2010, Taipei, Taiwan

    • Jul 30, 2010

      We are happy to announce the Nimbus 2.5 release!

      This release introduces two major new features. The first one is the Cumulus storage cloud implementation that has been integrated with the Workspace Service but can also be used standalone. Cumulus is compatible with the Amazon Web Services S3 REST API and has a pluggable backend that allows it to support multiple storage systems used by the scientific community. It also includes support for quota management. Cumulus replaces the current GridFTP-based upload and download of VM images. The second new feature is the Zero -> Cloud installation process, which significantly simplifies Nimbus installation and also includes user management tools.
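
      Because Cumulus speaks the S3 REST dialect, existing S3 clients can talk to it. As a sketch, this is how the classic S3 REST Authorization header is computed (per the AWS S3 REST API; the keys and object names below are made up):

```python
# Building the classic S3 REST Authorization header that an
# S3-compatible service such as Cumulus accepts. Keys, bucket, and
# object names are invented for illustration.
import base64
import hashlib
import hmac

def s3_auth_header(access_key, secret_key, verb, resource, date,
                   content_md5="", content_type=""):
    # StringToSign as defined by the S3 REST API (no x-amz- headers here).
    string_to_sign = "\n".join([verb, content_md5, content_type, date,
                                resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return "AWS %s:%s" % (access_key, signature)

# e.g. the Authorization header for
#   GET /my-bucket/my-image.qcow2  against a Cumulus endpoint:
header = s3_auth_header("ACCESSKEY", "SECRETKEY", "GET",
                        "/my-bucket/my-image.qcow2",
                        "Tue, 27 Mar 2007 19:36:42 +0000")
```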

      In addition, this release also contains new scheduling and network configuration options, new propagation methods, new workspace pilot options, as well as multiple smaller features and bugfixes -- too many to cover them all in an announcement! The full list is available in the changelog.

      The community testing and feedback has been an invaluable help. This has been the most active and productive release cycle that Nimbus has seen and resulted in a product that all of us can be proud of. We would like to thank all who volunteered their effort to help with testing and submitted patches to this release. In particular, we'd like to thank Patrick Armstrong, Colin Leavett-Brown, Paul Marshall, Paulo Motta, Pierre Riteau, Marien Ritzenthaler, and Matt Vliet.

      The Nimbus 2.5 release is available for download at:
      https://www.nimbusproject.org/downloads/

      Documentation is available at:
      https://www.nimbusproject.org/docs/2.5/

    • Jul 16, 2010

      We are happy to announce RC2 of Nimbus 2.5!

      This release "rounds out" the new features introduced in RC1, addresses usability concerns, improves and adds documentation, and provides several new developer features. In addition, the release also of course provides bug fixes relative to RC1.

      The full changelog information is available at:
      https://www.nimbusproject.org/docs/2.5/changelog.html

      Nimbus 2.5 RC2 is available for download at:
      https://www.nimbusproject.org/downloads/

      The community testing and feedback has been an invaluable help. This has been the most active and productive release candidate cycle that Nimbus has seen, and we think it will show in the final release. We would like to thank all who volunteered their effort to help with testing and submitted patches to this release. Specifically, we would like to acknowledge the help of Pierre Riteau, Patrick Armstrong, Paul Marshall, Paulo Motta, Marien Ritzenthaler, Colin Leavett-Brown, and Matt Vliet.

    • Jul 06, 2010

      Happy (belated) Independence Day—we have just won independence from a kludgy storage solution and a tyrannical installation system and are happy to announce RC1 of Nimbus 2.5!

      This release introduces two major features:

      1) Cumulus, a storage cloud implementation that has been integrated with the Workspace Service but can also be used standalone. Cumulus is compatible with the Amazon Web Services S3 REST API, but extends it to include quota management.

      2) Zero -> Cloud installation process, which significantly simplifies Nimbus installation and includes user management tools.

      In addition, this release also contains new scheduling and network configuration options, new propagation methods, new workspace pilot options, as well as multiple smaller features and bugfixes—too much by far to brag about in this mail; the full list is available in the changelog at: https://www.nimbusproject.org/docs/2.5/changelog.html#2.5

      This release would not have been the same without active involvement of the Nimbus open source community. The changelog contains acknowledgements of many members who made substantial contributions: in particular, we’d like to thank Patrick Armstrong, Paulo Motta, Pierre Riteau, and Matt Vliet. They not only contributed new ideas, suggestions, and features but also helped us improve code quality—priceless!

      The Nimbus 2.5 release candidate is available for download at:
      https://www.nimbusproject.org/downloads/

      Documentation (still in progress) is available at: https://www.nimbusproject.org/docs/2.5/

    • Jun 21, 2010

      Nimbus is to be featured in a demo at this week's OGF29 in Chicago. The demo involves six Nimbus cloud installations spread across FutureGrid and Grid'5000. Many VMs, totaling up to 1000 cores, will be started across the clouds and used to run a single bioinformatics application (BLAST). The demo will also showcase some experimental features in Nimbus for fast propagation of VM images.

      From the OGF29 site:

      Sky Computing on FutureGrid and Grid'5000

      "Sky computing" is an emerging computing model where resources from multiple cloud providers are leveraged to create large scale distributed infrastructures. This demonstration will show how sky computing resources can be used as a platform for the execution of a bioinformatics application (BLAST). The application will be dynamically scaled out with new resources as need arises. This demonstration will also show how resources across two experimental projects: the FutureGrid experimental testbed in the United States and the Grid'5000, an infrastructure for large scale parallel and distributed computing research in France, can be combined and used to support large scale, distributed experiments. The demo will showcase not only the capabilities of the experimental platforms, but also their emerging collaboration. Finally, the demo will showcase several open source technologies. Specifically, our demo will use Nimbus for cloud management, offering virtual machine provisioning and contextualization services, ViNe to enable all-to-all communication among multiple clouds, and Hadoop for parallel fault-tolerant execution of BLAST. (POC: Kate Keahey, Mauricio Tsugawa, ANL; Pierre Riteau, IRISA)

    • May 08, 2010

      Please welcome Matt Vliet and Paulo Gomes to the Nimbus community: they were accepted to Google Summer of Code 2010 to work on Nimbus-related projects!

      Matt will be working with Ian Gable on HDFS for robust VM propagation, and Paulo will be working with Tim Freeman on spot instances to maximize cloud utilization.

      Thanks Google for your generous support of open source software!

    • May 05, 2010

      Happy Cinco De Mayo—we too feel like we’ve just won a victory against the odds—and are happy to announce the final Nimbus 2.4 release!

      The major feature of this release is a new installer which makes the installation process significantly easier and faster, eliminates the need for a separate Globus container installation, and sets up an embedded certificate authority. Another significant contribution is a set of refinements to the Nimbus cloud monitoring service, including a new feature that aggregates monitoring information from various Nimbus clouds. In addition, the release contains numerous feature enhancements and bug fixes. Check out the full changelog.

      The Nimbus 2.4 release is available for download at: https://www.nimbusproject.org/downloads/

      Documentation is available at: https://www.nimbusproject.org/docs/2.4/

      Many thanks to folks who contributed their time, comments, and patches during the release candidate process!  We would like to particularly acknowledge Patrick Armstrong, Ian Gable, Paulo Ricardo Motta Gomes, Colin Leavett-Brown, Mike Lowe, Paul Marshall, Pierre Riteau, and Mauricio Tsugawa.

    • Apr 30, 2010

      We are pleased to announce the second release candidate of Nimbus 2.4 (RC2). In response to excellent community feedback, we’ve identified and fixed several problems with RC1. We’ve also significantly improved the documentation and installation experience.

      For a detailed list of changes between RC1 and RC2, consult the changelog.

      Download the new RC2: https://www.nimbusproject.org/downloads/. Documentation is available here.

      This has been one of the most active and helpful release candidate periods we have ever had.  Many thanks to everyone that has contributed their time, comments, and patches!  We would like to especially thank Patrick Armstrong, Ian Gable, Paulo Ricardo Motta Gomes, Colin Leavett-Brown, Mike Lowe, Paul Marshall, Pierre Riteau, and Mauricio Tsugawa.

    • Apr 15, 2010

      We are happy to announce release candidate 1 (RC1) of Nimbus 2.4. The major feature of this release is a new installer which makes the installation process significantly easier and faster, eliminates the need for a separate Globus container installation, and sets up an embedded certificate authority. In addition, the release contains enhancements to the Nimbus cloud monitoring service including a new feature that aggregates monitoring information from various Nimbus clouds.

      This RC1 also contains numerous smaller improvements, and bug fixes. Check the changelog for details.

      The RC1 is available for download at: https://www.nimbusproject.org/downloads/

      Documentation for the new release is available here.

      We appreciate help from all who volunteered to alpha test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

    • Mar 19, 2010

      Globus has been again selected as a mentoring organization for Google Summer of Code. GSoC is an excellent program that sponsors students to work on various open source projects.

      Nimbus has eight GSoC project ideas this year. If you are a student and are interested in working with us over the summer, please take a look at our ideas page. If you have any questions, please contact us. Applications are due to GSoC by April 9th.

    • Feb 02, 2010

      We are happy to announce the final Nimbus 2.3 release!

      This release contains support for the EC2 Query API as well as support for KVM via a new, refactored workspace-control based on libvirt. This is also the first release of the refactored design of the Nimbus context broker. Another major addition is an administrative web interface that supports securely distributing user credentials. In addition, this release contains improvements to the cloud client, numerous small features, and bug fixes. Check the full changelog for more information.
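
      As a sketch of what EC2 Query API support means on the wire: a client signs a sorted, percent-encoded query string with its secret key (AWS signature version 2), and the service verifies the result. The endpoint and keys below are invented for illustration:

```python
# Signing an EC2 Query API request (AWS signature version 2), the
# scheme a Query-API-compatible service verifies. Host, path, and keys
# are invented for illustration.
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_query(secret_key, host, path, params):
    # Canonical query string: parameters sorted by name, RFC 3986 encoded.
    canonical = "&".join(
        "%s=%s" % (quote(k, safe="-_.~"), quote(v, safe="-_.~"))
        for k, v in sorted(params.items()))
    string_to_sign = "GET\n%s\n%s\n%s" % (host, path, canonical)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

params = {"Action": "DescribeInstances", "Version": "2009-08-15",
          "AWSAccessKeyId": "ACCESSKEY", "SignatureMethod": "HmacSHA256",
          "SignatureVersion": "2", "Timestamp": "2010-02-02T00:00:00Z"}
params["Signature"] = sign_query("SECRETKEY", "nimbus.example.org:8444",
                                 "/", params)
```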

      The Nimbus 2.3 release is available for download at: https://www.nimbusproject.org/downloads/

      We would like to acknowledge the contributions of all who volunteered their effort to help with testing and submitted patches to this release. Our sincere thanks go to Pierre Riteau, Alex Clemesha, Kevin Wilson, Adam Bishop, Kyle Fransham, and Patrick Armstrong.

    2009

    • Dec 31, 2009

      Happy New Year from the Nimbus Team!

      After a year devoted primarily to working with users and experimentation, we are back to packaging our work into releases. We are happy to announce release candidate 1 (RC1) of the Nimbus 2.3 release. For the Nimbus workspace service, this RC1 contains support for the EC2 Query API as well as support for KVM via a new, refactored workspace-control based on libvirt. This is also the first release of the refactored design of the Nimbus context broker. Another major addition is an administrative web interface that supports securely distributing user credentials.

      In addition, this RC1 contains improvements to the cloud client, numerous small features, and bug fixes. The full changelog information is available here.

      The RC1 is available for download here.

      We appreciate help from all who volunteered to alpha test this release. To help provide an easy vehicle for feedback and resolve issues quickly we offer real-time access to a Nimbus RC chatroom for serious alpha testers. If you would like to participate, please contact us for access.

    • Dec 18, 2009

      Exciting things are happening in the Nimbus world! The development team is growing and so is our user base. We have several developments to report from the past few months:

      • The Nimbus codebase has been moved to GitHub which we are very happy about. Collaboration is easier than ever and it is simple to track development progress. Check it out.
      • We’ve just launched a new website at a new address: https://www.nimbusproject.org
      • We have also recently moved our Science Clouds pages into a separate site accompanied by a blog. Check it out at https://www.scienceclouds.org
      • Heavy software development has been underway and we are preparing a Nimbus 2.3 release candidate which is expected to be available within a couple of weeks. Highlight features include initial EC2 Query API support, an administrative web application, and integration with libvirt.
      • We are committing to a more regular release schedule and have a lot of great features forthcoming in the next few months.
    • Nov 13, 2009

      On Monday, November 16th, Kate Keahey will talk about Nimbus in the afternoon at the SC09 Cloud Computing for Systems and Computational Biology workshop. See the workshop page for details.

      On Tuesday, November 17th, Kate Keahey will give a talk at the AIST Booth on the SC show floor at 9 AM. The talk will be followed by a discussion.

      There will be an ongoing display of a Nimbus poster in the ANL booth.

    • Oct 12, 2009
      Kate will be giving a talk "Infrastructure-as-a-Service - Cloud Computing for Science" tomorrow at the Banff Centre, 9am PT. Details can be found here. Update: download the talk here.
    • Sep 07, 2009
      Kate will be giving an invited lecture at the XtreemOS Summer School 2009 at Wadham College, Oxford, UK on September 7th. Details can be found here.
    • Mar 23, 2009
      Nimbus at CHEP 09:

      Jerome Lauret from BNL will talk about Nimbus in his plenary on Wednesday March 25th and Artem Harutyunian will present a poster on how he integrated CernVM VMs running on the Nimbus cloud at UC into the ALICE testbed.

      See here for schedule details.

    • Mar 02, 2009

      Update: find the slides from the talk here.

      Kate will be talking about Nimbus at the Virtualization Workshop co-located with OSG All Hands Meeting in Baton Rouge, LA.

      At the same workshop, you will hear about how STAR scientists are using Nimbus in the "STAR & Virtualization" talk and Alex Younts will talk about his experiences running the Wispy Cloud at Purdue on TeraGrid resources.

    • Feb 12, 2009

      Andrea Matsunaga and Mauricio Tsugawa have contributed a new image to the marketplace that lets you form a Hadoop/MPI cluster on the fly using Nimbus contextualization technology. The cluster is set up to run NCBI BLAST or mpiBLAST, see the marketplace description for all the details.

      On the Teraport cloud this has been linked into your personal directory; give it a go! You need to download the cluster XML file from the marketplace. If you've never launched a self-configuring virtual cluster before, the best way to learn is from this walkthrough.

    • Jan 29, 2009

      Update: find the slides from the talk here.

      Computing Techniques Seminar

      Thursday, January 29, 2pm. Feynman Computing Center, Fermilab.

      Abstract:

      Infrastructure-as-a-Service (IaaS) cloud computing is emerging as a viable alternative to the acquisition and management of physical resources. But what exactly is cloud computing and to what extent can it be used to meet the needs of scientific applications?

      In this talk, I will give an overview of cloud computing and describe Nimbus -- a toolkit that provides an open source, EC2-compatible IaaS implementation as well as tools that enable, for example, the creation of tightly-coupled clusters such as are often used in science.

      I will describe how applications drove the development of various Nimbus capabilities and how they use these capabilities today on Amazon EC2 and the Science Clouds. Finally, I will discuss the emerging trends in cloud computing and discuss how they can benefit science.

    • Jan 21, 2009

      The call for papers for VTDC 2009 is out. This is the 3rd workshop on Virtualization Technologies in Distributed Computing, taking place June 15th in Barcelona, Spain.

      See the homepage for CFP and other workshop details.

    • Jan 09, 2009

      The main new features provided in this release are an EC2 metadata server (usable with both the EC2 and WSRF remote interfaces) and a standalone context broker that allows you to contextualize virtual clusters on both EC2 and Nimbus, and even virtual clusters spanning across both.

      You can download the new release here

      The full changelog can be found here
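An EC2-style metadata server means code written against Amazon's conventions can run unchanged inside a Nimbus VM. A minimal sketch, assuming the standard EC2 convention of serving metadata over HTTP at the well-known link-local address (which specific keys Nimbus serves is an assumption here):

```python
# Illustrative only: the address and paths follow the EC2 metadata
# convention that an EC2-compatible metadata server mirrors.
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def metadata_url(path):
    """Build the URL for a metadata key, e.g. 'public-ipv4'."""
    return f"{METADATA_BASE}/{path.lstrip('/')}"

def fetch_metadata(path, timeout=2):
    """Fetch a metadata value from inside a running VM."""
    with urllib.request.urlopen(metadata_url(path), timeout=timeout) as resp:
        return resp.read().decode().strip()
```

From inside a VM, `fetch_metadata("public-ipv4")` would return the instance's advertised address, just as it would on EC2.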

2008

  • Nov 16-19, 2008

    Ioan Raicu will discuss Nimbus in his talk Cloud Computing and Grid Computing 360-Degree Compared on November 16th, from 1:00PM to 1:30PM at GCE08 in room 11AB.

    Kate Keahey will discuss Nimbus in her talk The Nimbus CloudKit: the best open source EC2 no money can buy on November 19th, from 4:30PM to 5:00PM at the Argonne booth.

    See you there!

    Update: download the talk here.

  • Oct 31, 2008

    Happy Halloween, a scary new website is online. If you want the old pages back, send us some candy.

  • Oct 21, 2008

    Michael Paterson and Ian Gable (University of Victoria / HEPnet Canada / ATLAS) have contributed a Nagios monitoring component for Nimbus as well as an aggregator for use with MDS.

    Download and installation instructions for the Nagios plugins can be found here, and for the MDS Aggregator source here.

    They are looking for feedback!

  • Jul 30, 2008

    In collaboration with Artem Harutyunyan and Predrag Buncic, AliEn-based images are now launching on Nimbus in support of the ALICE experiment, carrying out simulation, reconstruction, and distributed analysis of physics data. After one VM makes the site services available, AliEn Job Agents can launch and retrieve jobs from the main task queue to execute.

    See this screenshot of the incorporation! Nimbus is the 'Cloud' site in Chicago. See this screenshot for a bird's eye view of the whole operation.
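The Job Agents follow a pull model: rather than having work pushed to them, agents on freshly booted VMs repeatedly ask a central task queue for jobs. A toy illustration of that loop (the names here are illustrative, not AliEn's actual API):

```python
# Toy pull-model worker: drain a shared task queue, executing each job
# as it is claimed, and stop when no work remains.
from queue import Empty, Queue

def job_agent(task_queue, results):
    """Pull jobs until the queue is empty, recording each result."""
    while True:
        try:
            job = task_queue.get_nowait()
        except Empty:
            return
        results.append(job())

# Example: three queued "jobs" executed by a single agent.
tasks = Queue()
for n in (1, 2, 3):
    tasks.put(lambda n=n: n * n)
done = []
job_agent(tasks, done)
```

Because each agent claims work only when it has capacity, new VMs can join or leave the pool without any central reconfiguration, which is what makes the model a good fit for cloud-provisioned resources.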

  • Jul 07, 2008

    The main new feature provided in this release is the ability to deploy "one-click" virtual clusters -- a much-awaited release of the contextualization functions allowing users to create self-configurable virtual clusters on the fly. The new feature comes with improvements to the ensemble service and image compression facilities that extend the range of deployment scenarios in which contextualization can be used.

    See the changelog for all the details.

  • Jul 07, 2008

    See the cloud pages to download the new cloud client #009. A lot of enhancements have been added including support for "one-click" clusters. You need to upgrade to this release in order to continue using the clouds. See its CHANGES.txt file for a list of enhancements.

  • May 30, 2008

    See the Nimbus pages to download the new cloud client #008. Notable changes include a new "--download" option that lets you easily grab a template or result image from your personal directory, and a new "--delete" option that lets you delete images in your personal directory.

  • May 23, 2008

    Workspace Service TP1.3.2 has been released, the "cloudkit" release. Support for the new cloud configuration and many smaller enhancements/bug fixes. For a detailed changelog, see the TP1.3.2 pages.

  • May 14, 2008

    See the Nimbus pages to download the new cloud client #007. Notable changes include a new "--download" option that lets you easily grab a template or result image from your personal directory, and a new "--delete" option that lets you delete images in your personal directory.

  • May 12-16, 2008

    There will be a Virtual Workspaces tutorial at the Open Source Grid Cluster conference in Oakland, CA. The conference is May 12-16, 2008. The Virtualization and Cloud Computing with Globus session is on Wednesday, May 14th, from 4:30-6:00 pm. We hope to see you there!

    Quoting from the summary:

    One of the primary obstacles users face in grid computing is that while Grids provide access to many diverse resources, applications often require a very specific, customized environment. This disconnect can lead to resource underutilization, user frustration, and much wasted effort spent bridging the gap between applications and resources. Virtual Workspaces describe the environment required for the execution of an application and can be dynamically deployed across a variety of resources, creating a working and consistent platform for grid applications.

    This tutorial will introduce the Globus Toolkit workspace service that implements workspaces as Xen virtual machines and enables authorized grid clients to dynamically deploy them and manage their resources. Further, we will describe and demonstrate the workspace "cloudkit" that provides a user-friendly interface on top of the workspace service allowing authorized users to easily provision and run VMs on the available community clouds. Finally, we will describe how the process of contextualization can be used to provide on-demand functioning clusters and give examples of its use by applications.
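    Contextualization, in essence, lets cluster members discover each other at deployment time: each VM registers with a broker, and once all expected members have checked in, each can retrieve the full cluster layout and configure itself accordingly. A toy model of the idea (class and method names are illustrative, not the actual workspace context broker protocol):

```python
# Toy context broker: VMs register a role and address; the cluster
# layout becomes available only once everyone has checked in.
class ContextBroker:
    def __init__(self, expected_count):
        self.expected_count = expected_count
        self.nodes = {}  # role -> list of addresses

    def register(self, role, address):
        self.nodes.setdefault(role, []).append(address)

    def layout(self):
        """Return the cluster layout once complete, else None."""
        total = sum(len(addrs) for addrs in self.nodes.values())
        return dict(self.nodes) if total == self.expected_count else None

broker = ContextBroker(expected_count=3)
broker.register("headnode", "10.0.0.1")
broker.register("worker", "10.0.0.2")
assert broker.layout() is None  # cluster not yet complete
broker.register("worker", "10.0.0.3")
```

    With the layout in hand, a worker node can, for example, point its batch client at the headnode's address, which is how a functioning cluster emerges from identical generic images.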

  • Apr 15, 2008

    See the Nimbus pages to download the new cloud client #006. One of the notable changes is the new "--save" option that allows you to persist workspace changes back to your personal directory after running. Previous cloud client versions are now wire-incompatible with Nimbus.

  • Feb 14, 2008

    The virtual-machine-based Workspace Service TP1.3.1 has been released, adding non-invasive site scheduler integration, support for coscheduled, heterogeneous virtual clusters, and several small enhancements and bug fixes. For a detailed changelog, see the TP1.3.1 pages.

2007

  • Nov 1, 2007

    The virtual-machine-based Workspace Service TP1.3 has been released, adding group support, client usage accounting, enhancements to make configuration easier, and several bug fixes. For a detailed changelog, see the TP1.3 pages.

  • Sep 12, 2007

    The STAR community successfully completed its first production-size deployment of a VM-based virtual cluster managed by the workspace service and backed by EC2 resources.

    The 100-node cluster was composed of a headnode and worker nodes based on the OSG 0.6.0 grid middleware stack and Torque. Its deployment-time configuration was securely coordinated by the new workspace contextualization technology.

  • Jun 10, 2007

    Our short paper on enabling cost-effective resource leases was accepted to the Hot Topics session in the HPDC 2007 conference and is now online: Enabling Cost-Effective Resource Leases with Virtual Machines

    This paper discusses how virtualization can facilitate short-term leasing of resources while allowing resource providers to continue supporting existing batch workloads and their current job execution software stack. It also presents preliminary results and outlines our group's future work on an architecture to support cost-effective resource leasing.

  • Apr 20, 2007

    The virtual-machine-based Workspace Service TP1.2.3 has been released, adding multiple partition management, blankspace creation, an HTTP transfer adapter, and improved scheduling criteria. For a detailed changelog, see the TP1.2.3 pages.

  • Jan 4, 2007

    The virtual-machine-based Workspace Service TP1.2.2 has been released, adding support for DHCP-based networking configuration, unit tests, and changes to the logistics metadata. For a detailed changelog, see the TP1.2.2 pages.

2006