Developer reference
This section is for miscellaneous information of use to Nimbus developers.
Layout of the source directories
The code lives under "workspace/vm" in CVS (see here for details); all directory names will be referenced with the assumption that this is being prepended. So if "backend/workspace" is referenced, in CVS that will be "workspace/vm/backend/workspace".
The source code has been through many changes, but in some cases the decision was made to preserve the CVS history rather than move a directory to a more organized place. This is not a set-in-stone decision; whatever works best.
While reading the following notes, it will help to have this picture in mind.
The core of Nimbus is the RM API which lives in the "service-api/java/source" directory.
If you navigate there you will find many things that are common to Nimbus source directories.
There is a construct borrowed from other Globus services: a component's hierarchy is first split by the language it is implemented in, so the first directory here is "java". Then there is a directory for source code and a directory for tests.
Under "source" are the "build.xml" and "build.properties" files which are specific to the component. Running "ant dist" in any Nimbus directory with a "build.xml" file is usually all you need to do in order to build that component.
A much better option, though, is to use the scripts in the "scripts" directory for building. The build file that those scripts call is "scripts/lib/gt4.0/build/build.xml".
That file will build components with the proper dependencies. On many projects, source trees have their build.xml files call on other things to compile, etc., but this source tree's build.xml files -- if used directly -- will only look for dependencies in other directories. If those dependencies are not built, then the build fails. This lets the developer do what he or she wants to do with maximum control when mucking with build files directly. The scripts in the "bin" directory are what both users and developers will use on a regular basis.
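For example, the two approaches look like this (a sketch using the commands named above, run from the top of the tree):

    # Build one component directly (run from its "source" directory);
    # this fails if the component's dependencies have not been built:
    (cd service-api/java/source && ant dist)

    # Or let the dependency-aware build handle the ordering
    # (typical day-to-day usage):
    bin/all-build.sh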
The RM API has no dependencies outside of this tree. It contains its own "dummy" implementations of the APIs which do nothing. If you instantiate the API (it bootstraps a Spring inversion of control container), it will use the embedded configuration as its default. That configuration does nothing except print to the logger, which can be useful in some situations, for example when developing a new protocol implementation.
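For illustration, instantiation is an ordinary Spring bootstrap. In this sketch the configuration file name and bean name are placeholders, not the real RM API identifiers:

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class RmApiBootstrapSketch {
        public static void main(String[] args) {
            // "main.xml" and "nimbus-rm" are placeholder names for this sketch.
            ApplicationContext ctx = new ClassPathXmlApplicationContext("main.xml");
            Object rm = ctx.getBean("nimbus-rm");
            // With the embedded dummy configuration, operations on the returned
            // object only write to the logger and return.
            System.out.println("RM API bootstrapped: " + rm);
        }
    }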
Moving along, "service/service/java/source" contains the Workspace Service site manager implementation.
That implements the RM API. Together, these two trees could be used in any number of Java containers. There is no other dependency.
In the "service/service/java/source/etc/workspace-service/other/main.xml" file you will find the Spring configurations that are used to instantiate the workspace service. There are a number of internal plug points etc., this is how that is all arranged. The ".conf" files are never examined directly by the service, they are sucked into the Spring configuration using the magic of PropertyPlaceholderConfigurer. Open the "main.conflocator.xml" file next to "main.xml" to see how that works.
The service may call on some components that are not contained in the same tree but instead live in their own top level directories:
- "control" contains the standalone workspace control program that is installed to the VMM nodes.
- "pilot" contains the standalone workspace pilot which is submitted to a local batch scheduler (in some deployments) in order to reserve resources (VMMs). (advanced)
- "plugins" contains plugin implementations for the workspace service. These are broken out from the service tree mainly because they require dependencies (such as Jython) that are bulky and/or licensed in ways that may not be acceptable to people who need pure BSD/Apache style licensing. (advanced)
The next thing to understand is the "messaging" directory.
While not necessary, the RM API (and the main workspace service implementation of it) is intended to be used with a remote messaging implementation: some kind of protocol implemented with WS/REST/binary conventions and hosted by some kind of container technology (in all likelihood).
The "messaging" directory contains two protocol implementations, each of them run in the Axis based Globus 4.0.x Java container. As you can see from the subdirectories, the container glue and protocol implementations are intertwined. When making a new permutation of protocol and container, you typically need to worry about both at the same time to make it work.
The "messaging/gt4.0/java/stubs" directory contains the auto-generated Axis stub classes that are used to marshall/demarshall the WSRF protocol based Nimbus messages.
The "messaging/gt4.0/java/msgbridge" directory contains everything necessary to actually take messages off the wire, initially consume them, and translate them into RM API calls.
The "messaging/gt4.0-elastic" directory contains similar things for the EC2 WS protocol hosted in the Globus 4.0.x Java container.
The "messaging/gt4.0/java/gar-builder" directory contains the scripts that produce the "final packaging." That is, this is the master directory for building a "GAR" file which is what is actually deployed into the Globus 4.0.x Java container. Both the WSRF and "elastic" interfaces are stuffed into this GAR creation process, as well as any dependency (i.e., the RM API and any of its dependencies ... and the workspace service and any of its dependencies).
So the final product for the service (the GAR files) that is deployed is something of an onion. The layers around the outside are very specific to the deployment and wire technology but as you peel things away you become more and more generic.
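Since a GAR is an ordinary zip/jar archive, you can peel the onion directly, and deployment uses the stock GT4 tooling (the file name below is illustrative):

    # Inspect the layers (file name illustrative):
    jar tf nimbus-service.gar

    # Deploy into (or remove from) the Globus 4.0.x Java container:
    globus-deploy-gar nimbus-service.gar
    globus-undeploy-gar nimbus-service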
A related thing to understand is that the initialization sequence works in the same way. The container boots, the Axis service is initialized (because the "loadOnStartup" configuration is true), the service is tied in with the container's JNDI system, a JNDI configuration points to the Spring configuration, and then the Spring configuration takes over. 99% of the configuration is done via Spring, but something needs to kick everything off, and the container-to-JNDI handoff is how that happens.
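Abridged, the relevant deployment descriptor pieces look something like this (the service, resource, and parameter names are illustrative except for "loadOnStartup"):

    <!-- server-config.wsdd (abridged; names illustrative): -->
    <service name="WorkspaceService" provider="Handler" style="document">
        <parameter name="loadOnStartup" value="true"/>
    </service>

    <!-- jndi-config.xml (abridged): ties the service to a resource whose
         parameters point at the Spring configuration: -->
    <service name="WorkspaceService">
        <resource name="home" type="org.example.WorkspaceHome">
            <resourceParams>
                <parameter>
                    <name>springConfig</name>
                    <value>etc/workspace-service/other/main.xml</value>
                </parameter>
            </resourceParams>
        </resource>
    </service>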
Next up, something needs to call the service. The "service/client/java/source" directory is where all of the client code lives (EC2 clients work too).
Navigate to "service/client/java/source/src/org/globus/workspace" . There are several different layers here. "client_core" is where all of the code is to make actual calls to the service, it is a collection of convenience wrappers around the web services stubs and the returned data structures (and notifications/polling helpers). These wrappers make it easier to use the different operations as "building blocks" for higher level actions.
The "client" package (sibling of "client_core" ) is where the "reference client" is implemented. This is a commandline wrapper around the "client_core" basic actions. It has several "modes" which are each an "orchestration" of various actions. This client can handle any operation the service supports and provides many useful utilities for getting things done in the expected order ("create, then monitor", etc.).
Both the "client" and the "client_core" classes are ripe for inserting into portals and other new types of clients.
The "cloud" package, however, is not geared towards programmatic re-use (parts of it could be, but they were not intentionally written for this). This is another commandline program but it is fully geared towards human use. There are many default parameters and behaviors that are simplifications of what is possible with the Nimbus API, but definitely right for users that are getting started or only have the typical needs.
The cloud client is packaged separately, but this Java code is in it: it gets installed (via "ant deploy" in the "service/client/java/source" directory) into the embedded GLOBUS_LOCATION in the cloud client ("lib/globus").
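So refreshing the Java code inside an unpacked cloud client is just:

    # Install the client Java code into the cloud client's embedded
    # GLOBUS_LOCATION ("lib/globus"):
    cd service/client/java/source
    ant deploy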
The "autoconfiguaration" directory contains programs and scripts that run the configuration wizard for installing and maintaining the server side, the "administrator wizard".
This is contained in the GAR files; when Nimbus is installed to a container, it is installed into the "$GLOBUS_LOCATION/share/nimbus-autoconfig" directory as a standalone thing. It asks questions and can alter some of the configuration files; its sole purpose is to make it so the user has to understand less to get things up and running initially.
The "autocontainer" directory contains a standalone system for making the, you guessed it, AutoContainer. To make a new release, run the "autocontainer/lib/prepare-for-auto-container-release.sh" script.
That script will download a new Globus container (if the tarball is not already present) and adjust some of the config files to make them ready to be autoconfigured when the user downloads and runs the AutoContainer. This ensures we're never storing the ~20MB Globus container itself in source control.
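So cutting a new AutoContainer release is a single step:

    # Downloads the Globus container tarball if it is not already present,
    # then adjusts config files for autoconfiguration:
    ./autocontainer/lib/prepare-for-auto-container-release.sh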
The "autocommon" directory contains libraries and code that are used by both the AutoContainer and the administrator wizard.
That's the bird's eye view. Ask questions about any of this on the developer's list.
Changing the WSDL
This section discusses making changes to WSDL and generating new stub classes, etc. It assumes you have some understanding of the layers and source code directories discussed in the code layout section.
First run "bin/all-clean.sh" .
Then edit the "compact" WSDL in the "messaging/gt4.0/schema/compact/workspace/" directory. Note that the EC2 WSDL works the same way but we are using the WSRF ones as an example.
Compact WSDL is a Globus convention; it facilitates an inheritance mechanism that allows the final WSDL to contain the common WSRF/WSN related operations. You edit only the compact WSDL, and a program then "compiles" it into the real WSDL/schemas.
Once you have edited something, the changes will not be picked up by the build system until you have run "ant" in the "messaging/gt4.0/schema/compact/" directory. This triggers the default ant target of "copyToDeployableComponent" which will take the "compact" files and put the (generated) real WSDL/schemas under the "messaging/gt4.0/schema/dist/" directory.
That directory is what the build system uses to generate the stubs (the Axis classes that are actually referenced by the Nimbus code for un/marshalling of the web services). So, because you ran "bin/all-clean.sh" already, running "bin/all-build.sh" now will generate stubs with the changes you have added.
Now create the jars of auto-generated "stub" code by running "scripts/stubs-build.sh".
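Putting the whole WSDL-change cycle together (run from the top of the tree):

    bin/all-clean.sh                              # 1. start clean
    # 2. edit the compact WSDL under messaging/gt4.0/schema/compact/workspace/
    (cd messaging/gt4.0/schema/compact && ant)    # 3. regenerate the real WSDL into schema/dist
    bin/all-build.sh                              # 4. rebuild with the new schemas
    scripts/stubs-build.sh                        # 5. rebuild the stub jars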
Unless you added something entirely new, the corresponding messaging layer (e.g., "messaging/gt4.0/java/msgbridge") will probably not compile now. So the next step is to move up the stack and make it work.