Warning: This document describes an old release. Check here for the current version.
NOTE: see the end of this page for an overview of the Nimbus authorization flow.
* Java interface:
The basics are authorized by an implementation of this interface: org.globus.workspace.service.binding.GlobalPolicies
Source code: service/service/java/source/src/org/globus/workspace/service/binding/
Activated by way of the $GLOBUS_LOCATION/etc/nimbus/workspace-service/other/main.xml file -- see the "nimbus-rm.service.binding.GlobalPolicies" Spring bean.
If configured, further authorization can be done by an implementation of this interface: org.globus.workspace.service.binding.authorization.CreationAuthorizationCallout
Source code: service/service/java/source/src/org/globus/workspace/service/binding/authorization/
Activated by way of the $GLOBUS_LOCATION/etc/nimbus/workspace-service/other/main.xml file -- see the "nimbus-rm.service.binding.Authorize" Spring bean.
* Default implementation:
Source code: service/service/java/source/src/org/globus/workspace/service/binding/defaults/
(there is no default CreationAuthorizationCallout; it is optional)
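To illustrate the kind of decision a creation authorization callout makes, here is a minimal Python sketch. The real CreationAuthorizationCallout is a Java interface; the names `evaluate`, `allowed_dns`, and `max_minutes` below are hypothetical, chosen only to show a permit/deny decision based on caller identity and the request.

```python
# Hypothetical sketch only: the actual callout is implemented in Java
# against org.globus.workspace.service.binding.authorization interfaces.
PERMIT, DENY = "permit", "deny"

def evaluate(caller_dn, requested_minutes, allowed_dns, max_minutes):
    """Permit only callers that are known and under the time cap."""
    if caller_dn not in allowed_dns:
        return DENY
    if requested_minutes > max_minutes:
        return DENY
    return PERMIT
```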
Group authorization plugin
One implementation of CreationAuthorizationCallout is the groupauthz plugin.
The plugin can enforce the following policies. The request data to check is determined on a per-request, per-client basis. Limits are defined on a per-group basis (every caller identity must belong to a group).
- Maximum currently reserved minutes at one point in time. If the caller has two other workspaces with 10 hours scheduled for each, the value being checked against this policy would be 20 hours plus whatever time the current request is.
- Maximum elapsed and currently reserved minutes at one point in time. If the caller has one other workspace with 10 hours scheduled and 80 hours of recorded past usage, the value being checked against this policy would be 90 hours plus whatever time the current request is. This is the all-time maximum usage cap.
- Maximum number of running workspaces at one point in time.
- Maximum number of workspaces per request (the largest group request possible).
- The image node that must be specified.
- The image node base directory that must be specified.
- Support for identity-hash based image subdirectories (see the cloud setup documentation to understand this convention).
Each policy can be set to disabled/infinite for specific groups if you desire.
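The two time-based quota checks above reduce to simple arithmetic. The following sketch (hypothetical function names, not the plugin's actual code) reproduces the examples from the text: two workspaces at 10 hours each plus a 2-hour request checks 22 hours against the "currently reserved" cap, and adding 80 hours of past usage to a single 10-hour reservation plus the same request checks 92 hours against the all-time cap.

```python
def reserved_minutes(current_reservations, request_minutes):
    # "Maximum currently reserved minutes": scheduled time on all existing
    # workspaces plus the time in the current request.
    return sum(current_reservations) + request_minutes

def all_time_minutes(current_reservations, past_usage, request_minutes):
    # "Maximum elapsed and currently reserved minutes": recorded past usage
    # plus the currently reserved total. This is the all-time usage cap.
    return past_usage + reserved_minutes(current_reservations, request_minutes)
```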
Python authorization plugin
We also distribute a Python-based authorization plugin that allows an administrator to express policies with a simple Python script (interpreted via Jython).
Understanding the authorization possibilities requires some understanding of the factory service's create process, so the explanation below includes extra information that is not authorization related per se.
Default: gridmap setup
The default installation is configured with gridmap authorization, a DN access control list that allows only clients listed in the grid-mapfile to call the factory create operation.
- If the client's DN is not in the grid-mapfile, the operation will return a fault with the authorization error explained.
The request is then validated and default values are filled in if not supplied by the client. This is also where network addresses are leased if necessary.
- If the request is simply invalid, it will be denied and a WorkspaceMetadataFault will be returned.
- If the request is asking for network allocations and there are not enough, the request will be denied and a WorkspaceResourceRequestDeniedFault will be returned.
Then the request is compared against the master policies configured in the factory.
- A violation will cause a WorkspaceResourceRequestDeniedFault to be returned.
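The create-time checks described above can be sketched as a single sequence. The fault names below are the real ones from the text; the function, its parameters, and the specific policy checked in each step are hypothetical illustrations of where each fault arises.

```python
class WorkspaceMetadataFault(Exception):
    """Returned when the request itself is invalid."""

class WorkspaceResourceRequestDeniedFault(Exception):
    """Returned when a resource request or policy check fails."""

def create(request, gridmap, free_addresses, factory_policies):
    # 1. Gridmap ACL: the caller's DN must appear in the grid-mapfile.
    if request["dn"] not in gridmap:
        raise PermissionError("DN not in grid-mapfile")
    # 2. Validation: a malformed request yields a metadata fault.
    if "duration" not in request:
        raise WorkspaceMetadataFault("invalid request")
    # 3. Network leasing: too few free addresses denies the request.
    if request.get("networks", 0) > free_addresses:
        raise WorkspaceResourceRequestDeniedFault("not enough addresses")
    # 4. Factory master policies: e.g. a hypothetical duration cap.
    if request["duration"] > factory_policies["max_duration"]:
        raise WorkspaceResourceRequestDeniedFault("policy violation")
    return "scheduled"
```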
Attribute based authorization
The VOMS and GridShib modules run before the Workspace Factory Service is ever invoked, just like the gridmap authorization.
As mentioned above, there is a plugin interface for creation-time authorization. All relevant information about the request is passed to the plugin, including the client identity and attributes (if available) as well as the workspace description and resource request. The callout to this plugin occurs after the validation process.
The supplied Python-based plugin allows an administrator to configure a much richer policy than the factory policies allow. For example, any combination of resource allocation request (such as RAM), network settings, deployment duration, client DN, and client attributes can be taken into account.
This implementation of the authorization callout can present both VOMS credentials and SAML attributes (via GridShib) to the policy evaluation. Before they can be consulted, however, the PIP (Policy Information Point) portion of those modules must be configured. The PIP collects the attributes; the PDP (Policy Decision Point) enforces policy. The distinction matters because in the VOMS and GridShib packages the PIP can be configured without the PDP. This is an option if you are using the workspace authorization callout and want to handle all attribute policy there rather than before the factory service (which is when the VOMS and GridShib modules run): the PIP modules collect the attributes about the client, and the detailed policy about those attributes is then expressed in the workspace creation-time authorization callout.
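As an illustration of the richer policies this enables, here is a hedged sketch of the kind of script an administrator might write. The function name, the `vo-admins` group, and the specific limits are all invented for this example; the actual plugin's API and available inputs differ.

```python
def authorize(dn, attributes, ram_mb, duration_minutes, network):
    # Hypothetical policy combining client attributes with the resource
    # request: members of an (invented) "vo-admins" group get larger limits.
    if "vo-admins" in attributes.get("groups", []):
        return ram_mb <= 8192 and duration_minutes <= 7 * 24 * 60
    # Everyone else: modest RAM, at most one day, private network only.
    return ram_mb <= 2048 and duration_minutes <= 24 * 60 and network == "private"
```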
In all cases, after the default policy check succeeds, the request is passed to the scheduling/resource management plugin, where problems will also lead to a WorkspaceResourceRequestDeniedFault.
After scheduling succeeds, the only thing that can still prevent success is an internal error (for example, a database connection problem).
Note: Once deployed, a workspace can be managed and inspected via Workspace Service or Workspace Group Service operations. When using groups of groups, destruction may also be run via the Workspace Ensemble Service.
Currently, no matter what authorization scheme is in use, once a workspace (or group of workspaces) is deployed, all of these operations are protected by a DN access control list consisting of the creator's DN. Only the deployer can remotely manage or inspect the workspace.
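This post-deployment check amounts to a one-entry DN access control list. A minimal sketch (hypothetical names; the real check happens inside the service):

```python
def may_manage(caller_dn, workspace):
    # Regardless of the creation-time authorization scheme, management and
    # inspection operations are permitted only for the creator's DN.
    return caller_dn == workspace["creator_dn"]
```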