Legion: A Worldwide Virtual Computer
  Legion Scheduling
Application-level scheduling and total site autonomy


The Legion scheduling philosophy is one of reservation through a negotiation process between resource providers and resource consumers. We view autonomy as the single most crucial aspect of this process.

  • Site autonomy is crucial in attracting resource providers. In particular, participating sites must be assured that their local policies will be respected by the system at large. Therefore, final authority over the use of a resource is placed with the resource itself.

  • User autonomy is crucial to achieving maximum performance. A single scheduling policy will not be the best answer for all problems and programs: rather, users should be able to choose among scheduling policies, selecting the one that best fits the problem at hand or, in the extreme, providing their own schedulers. A special, and vitally important, case of user-provided schedulers is application-level scheduling, which allows users to provide per-application schedulers specially tailored to the needs of the application. Application-level schedulers will be commonplace in high-performance computing domains.

To paraphrase the 1996 Presidential election campaign, "It's the autonomy, stupid!"


Legion presently provides two types of resources: hosts (computational resources) and vaults (storage resources). We will incorporate network resources in the future. The Legion scheduling module consists of three major components: a database of resource state information, a module that computes the mapping of requests (objects) to resources (hosts and vaults), and an activation agent responsible for implementing the computed schedule. We call these the Collection, the Scheduler, and the Enactor, respectively.

The Collection interacts with resource objects to collect state information describing the system (step 1). The Scheduler queries the Collection to determine a set of available resources that match the Scheduler's requirements (step 2). After computing a schedule, or set of desired schedules, the Scheduler passes a list of schedules to the Enactor for implementation (step 3). The Enactor then makes reservations with the individual resources (step 4), and reports the results to the Scheduler (step 5). Upon approval by the Scheduler, the Enactor places objects on the hosts, and monitors their status (step 6).
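The six-step interaction above can be sketched in code. This is an illustrative model only, not the Legion implementation: all class, method, and field names here (Collection, Enactor, Host, push, query, reserve, activate) are hypothetical.

```python
class Collection:
    """Resource state database; resources push state information in (step 1)."""
    def __init__(self):
        self.state = {}

    def push(self, resource, info):          # step 1: resources report state
        self.state[resource] = info

    def query(self, predicate):              # step 2: Scheduler queries for matches
        return [r for r, info in self.state.items() if predicate(info)]

class Host:
    """A computational resource with a simple reservation counter."""
    def __init__(self, name, free_slots):
        self.name, self.free_slots = name, free_slots

    def reserve(self):
        if self.free_slots > 0:
            self.free_slots -= 1
            return True
        return False

    def start(self, obj):
        return (self.name, obj)

class Enactor:
    """Implements a schedule by reserving resources and placing objects."""
    def reserve(self, plan):                 # step 4: make reservations
        return all(host.reserve() for host, _ in plan)

    def activate(self, plan):                # step 6: place objects on hosts
        return [host.start(obj) for host, obj in plan]

def run_schedule(objects, collection, enactor):
    """A trivial scheduler: map each object to a host with a free slot."""
    hosts = collection.query(lambda info: info["free_slots"] > 0)
    plan = list(zip(hosts, objects))         # step 3: hand schedule to Enactor
    if enactor.reserve(plan):                # steps 4-5: reserve, report results
        return enactor.activate(plan)        # step 6: activation and monitoring
    return []
```

A real scheduler would of course compute placements from richer state and handle reservation failures by trying alternate schedules; the sketch only shows how the three components divide the work.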

If the user does not wish to select or provide an external scheduler, the Legion system (via the class mechanism) provides default scheduling behavior supplying general-purpose support. Through the use of class defaults, sample schedulers, and application-level schedulers, the user can balance the effort put into scheduling against the resulting application performance gain.
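The selection logic described above, where a user-provided application-level scheduler overrides a per-class default, which in turn overrides a general-purpose fallback, might look roughly like this. The function and registry names are hypothetical, not the actual Legion class-mechanism API.

```python
# Hypothetical per-class default scheduler registry.
DEFAULT_SCHEDULERS = {}

def register_default(class_name, scheduler):
    """Record a default external scheduler for a given object class."""
    DEFAULT_SCHEDULERS[class_name] = scheduler

def round_robin(hosts, objects):
    """General-purpose fallback: cycle objects across available hosts."""
    return [(hosts[i % len(hosts)], obj) for i, obj in enumerate(objects)]

def pick_scheduler(class_name, user_scheduler=None):
    """Application-level scheduler wins; else the class default; else the
    general-purpose fallback supplied by the system."""
    if user_scheduler is not None:
        return user_scheduler
    return DEFAULT_SCHEDULERS.get(class_name, round_robin)
```

The point of the layering is exactly the trade-off in the text: a user who invests nothing gets reasonable general-purpose behavior, while a user who supplies an application-level scheduler can trade that effort for performance.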

Features in 1.4

  • Resource reservations for hosts and vaults.

  • Collection objects providing resource information for schedulers, using data collection agents that push information.

  • Enactor objects to implement schedules, by obtaining resource reservations and starting objects.

  • Support for application-level, per-object schedulers.

  • Per-class default external schedulers and placements (these may be overridden at the user's behest).

  • Intelligent scheduling for stateless objects, which balances the workload across available hosts.

  • A pull model for Collection data gathering will be added in future releases, as well as additional monitoring support and sample schedulers.
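The load-balanced placement of stateless objects mentioned above can be illustrated with a minimal least-loaded sketch. This is an assumption about the general technique, not the Legion 1.4 implementation, and all names are invented for illustration.

```python
def place_stateless(requests, load):
    """Place each request on the host with the lowest current load.

    load: dict mapping host name -> current load (e.g. active objects);
    it is updated in place as placements are made.
    """
    placements = []
    for req in requests:
        host = min(load, key=load.get)   # pick the least-loaded host
        load[host] += 1                  # account for the new object
        placements.append((req, host))
    return placements
```

Because placement is stateless, any host can serve any request, so a simple least-loaded rule suffices to spread work across the available hosts.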

Related pages

  • Overview: A general look at Legion
  • Objectives and constraints: Legion's design objectives and restrictions
  • Applications: Adapting and running Legion applications
  • Architecture: Legion system architecture
  • High-performance computing: Using Legion for high-performance computing
  • Scheduling: Scheduling and resource management
  • Security: Legion's security philosophy and model



This work partially supported by DOE grant DE-FG02-96ER25290, Logicon (for the DoD HPCMOD/PET program) DAHC 94-96-C-0008, DOE D459000-16-3C, DARPA (GA) SC H607305A, NSF-NGS EIA-9974968, NSF-NPACI ASC-96-10920, and a grant from NASA-IPG.