|Knowledge Center Contents Previous Next Index|
- About Resource Reservation
- Using Resource Reservation
- Memory Reservation for Pending Jobs
- Time-based Slot Reservation
- Viewing Resource Reservation Information
About Resource Reservation
When a job is dispatched, the system assumes that the resources that the job consumes will be reflected in the load information. However, many jobs do not consume the resources they require when they first start. Instead, they will typically use the resources over a period of time.
For example, a job requiring 100 MB of swap is dispatched to a host having 150 MB of available swap. The job starts off initially allocating 5 MB and gradually increases the amount consumed to 100 MB over a period of 30 minutes. During this period, another job requiring more than 50 MB of swap should not be started on the same host to avoid over-committing the resource.
Resources can be reserved to prevent overcommitment by LSF. Resource reservation requirements can be specified as part of the resource requirements when submitting a job, or can be configured into the queue level resource requirements.
Pending job resize allocation requests are not supported in slot reservation policies. Newly added or removed resources are reflected in the pending job predicted start time calculation.
Resource reservation limits
Maximum and minimum values for consumable resource requirements can be set for individual queues, so jobs will only be accepted if they have resource requirements within a specified range. This can be useful when queues are configured to run jobs with specific memory requirements, for example. Jobs requesting more memory than the maximum limit for the queue will not be accepted, and will not take memory resources away from the smaller memory jobs the queue is designed to run.
Resource reservation limits are set at the queue level by the RESRSV_LIMIT parameter in lsb.queues.
How resource reservation works
When deciding whether to schedule a job on a host, LSF considers the reserved resources of jobs that have previously started on that host. For each load index, the amount reserved by all jobs on that host is summed and then subtracted from (or added to, if the index is increasing) the current value of the resource as reported by the LIM, to get the amount available for scheduling new jobs:

available amount = current value - reserved amount for all jobs

For example, the following command:
bsub -R "rusage[tmp=30:duration=30:decay=1]" myjob
will reserve 30 MB of temp space for the job. As the job runs, the amount reserved decreases at approximately 1 MB/minute, so that the reserved amount is 0 after 30 minutes.
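The duration and decay behavior can be sketched in a few lines (a simplified linear-decay model for illustration, not LSF's actual implementation):

```python
def reserved_amount(initial_mb, duration_min, decay, elapsed_min):
    """Approximate rusage decay: with decay=1, the reserved amount
    decreases linearly to 0 over the duration; with decay=0 it stays
    constant until the duration expires, then drops to 0."""
    if elapsed_min >= duration_min:
        return 0.0
    if decay:
        return initial_mb * (1 - elapsed_min / duration_min)
    return float(initial_mb)

# 30 MB reserved with duration=30 and decay=1:
print(reserved_amount(30, 30, 1, 0))    # 30.0
print(reserved_amount(30, 30, 1, 15))   # 15.0
print(reserved_amount(30, 30, 1, 30))   # 0.0
```

With decay omitted (decay=0), the full 30 MB stays reserved for the whole 30-minute duration and is then released at once.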
Queue-level and job-level resource reservation
The queue-level resource requirement parameter RES_REQ may also specify the resource reservation. If a queue reserves a certain amount of a resource (and the parameter RESRSV_LIMIT is not being used), you cannot reserve a greater amount of that resource at the job level.
For example, if the output of the bqueues -l command contains:

RES_REQ: rusage[mem=40:swp=80:tmp=100]

the following submission will be rejected since the requested amount of certain resources exceeds the queue's specification:
bsub -R "rusage[mem=50:swp=100]" myjob
If both RES_REQ and RESRSV_LIMIT are set in lsb.queues for a consumable resource, the queue-level RES_REQ no longer acts as a hard limit for the merged rusage values from the job and application levels. In this case, only the limits set by RESRSV_LIMIT must be satisfied, and the queue-level RES_REQ acts as a default value.
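The acceptance rules above can be sketched as a small validation function (an illustrative model only; it checks the bounds described here, not LSF's full rusage merge semantics):

```python
def job_rusage_accepted(job_rusage, queue_res_req, resrsv_limit=None):
    """Return True if a job-level rusage request is acceptable.

    - If RESRSV_LIMIT covers a resource, the requested amount must fall
      within its [minimum, maximum] range; queue RES_REQ is only a default.
    - Otherwise, the queue-level RES_REQ acts as a hard upper limit.
    Arguments are dicts mapping resource name -> amount (or (min, max))."""
    for res, amount in job_rusage.items():
        if resrsv_limit and res in resrsv_limit:
            lo, hi = resrsv_limit[res]
            if not (lo <= amount <= hi):
                return False
        elif res in queue_res_req and amount > queue_res_req[res]:
            return False
    return True

# Queue: RES_REQ: rusage[mem=40:swp=80:tmp=100]
queue = {"mem": 40, "swp": 80, "tmp": 100}
# bsub -R "rusage[mem=50:swp=100]" is rejected (50 > 40, 100 > 80):
print(job_rusage_accepted({"mem": 50, "swp": 100}, queue))          # False
# With RESRSV_LIMIT = [mem=30,100], mem=50 is allowed:
print(job_rusage_accepted({"mem": 50}, queue, {"mem": (30, 100)}))  # True
```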
Using Resource Reservation
Queue-level resource reservation
At the queue level, resource reservation allows you to specify the amount of resources to reserve for jobs in the queue. It also serves as the upper limit of resource reservation if a user also specifies resource reservation when submitting a job.
Queue-level resource reservation and pending reasons
The use of RES_REQ affects the pending reasons displayed by bjobs. If RES_REQ is specified in the queue and the loadSched thresholds are not specified, the pending reasons for each individual load index are not displayed.
Configuring resource reservation at the queue level
Queue-level resource reservations and resource reservation limits can be configured as parameters in lsb.queues. The resource reservation requirement can be configured at the queue level as part of the queue-level resource requirements. Use the resource usage (rusage) section of the resource requirement string to specify the amount of resources a job should reserve after it is started.
Examples

Begin Queue
.
RES_REQ = select[type==any] rusage[swp=100:mem=40:duration=60]
RESRSV_LIMIT = [mem=30,100]
.
End Queue
This will allow a job to be scheduled on any host that the queue is configured to use, and will reserve 100 MB of swap and 40 MB of memory for a duration of 60 minutes. The requested memory reservation of 40 MB falls inside the allowed limits set by RESRSV_LIMIT of 30 MB to 100 MB.

Begin Queue
.
RES_REQ = select[type==any] rusage[mem=20||mem=10:swp=20]
.
End Queue
This will allow a job to be scheduled on any host that the queue is configured to use. The job will attempt to reserve 20 MB of memory, or 10 MB of memory and 20 MB of swap if the 20 MB of memory is unavailable. In this case, no limits are defined by RESRSV_LIMIT.
Job-level resource reservation
- To specify resource reservation at the job level, use bsub -R and include the resource usage section in the resource requirement string.
Configure per-resource reservation
- To enable greater flexibility in how numeric resources are reserved by jobs, configure the ReservationUsage section in lsb.resources to reserve resources such as license tokens per resource as PER_JOB, PER_SLOT, or PER_HOST:

Begin ReservationUsage
RESOURCE METHOD
licenseX PER_JOB
licenseY PER_HOST
licenseZ PER_SLOT
End ReservationUsage
Only user-defined numeric resources can be reserved. Built-in resources such as mem, cpu, and swp cannot be configured in the ReservationUsage section.
The cluster-wide RESOURCE_RESERVE_PER_SLOT parameter in lsb.params is obsolete. Configuration in lsb.resources overrides RESOURCE_RESERVE_PER_SLOT if it also exists for the same resource. RESOURCE_RESERVE_PER_SLOT still controls resources not configured in lsb.resources; resources not reserved in lsb.resources are reserved per job.
PER_HOST reservation means that for a parallel job, LSF reserves one instance of the resource for each host. For example, some application licenses are charged only once no matter how many copies of the application are running, provided those copies are running on the same host under the same user.
Assumptions and limitations
- Per-resource configuration defines resource usage for individual resources, but it does not change any existing resource limit behavior (PER_JOB, PER_SLOT).
- In a MultiCluster environment, you should configure resource usage in the scheduling cluster (submission cluster in lease model or receiving cluster in job forward model).
- The keyword pref in the compute unit resource string is ignored, and the default configuration order is used.
Memory Reservation for Pending Jobs
About memory reservation for pending jobs
By default, the rusage string reserves resources for running jobs. Because resources are not reserved for pending jobs, some memory-intensive jobs could pend indefinitely because smaller jobs take the resources immediately before the larger jobs can start running. The more memory a job requires, the worse the problem is.
Memory reservation for pending jobs solves this problem by reserving memory as it becomes available, until the total required memory specified in the rusage string is accumulated and the job can start. Use memory reservation for pending jobs if memory-intensive jobs often compete for memory with smaller jobs in your cluster.
Configure memory reservation for pending jobs
- Use the RESOURCE_RESERVE parameter in lsb.queues to reserve host memory for pending jobs.
The amount of memory reserved is based on the currently available memory when the job is pending. Reserved memory expires at the end of the time period represented by the number of dispatch cycles specified by the value of MAX_RESERVE_TIME set on the RESOURCE_RESERVE parameter.
- To enable memory reservation for sequential jobs, add the LSF scheduler plugin module name for resource reservation (schmod_reserve) to the lsb.modules file:

Begin PluginModule
SCH_PLUGIN         RB_PLUGIN  SCH_DISABLE_PHASES
schmod_default     ()         ()
schmod_reserve     ()         ()
schmod_preemption  ()         ()
End PluginModule
- Set the RESOURCE_RESERVE parameter in a queue defined in lsb.queues.
If both RESOURCE_RESERVE and SLOT_RESERVE are defined in the same queue, job slot reservation and memory reservation are both enabled and an error is displayed when the cluster is reconfigured. SLOT_RESERVE is ignored.
The following queue enables memory reservation for pending jobs:

Begin Queue
QUEUE_NAME = reservation
DESCRIPTION = For resource reservation
PRIORITY = 40
RESOURCE_RESERVE = MAX_RESERVE_TIME
End Queue
Use memory reservation for pending jobs
- Use the rusage string in the bsub -R option or the RES_REQ parameter in lsb.queues to specify the amount of memory required for the job. Submit the job to a queue with RESOURCE_RESERVE configured.
See Examples for examples of jobs that use memory reservation.
note: Compound resource requirements do not support use of the || operator within the component rusage simple resource requirements, or multiple -R options.
How memory reservation for pending jobs works
Amount of memory reserved
The amount of memory reserved is based on the currently available memory when the job is pending. For example, if LIM reports that a host has 300 MB of memory available, the job submitted by the following command:
bsub -R "rusage[mem=400]" -q reservation my_job
will be pending and reserve the 300 MB of available memory. As other jobs finish, the memory that becomes available is added to the reserved memory until 400 MB accumulates, and the job starts.
No memory is reserved if no job slots are available for the job because the job could not run anyway, so reserving memory would waste the resource.
Only memory is accumulated while the job is pending; other resources specified in the rusage string are reserved only when the job is running. Duration and decay have no effect on memory reservation while the job is pending.
How long memory is reserved (MAX_RESERVE_TIME)
Reserved memory expires at the end of the time period represented by the number of dispatch cycles specified by the value of MAX_RESERVE_TIME set on the RESOURCE_RESERVE parameter. If a job has not accumulated enough memory to start by the time MAX_RESERVE_TIME expires, it releases all its reserved memory so that other pending jobs can run. After the reservation time expires, the job cannot reserve slots or memory for one scheduling session, so other jobs have a chance to be dispatched. After one scheduling session, the job can reserve available resources again for another period specified by MAX_RESERVE_TIME.
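The accumulate-and-expire cycle can be sketched as a small simulation (an illustrative model in which each list element is the memory freed in one dispatch cycle; real LSF tracks this per host inside the scheduler):

```python
def simulate_memory_reservation(required_mb, freed_per_cycle, max_reserve_time):
    """Accumulate freed memory for a pending job each dispatch cycle.
    If the job cannot accumulate required_mb within max_reserve_time
    cycles, it releases all reserved memory and skips one scheduling
    session before it may start reserving again."""
    reserved = 0
    cycle = 0
    cycles_reserving = 0
    for freed in freed_per_cycle:
        cycle += 1
        reserved += freed
        cycles_reserving += 1
        if reserved >= required_mb:
            return ("started", cycle)
        if cycles_reserving >= max_reserve_time:
            reserved = 0           # release all reserved memory
            cycles_reserving = -1  # skip one session before reserving again
    return ("pending", cycle)

# 400 MB required; 300 MB free initially, 100 MB more freed in cycle 2:
print(simulate_memory_reservation(400, [300, 100], max_reserve_time=5))
# ('started', 2)
```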
The following queue is defined in lsb.queues:

Begin Queue
QUEUE_NAME = reservation
DESCRIPTION = For resource reservation
PRIORITY = 40
RESOURCE_RESERVE = MAX_RESERVE_TIME
End Queue
Assume one host in the cluster with 10 CPUs and 1 GB of free memory currently available.
Each of the following sequential jobs requires 400 MB of memory and runs for 300 minutes.
bsub -W 300 -R "rusage[mem=400]" -q reservation myjob1
The job starts running, using 400 MB of memory and one job slot.
Submitting a second job with same requirements yields the same result.
Submitting a third job with the same requirements reserves one job slot and reserves all free memory, if the amount of free memory is between 20 MB and 200 MB (some free memory may be used by the operating system or other software).
Time-based Slot Reservation
Existing LSF slot reservation works in simple environments, where the host-based MXJ limit is the only constraint on job slot requests. In complex environments, where more than one constraint exists (for example, job topology or generic slot limits):
- Estimated job start time becomes inaccurate
- The scheduler makes a reservation decision that can postpone estimated job start time or decrease cluster utilization.
Current slot reservation by start time (RESERVE_BY_STARTTIME) resolves several reservation issues in multiple candidate host groups, but it cannot help in other cases:
- Special topology requests, such as those made with span[ptile=n]
- Slot reservation is only calculated and displayed if the host has free slots. Reservations may change or disappear if there are no free CPUs; for example, if a backfill job takes all reserved CPUs.
- For HPC machines containing many internal nodes, the host-level number of reserved slots is not enough for administrators and end users to tell which CPUs the job is reserving and waiting for.
Time-based slot reservation versus greedy slot reservation
With time-based reservation, a set of pending jobs gets a future allocation and an estimated start time, so that the system can reserve a place for each job. Reservations use the estimated start time, which is based on future allocations.
Time-based resource reservation provides a more accurate predicted start time for pending jobs because LSF considers job scheduling constraints and requirements, including job topology and resource limits, for example.
restriction:Time-based reservation does not work with job chunking.
Start time and future allocation
The estimated start time for a future allocation is the earliest start time when all considered job constraints are satisfied in the future. There may be a small delay of a few minutes between the job finish time on which the estimate was based and the actual start time of the allocated job.
For compound resource requirement strings, the predicted start time is based on the simple resource requirement term (contained in the compound resource requirement) with the latest predicted start time.
If a job cannot be placed in a future allocation, the scheduler uses greedy slot reservation to reserve slots. Existing LSF slot reservation is a simple greedy algorithm:
- It considers only currently available resources and the minimal number of requested job slots, and reserves as many slots as it is allowed.
- For multiple exclusive candidate host groups, the scheduler goes through those groups and makes the reservation on the group that has the most available slots.
- For the estimated start time, after making the reservation, the scheduler sorts all running jobs in ascending order by finish time and walks through this sorted list, adding up the slots used by each running job until the minimal job slot request is satisfied. The finish time of the last visited job is the estimated job start time.
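The greedy estimate described above can be sketched as follows (a simplified model; the job list and slot counts are hypothetical):

```python
def greedy_estimated_start(running_jobs, slots_needed, free_slots=0):
    """Greedy slot-reservation start-time estimate: sort running jobs in
    ascending order of finish time and accumulate their slots until the
    minimal request is met. The finish time of the last job visited is
    the estimated start time."""
    available = free_slots
    for finish_time, slots in sorted(running_jobs):
        available += slots
        if available >= slots_needed:
            return finish_time
    return None  # request can never be satisfied by running jobs alone

# Hypothetical (finish_time, slots) pairs, finishing at t=2, 5, and 7:
jobs = [(5, 4), (2, 4), (7, 2)]
print(greedy_estimated_start(jobs, slots_needed=6))  # 5
```

Note that the estimate counts slots only; it ignores where the slots are, which is exactly why it fails for topology requests such as span[ptile=2].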
Reservation decisions made by greedy slot reservation do not have an accurate estimated start time or information about the future allocation. The calculated job start time used for backfill scheduling is uncertain, so bjobs displays:

Job will start no sooner than indicated time stamp
Time-based reservation and greedy reservation compared
Start time prediction                                           Time-based reservation                           Greedy reservation
Backfill scheduling if free slots are available                 Yes                                              Yes
Correct with no job topology                                    Yes                                              Yes
Correct for job topology requests                               Yes                                              No
Correct based on resource allocation limits                     Yes (guaranteed if only two limits are defined)  No
Correct for memory requests                                     Yes                                              No
When no slots are free for reservation                          Yes                                              No
Future allocation and reservation based on earliest start time  Yes                                              No
bjobs displays best estimate                                    Yes                                              No
bjobs displays predicted future allocation                      Yes                                              No
Absolute predicted start time for all jobs                      No                                               No
Advance reservation considered                                  No                                               No
Greedy reservation example
A cluster has four hosts: A, B, C, and D, with 4 CPUs each. Four jobs are running in the cluster: Job1, Job2, Job3, and Job4. According to the calculated estimated start times, the job finish times (FT) have this order: FT(Job2) < FT(Job1) < FT(Job4) < FT(Job3).

Now, a user submits a high priority job. It pends because it requests -n 6 -R "span[ptile=2]": this resource requirement means the pending job needs three hosts with two CPUs free on each host. The default greedy slot reservation calculates the job start time as the finish time of Job2, although only when Job4 finishes are three hosts with a minimum of two free slots each actually available.
Greedy reservation indicates only that the pending job starts no sooner than when Job2 finishes. In contrast, time-based reservation can determine that the pending job starts in 2 hours. It is a much more accurate reservation.
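The difference can be illustrated by sketching a topology-aware estimate that replays per-host finish events and finds the earliest time a span[ptile] request is satisfiable (an illustrative model; the host names, free-slot counts, and event times below are hypothetical, not taken from the example above):

```python
def timebased_estimated_start(host_free, finish_events, hosts_needed, per_host):
    """host_free: current free slots per host.
    finish_events: (time, host, slots_freed) tuples for running jobs.
    Returns the earliest event time at which hosts_needed hosts each
    have at least per_host free slots (0 if already satisfied)."""
    free = dict(host_free)  # work on a copy
    def satisfied():
        return sum(1 for s in free.values() if s >= per_host) >= hosts_needed
    if satisfied():
        return 0
    for time, host, slots in sorted(finish_events):
        free[host] += slots
        if satisfied():
            return time
    return None

free_now = {"A": 1, "B": 0, "C": 1, "D": 0}
events = [(2, "B", 4), (5, "A", 1), (7, "D", 2)]
# -n 6 span[ptile=2]: need 3 hosts with 2 free CPUs each.
print(timebased_estimated_start(free_now, events, hosts_needed=3, per_host=2))  # 7
```

A greedy slot count would report enough total slots much earlier (5 slots free by t=2), but the per-host check shows the topology is not met until t=7.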
Configuring time-based slot reservation
Greedy slot reservation is the default slot reservation mechanism; time-based slot reservation is disabled by default.
- Use LSB_TIME_RESERVE_NUMJOBS=maximum_reservation_jobs in lsf.conf to enable time-based slot reservation. The value must be a positive integer.
LSB_TIME_RESERVE_NUMJOBS controls the maximum number of jobs that use time-based slot reservation. For example, if LSB_TIME_RESERVE_NUMJOBS=4, only the top 4 jobs get their future allocation information.
- Use LSB_TIME_RESERVE_NUMJOBS=1 to allow only the highest priority job to get accurate start time prediction.
Smaller values are better than larger values because, after the first pending job starts, the estimated start times of the remaining jobs may change. For example, you could configure LSB_TIME_RESERVE_NUMJOBS based on the number of exclusive host partitions or host groups.
Some scheduling examples
Job5 requests -n 6 -R "span[ptile=2]", which requires three hosts with 2 CPUs free on each host. As in the greedy slot reservation example, four jobs are running in the cluster: Job1, Job2, Job3, and Job4:

- Two CPUs are available now, 1 of them on host A.
- When Job2 finishes, it frees 2 more CPUs for future allocation, 1 of them on host A.
- When Job4 finishes, it frees 4 more CPUs for future allocation, 2 of them on host A.
- When Job1 finishes, it frees 2 more CPUs for future allocation, 1 on host C and 1 on host D.

Job5 can now be placed with 2 CPUs on host A, 2 CPUs on host C, and 2 CPUs on host D. The estimated start time is shown as the finish time of Job1.
Assumptions and limitations
- To get an accurate estimated start time, you must specify a run limit at the job level using the bsub -W option, in the queue by configuring RUNLIMIT in lsb.queues, or in the application by configuring RUNLIMIT in lsb.applications; or you must specify a run time estimate by defining the RUNTIME parameter in lsb.applications. If a run limit or a run time estimate is not defined, the scheduler tries to use the CPU limit instead.
- The estimated start time is only relatively accurate, according to current running job information. If running jobs finish earlier than expected, the estimated start time may move to an earlier time. Only the highest priority job gets an accurate predicted start time; the estimated start times of other jobs can change after the first job starts.
- Under time-based slot reservation, only information from currently running jobs is used for making reservation decisions.
- Estimated start time calculation does not consider Deadline scheduling.
- Estimated start time calculation does not consider Advance Reservation.
- Estimated start time calculation does not consider DISPATCH_WINDOW in lsb.hosts and lsb.queues.
- If preemptive scheduling is used, the estimated start time may not be accurate. The scheduler may calculate an estimated time, but the job may actually preempt other jobs and start earlier.
- For resizable jobs, time-based slot reservation does not schedule pending resize allocation requests. However, for resized running jobs, the allocation change is used when calculating the pending job predicted start time and resource reservation. For example, if a running job uses 4 slots at the beginning but adds another 4 slots, then after the new resources are added, LSF expects 8 slots to be available when the running job completes.
Slot limit enforcement
The following slot limits are enforced:
- Slot limits configured in lsb.resources
To request memory resources, configure RESOURCE_RESERVE in lsb.queues.
When RESOURCE_RESERVE is used, LSF considers both memory and slot requests during the time-based reservation calculation. LSF does not reserve slots or memory if any other resources are not satisfied.
If SLOT_RESERVE is configured, time-based reservation does not make a slot reservation if any other type of resource, including memory, is not satisfied. For example, if a job cannot run because it cannot get a required license, the job pends without any reservation.
Host partition and queue-level scheduling
If host partitions are configured, LSF first schedules jobs on the host partitions and then goes through each queue to schedule jobs. The same job may be scheduled several times: once for each host partition, and the last time at the queue level. The available candidate hosts may be different each time.
Because of this difference, the same job may get different estimated start times, future allocation, and reservation in different host partitions and queue-level scheduling. With time-based reservation configured, LSF always keeps the same reservation and future allocation with the earliest estimated start time.
bjobs displays future allocation information
- By default, the job future allocation contains the LSF host list and the number of CPUs per host.
- LSF integrations define their own future allocation string to override the default LSF allocation. For example, in RMS, the future allocation is displayed as:

rms_alloc=2*sierra0 3*sierra1
Predicted start time may be postponed for some jobs
If a pending job cannot be placed in a future resource allocation, the scheduler can skip it in the start-time reservation calculation and fall back to greedy slot reservation. There are two possible reasons:
- The job slot request cannot be satisfied in the future allocation
- Other non-slot resources cannot be satisfied.
Either way, the scheduler continues calculating predicted start time for the remaining jobs without considering the skipped job.
Later, once the resource request of the skipped job can be satisfied and placed in a future allocation, the scheduler reevaluates the predicted start times of the remaining jobs, which may postpone their start times.
To minimize the overhead of recalculating predicted start times to include previously skipped jobs, configure a small value for LSB_TIME_RESERVE_NUMJOBS in lsf.conf.
Even if no running jobs finish and no host status in the cluster changes, a job's future allocation may still change from time to time.
Why this happens
In each scheduling cycle, the scheduler recalculates a job's reservation information, estimated start time, and opportunity for future allocation. The job candidate host list may be reordered according to current load. This reordered candidate host list is used for the entire scheduling cycle, including the job future allocation calculation, so a different order of candidate hosts may lead to a different job future allocation. However, the job estimated start time should be the same.
For example, there are two hosts in the cluster, hostA and hostB, with 4 CPUs per host. Job 1 is running and occupies 2 CPUs on hostA and 2 CPUs on hostB. Job 2 requests 6 CPUs. If the order of hosts is hostA, hostB, then the future allocation of Job 2 is 4 CPUs on hostA and 2 CPUs on hostB. If the order of hosts in the next scheduling cycle changes to hostB, hostA, then the future allocation of Job 2 is 4 CPUs on hostB and 2 CPUs on hostA.
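The order dependence can be sketched as a first-fit walk over the ordered candidate host list (an illustrative model with hypothetical free-slot counts; the estimated start time is the same either way, only the placement differs):

```python
def future_allocation(ordered_hosts, host_free, cpus_needed):
    """Greedily take free CPUs from hosts in the given candidate order."""
    alloc = {}
    remaining = cpus_needed
    for host in ordered_hosts:
        take = min(host_free[host], remaining)
        if take:
            alloc[host] = take
            remaining -= take
        if remaining == 0:
            return alloc
    return None  # cannot satisfy the request on these hosts

# After the running job finishes, both 4-CPU hosts are fully free:
free = {"hostA": 4, "hostB": 4}
print(future_allocation(["hostA", "hostB"], free, 6))  # {'hostA': 4, 'hostB': 2}
print(future_allocation(["hostB", "hostA"], free, 6))  # {'hostB': 4, 'hostA': 2}
```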
If you set JOB_ACCEPT_INTERVAL to a non-zero value, then after a job is dispatched, the pending job estimated start time and future allocation may fluctuate momentarily during the JOB_ACCEPT_INTERVAL period.
Why this happens
The scheduler does a time-based reservation calculation in each cycle. If JOB_ACCEPT_INTERVAL is set to a non-zero value, then once a new job has been dispatched to a host, that host does not accept new jobs within the JOB_ACCEPT_INTERVAL interval. Because the host is not considered for the entire scheduling cycle, no time-based reservation calculation is done for it, which may result in a slight change in job estimated start time and future allocation information. After JOB_ACCEPT_INTERVAL has passed, the host becomes available for the time-based reservation calculation again, and the pending job estimated start time and future allocation become accurate again.
Three hosts with 4 CPUs each: qat24, qat25, and qat26. Job 11895 uses 4 slots on qat24 (10 hours), job 11896 uses 4 slots on qat25 (12 hours), and job 11897 uses 2 slots on qat26.
Job 11898 is submitted and requests -n 6 -R "span[ptile=2]".
bjobs -l 11898

Job <11898>, User <user2>, Project <default>, Status <PEND>, Queue <challenge>,
     Job Priority <50>, Command <sleep 100000000>
..
 RUNLIMIT
 840.0 min of hostA
Fri Apr 22 15:18:56: Reserved <2> job slots on host(s) <2*qat26>;
Sat Apr 23 03:28:46: Estimated Job Start Time;
 alloc=2*qat25 2*qat24 2*qat26.lsf.platform.com
Two RMS hosts, sierraA and sierraB, with 8 CPUs per host. Job 3873 uses 4*sierra0 and will last for 10 hours. Job 3874 uses 4*sierra1 and will run for 12 hours. Job 3875 uses 2*sierra2 and 2*sierra3, and will run for 13 hours.

Job 3876 is submitted and requests -n 6 -ext "RMS[nodes=3]".
bjobs -l 3876

Job <3876>, User <user2>, Project <default>, Status <PEND>, Queue <rms>,
     Extsched <RMS[nodes=3]>, Command <sleep 1000000>
Fri Apr 22 15:35:28: Submitted from host <sierraa>, CWD <$HOME>, 6 Processors
     Requested;
 RUNLIMIT
 840.0 min of sierraa
Fri Apr 22 15:35:46: Reserved <4> job slots on host(s) <4*sierrab>;
Sat Apr 23 01:34:12: Estimated job start time;
 rms_alloc=2*sierra[0,2-3]...
Rerun example 1, but this time use greedy slot reservation instead of time-based reservation:
bjobs -l 12103

Job <12103>, User <user2>, Project <default>, Status <PEND>, Queue <challenge>,
     Job Priority <50>, Command <sleep 1000000>
Fri Apr 22 16:17:59: Submitted from host <qat26>, CWD <$HOME>, 6 Processors
     Requested, Requested Resources <span[ptile=2]>;
 RUNLIMIT
 720.0 min of qat26
Fri Apr 22 16:18:09: Reserved <2> job slots on host(s) <2*qat26.lsf.platform.com>;
Sat Apr 23 01:39:13: Job will start no sooner than indicated time stamp;
Viewing Resource Reservation Information
View host-level resource information (bhosts)
- Use bhosts -l to show the amount of resources reserved on each host. In the following example, 143 MB of memory is reserved on hostA, and no memory is currently available on the host.
bhosts -l hostA

HOST  hostA
STATUS  CPUF   JL/U  MAX  NJOBS  RUN  SSUSP  USUSP  RSV  DISPATCH_WINDOW
ok      20.00  -     4    2      1    0      0      1    -

CURRENT LOAD USED FOR SCHEDULING:
          r15s  r1m  r15m  ut   pg   io  ls  it  tmp   swp   mem
Total     1.5   1.2  2.0   91%  2.5  7    49  0   911M  915M  0M
Reserved  0.0   0.0  0.0   0%   0.0  0    0   0   0M    0M    143M
- Use bhosts -s to view information about shared resources.
View queue-level resource information (bqueues)
- Use bqueues -l to see the resource usage configured at the queue level.
bqueues -l reservation

QUEUE: reservation
  -- For resource reservation

PARAMETERS/STATISTICS
PRIO  NICE  STATUS       MAX  JL/U  JL/P  JL/H  NJOBS  PEND  RUN  SSUSP  USUSP  RSV
40    0     Open:Active  -    -     -     -     4      0     0    0      0      4

SCHEDULING PARAMETERS
           r15s  r1m  r15m  ut  pg  io  ls  it  tmp  swp  mem
loadSched  -     -    -     -   -   -   -   -   -    -    -
loadStop   -     -    -     -   -   -   -   -   -    -    -

           cpuspeed  bandwidth
loadSched  -         -
loadStop   -         -

SCHEDULING POLICIES: RESOURCE_RESERVE
USERS: all users
HOSTS: all
Maximum resource reservation time: 600 seconds
View reserved memory for pending jobs (bjobs)
If the job memory requirements cannot be satisfied, bjobs -l shows the pending reason. bjobs -l shows both reserved slots and reserved memory.
- For example, the following job reserves 60 MB of memory on hostA:

bsub -m hostA -n 2 -q reservation -R "rusage[mem=60]" sleep 8888
Job <3> is submitted to queue <reservation>.
bjobs -l shows the reserved memory:

bjobs -lp

Job <3>, User <user1>, Project <default>, Status <PEND>, Queue <reservation>,
     Command <sleep 8888>
Tue Jan 22 17:01:05: Submitted from host <user1>, CWD </home/user1/>, 2
     Processors Requested, Requested Resources <rusage[mem=60]>,
     Specified Hosts <hostA>;
Tue Jan 22 17:01:15: Reserved <1> job slot on host <hostA>;
Tue Jan 22 17:01:15: Reserved <60> megabyte memory on host <60M*hostA>;
PENDING REASONS:
Not enough job slot(s): hostA;

SCHEDULING PARAMETERS
           r15s  r1m  r15m  ut  pg  io  ls  it  tmp  swp  mem
loadSched  -     -    -     -   -   -   -   -   -    -    -
loadStop   -     -    -     -   -   -   -   -   -    -    -

           cpuspeed  bandwidth
loadSched  -         -
loadStop   -         -
View per-resource reservation (bresources)
- Use bresources to display per-resource reservation configurations from lsb.resources.
The following example displays all resource reservation configurations:
bresources -s

Begin ReservationUsage
RESOURCE METHOD
licenseX PER_JOB
licenseY PER_HOST
licenseZ PER_SLOT
End ReservationUsage
The following example displays only the licenseZ reservation configuration:

bresources -s licenseZ

RESOURCE METHOD
licenseZ PER_SLOT
Platform Computing Inc.