Runner¶
Runner.py: Task scheduling and execution
- waflib.Runner.GAP = 5¶
Wait for at least GAP * njobs before trying to enqueue more tasks to run
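An illustrative sketch (not the waflib internals) of how such a threshold can throttle the producer: only pull more tasks once the number already in flight drops below GAP * numjobs. The function and variable names below are assumptions for this sketch.

GAP = 5

def should_enqueue_more(tasks_in_flight, numjobs):
    # Refill only once the backlog drops below GAP * numjobs, so consumers
    # stay busy without the producer racing far ahead of them.
    return tasks_in_flight < GAP * numjobs

print(should_enqueue_more(tasks_in_flight=8, numjobs=4))   # True: 8 < 20
print(should_enqueue_more(tasks_in_flight=25, numjobs=4))  # False: 25 >= 20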
- class waflib.Runner.Consumer(spawner, task)[source]¶
Daemon thread object that executes a task. It shares a semaphore with the coordinator waflib.Runner.Spawner. There is one instance per task to consume.
- task¶
Task to execute
- spawner¶
Coordinator object
- class waflib.Runner.Spawner(master)[source]¶
Daemon thread that consumes tasks from the waflib.Runner.Parallel producer and spawns a consuming thread waflib.Runner.Consumer for each waflib.Task.Task instance.
- master¶
waflib.Runner.Parallel producer instance
- sem¶
Bounded semaphore that prevents spawning more than n concurrent consumers
- run()[source]¶
Spawns new consumers to execute tasks by delegating to waflib.Runner.Spawner.loop()
- loop()[source]¶
Consumes task objects from the producer; ends when the producer has no more tasks to provide.
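The coordination pattern described above (one daemon Spawner thread handing each task to a short-lived Consumer thread, throttled by a bounded semaphore) can be sketched roughly as follows. The queue, semaphore size, and callables are illustrative assumptions, not the waflib classes themselves.

import threading
import queue

task_queue = queue.Queue()           # producer puts task callables here (illustrative)
sem = threading.BoundedSemaphore(4)  # at most 4 concurrent consumers (assumed job count)
threads = []

def consume(task):
    # One short-lived consumer thread per task; free the slot when done.
    try:
        task()
    finally:
        sem.release()

def spawner_loop():
    # Coordinator: wait for a free slot, then hand the next task to a new daemon thread.
    while True:
        task = task_queue.get()
        if task is None:             # sentinel: no more tasks to provide
            break
        sem.acquire()
        t = threading.Thread(target=consume, args=(task,), daemon=True)
        threads.append(t)
        t.start()

for i in range(10):
    task_queue.put(lambda i=i: print('running task', i))
task_queue.put(None)
spawner_loop()
for t in threads:
    t.join()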
- class waflib.Runner.Parallel(bld, j=2)[source]¶
Schedule the tasks obtained from the build context for execution.
- __init__(bld, j=2)[source]¶
The initialization requires a build context reference for computing the total number of jobs.
- numjobs¶
Number of parallel consumers to use
- bld¶
Instance of waflib.Build.BuildContext
- outstanding¶
Heap of waflib.Task.Task that may be ready to be executed
- postponed¶
Heap of waflib.Task.Task which are not ready to run for non-DAG reasons
- incomplete¶
List of waflib.Task.Task waiting for dependent tasks to complete (DAG)
- ready¶
List of waflib.Task.Task ready to be executed by consumers
- out¶
List of waflib.Task.Task returned by the task consumers
- count¶
Number of tasks that may be processed by waflib.Runner.TaskConsumer
- processed¶
Number of tasks processed
- stop¶
Error flag to stop the build
- error¶
Tasks that could not be executed
- biter¶
Task iterator which must give groups of parallelizable tasks when calling next()
- dirty¶
Flag that indicates that the build cache must be saved when a task was executed (calls waflib.Build.BuildContext.store())
- revdeps¶
The reverse dependency graph obtained from Task.run_after
- spawner¶
Coordinating daemon thread that spawns thread consumers
- postpone(tsk)[source]¶
Adds the task to the list waflib.Runner.Parallel.postponed. The order is scrambled so as to consume as many tasks in parallel as possible.
- Parameters
tsk (waflib.Task.Task) – task instance
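A hedged sketch of what "scrambled order" can mean in practice: a postponed task may be pushed onto either end of the container at random, so retries do not always happen in the same order. This only illustrates the idea and is not the waflib code.

import random
from collections import deque

postponed = deque()

def postpone(tsk):
    # Insert at a random end so postponed tasks are retried in a shuffled order.
    if random.randint(0, 1):
        postponed.appendleft(tsk)
    else:
        postponed.append(tsk)

for name in ('a', 'b', 'c', 'd'):
    postpone(name)
print(list(postponed))  # e.g. ['c', 'a', 'b', 'd']; order varies per run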
- refill_task_list()[source]¶
Pulls the next group of tasks to execute in waflib.Runner.Parallel.outstanding. Ensures that all tasks in the current build group are complete before processing the next one.
- add_more_tasks(tsk)[source]¶
If a task provides waflib.Task.Task.more_tasks, then the tasks contained in that list are added to the current build and will be processed before the next build group.
The priorities for dependent tasks are not re-calculated globally.
- Parameters
tsk (waflib.Task.Task) – task instance
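From a task's point of view, the hook is the more_tasks attribute: work created at runtime is handed back to the scheduler. A minimal, hypothetical task class illustrating the idea (the gen_extra class name and the make_follow_up helper are made up for this sketch and are not part of waflib):

from waflib import Task

class gen_extra(Task.Task):
    # Hypothetical task class, for illustration only; not part of waflib.
    def run(self):
        # ... this task's own work would run here ...
        # make_follow_up() is a placeholder for whatever creates extra
        # Task instances at runtime.
        extra = self.make_follow_up()
        # Tasks listed in more_tasks are collected by Parallel.add_more_tasks
        # and scheduled before the next build group starts.
        self.more_tasks = extra
        return 0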
- get_out()[source]¶
Waits for a Task that task consumers add to waflib.Runner.Parallel.out after execution. Adds more Tasks if necessary through waflib.Runner.Parallel.add_more_tasks.
- Return type
waflib.Task.Task
- error_handler(tsk)[source]¶
Called when a task cannot be executed. The flag waflib.Runner.Parallel.stop is set, unless the build is executed with:
$ waf build -k
- Parameters
tsk (waflib.Task.Task) – task instance
- task_status(tsk)[source]¶
Obtains the task status to decide whether to run it immediately or not.
- Returns
the exit status, for example waflib.Task.ASK_LATER
- Return type
integer
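The status returned here drives the scheduling decision. A simplified sketch of such a dispatch, using the waflib.Task constants ASK_LATER, SKIP_ME and SKIPPED; the dispatch function itself and the postpone/enqueue callbacks are assumptions for this sketch, not the waflib.Runner code.

from waflib import Task

def dispatch(tsk, postpone, enqueue):
    # Route a task according to its runnable status; 'postpone' and 'enqueue'
    # are hypothetical callbacks standing in for the scheduler containers.
    status = tsk.runnable_status()
    if status == Task.ASK_LATER:
        postpone(tsk)                # dependencies are not ready yet
    elif status == Task.SKIP_ME:
        tsk.hasrun = Task.SKIPPED    # up to date, nothing to run
    else:
        enqueue(tsk)                 # hand the task to the consumers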
- start()[source]¶
Obtains Task instances from the BuildContext instance and adds the ones that need to be executed to waflib.Runner.Parallel.ready so that the waflib.Runner.Spawner consumer thread has them executed. Obtains the executed Tasks back from waflib.Runner.Parallel.out and marks the build as failed by setting the stop flag. If only one job is used, then executes the tasks one by one, without consumers.
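Putting the pieces together, the producer loop can be summarized with a hedged sketch: refill the outstanding heap, hand runnable tasks to the consumers, and collect results until everything has been processed. The scheduler object and the next_outstanding/enqueue helpers below are illustrative assumptions, not waflib attributes.

def produce(scheduler):
    # Rough, illustrative outline of a producer loop in the spirit of
    # Parallel.start(); not the actual method body.
    while not scheduler.stop:
        scheduler.refill_task_list()        # pull the next group of tasks
        tsk = scheduler.next_outstanding()  # hypothetical accessor over 'outstanding'
        if tsk is None:
            if scheduler.count:
                scheduler.get_out()         # wait for consumers to hand tasks back
                continue
            break                           # nothing in flight, nothing left to run
        scheduler.enqueue(tsk)              # hypothetical: push onto 'ready' for the Spawner
        scheduler.count += 1
    while scheduler.count:
        scheduler.get_out()                 # drain whatever is still running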
- prio_and_split(tasks)[source]¶
Label input tasks with priority values, and return a pair containing the tasks that are ready to run and the tasks that are necessarily waiting for other tasks to complete.
The priority system is really meant as an optional layer for optimization: dependency cycles are found quickly, and builds should be more efficient. A high priority number means that a task is processed first.
This method can be overridden to disable the priority system:
def prio_and_split(self, tasks):
    return tasks, []
- Returns
A pair of task lists
- Return type
tuple
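One way to apply such an override is to patch waflib.Runner.Parallel from the project's wscript. This is only a sketch: it assumes that replacing the method on the class is acceptable for the whole build, and the build target is illustrative.

# wscript (sketch): disable the priority system for this project.
from waflib import Runner

def prio_and_split(self, tasks):
    # No priorities: treat every task as immediately schedulable.
    return tasks, []

def build(bld):
    # Assumption for this sketch: monkey-patching the scheduler class is an
    # acceptable way to install the override for the whole build.
    Runner.Parallel.prio_and_split = prio_and_split
    bld.program(source='main.c', target='app')  # illustrative target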