GNU make knows how to execute several recipes at once. Normally, make will execute only one recipe at a time, waiting for it to finish before executing the next. However, the ‘-j’ or ‘--jobs’ option tells make to execute many recipes simultaneously. You can inhibit parallelism in a particular makefile with the .NOTPARALLEL pseudo-target (see Special Built-in Target Names).
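As a minimal sketch of both options, the script below writes a throwaway makefile (the target names ‘a’ and ‘b’ are hypothetical) that declares .NOTPARALLEL, so its recipes run one at a time even when make is invoked with ‘--jobs’:

```shell
#!/bin/sh
set -e
# Hypothetical makefile with two independent targets and .NOTPARALLEL,
# which forces serial execution even under 'make --jobs'.
dir=$(mktemp -d)
printf '.NOTPARALLEL:\nall: a b\na:\n\t@echo a done\nb:\n\t@echo b done\n' \
    > "$dir/Makefile"
# Ask for up to four parallel recipes; .NOTPARALLEL overrides this,
# so 'a' always finishes before 'b' starts.
out=$(cd "$dir" && make -s --jobs=4 all)
echo "$out"
rm -rf "$dir"
```

Because .NOTPARALLEL serializes the recipes, the two lines of output always appear in prerequisite order.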
On MS-DOS, the ‘-j’ option has no effect, since that system doesn’t support multi-processing.
If the ‘-j’ option is followed by an integer, this is the number of recipes to execute at once; this is called the number of job slots. If there is nothing looking like an integer after the ‘-j’ option, there is no limit on the number of job slots. The default number of job slots is one, which means serial execution (one thing at a time).
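The three forms can be compared with a small script; the makefile and its targets ‘a’, ‘b’, ‘c’ are hypothetical stand-ins for real build rules:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
printf 'all: a b c\na:\n\t@echo a\nb:\n\t@echo b\nc:\n\t@echo c\n' \
    > "$dir/Makefile"
# Default: one job slot, recipes run one at a time in order.
serial=$(cd "$dir" && make -s all)
# '-j4': up to four recipes may run at once.
para=$(cd "$dir" && make -s -j4 all)
# '-j' with no integer: no limit on the number of job slots.
unlimited=$(cd "$dir" && make -s -j all)
echo "$para"
rm -rf "$dir"
```

All three invocations build the same targets; only the degree of concurrency (and therefore the possible interleaving of output) differs.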
Handling recursive make invocations raises issues for parallel execution. For more information on this, see Communicating Options to a Sub-make.
If a recipe fails (is killed by a signal or exits with a nonzero status), and errors are not ignored for that recipe (see Errors in Recipes), the remaining recipe lines to remake the same target will not be run. If a recipe fails and the ‘-k’ or ‘--keep-going’ option was not given (see Summary of Options), make aborts execution. If make terminates for any reason (including a signal) with child processes running, it waits for them to finish before actually exiting.
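The difference ‘--keep-going’ makes can be seen with a deliberately failing target; the makefile and the target names ‘good’, ‘bad’, and ‘other’ here are hypothetical:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
# 'bad' always fails; 'good' and 'other' succeed.
printf 'all: good bad other\ngood:\n\t@echo good done\nbad:\n\t@false\nother:\n\t@echo other done\n' \
    > "$dir/Makefile"
# Without -k, make aborts at the failure and never builds 'other'.
stop=$(cd "$dir" && make -s all 2>/dev/null) || true
# With -k (--keep-going), make continues with the remaining targets.
keep=$(cd "$dir" && make -s -k all 2>/dev/null) || true
echo "$stop"
echo "$keep"
rm -rf "$dir"
```

In both runs make exits with a nonzero status because ‘bad’ failed; ‘-k’ only changes how much of the rest of the build is attempted first.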
When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the ‘-l’ option to tell make to limit the number of jobs to run at once, based on the load average. The ‘-l’ or ‘--max-load’ option is followed by a floating-point number. For example,

-l 2.5

will not let make start more than one job if the load average is above 2.5. The ‘-l’ option with no following number removes the load limit, if one was given with a previous ‘-l’ option.
More precisely, when make goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with ‘-l’, make waits until the load average goes below that limit, or until all the other jobs finish. By default, there is no load limit.
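A typical combination is unlimited job slots capped by a load limit. The sketch below uses a hypothetical two-target makefile; whatever the machine's current load, both targets still complete, because the limit only throttles how many jobs run at once, never below one:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
printf 'all: a b\na:\n\t@echo a done\nb:\n\t@echo b done\n' \
    > "$dir/Makefile"
# No limit on job slots (-j), but start new jobs only while the
# load average is below 2.5 (-l 2.5).
out=$(cd "$dir" && make -s -j -l 2.5 all)
echo "$out"
rm -rf "$dir"
```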
• Parallel Output: Handling output during parallel execution
• Parallel Input: Handling input during parallel execution