New in version 2.6.
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
Warning
Some of this package’s functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the multiprocessing.synchronize module will be disabled, and attempts to import it will result in an ImportError. See issue 3770 for additional information.
Note
Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines; however, it is worth pointing out here. This means that some examples, such as the multiprocessing.Pool examples, will not work in the interactive interpreter. For example:

>>> from multiprocessing import Pool
>>> p = Pool(5)
>>> def f(x):
...     return x*x
...
>>> p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
Process PoolWorker-3:
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
(If you try this it will actually output three full tracebacks interleaved in a semi-random fashion, and then you may have to stop the master process somehow.)
In multiprocessing, processes are spawned by creating a Process object and then calling its start() method. Process follows the API of threading.Thread. A trivial example of a multi-process program is

from multiprocessing import Process

def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()

To show the individual process IDs involved, here is an expanded example:

from multiprocessing import Process
import os

def info(title):
    print title
    print 'module name:', __name__
    if hasattr(os, 'getppid'):  # only available on Unix
        print 'parent process:', os.getppid()
    print 'process id:', os.getpid()

def f(name):
    info('function f')
    print 'hello', name

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()

For an explanation of why (on Windows) the if __name__ == '__main__' part is necessary, see Programming guidelines.

multiprocessing supports two types of communication channel between processes:

Queues

The Queue class is a near clone of Queue.Queue. For example:

from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()

Queues are thread and process safe.
Pipes
The Pipe() function returns a pair of Connection objects connected by a pipe which by default is duplex (two-way). For example:

from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print parent_conn.recv()   # prints "[42, None, 'hello']"
    p.join()

The two Connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.

multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time:

from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print 'hello world', i
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()

Without using the lock, output from the different processes is liable to get all mixed up.
As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.
However, if you really do need to use some shared data then multiprocessing provides two ways of doing so.
Shared memory
Data can be stored in a shared memory map using Value or Array. For example, the following code

from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()

    print num.value
    print arr[:]

will print

3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

The 'd' and 'i' arguments used when creating num and arr are typecodes of the kind used by the array module: 'd' indicates a double precision float and 'i' indicates a signed integer. These shared objects will be process and thread-safe.
For more flexibility in using shared memory one can use the multiprocessing.sharedctypes module which supports the creation of arbitrary ctypes objects allocated from shared memory.
Server process
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Queue, Value and Array. For example,

from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    manager = Manager()

    d = manager.dict()
    l = manager.list(range(10))

    p = Process(target=f, args=(d, l))
    p.start()
    p.join()

    print d
    print l

will print

{0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
The Pool class represents a pool of worker processes. It has methods which allow tasks to be offloaded to the worker processes in a few different ways.
For example:

from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)            # start 4 worker processes
    result = pool.apply_async(f, [10])  # evaluate "f(10)" asynchronously
    print result.get(timeout=1)         # prints "100" unless your computer is *very* slow
    print pool.map(f, range(10))        # prints "[0, 1, 4,..., 81]"
Note that the methods of a pool should only ever be used by the process which created it.
The multiprocessing package mostly replicates the API of the threading module.
Process objects represent activity that is run in a separate process. The Process class has equivalents of all the methods of threading.Thread.
The constructor should always be called with keyword arguments. group should always be None; it exists solely for compatibility with threading.Thread. target is the callable object to be invoked by the run() method. It defaults to None, meaning nothing is called. name is the process name. By default, a unique name is constructed of the form ‘Process-N1:N2:...:Nk’ where N1,N2,...,Nk is a sequence of integers whose length is determined by the generation of the process. args is the argument tuple for the target invocation. kwargs is a dictionary of keyword arguments for the target invocation. By default, no arguments are passed to target.
If a subclass overrides the constructor, it must make sure it invokes the base class constructor ( Process.__init__() ) before doing anything else to the process.
Method representing the process’s activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
Start the process’s activity.
This must be called at most once per process object. It arranges for the object’s run() method to be invoked in a separate process.
Block the calling thread until the process whose join() method is called terminates or until the optional timeout occurs.
If timeout is None then there is no timeout.
A process can be joined many times.
A process cannot join itself because this would cause a deadlock. It is an error to attempt to join a process before it has been started.
The process’s name.
The name is a string used for identification purposes only. It has no semantics. Multiple processes may be given the same name. The initial name is set by the constructor.
Return whether the process is alive.
Roughly, a process object is alive from the moment the start() method returns until the child process terminates.
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
In addition to the threading.Thread API, Process objects also support the following attributes and methods:
Return the process ID. Before the process is spawned, this will be None.
The child’s exit code. This will be None if the process has not yet terminated. A negative value -N indicates that the child was terminated by signal N.
The process’s authentication key (a byte string).
When multiprocessing is initialized the main process is assigned a random string using os.urandom().
When a Process object is created, it will inherit the authentication key of its parent process, although this may be changed by setting authkey to another byte string.
See Authentication keys.
Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess() is used. Note that exit handlers and finally clauses, etc., will not be executed.
Note that descendant processes of the process will not be terminated – they will simply become orphaned.
Warning
If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by other processes. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.
Example usage of some of the methods of Process :
>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print p, p.is_alive()
<Process(Process-1, initial)> False
>>> p.start()
>>> print p, p.is_alive()
<Process(Process-1, started)> True
>>> p.terminate()
>>> time.sleep(0.1)
>>> print p, p.is_alive()
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
True
Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read.
If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.
When using multiple processes, one generally uses message passing for communication between processes and avoids having to use any synchronization primitives like locks.
For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows multiple producers and consumers).
The Queue, multiprocessing.queues.SimpleQueue and JoinableQueue types are multi-producer, multi-consumer FIFO queues modelled on the Queue.Queue class in the standard library. They differ in that Queue lacks the task_done() and join() methods introduced into Python 2.5’s Queue.Queue class.
If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception.
Note that one can also create a shared queue by using a manager object – see Managers.
Note
multiprocessing uses the usual Queue.Empty and Queue.Full exceptions to signal a timeout. They are not available in the multiprocessing namespace so you need to import them from Queue.
Note
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties – if they really bother you then you can instead use a queue created with a manager .
Warning
If a process is killed using Process.terminate() or os.kill() while it is trying to use a Queue, then the data in the queue is likely to become corrupted. This may cause any other process to get an exception when it tries to use the queue later on.
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread ), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
For an example of the usage of queues for interprocess communication see Examples.
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.
Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
The usual Queue.Empty and Queue.Full exceptions from the standard library’s Queue module are raised to signal timeouts.
Queue implements all the methods of Queue.Queue except for task_done() and join().
Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable.
Note that this may raise NotImplementedError on Unix platforms like Mac OS X where sem_getvalue() is not implemented.
Return True if the queue is empty, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
Return True if the queue is full, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
Put obj into the queue. If the optional argument block is True (the default) and timeout is None (the default), block if necessary until a free slot is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Queue.Full exception if no free slot was available within that time. Otherwise (block is False), put an item on the queue if a free slot is immediately available, else raise the Queue.Full exception (timeout is ignored in that case).
Equivalent to put(obj, False).
Remove and return an item from the queue. If the optional argument block is True (the default) and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Queue.Empty exception if no item was available within that time. Otherwise (block is False), return an item if one is immediately available, else raise the Queue.Empty exception (timeout is ignored in that case).
Equivalent to get(False).
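The blocking and timeout behaviour can be sketched as follows (the try/except import makes the snippet run on both Python 2, where the module is named Queue, and Python 3, where it is queue; the timeouts are arbitrary small values):

```python
from multiprocessing import Queue
try:
    import Queue as queue_mod   # Python 2
except ImportError:
    import queue as queue_mod   # Python 3

q = Queue(maxsize=1)
q.put('a')                      # fills the only slot

try:
    q.put('b', timeout=0.1)     # no free slot within 0.1 seconds
    overflowed = False
except queue_mod.Full:
    overflowed = True

item = q.get(timeout=1)         # waits until the feeder thread has flushed 'a'

try:
    q.get_nowait()              # equivalent to get(False); queue is now empty
    underflowed = False
except queue_mod.Empty:
    underflowed = True
```

Note the get(timeout=1) rather than get_nowait() for the first retrieval: because puts are flushed by a background thread, an item may not be immediately retrievable right after put() returns.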
Queue has a few additional methods not found in Queue.Queue. These methods are usually unnecessary for most code:
Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.
Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe.
By default if a process is not the creator of the queue then on exit it will attempt to join the queue’s background thread. The process can call cancel_join_thread() to make join_thread() do nothing.
Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits – see join_thread() .
A better name for this method might be allow_exit_without_flush(). It is likely to cause enqueued data to be lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don’t care about lost data.
Return True if the queue is empty, False otherwise.
Remove and return an item from the queue.
Put item into the queue.
JoinableQueue, a Queue subclass, is a queue which additionally has task_done() and join() methods.
Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.
If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).
Raises a ValueError if called more times than there were items placed in the queue.
Block until all items in the queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
Return a list of all live children of the current process.
Calling this has the side effect of “joining” any processes which have already finished.
Return the number of CPUs in the system. May raise NotImplementedError .
Return the Process object corresponding to the current process.
An analogue of threading.current_thread() .
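A quick sketch of these introspection helpers; cpu_count() may raise NotImplementedError on some platforms, so it is guarded here (the fallback value of 1 is an arbitrary safe default):

```python
import multiprocessing

proc = multiprocessing.current_process()   # the Process object for this process
kids = multiprocessing.active_children()   # empty: no children spawned yet

try:
    ncpus = multiprocessing.cpu_count()
except NotImplementedError:
    ncpus = 1                              # fall back to a safe default
```

In the main process, current_process().name is 'MainProcess'.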
Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.)
One needs to call this function straight after the if __name__ == '__main__' line of the main module. For example:
from multiprocessing import Process, freeze_support

def f():
    print 'hello world!'

if __name__ == '__main__':
    freeze_support()
    Process(target=f).start()

If the freeze_support() line is omitted then trying to run the frozen executable will raise RuntimeError.
If the module is being run normally by the Python interpreter then freeze_support() has no effect.
Sets the path of the Python interpreter to use when starting a child process. (By default sys.executable is used.) Embedders will probably need to do something like
set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
before they can create child processes. (Windows only)
Note
multiprocessing contains no analogues of threading.active_count(), threading.enumerate(), threading.settrace(), threading.setprofile(), threading.Timer, or threading.local.
Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets.
Connection objects are usually created using Pipe() – see also Listener and Client.
Send an object to the other end of the connection which should be read using recv() .
The object must be picklable. Very large pickles (approximately 32 MB+, though it depends on the OS) may raise a ValueError 异常。
Return an object sent from the other end of the connection using send(). Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
Return the file descriptor or handle used by the connection.
Close the connection.
This is called automatically when the connection is garbage collected.
Return whether there is any data available to be read.
If timeout is not specified then it will return immediately. If timeout is a number then this specifies the maximum time in seconds to block. If timeout is None then an infinite timeout is used.
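The timeout forms can be sketched with a pipe whose two ends live in the same process:

```python
from multiprocessing import Pipe

a, b = Pipe()

ready_before = a.poll()     # no timeout: returns immediately (nothing to read)
b.send(42)
ready_after = a.poll(1.0)   # waits up to one second for data to arrive
value = a.recv()
```

Here ready_before is False and, once the message is in flight, ready_after is True; poll() never consumes the data, so recv() still returns it.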
Send byte data from an object supporting the buffer interface as a complete message.
If offset is given then data is read from that position in buffer. If size is given then that many bytes will be read from buffer. Very large buffers (approximately 32 MB+, though it depends on the OS) may raise a ValueError exception.
Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end has closed.
If maxlength is specified and the message is longer than maxlength then IOError is raised and the connection will no longer be readable.
Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
buffer must be an object satisfying the writable buffer interface. If offset is given then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of buffer (in bytes).
If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.
For example:

>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes('thank you')
>>> a.recv_bytes()
'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
Warning
The Connection.recv() method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message.
Therefore, unless the connection object was produced using Pipe() you should only use the recv() and send() methods after performing some sort of authentication. See Authentication keys.
Warning
If a process is killed while it is trying to read or write to a pipe then the data in the pipe is likely to become corrupted, because it may become impossible to be sure where the message boundaries lie.
Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for the threading module.
Note that one can also create synchronization primitives by using a manager object – see Managers.
A bounded semaphore object: a clone of threading.BoundedSemaphore .
(On Mac OS X, this is indistinguishable from Semaphore because sem_getvalue() is not implemented on that platform.)
A condition variable: a clone of threading.Condition .
If lock is specified then it should be a Lock or RLock object from multiprocessing.
A clone of threading.Event. This method returns the state of the internal semaphore on exit, so it will always return True except if a timeout is given and the operation times out.
Changed in version 2.7: Previously, the method always returned None.
A non-recursive lock object: a clone of threading.Lock .
A recursive lock object: a clone of threading.RLock .
A semaphore object: a clone of threading.Semaphore .
Note
The acquire() method of BoundedSemaphore, Lock, RLock and Semaphore has a timeout parameter not supported by the equivalents in threading. The signature is acquire(block=True, timeout=None) with keyword parameters being acceptable. If block is True and timeout is not None then it specifies a timeout in seconds. If block is False then timeout is ignored.
On Mac OS X, sem_timedwait is unsupported, so calling acquire() with a timeout will emulate that function’s behavior using a sleeping loop.
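A sketch of the timeout parameter using a non-recursive Lock already held by the same process, so the second acquire cannot succeed and returns False once the (arbitrarily chosen) timeout expires:

```python
from multiprocessing import Lock

lock = Lock()
got_first = lock.acquire()                         # succeeds immediately
got_again = lock.acquire(block=True, timeout=0.1)  # times out: returns False
lock.release()
```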
Note
If the SIGINT signal generated by Ctrl-C arrives while the main thread is blocked by a call to BoundedSemaphore.acquire(), Lock.acquire(), RLock.acquire(), Semaphore.acquire(), Condition.acquire() or Condition.wait() then the call will be immediately interrupted and KeyboardInterrupt will be raised.
This differs from the behaviour of threading where SIGINT will be ignored while the equivalent blocking calls are in progress.
It is possible to create shared objects using shared memory which can be inherited by child processes.
Return a ctypes object allocated from shared memory. By default the return value is actually a synchronized wrapper for the object.
typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type.
If lock is True (the default) then a new recursive lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”.
Operations like += which involve a read and write are not atomic. So if, for instance, you want to atomically increment a shared value it is insufficient to just do
counter.value += 1
Assuming the associated lock is recursive (which it is by default) you can instead do
with counter.get_lock():
    counter.value += 1
Note that lock is a keyword-only argument.
Return a ctypes array allocated from shared memory. By default the return value is actually a synchronized wrapper for the array.
typecode_or_type determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the array module. If size_or_initializer is an integer, then it determines the length of the array, and the array will be initially zeroed. Otherwise, size_or_initializer is a sequence which is used to initialize the array and whose length determines the length of the array.
If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”.
Note that lock is a keyword-only argument.
Note that an array of ctypes.c_char has value and raw attributes which allow one to use it to store and retrieve strings.
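A sketch of the value and raw attributes on a shared character array (the bytes literal keeps the snippet valid on both Python 2 and 3):

```python
from multiprocessing import Array

arr = Array('c', 10)     # 'c' maps to ctypes.c_char; wrapper gains value/raw
arr.value = b'hello'     # stores the bytes plus a NUL terminator
text = arr.value         # reads back up to the first NUL byte
raw = arr.raw            # the whole 10-byte buffer, padding included
```

Reading value stops at the first NUL, so text is b'hello', while raw is always the full 10 bytes.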