I found a solution that doesn't involve changing the source code of a Python module. It uses the approach suggested here. One can check that only the physical cores are active after running that script by doing:
The apply_async method of a pool runs the worker function only once, on an arbitrarily selected process from the pool, so your two code examples won't do exactly the same thing. To really be equivalent, you'd need to call apply_async five times.

Which of the approaches is more appropriate to a given task depends a bit on what you are doing. multiprocessing.Pool lets each process handle multiple jobs, which may make it easier to parallelize your program. For instance, if you have a million items that need individual processing, you can create a pool with a reasonable number of processes (perhaps one per CPU core) and then pass the list of a million items to pool.map. The pool will distribute them to the various worker processes and collect the return values for the parent process. Launching a million separate processes would be much less practical (it would probably overwhelm your OS).