

Context: I've been learning about the multiprocessing module to speed up code in a factory simulation. In the simulation I have a large set of machines, represented by class instances, that need to have a function called to update their state for each tick (timestep) of the simulation. The relevant part for this question, and the reason I'm looking at multiprocessing, is that there are many of these machines, they are self-contained (no shared state), and their update function is CPU-bound.

My problem: I cannot directly call the class methods in Pool.map. Instead I need the non-Pythonic helper function job, which takes an instance, runs the function, and returns the updated instance. How can I get around the job helper function? Here is a code example:
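(Stripped down to a sketch: the value attribute and the body of func are stand-ins for the real machine class.)

    from multiprocessing import Pool

    class MyClass:
        # stand-in for a machine; the real class holds more state
        def __init__(self, value=0):
            self.value = value

        def func(self):
            # the cpu-bound update, simplified to a trivial operation
            self.value += 1
            return self  # return self so the updated instance comes back

    def job(instance):
        # non-Pythonic helper: run the update, hand back the updated instance
        instance.func()
        return instance

    if __name__ == '__main__':
        instances = [MyClass(i) for i in range(10)]
        with Pool() as p:
            instances = p.map(job, instances)                  # Works
            # instances = p.map(lambda x: x.func, instances)   # Fails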
The p.map(job, instances) call works, but the commented-out lambda version fails: it says something fails during pickling (serialization), but doesn't provide me enough to go on. (Adding return self to func is what lets the updated instance travel back from the worker.) A second question has to do with pickling, or the serialization/de-serialization: each tick, all these machine instances are being serialized, sent to workers in the pool, de-serialized, executed, serialized, returned, and de-serialized again. Is there a way I can avoid this, or minimize the impact?

There are lots of ways of doing this, but your helper function is perfectly fine in my opinion. That lambda happens to be equivalent to operator.attrgetter('func'), which can be pickled, so you could try that instead. For example (one wrinkle: attrgetter('func'), like the lambda, returns the bound method without calling it, so operator.methodcaller('func'), the picklable equivalent of lambda x: x.func(), is what actually runs the update):
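    import operator
    from multiprocessing import Pool

    class MyClass:
        # same shape as the class in the question
        def __init__(self, value=0):
            self.value = value

        def func(self):
            self.value += 1
            return self

    if __name__ == '__main__':
        instances = [MyClass(i) for i in range(10)]
        with Pool() as p:
            # methodcaller is picklable in modern Python 3 (unlike the
            # lambda); because func returns self, map hands back the
            # updated instances
            instances = p.map(operator.methodcaller('func'), instances)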
Or you could make the class itself the worker function, that is, do something like p.map(MyClass, range(10)) and make the __init__ method call self.func() (a sketch of this follows below). Really it just depends what fits most neatly into the rest of your code. Concurrent code always tends to be a bit different from single-threaded code, so don't worry too much if you have to do things that seem different from what you're used to.
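A sketch of that class-as-worker pattern (Machine here is a hypothetical stand-in; its __init__ does the update itself):

    from multiprocessing import Pool

    class Machine:
        # hypothetical class-as-worker: constructing it does the work
        def __init__(self, seed):
            self.value = seed
            self.func()  # __init__ runs the update

        def func(self):
            self.value += 1  # the cpu-bound update, simplified

    if __name__ == '__main__':
        with Pool() as p:
            # the class itself is the mapped callable: each worker calls
            # Machine(i), and the constructed instance is returned whole
            machines = p.map(Machine, range(10))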