
Using Shared Array In Multiprocessing

I am trying to run a parallel process in Python, wherein I have to extract certain polygons from a large array based on some conditions. The large array has 10k+ polygons that are …

Solution 1:

You can absolutely use a library like Ray.

The structure would look something like this (simplified to remove your application logic).

import numpy as np
import ray

ray.init()

# Create the array and store it in shared memory once.
array = np.ones(10**6)
array_id = ray.put(array)


@ray.remote
def extract_polygon(array, index):
    # Change this to actually extract the polygon.
    return index


# Start 10 tasks that each take in the ID of the array in shared memory.
# These tasks execute in parallel (assuming there are enough CPU resources).
result_ids = [extract_polygon.remote(array_id, i) for i in range(10)]

# Fetch the results.
results = ray.get(result_ids)

You can read more about Ray in the documentation.

