Why is my multithreading code not working?
I have the following function:
import cv2

def Upscale(file):
    img = cv2.imread(file)
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    path = 'ESPCN_x2.pb'
    sr.readModel(path)
    sr.setModel('espcn', 2)
    result = sr.upsample(img)
    return result
This function takes in an image path and returns the upscaled image as an array.
I have the following list of image paths:
file_list = ['page-136.jpg','page-0.jpg','page-11.jpg','page-12.jpg','page-13.jpg','page14.jpg','page-37.jpg','page-58.jpg','page-62.jpg','page-64.jpg','page-134.jpg','page-135.jpg']
and then the multithreading code:
import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed

res = []
with ThreadPoolExecutor(max_workers=10) as executor:
    future_to_response = {
        executor.submit(Upscale, f'C:\\Users\\rturedi\\Desktop\\DPI_proj\\images\\{i}'): i for i in file_list
    }
    t = tqdm.tqdm(total=len(future_to_response))
    for future in as_completed(future_to_response):
        res.append(future.result())
        t.update(1)
Unfortunately, when I run this code I receive the following error:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 265064448 bytes in function 'cv::OutOfMemoryError'
I attempted to solve this by googling the issue, but there does not seem to be much out there. Does anyone have an idea as to why this is happening?
Your Upscale function instantiates a new super-resolution neural network each time it is called, just to use it once. Since you are scheduling 10 executions of this function in parallel, 10 networks will be allocated in memory at the same time, which is likely the source of the out-of-memory error, since neural networks take a lot of memory.
You should instantiate only one neural network and pass it to the upscale function:
def upscale(file, network):
    img = cv2.imread(file)
    result = network.upsample(img)
    return result
Multithreading the execution of a neural network is usually not a good idea; start with a regular sequential execution:
if __name__ == "__main__":
sr = cv2.dnn_superres.DnnSuperResImpl_create()
path = 'ESPCN_x2.pb'
sr.readModel(path)
sr.setModel('espcn', 2)
files = [...]
results = [upscale(f, sr) for f in files]
In this context, using multithreading will likely result in out of memory errors, as you experienced. Moreover, it could also be counterproductive if the network execution is already multithreaded by OpenCV (assuming you run the network on CPU).
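If sequential execution turns out to be too slow and you still want to overlap the disk I/O and decoding with inference, one option is to share a single network across the threads and serialize only the model call with a lock. Here is a minimal sketch of that pattern; it uses a placeholder object in place of the real OpenCV model (and dummy "images") so the structure is clear without loading any weights:

    import threading
    from concurrent.futures import ThreadPoolExecutor

    class FakeNetwork:
        """Stand-in for the single shared cv2 super-resolution model."""
        def upsample(self, img):
            return [v * 2 for v in img]  # pretend this is 2x upscaling

    network = FakeNetwork()          # instantiated ONCE, shared by all threads
    network_lock = threading.Lock()  # serializes access to the one model

    def upscale(img):
        # File reading/decoding could run here in parallel across threads;
        # only the model call itself is protected by the lock.
        with network_lock:
            return network.upsample(img)

    images = [[1, 2], [3, 4], [5, 6]]
    with ThreadPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(upscale, images))
    print(results)  # [[2, 4], [6, 8], [10, 12]]

With a real cv2 network you would replace FakeNetwork with the sr object created once in the main block; memory stays bounded because only one model ever exists, at the cost of inference running one image at a time.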