Whenever I see a blog post on Python concurrency that recommends something other than concurrent.futures' ThreadPoolExecutor or ProcessPoolExecutor, I shake my head in disappointment at how far we've strayed from "one obviously right way". Why, in 2020, am I seeing someone manually close and join a worker pool (blocking!) to return all results at once, because they rewrote executor.map using multiprocessing? Why do they del the pool immediately before returning instead of just letting it fall out of scope? Why are they using map, which orders the results, if they just want to call them all? If performance matters, you want to read results in order of completion, not order of submission, and you don't want to wait for all of them to complete if any of them fail. Where did Python's "right way" go sideways?
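A minimal sketch of the pattern argued for here, using only the standard library: submit tasks to a ThreadPoolExecutor and consume them with as_completed, so results arrive in completion order and a failed task raises immediately instead of blocking on the whole batch. (The `work` function is a stand-in for whatever the real tasks are.)

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    # Stand-in task; any callable that may raise works here.
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, n) for n in range(10)]
    results = []
    for fut in as_completed(futures):
        # .result() re-raises the task's exception right away,
        # rather than waiting for every future like map() would.
        results.append(fut.result())

print(sorted(results))
```

The `with` block handles shutdown, so there is no manual close/join/del of the pool at all.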
> I don’t see where in the docs it says, it’s 2020 so do this from now on. Maybe that’s the problem?
I guess lack of clear direction is exactly the problem. For all of PEP 20's supposed importance, it's notoriously difficult to discover which parts of the standard library to use and which to ignore, unless you like reinventing wheels.
I was never sure about that; I assumed a single interpreter process runs on one core and its threads must share that process space. But you're right, and I found good confirmation in this visualization (see the "code" link as well): http://www.dabeaz.com/GIL/gilvis/fourthread.html
• Python threads are real system threads
• POSIX threads (pthreads)
• Windows threads
• Fully managed by the host operating system
• Represent threaded execution of the Python interpreter process (written in C)
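You can see the "real system threads" point directly from Python itself. A quick sketch, assuming Python 3.8+ for threading.get_native_id(), which reports the OS-level thread id the kernel assigned to each thread:

```python
import threading

native_ids = []

def record_id():
    # get_native_id() returns the id the host OS (pthreads on POSIX,
    # Windows threads on Windows) assigned to this thread.
    native_ids.append(threading.get_native_id())

threads = [threading.Thread(target=record_id) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(native_ids)
```

Each entry is a genuine kernel thread id, not something the interpreter made up; the GIL only constrains which of these OS threads executes Python bytecode at a given moment.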