Here's what I'm saying. Let's say you're a Python package author. You ship your app or library on PyPI to a lot of users. You have a problem that can be solved in 100 lines of C++, but can't be solved in Python, because Python would just be too slow. That's a problem, because introducing even a single line of native code into your Python package means assuming numerous burdens relating to compilation and distribution. Native code is hard. Native code being hard is the whole reason we use high-level languages like Python. It's hard because you'd have to set up a devops process to build and test your C++ code on all these different operating systems. So what I'm proposing you consider instead is this: compile the ~100 lines of C/C++ code you want as a ~100kb fat APE binary, on your one dev system of choice, using Cosmo. Then embed that APE executable inside your pure Python package, launch it as a subprocess, and never have to worry about devops and distribution toil.
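To make the "embed and launch as a subprocess" part concrete, here's a minimal sketch of what the pure-Python side could look like. The package name "mypkg" and binary name "helper.com" are hypothetical; the point is that the APE binary ships as ordinary package data and gets invoked with stdlib tools only:

```python
import subprocess
from importlib import resources

def ape_path(package: str, name: str):
    # Resolve a data file shipped inside the package to a real
    # filesystem path; as_file() handles the case where the package
    # is installed as a zipped wheel by extracting a temporary copy.
    return resources.as_file(resources.files(package) / name)

def run_tool(exe, *args: str) -> str:
    # Launch the native helper as a subprocess and capture its stdout.
    # Because it's a fat APE binary, the same file runs on Linux,
    # macOS, Windows, and the BSDs with no per-OS build step.
    proc = subprocess.run(
        [str(exe), *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout
```

Usage would be something like `with ape_path("mypkg", "helper.com") as exe: out = run_tool(exe, "--some-flag")`. The trade-off, of course, is that every call pays subprocess-spawn and serialization overhead, which is fine for coarse-grained work and fatal for chatty per-element calls.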
I like that idea for a lot of applications (I also like Python's "multiprocessing" library; other languages could learn a thing or two from it, and it really takes the sting out of the GIL). But I don't see it working well for something like NumPy or PyTorch, which share arbitrary memory between Python and native code and call back and forth at a very high frequency.
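For what it's worth, the way multiprocessing takes the sting out of the GIL is by running work in separate interpreter processes, each with its own GIL. A minimal sketch:

```python
from multiprocessing import Pool

def square(n: int) -> int:
    # CPU-bound work executes in a worker process, so it is not
    # serialized against other workers by the parent's GIL.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # map() pickles the inputs, farms them out, and collects
        # the results in order.
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```

The catch, and the reason it doesn't help the NumPy/PyTorch case, is that data crosses the process boundary by pickling (or explicit shared-memory buffers), not by cheaply sharing arbitrary in-process memory.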
PyTorch and the neural net implementations built on it are the whole reason I'm using Python at all. If the answer were "build the native extensions you need for PyTorch etc. into the Python binary," that would be fine too; I have no love for dynamic library loading. But I don't see the PyTorch team going out of their way to change anything about their build process or runtime to work with Cosmopolitan, as much as I wish they would, so my guess is I won't be able to use Cosmopolitan Python for the foreseeable future.