I don't know what guides you're reading, but I haven't touched easy_install in at least a decade. Its successor, pip, had effectively replaced all of its use cases by around 2010.
Package names and module names are not coupled to each other. You could have a package named "company-foo" and import it as "foo" or "bar" or anything else.
But if you want, you can have a non-flat namespace for imports using PEP 420 (Implicit Namespace Packages), so all your different packages ("company-foo", "company-bar", etc.) can be installed into the "company" namespace and just work.
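A rough sketch of what that looks like on disk (the distribution and module names here are just illustrative):

    # Two separately published distributions, "company-foo" and "company-bar",
    # each shipping a subpackage of the shared "company" namespace.
    # Neither distribution ships a company/__init__.py -- that's what makes
    # "company" a PEP 420 implicit namespace package.
    #
    #   company-foo/
    #       company/foo/__init__.py
    #   company-bar/
    #       company/bar/__init__.py
    #
    # After installing both distributions, they import from the same logical
    # namespace even though they were packaged and installed independently:
    from company import foo
    from company import bar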
Nothing stops an index from validating that wheels use the same name or namespace as their package names. Validating sdists with arbitrary build backends would not be possible, but you could enforce which backends are allowed for certain users.
Given Astral's heavy involvement in the wheelnext project, I suspect this index is an early adopter of Wheel Variants, which are an attempt to solve the problems of CUDA (and that entire class of problems, not just CUDA specifically) in a more automated way than even conda: https://wheelnext.dev/proposals/pepxxx_wheel_variant_support...
There are a bunch of problems with PyPI. For example, there's no metadata API: you have to actually fetch each wheel file and inspect it to figure out certain things about the packages you're trying to resolve or install.
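Concretely, to learn a wheel's dependencies a resolver ends up doing something like this (a stdlib-only sketch; the wheel filename is illustrative and assumed already downloaded):

    import zipfile
    from email.parser import Parser

    # A wheel is just a zip archive; its dependency metadata lives in
    # <name>-<version>.dist-info/METADATA, in email-header format.
    wheel_path = "example_pkg-1.0-py3-none-any.whl"  # illustrative

    with zipfile.ZipFile(wheel_path) as wheel:
        metadata_name = next(
            n for n in wheel.namelist() if n.endswith(".dist-info/METADATA")
        )
        metadata = Parser().parsestr(wheel.read(metadata_name).decode())

    print(metadata["Name"], metadata["Version"])
    print(metadata.get_all("Requires-Dist"))  # the dependency list the resolver needs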
It would be nice if they contributed improvements upstream, but then they can't capture revenue from doing it. I guess it's better to have an alternative and improved PyPI, than to have no improvements and a sense of pride.
There is a lot of other stuff going on with Pyx, but "uv-native metadata APIs" is the relevant one for this example.
I'm guessing it's the right PyTorch and FlashAttention and TransformerEngine and xformers and all that for the machine you're on, without a bunch of ninja-built CUDA capability pain.
They explicitly mention PyTorch in the blog post. That's where the big money in Python is, and that's where PyPI utterly fails.
I spend little time with Python, but I didn’t have any problems using uv. Given how great uv is, I’d like to use pyx, but first it would be good if they could provide a solid argument for using it.
uv has a feature to get the correct version of torch based on your available CUDA (and some non-CUDA) drivers (though I suggest using a venv, not the system Python):
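Roughly, the pyproject.toml pattern looks like this (a sketch based on uv's documented PyTorch index support; the project name, index name, and CUDA tag here are just examples):

    [project]
    name = "example"
    version = "0.1.0"
    requires-python = ">=3.12"
    dependencies = ["torch", "requests"]

    # Only packages explicitly pinned to this index (via tool.uv.sources)
    # are pulled from it; everything else still comes from PyPI.
    [[tool.uv.index]]
    name = "pytorch-cu128"
    url = "https://download.pytorch.org/whl/cu128"
    explicit = true

    [tool.uv.sources]
    torch = { index = "pytorch-cu128" }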
This also means you can safely mix torch requirements with non-torch requirements, as it will only pull the torch-related things from the torch index and everything else from PyPI.
I love uv and really feel like I only need to know "uv add" and "uv sync" to be effective using it with Python. That's an incredible feat.
But when I hear about these kinds of extras, it makes me even more excited. Getting CUDA and torch to work together is something I have struggled with countless times.
The team at Astral should be nominated for a Nobel Peace Prize.
A script that requires a library we don't have, and won't work on our local Python:
    $ cat test.py
    #!/usr/bin/env python3
    import sys
    from rich import print

    if sys.version_info < (3, 13):
        print("This script will not work on Python 3.12")
    else:
        print(f"Hello world, this is python {sys.version}")
It fails:
    $ python3 test.py
    Traceback (most recent call last):
      File "/tmp/tmp/test.py", line 10, in <module>
        from rich import print
    ModuleNotFoundError: No module named 'rich'
    $ cat test.py
    #!/usr/bin/env python3
    # /// script
    # requires-python = ">=3.13"
    # dependencies = [
    #     "rich",
    # ]
    # ///
    import sys
    from rich import print

    if sys.version_info < (3, 13):
        print("This script will not work on Python 3.12")
    else:
        print(f"Hello world, this is python {sys.version}")
`uv` runs the script, after installing the packages and fetching Python 3.13:
    $ uv run test.py
    Downloading cpython-3.13.5-linux-x86_64-gnu (download) (33.8MiB)
     Downloading cpython-3.13.5-linux-x86_64-gnu (download)
    Installed 4 packages in 7ms
    Hello world, this is python 3.13.5 (main, Jun 12 2025, 12:40:22) [Clang 20.1.4 ]
And if we run it with Python 3.12, we can see the incompatibility warning:
    $ uv run --python 3.12 test.py
    warning: The requested interpreter resolved to Python 3.12.3, which is incompatible with the script's Python requirement: `>=3.13`
    Installed 4 packages in 7ms
    This script will not work on Python 3.12
It's been some time since I last benchmarked defaultdict, but last time I did (circa Python 3.6 or earlier), it was considerably slower than judicious use of setdefault.
One time that defaultdict may come out ahead is if the default value is expensive to construct and rarely needed:
    d.setdefault(k, computevalue())
defaultdict takes a factory function, so it's only called if the key is not already present:
    d = defaultdict(computevalue)
This applies to some extent even if the default value is just an empty dictionary (as it often is in my experience). You can use dict() as the factory function in that case.
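As a small illustration of the difference (the grouping example here is my own, not from the discussion above):

    from collections import defaultdict

    words = ["apple", "avocado", "banana"]

    # With setdefault, the [] default is constructed on every iteration,
    # whether or not the key already exists.
    groups = {}
    for word in words:
        groups.setdefault(word[0], []).append(word)

    # With defaultdict, list() only runs when the key is actually missing.
    groups2 = defaultdict(list)
    for word in words:
        groups2[word[0]].append(word)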
> if the default value is expensive to construct and rarely needed:
I'd say "or" rather than "and": defaultdict has higher overhead to initialise the default (especially if you don't need a function call inside the setdefault call), but because a hit falls back to a plain dict lookup, it's essentially free when the key is already present. As a result, either a very high hit rate with a cheap default or a low hit rate with a costly default will have defaultdict edge out.
For the most extreme case of the former,
    d = {}
    for i in range(N):
        d.setdefault(0, [])
versus
    d = defaultdict(list)
    for i in range(N):
        d[0]
has defaultdict edging out at N=11 on my machine (561ns for setdefault versus 545ns for defaultdict). And that's with the literal [] being quite a bit cheaper than a list() call.
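If anyone wants to reproduce that kind of comparison, something along these lines works (the harness is mine; numbers will obviously vary by machine and Python version):

    from collections import defaultdict
    from timeit import timeit

    N = 11  # the crossover point will differ between machines and versions

    def with_setdefault():
        d = {}
        for i in range(N):
            d.setdefault(0, [])

    def with_defaultdict():
        d = defaultdict(list)
        for i in range(N):
            d[0]

    print("setdefault :", timeit(with_setdefault, number=1_000_000))
    print("defaultdict:", timeit(with_defaultdict, number=1_000_000))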
If you want to offer a PyPI competitor where your value proposition is that all packages are vetted or reviewed, nothing stops you; the API that Python package installer tools use to interact with PyPI is specified: https://packaging.python.org/en/latest/specifications/simple...
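To give a sense of how small that surface is, here's a rough stdlib-only sketch of generating a PEP 503 "simple" index from a directory of wheels (paths and names are illustrative; a real index would also want file hashes, metadata files, and proper content types):

    import html
    import pathlib
    import re

    WHEEL_DIR = pathlib.Path("wheels")   # a directory full of *.whl files (illustrative)
    OUT = pathlib.Path("simple")         # generated index root; serve with any static file server

    def canonicalize(name: str) -> str:
        # PEP 503 normalization: lowercase, collapse runs of "-", "_", "." into "-"
        return re.sub(r"[-_.]+", "-", name).lower()

    projects: dict[str, list[pathlib.Path]] = {}
    for wheel in WHEEL_DIR.glob("*.whl"):
        # Wheel filenames start with the distribution name, e.g. company_foo-1.0-py3-none-any.whl
        dist_name = wheel.name.split("-", 1)[0]
        projects.setdefault(canonicalize(dist_name), []).append(wheel)

    OUT.mkdir(exist_ok=True)
    root_links = []
    for name, files in sorted(projects.items()):
        project_dir = OUT / name
        project_dir.mkdir(exist_ok=True)
        file_links = "".join(
            f'<a href="../../wheels/{html.escape(f.name)}">{html.escape(f.name)}</a><br/>'
            for f in files
        )
        (project_dir / "index.html").write_text(f"<html><body>{file_links}</body></html>")
        root_links.append(f'<a href="{name}/">{name}</a><br/>')

    (OUT / "index.html").write_text("<html><body>" + "".join(root_links) + "</body></html>")

    # Serve with `python -m http.server`, then point an installer at it:
    #   pip install --index-url http://localhost:8000/simple/ company-foo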
There are a handful of commercial competitors in this space, but in my experience this ends up only being valuable for a small percentage of companies. Either a company is small enough that it wants to stay agile and doesn't have time for a third party to vet or review the packages it wants to use, or it's big enough that it builds its own internal solution. And individual users tend to get annoyed when something doesn't work and stop using it.
that's like suggesting someone complaining about security issues should fork libxml or openssl because the original developers don't have enough resources to maintain their work. the right answer is that as users of those packages we need to pool our resources and contribute to enable the developers to do a better job.
for pypi that means raising funds that we can contribute to.
so instead of arguing that the PSF doesn't have the resources, they should go and raise them. do some analysis on what it takes, and then start a call for help/contributions. to get started, all it takes is to recognize the problem and put fixing it on the agenda.
> so instead of arguing that the PSF doesn't have the resources, they should go and raise them
The PSF has raised resources for support; the person who wrote this post is working full-time to make PyPI better. But you can't staff your way out of this problem; PyPI would need ~dozens of full time reviewers to come anywhere close to a human-vetted view of the index. I don't think that's realistic.
> that's like suggesting someone complaining about security issues should fork libxml or openssl because the original developers don't have enough resources to maintain their work.
I disagree with this analogy; both those libraries have complex and nuanced implementation details that make it difficult for a fork to stay compatible. PyPI does not: you can host a simple index with existing libraries and have 100% compatibility with all Python package installer tools.
And YET, openssl has been forked by companies a bunch of times exactly because it lacks the resources to do significant security analysis of its own code.
> for pypi that means raising funds that we can contribute to.
PyPI accepts funds, feel free to donate.
> so instead of arguing that the PSF doesn't have the resources, they should go and raise them. do some analysis on what it takes, and then start a call for help/contributions. to get started, all it takes is to recognize the problem and put fixing it on the agenda.
This is all already being done; it appears you haven't done any research into this before commenting on the topic.
Right. That's the economic argument: hosting anonymously-submitted/unvetted/insecure/exploit-prone junkware is cheap. And so if you have a platform you're trying to push (like Python or Node[1]) you're strongly incentivized to root your users simply because if you don't your competitors will.
But it's still broken.
[1] Frankly even Rust has this disease with the way cargo is managed, though that remains far enough upstream of the danger zone to not be as much of a target. But the reckoning is coming there at some point.
What it is, is feasible, and IMO the alternative you're suggesting is infeasible under our current model of global economics without some kind of massive government funding.
> you're strongly incentivized to root your users simply because if you don't your competitors will
Python is not rooting its users; this is hyperbole.
I thought, to date, all LLM improvements come from the hundreds of billions of dollars of investment and the millions of software engineer hours spent on better training and optimizations.
And, my understanding is, there are "mixed" findings on whether LLMs assisting those software engineers help or hurt their performance.
Two years ago was Python 3.11, my real world workloads did see a ~15-20% improvement in performance with that release.
I don't remember the Faster CPython Team claiming JIT with a >50% speedup should have happened two years ago, can you provide a source?
I do remember Mark Shannon proposed an aggressive timeline for improving performance, but I don't remember him attributing it to a JIT, and also the Faster CPython Team didn't exist when that was proposed.
> apparently made by one of the chief liars who canceled Tim Peters
Also, I really cannot think of who you would be referring to on the Faster CPython Team; all the former members I am aware of largely stayed out of the discussions on DPO.
Astral's focus has been to support the simplest use case: a pure Python project with a standard layout. Their aim has been that most users, and especially beginners, should be able to use it with zero configuration.
As such, they do not currently support C extensions, nor running arbitrary code during the build process. I imagine they will add features slowly over time, but with the continued philosophy that the simple and common cases should require zero configuration.
For Python experts who don't have special needs from a build backend, I would recommend flit_core (the simplest and most stable build backend) or hatchling (very stable, with lots of features). While uv_build is great, it does mean that users building (but not installing) your project need to be able to run native code rather than pure Python. But this is a small enough edge case that for most people it won't be an issue.
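For reference, switching backends is a one-stanza change in pyproject.toml (the version pin here is illustrative):

    # flit_core: pure-Python, minimal
    [build-system]
    requires = ["flit_core>=3.4"]
    build-backend = "flit_core.buildapi"

    # or hatchling: pure-Python, more features
    # [build-system]
    # requires = ["hatchling"]
    # build-backend = "hatchling.build"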