notatallshaw's comments

> easy_install

I don't know what guides you're reading but I haven't touched easy_install in at least a decade. Its successor, pip, had effectively replaced all use cases for it by around 2010.


> I don't know what guides you're reading but I haven't touched easy_install in at least a decade.

It is mentioned in the "Explanations and Discussions" section [0] of the linked Python Packaging guide.

Old indeed, but can still be found at the top level of the current docs.

[0] https://packaging.python.org/en/latest/#explanations-and-dis...


Yes, it is mentioned there, as being deprecated:

> easy_install, now deprecated, was released in 2004 as part of Setuptools.


Package names and module names are not coupled to each other. You could have a package name like "company-foo" and import it as "foo" or "bar" or anything else.

But if you want, you can have a non-flat namespace for imports using PEP 420 – Implicit Namespace Packages, so all your different packages "company-foo", "company-bar", etc. can be installed into the "company" namespace and all just work.
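
For illustration, a minimal PEP 420 layout sketch with two hypothetical distributions sharing the "company" namespace:

    company-foo/                 company-bar/
        pyproject.toml               pyproject.toml
        company/                     company/
            foo/                         bar/
                __init__.py                  __init__.py
The key detail is that neither project ships a company/__init__.py; with both installed, "from company import foo" and "from company import bar" both work.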

Nothing stops an index from validating that wheels use the same import name or namespace as their package names. Validating sdists with arbitrary build backends would not be possible, but you could enforce which backends are allowed for certain users.


Given Astral's heavy involvement in the wheelnext project I suspect this index is an early adopter of Wheel Variants, which are an attempt to solve the problems of CUDA (and that entire class of problems, not just CUDA specifically) in a more automated way than even conda: https://wheelnext.dev/proposals/pepxxx_wheel_variant_support...


It's actually not powered by Wheel Variants right now, though we are generally early adopters of the initiative :)


Well, it was just a guess; "GPU-aware" is a bit mysterious to those of us on the outside ;).


Probably the more useful blog post: https://astral.sh/blog/introducing-pyx


Thanks, we've updated this now from https://astral.sh/pyx.


Thanks, that's a bit less cryptic than the linked page.

Still don’t get how they are solving what they claim to solve.


I'm guessing from the uv page [0] it's mainly if the speed of pip is a problem for you?

[0] https://docs.astral.sh/uv/


There are a bunch of problems with PyPI. For example, there's no metadata API: you have to actually fetch each wheel file and inspect it to figure out certain things about the packages you're trying to resolve/install.

It would be nice if they contributed improvements upstream, but then they can't capture revenue from doing it. I guess it's better to have an alternative and improved PyPI, than to have no improvements and a sense of pride.

There is a lot of other stuff going on with Pyx, but "uv-native metadata APIs" is the relevant one for this example.


I'm guessing it's the right PyTorch and FlashAttention and TransformerEngine and xformers and all that for the machine you're on without a bunch of ninja-built CUDA capability pain.

They explicitly mention PyTorch in the blog post. That's where the big money in Python is, and that's where PyPI utterly fails.


I spend little time with Python, but I didn’t have any problems using uv. Given how great uv is, I’d like to use pyx, but first it would be good if they could provide a solid argument for using it.


I suspect that, in order to succeed, they will need to build something that is isomorphic to Nix.


Yeah, and uv2nix is already pretty good! I wonder if pyx will be competing with uv2nix.

It's easy to compete with Nix tooling, but pretty hard to compete with the breadth of nixpkgs.


They already built uv, which works extremely well for that.


> uv pip install --system torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

uv has a feature to get the correct version of torch based on your available CUDA (and some non-CUDA) drivers (though I suggest using a venv, not the system Python):

> uv pip install torch torchvision torchaudio --torch-backend=auto

More details: https://docs.astral.sh/uv/guides/integration/pytorch/#automa...

This also means you can safely mix torch requirements with non-torch requirements as it will only pull the torch related things from the torch index and everything else from PyPI.


I love uv and really feel like I only need to know "uv add" and "uv sync" to be effective using it with python. That's an incredible feat.

But when I hear about these kinds of extras, it makes me even more excited. Getting CUDA and torch to work together is something I have struggled with countless times.

The team at Astral should be nominated for a Nobel Peace Prize.


> "uv add"

One life-changing thing I've been using `uv` for:

System python version is 3.12:

    $ python3 --version
    Python 3.12.3
A script that requires a library we don't have, and won't work on our local python:

    $ cat test.py
    #!/usr/bin/env python3

    import sys
    from rich import print

    if sys.version_info < (3, 13):
        print("This script will not work on Python 3.12")
    else:
        print(f"Hello world, this is python {sys.version}")
It fails:

    $ python3 test.py
    Traceback (most recent call last):
    File "/tmp/tmp/test.py", line 10, in <module>
        from rich import print
    ModuleNotFoundError: No module named 'rich'
Tell `uv` what our requirements are:

    $ uv add --script=test.py --python '3.13' rich
    Updated `test.py`
`uv` updates the script:

    $ cat test.py
    #!/usr/bin/env python3
    # /// script
    # requires-python = ">=3.13"
    # dependencies = [
    #     "rich",
    # ]
    # ///

    import sys
    from rich import print

    if sys.version_info < (3, 13):
        print("This script will not work on Python 3.12")
    else:
        print(f"Hello world, this is python {sys.version}")
`uv` runs the script, after installing packages and fetching Python 3.13

    $ uv run test.py
    Downloading cpython-3.13.5-linux-x86_64-gnu (download) (33.8MiB)
    Downloading cpython-3.13.5-linux-x86_64-gnu (download)
    Installed 4 packages in 7ms
    Hello world, this is python 3.13.5 (main, Jun 12 2025, 12:40:22) [Clang 20.1.4 ]
And if we run it with Python 3.12, we can see that errors:

    $ uv run --python 3.12 test.py
    warning: The requested interpreter resolved to Python 3.12.3, which is incompatible with the script's Python requirement: `>=3.13`
    Installed 4 packages in 7ms
    This script will not work on Python 3.12
Works for any Python you're likely to want:

    $ uv python list
    cpython-3.14.0b2-linux-x86_64-gnu                 <download available>
    cpython-3.14.0b2+freethreaded-linux-x86_64-gnu    <download available>
    cpython-3.13.5-linux-x86_64-gnu                   /home/dan/.local/share/uv/python/cpython-3.13.5-linux-x86_64-gnu/bin/python3.13
    cpython-3.13.5+freethreaded-linux-x86_64-gnu      <download available>
    cpython-3.12.11-linux-x86_64-gnu                  <download available>
    cpython-3.12.3-linux-x86_64-gnu                   /usr/bin/python3.12
    cpython-3.12.3-linux-x86_64-gnu                   /usr/bin/python3 -> python3.12
    cpython-3.11.13-linux-x86_64-gnu                  /home/dan/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/bin/python3.11
    cpython-3.10.18-linux-x86_64-gnu                  /home/dan/.local/share/uv/python/cpython-3.10.18-linux-x86_64-gnu/bin/python3.10
    cpython-3.9.23-linux-x86_64-gnu                   <download available>
    cpython-3.8.20-linux-x86_64-gnu                   <download available>
    pypy-3.11.11-linux-x86_64-gnu                     <download available>
    pypy-3.10.16-linux-x86_64-gnu                     <download available>
    pypy-3.9.19-linux-x86_64-gnu                      <download available>
    pypy-3.8.16-linux-x86_64-gnu                      <download available>
    graalpy-3.11.0-linux-x86_64-gnu                   <download available>
    graalpy-3.10.0-linux-x86_64-gnu                   <download available>
    graalpy-3.8.5-linux-x86_64-gnu                    <download available>


They’ve definitely saved me many hours of wasted time between uv and ruff.


Agreed, making the virtual environment management and so much else disappear lets so much more focus go to python itself.


Of all the great things people say about uv, this is the one that sold me on it when I found this option in the docs. Such a nice feature.


setdefault was a go-to method before defaultdict was added to the collections module in Python 2.5, which replaced its biggest use case.
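
For illustration, the classic grouping idiom this refers to (a minimal sketch, not from the original comment):

    from collections import defaultdict

    pairs = [("a", 1), ("b", 2), ("a", 3)]

    # pre-2.5 idiom with setdefault
    groups = {}
    for key, value in pairs:
        groups.setdefault(key, []).append(value)

    # the defaultdict replacement for this use case
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)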


It's been some time since I last benchmarked defaultdict, but last time I did (circa 3.6 and earlier?) it was considerably slower than judicious use of setdefault.


One time that defaultdict may come out ahead is if the default value is expensive to construct and rarely needed:

    d.setdefault(k, computevalue())
defaultdict takes a factory function, so it's only called if the key is not already present:

    d = defaultdict(computevalue)
This applies to some extent even if the default value is just an empty dictionary (as it often is in my experience). You can use dict() as the factory function in that case.
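
For example, a tiny sketch:

    from collections import defaultdict

    d = defaultdict(dict)
    d["outer"]["inner"] = 1  # the empty dict is only built on first access of "outer"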

But I have never benchmarked!


> if the default value is expensive to construct and rarely needed:

I'd say "or" rather than "and": defaultdict has higher overhead to initialise the default (especially if you don't need a function call in the setdefault call) but because it uses a fallback of dict lookup it's essentially free if you get a hit. As a result, either a very high redundancy with a cheap default or a low amount of redundancy with a costly default will have the defaultdict edge out.

For the most extreme case of the former,

    d = {}
    for i in range(N):
        d.setdefault(0, [])
versus

    d = defaultdict(list)
    for i in range(N):
        d[0]
has the defaultdict edge out at N=11 on my machine (561ns for setdefault versus 545ns for defaultdict). And that's with a literal list being quite a bit cheaper than a list() call.
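
If you want to reproduce this kind of comparison, here is a minimal timeit sketch (absolute numbers will of course vary by machine and Python version):

    import timeit
    from collections import defaultdict

    N = 11

    def with_setdefault():
        d = {}
        for i in range(N):
            d.setdefault(0, [])

    def with_defaultdict():
        d = defaultdict(list)
        for i in range(N):
            d[0]

    print(timeit.timeit(with_setdefault, number=100_000))
    print(timeit.timeit(with_defaultdict, number=100_000))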


If you want to offer a PyPI competitor where your value is that all packages are vetted or reviewed, nothing stops you; the API that Python package installer tools use to interact with PyPI is specified: https://packaging.python.org/en/latest/specifications/simple...
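
As a rough illustration, any index implementing the Simple API's JSON form (PEP 691) can be queried the same way installers query PyPI; pypi.example.com below is a hypothetical vetted index:

    import json
    import urllib.request

    # hypothetical vetted index implementing the Simple API (PEP 691 JSON form)
    url = "https://pypi.example.com/simple/requests/"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.pypi.simple.v1+json"}
    )
    with urllib.request.urlopen(req) as resp:
        files = json.load(resp)["files"]

    print([f["filename"] for f in files])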

There are a handful of commercial competitors in this space, but in my experience this ends up only being valuable for a small % of companies. Either a company is small enough that it wants to be agile and doesn't have time for a third party to vet or review the packages it wants to use, or a company is big enough that it builds its own internal solution. And single users tend to get annoyed when something doesn't work and stop using it.


that's like suggesting someone complaining about security issues should fork libxml or openssl because the original developers don't have enough resources to maintain their work. the right answer is that as users of those packages we need to pool our resources and contribute to enable the developers to do a better job.

for pypi that means raising funds that we can contribute to.

so instead of arguing that the PSF doesn't have the resources, they should go and raise them. do some analysis on what it takes, and then start a call for help/contributions. to get started, all it takes is to recognize the problem and put fixing it on the agenda.


> so instead of arguing that the PSF doesn't have the resources, they should go and raise them

The PSF has raised resources for support; the person who wrote this post is working full-time to make PyPI better. But you can't staff your way out of this problem; PyPI would need ~dozens of full time reviewers to come anywhere close to a human-vetted view of the index. I don't think that's realistic.


> that's like suggesting someone complaining about security issues should fork libxml or openssl because the original developers don't have enough resources to maintain their work.

I disagree with this analogy; both those libraries have complex and nuanced implementation details which make it difficult for a fork to remain compatible. PyPI does not: you can host a simple index with existing libraries and have 100% compatibility with all Python package installer tools.

And yet, openssl has been forked by companies a bunch of times, exactly because it lacks the resources to do significant security analysis of its own code.

> for pypi that means raising funds that we can contribute to.

PyPI accepts funds, feel free to donate.

> so instead of arguing that the PSF doesn't have the resources, they should go and raise them. do some analysis on what it takes, and then start a call for help/contributions. to get started, all it takes is to recognize the problem and put fixing it on the agenda.

This is all already being done; it appears you haven't done any research into this before commenting on this topic.


Right. That's the economic argument: hosting anonymously-submitted/unvetted/insecure/exploit-prone junkware is cheap. And so if you have a platform you're trying to push (like Python or Node[1]) you're strongly incentivized to root your users simply because if you don't your competitors will.

But it's still broken.

[1] Frankly even Rust has this disease with the way cargo is managed, though that remains far enough upstream of the danger zone to not be as much of a target. But the reckoning is coming there at some point.


> is cheap

It's not even cheap; it's just possible to get companies to donate the resources to sustain it: https://dustingram.com/articles/2021/04/14/powering-the-pyth...

What it is is feasible, and IMO the alternative you're suggesting is infeasible under our current model of global economics without some kind of massive government funding.

> you're strongly incentivized to root your users simply because if you don't your competitors will

Python is not rooting its users; this is hyperbole.


> recursive self-improvement.

What LLM is recursively self-improving?

I thought, to date, all LLM improvements come from the hundreds of billions of dollars of investment and the millions of software engineer hours spent on better training and optimizations.

And, my understanding is, there are "mixed" findings on whether LLMs assisting those software engineers help or hurt their performance.


Two years ago was Python 3.11, my real world workloads did see a ~15-20% improvement in performance with that release.

I don't remember the Faster CPython Team claiming JIT with a >50% speedup should have happened two years ago, can you provide a source?

I do remember Mark Shannon proposed an aggressive timeline for improving performance, but I don't remember him attributing it to a JIT, and also the Faster CPython Team didn't exist when that was proposed.

> apparently made by one of the chief liars who canceled Tim Peters

Tim Peters still regularly posts on DPO so calling him "cancelled" is a choice: https://discuss.python.org/u/tim.one/activity.

Also, I really cannot think who you would be referring to on the Faster CPython Team; all the former members I am aware of largely stayed out of the discussions on DPO.


Astral's focus has been to support the simplest use case: a pure Python project with a standard layout. Their aim has been that most users, and especially beginners, should be able to use it with zero configuration.

As such they do not currently support C extensions, nor running arbitrary code during the build process. I imagine they will add features slowly over time, but with the continued philosophy that the simple and common cases should be zero configuration.

For Python experts who don't have special needs from a build backend, I would recommend flit_core, the simplest and most stable build backend, or hatchling, which is very stable and has lots of features. While uv_build is great, it does mean that users building (but not installing) your project need to be able to run native code rather than pure Python. But this is a pretty small edge case that won't be an issue for most people.
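
For illustration, the build-system table in pyproject.toml is essentially all that differs between these choices (the version pins here are only examples, check each backend's docs for current ones):

    # flit_core: pure Python, minimal
    [build-system]
    requires = ["flit_core>=3.2,<4"]
    build-backend = "flit_core.buildapi"

    # or uv_build: fast, but building requires running uv's native code
    [build-system]
    requires = ["uv_build>=0.8,<0.9"]
    build-backend = "uv_build"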

