I agree; now I just use uv and forget about it. It does use up a fair bit of disk, but disk is cheap, and the bootstrapping time reduction makes working with Python a pleasure again.
I recently did the same at work: converted all our pip usage to uv pip but otherwise made no changes to the venv/requirements.txt workflow, and everything just got much faster - it's a no-brainer.
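For anyone curious how invasive the change was: it was essentially just swapping the install command, something like this (a simplified sketch; the exact venv/activation steps depend on your pipeline):

    # before: plain pip inside the existing venv
    python -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt

    # after: same venv/requirements.txt workflow, only the installer changes
    python -m venv .venv
    . .venv/bin/activate
    uv pip install -r requirements.txt

Everything else stayed exactly as it was.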
But the increased resource usage is real. Around 10% of our builds now get OOM-killed because the build container isn't provisioned big enough to handle uv's excessive memory usage. I've considered reducing the number of available threads to try to throttle the non-deterministic allocation behavior, but that would presumably make it slower too, so instead we just click the re-run job button. Even with that manual intervention 10% of the time, it is so much faster than pip that it's worth it.
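For concreteness, the throttling I had in mind was roughly this - an untested sketch, assuming uv's UV_CONCURRENT_* environment variables (concurrent downloads/builds/installs) actually cap the parallelism that drives the memory spikes:

    # cap uv's parallelism in the CI job, trading some speed for a lower peak
    export UV_CONCURRENT_DOWNLOADS=4
    export UV_CONCURRENT_BUILDS=1
    export UV_CONCURRENT_INSTALLS=2
    uv pip install -r requirements.txt

We haven't actually tried it yet, since re-running the job has been the cheaper option so far.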
Please open an issue with some details about the memory usage. We're happy to investigate, and feedback on how it's working in production is always helpful.
We run on-prem k8s and do the pip install stage in a 2CPU/4GB GitLab runner, which feels like it should be sufficient for the uv:python3.12-bookworm image. We have about 100 deps that, aside from numpy/pandas/pyarrow, are pretty lightweight. No GPU stuff. I tried 2CPU/8GB runners but they still OOMed occasionally, so it didn't seem worth using up those resources for the normal case. I don't know enough about uv's internals to understand why it's so expensive, but it feels counterintuitive because the whole venv is "only" around 500MB.
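If it helps the investigation, a rough way to get a peak-RSS number for the install step is something like this (assuming GNU time is installed in the image, which it often isn't in slim containers):

    # "Maximum resident set size" in the output is the peak memory of the install
    /usr/bin/time -v uv pip install -r requirements.txt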
Hopefully by that point the community is centralized enough to move in one direction.