Hacker News

This is terrible. I hope you're not adding a whole venv to version control.

What if the user doesn't have a venv created? What if they created it in a different directory? What if they created a second venv and want to use that instead? What if the user uses `.venv` instead of `venv`?

`#!/usr/bin/env python3` solves most of that.



No!

These are programs that are largely meant to have a run.sh or install.sh script run before the main script. If the venv doesn’t exist, it is created there and the requirements installed.

The main point is that I’m trading away some flexibility to keep my ENV clean. When I submit jobs on HPC clusters, keeping my environment clean tends to make things easier to troubleshoot.

If I’m switching between different programs, or commonly piping data between two programs with their own venvs, it can be easier to just run the associated python binary directly rather than having to manage activations for each venv.
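For example, a run.sh in that spirit might look something like this (a minimal sketch; the `venv` directory name and the requirements file are assumptions, not anything specific to my setup):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Bootstrap: create the venv next to the script on first run
if [ ! -d venv ]; then
    python3 -m venv venv
    # ./venv/bin/pip install -r requirements.txt   # hypothetical requirements file
fi
# Call the venv's interpreter directly; nothing is sourced into the shell
./venv/bin/python -c 'import sys; print(sys.prefix)'
```

The last line just prints the venv's prefix to show which interpreter ran; in practice it would be `./venv/bin/python main.py "$@"` or similar.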


You can have your cake (use multiple venvs) and eat it (flexibility) too.

`source venv/bin/activate` from your .sh files will cause `/usr/bin/env python3` to resolve to the python3 located in your venv. Switching between venvs is easy too: just call `deactivate` while one venv is active and it drops you out of that venv. You can then cleanly `source venv2/bin/activate`.
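In other words (a minimal sketch with a throwaway venv; the directory name is made up), activation just prepends the venv's bin/ directory to PATH, which is exactly why `/usr/bin/env python3` then finds the venv's interpreter:

```shell
#!/usr/bin/env bash
# `activate` prepends demo-venv/bin to PATH; `deactivate` restores it.
python3 -m venv demo-venv            # throwaway venv for the demo
source demo-venv/bin/activate
command -v python3                   # -> .../demo-venv/bin/python3
deactivate                           # restores the previous PATH
command -v python3                   # back to the system python3
```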


It feels like you're telling me that I'm holding my phone wrong.

I don't like sourcing things into my environment. I've worked this way for years. I think the idea of 'activating' and 'deactivating' an environment is an anti-pattern. But I also work on HPC clusters, where all of the path configuration is handled by the environment. Because of this, I've learned the hard way that with venvs and modules it's far too easy to have the wrong environment loaded, so for my workflows it's often better to keep things explicitly defined. I don't like magic, so I explicitly state which venv my code (or occasionally other people's code) is loading from.

I sometimes will have to run multiple programs (that have their own venvs) and pipe data between them. If I have to source and deactivate different venvs for each tool, it just doesn't work right.
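Concretely, the explicit style composes over pipes without any activation juggling. Here's a toy sketch where two throwaway venvs stand in for two tools' environments (the venv names and inline scripts are made up for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Two throwaway venvs stand in for two tools' environments
python3 -m venv env-a
python3 -m venv env-b
# Each stage runs under its own interpreter, named explicitly --
# nothing is sourced, so the parent shell stays clean
./env-a/bin/python -c 'print("hello from a")' \
  | ./env-b/bin/python -c 'import sys; print(sys.stdin.read().strip().upper())'
```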

I think that's part of the power of virtualenv as a tool -- it's flexible in how it works. I can still use my explicit workflows with the same tooling as everyone else, and you can source your environments and keep happily coding along. For me, that's why I keep using them...


> It feels like you're telling me that I'm holding my phone wrong.

I'm sorry you feel that way.

Ultimately, it is your code and you can do with it whatever you like.

...until your code becomes company code or is open sourced. Then your way becomes a hindrance to other developers.

> I don't like sourcing things into my environment. I've worked this way for years. I think the idea of 'activating' and 'deactivating' an environment is an anti-pattern.

I completely agree with you. The whole concept of a venv is great! But the concept of needing to source an activation script is... just... completely foreign to me. It took me months to understand that that's the way it was intended to work, and years more to stop fighting it.

> I sometimes will have to run multiple programs (that have their own venvs) and pipe data between them.

Me too! I pipe data around all the time because it's amazingly fast and amazingly awesome to just hook up a pipeline. It can be done with venvs, too. Consider this:

    #!/usr/bin/env bash
    set -euo pipefail
    # script1 runs in a subshell with venv1 active; its JSON output is
    # filtered by jq, and the result streams into script2, which runs
    # in its own subshell with venv2 active.
    jq '.' < <(source venv1/bin/activate; ./script1) > >(source venv2/bin/activate; ./script2)
Here, script1 (which requires the first venv) might produce some JSON to be filtered by `jq`, which then pipes the result to script2 (which requires the second venv). Since each `source` happens inside a process substitution, neither venv leaks into the parent shell.

I'm curious how you do it though.



