Nice solution with dataclass! And for a complete comparison with the blog post, you can also use the packaging library to do it for you. It's not in the official python distribution, but it's maintained by PyPA as a dependency of pip, so you probably have it installed already.
>>> from packaging.version import Version
>>> Version("1.2.3") > Version("1.2.2")
True
>>> Version("2.0") > Version("1.2.2")
True
packaging.version has a somewhat weird (or at least Python-specific) set of rules that don't match the semantics of Ruby's Gem::Version, which will accept basically anything as input.
I'd use `semver` from PyPI and whatever the equivalent Gem is on the Ruby side in most cases.
Oh that is really interesting. I was only aware of IPython's autoreload extension; I hadn't found your library. I'm also working on hot reload for python, as part of a development environment for python that aims to give it a development experience closer to lisp: https://codeberg.org/sczi/swanky-python/
Some minor details: you currently aren't updating functions if their freevars have changed. You can actually do that by using the C API to update __closure__, which is a read-only attribute from Python.
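A minimal sketch of what I mean, assuming CPython and reaching PyFunction_SetClosure through ctypes (set_closure is just an illustrative name):

import ctypes
import types

def set_closure(func, values):
    # __closure__ can't be assigned from python, so build a fresh tuple of
    # cells and install it through the C API instead.
    new_closure = tuple(types.CellType(v) for v in values)
    ctypes.pythonapi.PyFunction_SetClosure(
        ctypes.py_object(func),
        ctypes.py_object(new_closure),
    )

(The new tuple of cells has to match the freevars of whatever __code__ you install, of course.)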
Also I think you should update __annotations__, __type_params__, __doc__, and __dict__ attributes for the function.
Rather than using gc.get_referrers I just maintain a set for each function containing all the old versions (using weakref so they go away if that old version isn't still referenced by anything). Then when a function updates I don't need to find all references, all references will be to some old version of the function so I just update that set of old functions, and all references will be using the new code. I took this from IPython autoreload. I think it is both more efficient than gc.get_referrers, and more complete as it solves the issue of references "decorated or stashed in some data structure that Jurigged does not understand". The code for that is here: https://codeberg.org/sczi/swanky-python/src/commit/365702a6c...
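Roughly, the idea looks like this (a simplified sketch with made-up names; the real code is at the link above):

import weakref

# every version ever seen of each function, keyed by (module, qualname)
_versions = {}

def register(func):
    key = (func.__module__, func.__qualname__)
    _versions.setdefault(key, weakref.WeakSet()).add(func)

def update(new_func):
    key = (new_func.__module__, new_func.__qualname__)
    for old in _versions.get(key, ()):
        # patch every old version in place, so existing references pick up the new code
        # (if the freevars changed, __closure__ also needs the C-API trick above)
        old.__code__ = new_func.__code__
        old.__defaults__ = new_func.__defaults__
        old.__doc__ = new_func.__doc__
        old.__annotations__ = new_func.__annotations__
        old.__dict__.update(new_func.__dict__)
    register(new_func)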
Hot reload for python is quite tricky to fully get right. I'm still missing plenty of parts that I know about and plan on implementing, and surely plenty more that I don't even know about. If you or anyone else who's worked on hot reload in python wants to talk about it, I'm happy to -- just reach out, my email is visible on codeberg if you're signed in.
Thanks for the tips, I'll try to look into these when I get some time! Didn't know you could modify the closure pointer.
I'm not sure what you mean by "maintaining a set of old versions". It's possible I missed something obvious, but the issue here is that I have the code objects (I can snag them from module evaluation using an import hook)... but I do not have the function objects. I never had them in the first place. Take this very silly and very horrible example:
def adder(x):
    def inner(y):
        return x / y
    return inner

adders = {}

def add(x, y):
    adders.setdefault(x, adder(x))
    return adders[x](y)
The adders dictionary is dynamically updated with new closures. Each is a distinct function object with a __code__ field. When I update the inner function, I want all of these closures to be updated. Jurigged is able to do it -- it can root them out using get_referrers. I don't see how else to do it. I quickly tested in a Jupyter notebook, and it didn't work: new closures have the new code, but the old ones are not updated.
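For reference, the rough shape of the get_referrers approach (a sketch, not Jurigged's actual code):

import gc
import types

def patch_code(old_code, new_code):
    # find every live function object still pointing at the old code object
    for ref in gc.get_referrers(old_code):
        if isinstance(ref, types.FunctionType) and ref.__code__ is old_code:
            ref.__code__ = new_code  # the closures stashed in `adders` get updated too

That's what lets closures hiding inside a dict like adders be found without ever having held a reference to them.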
Oooh now that is interesting. That's what I mean by stuff I don't even know that I don't know :)
Yes, mine doesn't handle that; it is the same as jupyter there. Smalltalk is supposed to be best at interactive development, I wonder if it would update the old closures. I don't know it well enough to try, but I do know Common Lisp, which is also supposed to be quite good, and fwiw it behaves the same: new closures have the new code, but the old ones are not updated.
Interesting. It appears that not many systems have been designed to enable hot reload in a thorough way.
Here's another fun complication: what if I have a decorator that performs a code transform on the source code of the decorated function? If the code is changed, I would like to automatically re-run the transform on the new code. I made a (kind of awkward) protocol for this: https://github.com/breuleux/jurigged?tab=readme-ov-file#cust.... It's a little confusing even for me, so I should probably review it.
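To make that concrete, here's a toy example of the kind of decorator I mean (nothing to do with the actual protocol in the link):

import ast
import inspect
import textwrap

def transformed(fn):
    # recompile fn from its own source after mutating the AST
    src = textwrap.dedent(inspect.getsource(fn))
    tree = ast.parse(src)
    tree.body[0].decorator_list = []   # don't re-apply @transformed on exec
    # ... rewrite tree.body[0] here: that's the "code transform" ...
    code = compile(ast.fix_missing_locations(tree), fn.__code__.co_filename, "exec")
    ns = {}
    exec(code, fn.__globals__, ns)
    return ns[fn.__name__]

When the decorated function's source changes on disk, just swapping in the new __code__ isn't enough; the transform has to run again on the new source.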
Wow it looks like you are taking reloading to another level I hadn't even considered.
For example in your elephant:main.py test, in swanky python I run do(3):
['Paint 3 canvasses', 'Sing 3 songs', 'Dance for 3 hours']
change songs to songz, and now do(3) is:
['Paint 3 canvasses', 'Sing 3 songs', 'Dance for 3 hours', 'Sing 3 songz']
Rather than changing the earlier songs to songz as jurigged manages to do. But any lisp environment would behave the same; we don't have the idea of:
> 3. When a file is modified, re-parse it into a set of definitions and match them against the original, yielding a set of changes, additions and deletions.
We are just evaling functions or whatever sections of code you tell it to eval, not parsing files and seeing what was modified. So in some cases we might need to make a separate unregister function and call that on the old one. For example, in emacs, if you use advice-add (which adds before, after, and other kinds of hooks to a function), you can't just change the lines adding an advice and save the file to have it modify the old advice; if you want to modify it while running without restarting, you need to explicitly call advice-remove to unset the old advice and then advice-add with your new advice.
When I eval a function again I am evaling all its decorators again; in your readme you describe the downside of that:
> %autoreload will properly re-execute changed decorators, but these decorators will return new objects, so if a module imports an already decorated function, it won't update to the new version.
But I think I am handling that, or maybe you have other cases in mind I am missing? ie in a.py:
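(a hypothetical stand-in, since any decorated function will do)

# a.py
import functools

def my_decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@my_decorator
def reload_me():
    return "old behaviour"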
Then in b.py, from a import reload_me; change reload_me in a, slime-eval-defun it, and b is using the new version of reload_me. Basically, for every (__module__, __qualname__) I store the function object and all old versions of the function object. Then when there is a new function object with that name, I update the code, closure, and other attributes of all the old function objects to match the new one.
I'll look into maybe just integrating jurigged to provide the reloading within swanky python. I was using the ipython autoreload extension at first, but ran into various problems with it, so I ended up doing something custom, still mostly based on ipython, which is working for me in practice for now. So as long as I don't run into problems with it I'll focus on the many other parts of swanky python that need work, but sooner or later, when I inevitably run into reloading problems, I'll evaluate whether to just switch reloading to use jurigged.
> Lisp and Smalltalk addressed this by not unwinding the stack on exceptions, dropping you into a debugger and allowing you to fix the error and resume execution without having to restart your program from the beginning. Eventually I'd like to patch CPython to support this
yea i've been meaning to do this for a while as well...
I haven't started really looking into it yet, but I found this blog post that looks like a good description of what exactly happens during stack unwinding in python, and it gets a large part of the way to resuming execution in pure python without even any native code: https://web.archive.org/web/20250322090310/https://code.lard...
The author says they wrote it as a joke and that it probably isn't possible to do robustly in pure python, but I assume it can be done robustly as a patch to CPython, or possibly even as just a native C extension that gets loaded without people needing a patched build of CPython. If you know any good resources or information about how to approach this, or start working on it yourself, let me know.
That question clearly says hard labour. I'm sure some people didn't read it, but I think there may also be another effect there: when talking to people in person, they realize you are morally opposed to forced hard labour and don't want to seem like a bad person, so they pretend they didn't know. Sort of similar to the recent effect in the US where trump significantly underpolled, as many voted for him but didn't want to admit it.
A funny thing is I once had a type bug while coding elixir, that bash or perl would've prevented, but rust or haskell wouldn't have caught. I forgot to convert some strings to numbers and sorted them, so they were wrongly sorted by string order rather than numerical order.
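The same foot-gun is easy to show in python (just to illustrate string vs numeric ordering, not the original elixir code):

>>> sorted(["9", "10", "2"])
['10', '2', '9']
>>> sorted(["9", "10", "2"], key=int)
['2', '9', '10']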
In haskell (typeclasses), rust (traits), and elixir comparison is polymorphic so code you write intending to work on numbers will run but give a wrong output when passed strings. In perl and bash < is just numeric comparison, you need to use a different operator to compare strings.
In the case of comparison, elixir is more polymorphic than even python and ruby: at least in those languages, if you do 3 < "a" you get a runtime error. But in general elixir is less polymorphic, i.e. + just works on numbers, not also on strings, lists, Dates, and other objects like in python or js.
I also experienced more type errors in clojure compared to common lisp, as clojure code is much more generic by default. Of course no one would want to code in rust without traits; obviously there are tradeoffs here, as you're one of the minority in this thread to recognize. There is one axis where the more bugs a type system can catch, the less expressive and generic code can be. Then there's another axis where advanced type systems with stuff like GADTs can type-check some expressive code, but at the cost of increasing complexity. You can spend a lot more time trying to understand a codebase doing advanced type-system stuff than it would take to just fix the occasional runtime error without it.
A lot of people in this thread are promoting gleam as if it's strictly better than elixir because it's statically typed, when really it just chose a different set of tradeoffs. Gleam can never have web frameworks like elixir's Phoenix and Ash, as it has rejected metaprogramming and even traits/typeclasses.
Also, the referenced post is about them acquiring plataformatec, not about using elixir. Jose Valim (the creator of elixir) left plataformatec after the nubank acquisition in order to continue developing elixir. I've never heard of nubank using elixir; afaik they're solidly a clojure shop with no plans on changing.
Interesting, I hadn't heard of pyrasite, it has a nice GUI with info about objects' memory usage, threads, open files and more. I'll definitely take ideas from it.
I'm working on a live coding environment for python[0], based on emacs' SLIME mode for common lisp. It's quite new and I haven't written documentation yet, but all the main SLIME features not covered by LSP are working.
- All results printed in the repl are presentations that can be inspected, copied around and used again -- as the actual object, not just its str or repr text like in most repls.
- On any uncaught exception you get an interactive backtrace buffer where you can jump to source, see arguments and local variables for each frame, and eval code or open a repl in the context of any stack frame. And the arguments and local variables aren't just text but presentations you can open in the object inspector, copy to the repl and use, etc.
- A thread viewer where you can view stats on all threads, get the backtrace of any thread, spawn a repl in the context of any of its stack frames, etc.
- An async task viewer with somewhat more limited functionality as async tasks don't keep a full stack.
- A pretty documentation browser using mmontone's slime-doc-contribs.
- The ability to trace functions, where again their arguments and return values aren't just printed as text but as presentations that you can open in the inspector, copy to the repl, etc.
- I took some code from IPython's autoreload extension, so interactive development without restarting and losing state mostly works.
If you want to collaborate or just talk ideas that'd be fantastic, I don't have any experience with the Pharo/Smalltalk world.
Nice set of features you have here, very similar to Smalltalk (particularly Pharo) ideals. Actually, I'm also actively working on a Pharo VM simulator so I can ultimately get GToolkit[0], which I really like and which is built on Pharo, running in Python. Nothing published yet, but I can definitely get in touch via your project.
So far it's just a subset of what's supported by SLIME for common lisp, and I've heard the smalltalk community emphasizes interactive development tooling even more than the lisp folks. I've seen some of the gtoolkit videos and it's really impressive what such a small group of people have built; I'll definitely study it more seriously for ideas for my python environment once I've got the basics finished. I even tried gtoolkit very briefly but got stuck just trying to show all keyboard shortcuts in a given context and add new ones, but that was years ago and a quick search says Pharo has redone its shortcut system since then.
In ruby, async is based on stackful fibers. With https://github.com/socketry/async-debug you can see a tree of all fibers with their full call stacks. It also avoids the problem people in this thread mention with go, of passing a context parameter everywhere for cancellation, since you can kill another fiber or raise any exception inside it. I haven't used them, but PHP fibers are also supposedly stackful. And Java and every JVM language have had them since Project Loom in JDK 21.