No offense to TFA's author, but I don't think this is going to sell Elixir to Python people. In fact, I have serious doubts as to whether most Python lovers would be willing to set aside their beliefs and practices to learn the Elixir way.
Perhaps Phoenix and LiveView will be the gateway drug, but even to reach that point requires a lot of effort to understand functional programming and Elixir.
Python has some functional capabilities, but in my experience with Python devs, those are little used and even shunned. Imperative, mutating loops are the Python way; and to suggest otherwise is to hear "But why would I need that? This (long functions with mutations everywhere and loops) is fine."
I went from Ruby to Elixir, and even with my basic Clojure experience, it was an effort. It was worth it, but I think the apparent similarity of Elixir to Ruby is actually a negative. They look awfully similar, but their use is vastly different.
As for Django vs Phoenix, I would argue that Phoenix has a much cleaner story for doing everything that Django can do and more. People promote Django for the "batteries included" aspect, which mostly just means "I get a free user/auth system and built-in scaffolding for my CRUD." But those are things that are very easy to add to Phoenix (and Rails, which Phoenix is roughly similar to). Yet people choose Django for these little freebies, which almost always get thrown away or ignored in real production. For a quick proof-of-concept, it's nice. But for real projects it doesn't add enough value to offset the crufty, boilerplate-heavy, OO-centric code that results.
I've got experience in both Phoenix and Django and I disagree.
In my view, Phoenix is the less clean one: Django does not generate anything through scaffolding like Phoenix does. Your CRUD (and auth and forms and more or less everything else) is built by overriding the built-in Django classes or the classes offered by packages. Nothing gets thrown away in real production; you use your class hierarchy to adapt it to your requirements. I know inheritance and polymorphism seem like 90's technology, but this actually is a solved problem in web frameworks and works excellently.
On the other hand, I agree with you that everything Phoenix generates through its phx.gen generators is for demo purposes and will get thrown away eventually. This, along with the fact that everything is a function, results in having to re-invent the wheel multiple times in your project without any guardrails on how to actually architect it. Should I use a context? What goes in there? What's the point of views? Oh, views are removed now? So, unless you are really very careful or have a lot of experience/guidance, this will result in a total mess.
The batteries-included aspect of Django means that if you are a novice developer and follow the best practices, you'll get a fine project without too much effort. It isn't really about the batteries; it's about the fact that you'll know how to implement each thing you need in a web project. I can confirm that because we've got 12-year-old Django projects running in production which were developed by a totally novice Django developer (me).
Try to do that in Phoenix (still being a novice developer).
Now, I don't want to bash Phoenix. It's a fine framework considering its age, its popularity, and the number of people contributing to it. However, you must be very careful before considering it for a real project that will be used and supported for years to come, especially if your alternative is something as proven as Django. Personally, I use Phoenix only on projects where its real-time capabilities will be the protagonist.
> Django does not generate anything through scaffolding like Phoenix
I was one of the co-authors of Devise, which is an integrated authentication solution for Rails (similar in spirit to Django's), and you will easily find people who swear that the code-generation solutions are miles better. That's because eventually they want to customize how the framework or library works, and then, instead of simply being able to change the code, you need to find the exact hook or configuration to get the behavior you want. Eventually, you end up with a hodgepodge of changes that you can only understand when looking at the framework and your code side by side.
Perhaps this is one of the topics where there is no “superior” answer, besides personal preferences and past experiences. I totally understand why someone would prefer Django or Devise, but our premise is that eventually you will want to customize it, therefore giving you control upfront is the better way to go about it. However, I would say it is incorrect to say they are for demo purposes in both Phoenix and Django cases, as there is plenty of evidence otherwise.
It is also a matter of framework philosophy. Phoenix aims to give all the necessary foundation for building applications, and then not get in your way, rather than owning all aspects of your application lifecycle. I believe frameworks like Ash (which also runs on Phoenix) take the latter approach.
> Oh views are removed now?
To clarify, Views are not removed. Phoenix is still MVC. The difference is that the templates in your view are defined via `use Phoenix.Template`, rather than `use Phoenix.View`. You can migrate (or not) at your convenience.
Then the upgrade doc will work and people won’t get stuck trying to follow those instructions with a Phoenix 1.6 app using whatever version of LiveView they had before.
There were a number of things not covered in the upgrade guide: lots of small changes between what the new app generator provides, CSRF tokens being set in a different way, etc.
With Phoenix 1.7 being released, will there be a new edition of "Programming Phoenix"? The latest one covers Phoenix 1.4, which is already slightly outdated.
I think having an up-to-date book would be of great value for Phoenix beginners :)
> instead of simply being able to change the code, you need to find the exact hook or configuration to get the behavior you want.
I don't think that this is a bad thing. This is the purpose of a framework. To extend its functionality. If I wanted to re-write everything (or change the generated scaffolds) then I'd skip the framework and use libraries directly.
I'd like to point out the biggest problem with scaffolding. Let's suppose I use scaffolding to generate a schema / database table in Phoenix, along with all the bells and whistles. I'm happy; instant code. I customize all the scaffolds as I like (i.e. change HTML layouts, fix the schema/migration for FKs, fix the queries, improve forms, etc.). I'm happy; everything works as I like. After an hour I realize that my table is missing a field. I'm sad. Now I've got two equally sad options:
* Delete everything that phoenix has generated for me, drop the schema, re-generate the scaffold (with the new field) and re-apply all my customizations or
* Add the field myself by adding a migration, adding it to the schema, fixing the tests, adding it to changesets, fixing the templates, fixing the queries and praying I haven't forgotten anything.
Now I'm very sad.
What would happen in that case (forgot a field in the database) in Django? Add the field to the model and re-run the migrations. That's it. Happy days!
> You can migrate (or not) at your convenience
Yes, I know that the views are still there, but my understanding is that they won't be generated anymore (so the steer is to not use them anymore, at least for new code). However, my main argument was that Django is better at holding your hand to produce acceptable code even if you are not experienced with the framework and have no mentoring. The fact that you now have even more options (i.e. you can use views if you want and think you need them) strengthens that argument.
If you need to change it, you would add it to the migration, schema, and then in the necessary templates. Some fields are private, read-only, etc. so you would change your templates accordingly. Some fields you want to change via a button press, others via a form, and so on. The whole point is that you can evolve the generated structure based on your needs.
> (i.e you can use views if you want and think you need them)
I was mostly speaking about existing applications. New applications have a clear path forward.
I don’t think this strengthens your argument the way you think. I _bet_ Django also has features that were used in the past and are no longer favored now, but still supported for backwards compatibility. This is in no way a phenomenon exclusive to Phoenix.
If your argument is that a configurable framework is easier than code gen for beginners, I agree. But that said, you don’t seem to be acknowledging the fundamental tradeoff which cuts both ways. Ultimately it depends on how close you want to stay to the paved path. I’ve built enough web products at this point that I’d rather have auth primitives than a configurable framework as the latter will be way more complicated than necessary to meet my needs.
For me it comes down to keeping business logic out of my framework. The logical extension of the framework + hooks model can be seen in something like Drupal. You can get amazing volumes of functionality launched quickly, but the UX is ERP-like in its rigidity. Obviously Django is not like this in the large, and the auth/admin stuff are much higher power-to-weight ratio than anything in Drupal, but it’s an interesting case study in looking at the implications of drawing different logical boundaries between framework and application.
> “On the other hand, I agree with you that everything that Phoenix generates through its phx.gen generators are for demo purposes and will get thrown away eventually.”
The parent didn’t say that, and it’s not true in my experience writing Phoenix apps over the past six years. I usually add to the generated contexts, controllers, or now live views, but I often leave the migrations as-is. Even when writing a migration to edit an existing schema, I still use the Ecto generators. The _only_ thing I’ve often thrown away large parts of is the template.
I’ve taken over about as bad of a Phoenix app as you can imagine (made by a rotating cast of outsourced contractors over years, no use of Ecto relations, no use of “resources” (just get and post), not using built-in validations, pinned to a version of LiveView from two months after its initial release… and using jQuery to drive it). Due to Elixir’s immutable, functional nature, rehabilitating the code base has actually been a bit easier than some Django apps I’ve seen. The most painful part has been unraveling the jQuery.
IMO, Django still hasn’t caught up with 2014 Rails.
My experience as well. The generated code is absolutely kept and used (excepting the template, but even that serves as a nice example to follow for people who are newer). Mainly the only changes are to add validation code and some business logic here or there depending on how complex the behavior needs to be (again mainly validation). The generated code is a big help, and we still use it for every new model despite being able to easily add new stuff manually.
> I use Phoenix only on projects where its real-time capabilities will be the protagonist
Yep, that feels like sound advice (and motivation). I would add, more broadly, "federation capability" (in the style of the fediverse/ActivityPub, but also wherever that decentralization path might take us in the near future) as an important dimension to look into. It remains to be seen whether the "telecoms/Erlang" pedigree of Elixir is a critical element in this respect. There are already some important projects based on Elixir [0] [1] but also some Django-based efforts [2] [3].
It is only sound advice if you already know Python/Django really well and don't mind context-switching your language. If you know Elixir/Phoenix well, you can easily crank out simple sites that don't use any real-time stuff as well as any other framework, and you get the added benefit of blazing fast (pre-compiled) templates and a highly robust database connection pool. Personally I prefer to keep as much as possible in the same language/ecosystem rather than having to straddle.
Sticking within a single capable ecosystem is optimal here and now, but your mileage might vary as we go forward. It's always hard to forecast, but even if you discount a lot of the ML/AI hype, it's quite likely that 2-3 years from now the expectation will be for much "smarter" services.
Whether delivering such an upgrade can be achieved with microservices and APIs, or whether a platform can deliver functioning "monoliths" that also compete in such a new landscape, is not clear (not to me, anyway).
Concurrency is a pet use case for 99% of projects. Library ecosystem is the primary and dominant factor when choosing general-purpose technologies and Python has it beat.
In a multi core world concurrency and parallelism are no longer pet use cases.
Unfortunately, the slowness of Python, the GIL, and the really bad design choices of asyncio in Python 3, relative to how elegant parallelism and concurrent programming are in Racket, Haskell, Elixir, and Erlang, make Python a non-starter for many basic use cases.
This is absolutely not true, because most of the industry uses Python for web services and serving web pages is still the same thing as it was 20 years ago. You scale by bringing in more processes.
Multicore concurrency is still a pet use case because high performance is still a minority of applications and async programming is not necessarily faster, per many benchmarks. Most companies will rewrite the high-performance component in the proper manner and continue to do everything else the way they did.
Nobody's writing production Haskell or Erlang for this save maybe a half-dozen companies; it's all C++ or Java, and it represents a very small minority of the code as critical paths are usually tiny slivers of a codebase.
Real time like a chat app, or a video game or something. Not just a fast site. Seems like most "real-time" sites are slower as they have a harder time taking advantage of caching and the like, or use a lot of slow client-side javascript that inevitably works poorly.
Basically if you need the server to be able to change the data a user is currently looking at, collaborative multi-user stuff mostly. Liveview could be great if you're building google docs, but you could probably build youtube using just a more traditional framework like django.
Been looking at CRDTs lately for this, sounds like Elixir would be a good fit. Although what I've seen before has used C++ or Rust? to compile to wasm for the graphics part of the front end. Do all these technologies work together well?
Phoenix has used CRDTs going back to version 1.2, when Phoenix Presence was introduced. Here's a podcast about it (2016): https://changelog.com/podcast/208
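For anyone wondering what "conflict-free" buys you in practice, here is a toy G-Counter sketch in Python. This is a deliberately minimal illustration of the merge idea, not the (much richer, ORSWOT-style) CRDT that Phoenix Presence uses; all names are made up.

```python
class GCounter:
    """Grow-only counter CRDT: each node tracks its own count,
    and replicas converge by taking the element-wise max."""

    def __init__(self):
        self.counts = {}  # node_id -> count contributed by that node

    def increment(self, node_id, amount=1):
        self.counts[node_id] = self.counts.get(node_id, 0) + amount

    def value(self):
        # The counter's value is the sum of every node's contribution.
        return sum(self.counts.values())

    def merge(self, other):
        """Merge is commutative, associative, and idempotent, so
        replicas converge regardless of the order updates arrive in."""
        merged = GCounter()
        for node in set(self.counts) | set(other.counts):
            merged.counts[node] = max(self.counts.get(node, 0),
                                      other.counts.get(node, 0))
        return merged
```

The key property is that `a.merge(b)` and `b.merge(a)` yield the same state, which is what lets Presence track users across nodes without coordination.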
Not really. Real time is not the same as super-fast. You can be real time but super slow or super fast using traditional request/response web apps.
When I mentioned real-time I was talking about apps that need to use web sockets to have fast client-server communication. There aren't many apps where this is required (or worth it).
There is this joke that you don't need to outrun a python, you only need to outrun the humans it is chasing.
Elixir does not need to outshine Python it only needs to position better than other "functional style" languages on offer as the "go-to" language if one wants to add such a capability in their programming toolkit.
In practice this means outshining Clojure. And on this front Elixir definitely does a good job: E.g., there is no such thing as Phoenix in Clojure. People seem to advance some arguments along the lines: "you don't really need frameworks, just compose your own" but this is not cutting it for the masses :-)
In a universe where there are millions of different ways of doing things having a well thought, opinionated, proposal is actually an advantage. The limitations (if any) show much later in the cycle when the choice of programming language might even be the least of your worries.
I don’t think these languages really compete outside of a fairly narrow dimension. They both seem to be doing well. But that’s beside the point.
People in the Clojure community have been trying to build such frameworks. But every one of them seems to stay in a weird niche where they’re not comprehensive and _easy_ enough to draw in people who’d use something like Django et al.
I think one of the reasons is that they’re all trying to cater to the crowd that wouldn’t use such a framework anyway.
Secondly, it's already very quick to create a web app in Clojure. If you use a set of libraries, you pay some initial setup/thinking/choosing cost. The benefit is then a program that is more malleable and composable, which is kind of the point for many who choose Clojure in the first place.
There’s really a mismatch of expectations and philosophy I feel.
The "visibility" of frameworks and onboarding "ease" are important features for many reasons, but there are more attributes that shape people's choices. Talking primarily about open-source ecosystems, being able to economize on resources is a major advantage. A "hello world" web app or a "neural net without dependencies" might be quick to do in any modern language, but one major pathway to adding value is when non-trivial domains are represented in usable detail. This is where a community culture that promotes pooling resources behind a few core libraries or frameworks plays an important role. It enables building a second layer of abstractions, plugins, interoperable APIs, etc. on top. Django is one example, but the Python world also has "one stack" for numerical calculations (numpy, pandas, and friends), and that was maybe even more instrumental for adoption.
The Phoenix docs don't look like they even hold a candle to Django's. I spent 5 minutes in them and still don't know how to define a model. I eventually found a doc section called "Ecto", so I had to already know what Ecto was to find it. Then, in those docs, I still don't know how to define models. Do they expect me to manage my database and schemas separately and generate the models from that?
That's completely opposite what Django does, and that's partly why Django is so productive.
I don't think Phoenix and Django are competitors. They are too different.
Phoenix doesn't have the concept of a "model". The functionality of what Rails (and I guess also Django? Idk) calls a "model" is split between a few different places: schemas, changesets, queries, the repo.
It felt weird at first, coming from Rails, but after getting used to it I don't miss "models" at all. ActiveRecord models in Rails are a vastly overloaded concept anyway and pretty much always degenerate fast into an ungodly mess as your app grows. I prefer Phoenix's approach.
Yeah, for a big complicated system, I don't think defining everything in models is great. I don't do that, and it's also not how fastcomments is built.
I'm not really familiar with Phoenix, but skimming through the section linked below, it seems to cover what you're looking for? Basically, you use a tool to generate your model + migration code and then run the migrations.
As for the comparison to Django, I don't really expect any web framework to have docs at that level. They are simply huge and have been around for quite a while.
Yes, I skimmed through that and was surprised to see it be compared to Django. It's different.
Django is for building CRUD apps quickly. That's it. It's not for the long haul of super complicated apps, which many projects don't ever reach before they fail/get abandoned.
I agree that there are better options for long-term maintenance of large projects, but let's not act like Youtube or Instagram don't exist - both of which used Django to some (high at the beginning) degree.
99% of line-of-business apps are CRUD. There's nothing special about "super complicated apps" that isn't brought in by the Python ecosystem. Where exactly do you think Django fails in that regard?
I think this article does capture the magic of Elixir and Phoenix LiveView that is impossible in the async/await madness of Python. I cannot wait to try Elixir, Phoenix LiveView, |> pipes, :atom pattern matching, and using ETS and PIDs to send messages and manage processes across servers. None of these, AFAIK, are currently possible with the GIL and the single-threaded nature of Python.
As the venerable Prof. Joe Armstrong would say:
you need Erlang and Haskell for FP; you need C for high performance; you don't really need C++, Java, Python, C#, etc.
Also as an aside does anything in haskell have something similar in spirit to phoenix or live view?
You can pattern match on basically any shape, not just atoms. Imagine you're parsing a protocol over TCP and want to grab groups of characters between delimiters; you can pattern match that. I rewrote an HL7 parser from Java to Ruby to Elixir some years ago, and the Elixir implementation performed as well as Java and was more readable by miles.
> You can pattern match on basically any shape, not just atoms. Imagine you're parsing a protocol over TCP and want to grab groups of characters between delimiters; you can pattern match that.
Andreas pattern matches on the magic byte, then the encoded string length, then the string (using the previously matched length), then an int, all in the function head, with no if-else-try-catch mess.
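To make the contrast concrete, here is a hedged Python sketch of parsing a frame like that with `struct`. The frame layout (magic byte, length-prefixed string, trailing int) is invented for illustration; in Elixir the whole shape would fit in a single function-head binary match.

```python
import struct

def parse_frame(data: bytes):
    """Parse a hypothetical frame: magic byte 0xAB, a 2-byte big-endian
    string length, the string itself, then a 4-byte big-endian int.

    The equivalent Elixir function head would be roughly:
        def parse(<<0xAB, len::16, name::binary-size(len), count::32>>)
    with no explicit offset bookkeeping at all.
    """
    magic, length = struct.unpack_from(">BH", data, 0)
    if magic != 0xAB:
        raise ValueError("bad magic byte")
    name = data[3:3 + length].decode("utf-8")
    (count,) = struct.unpack_from(">I", data, 3 + length)
    return name, count
```

The Python version works fine, but the offsets (`0`, `3`, `3 + length`) are manual bookkeeping that the binary pattern match does for you.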
What are the major differences between Elixir and Erlang? Does Elixir lose any of the advantages of rock-solid Erlang, improved over the years at Ericsson?
I think I fall outside of the "most Python lovers" then.
I love(ed) python for close to 20 years, and have written a ton in it as my main language.
One project I worked on had us writing IronPython and IronRuby (this was some years ago). I was forced to work in Ruby and really hated it; it did not speak to me in any way. Python, for me, was far superior.
With that said, I started to look into Elixir 3-4 years ago, and it just clicked with me. In a short span of time I stopped developing anything new in Python, and now use Elixir as my go-to language of choice.
I say all this to say that it's worth the effort to at least expose people to new languages. It doesn't hurt anyone, and if others aren't interested, they simply won't move to the new language. Others, though, might be pleasantly surprised to find out what other languages have to offer.
> Python has some functional capabilities, but in my experiences with Python devs, those are little used and even shunned. Imperative, mutating loops are the Python way; and to suggest otherwise is to hear "But why would I need that? This (long functions with mutations everywhere and loops) is fine."
Thank God I'm not stuck with low talent clowns like that.
Writing highly functional Python isn't hard anymore, especially making use of mypy's strong typing, dataclasses, generator expressions, etc.
Using dict unpacking combined with things like list comprehensions is a really simple way to get traditional Python devs writing more functionally. My team is a bunch of experienced Python devs who happen to love Haskell/Clojure/etc., so I guess that makes things easier.
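As a concrete illustration of that style (all names here are made up), frozen dataclasses plus comprehensions and dict unpacking cover a lot of everyday "functional Python" without mutating anything:

```python
from dataclasses import dataclass, replace

# A frozen dataclass gives you an immutable record; `replace` returns
# an updated copy instead of mutating in place.
@dataclass(frozen=True)
class User:
    name: str
    active: bool

def activate_all(users):
    # A comprehension plus `replace` instead of a mutating for-loop.
    return [replace(u, active=True) for u in users]

def merge_settings(defaults, overrides):
    # Dict unpacking: right-hand keys win; neither input is mutated.
    return {**defaults, **overrides}
```

The inputs stay untouched, so callers holding references to the original data never see surprise changes.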
> Thank God I'm not stuck with low talent clowns like that
Woah! Python’s a multiparadigm language. Using loops and mutation is a perfectly fine way to write Python code. For sure there are some kinds of code that lend themselves to a functional style, but equally, there are other cases where mutation is the more straightforward approach.
It seems like the basis for your comment is weird stack tribalism that you’ve very evidently bought into, rather than anything actually…worth discussing.
It then comes as no surprise that throughout your entire comment you didn’t seem willing to yield any ground to Python or Django at all.
It also comes as no surprise that your assertions about Django run counter to my experience, as someone who, by the sound of things, has worked with Django a lot more than you have.
I have only worked on one production Django project that *didn’t* use Django’s auth system. And the fact that you call Django’s generic class-based views “scaffolding” - a term that I seldom hear in Django spaces but hear all the time in Rails spaces - further speaks to your lack of familiarity. Most production Django projects I’ve worked on have made extensive use of either these generic class-based views or the Django REST Framework analogues, which - whilst not part of core Django - still speak to the usefulness of the pattern.
It honestly sounds like your experience with Django - if any - has been treating it like Rails. Which - honestly - is mostly fine. In my eyes, the frameworks are comparable in…most ways. But your assertion that these batteries are thrown away in production contexts is just wrong. Unless you want to tell me that I’ve spent ~a decade earning a living working with Django and somehow not managed to do any “real” work, perhaps consider the global applicability of your personal experience.
It’s really disappointing to see the resurgence of Ruby (and Elixir) developers positioning themselves as the enlightened elite against the hordes of plebeian Python developers. I without a doubt include your comment in this critique. I understand that, given Elixir’s increased FP focus, the community can’t avoid FP elitism. Doing it for Ruby, though? It’s just cringey. I’ve got no beef with Ruby, Rails, or the development communities of either, except for the fact that people are feeling the need to pick up this ridiculous turf war.
Given that Python is my daily driver, Python spaces are where I’m usually active. Ruby is seldom mentioned there. Whenever I go to a Python-related HN thread, I don’t have to scroll far to see some Ruby developer making thinly veiled assertions that Python developers are unenlightened idiots. If I look at a Ruby thread, there’s always someone salty about not being able to get the Python drones on their team to use the One True Language. It feels like I’ve gone back in time 15 years.
It is perhaps utopia but it would be nice if we could discuss those topics without falling into camps.
For example, I could argue for hours about the benefits of immutability, but I still believe that imperative loops are clearer than functional ones. There is even a repository with solutions for nested traversals in different languages, and the Python one is my favorite by some margin: https://github.com/josevalim/nested-map-reduce-traversal
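For reference, the kind of problem that repo covers is numbering nested sections and their lessons with a counter that runs across sections. A rough imperative Python sketch of it (my own, not the repo's exact code) reads quite naturally:

```python
def number_sections(sections):
    """Assign a per-section position and a lesson counter that keeps
    running across sections, without mutating the input."""
    counter = 1
    result = []
    for i, section in enumerate(sections, start=1):
        lessons = []
        for lesson in section["lessons"]:
            # Copy each lesson with its global position attached.
            lessons.append({**lesson, "position": counter})
            counter += 1
        result.append({**section, "position": i, "lessons": lessons})
    return result
```

The cross-section counter is exactly the piece that gets awkward in a purely functional fold, which is the crux of the comparison in that repo.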
> Python has some functional capabilities, but in my experiences with Python devs, those are little used and even shunned.
I like to use things like filter, map, reduce or list comprehensions, but I see very few of those in my coworkers' code bases.
When Python introduced pattern matching, it arrived as a pattern-aware switch/case statement, which does not return a value the way such a construct in a functional language would.
> I went from Ruby to Elixir, and even with my basic Clojure experience, it was an effort.
I tried out Elixir twice so far, and since the last time, I worked through 300 pages of SICP. I still have to twist my mind when I'd like to process nested maps or the like in Elixir now.
> It was worth it, but I think the apparently similarity of Elixir to Ruby is actually a negative.
However, Ruby provides a lot of higher-order functions, such as group_by and the like. If you're used to functional-style Ruby code, the transition is easier.
I’ve personally found Elixir to be a nice balance compared to something like Haskell or even Rust in terms of FP. Most things, certainly testing, are made simpler by immutability, and the actor model makes parallelism much easier to reason about. I’ve no doubt Python is a great language for some things, but once Elixir figures out the best way to add types, I’ll be extremely happy that the language does everything I want. Especially with ChatGPT, you can now ask about the topics you don’t understand while learning a language and get really great examples, so for me, learning idiomatic ways of programming in Elixir or another language got a lot easier.
I use Django not just for the framework, but for the larger Python ecosystem of packages for just about any eventuality, whether that be NLTK, pandas, or whatever (and if I didn't need the full weight of Django there's always Flask or FastAPI).
That's not to knock on Elixir or Phoenix, and I like Elixir a lot, but there is usually more to projects I've worked on than CRUD apps, and I've found with languages with smaller mindshare and ecosystems that you can spend more time reinventing wheels because a particular library is missing, or it's there but is no longer maintained.
My perhaps slightly biased input on Python vs. Elixir (I worked with Python for different projects for a couple years, and have been using Elixir full-time for ~1.5 years).
Elixir as a core technology for application development (as opposed to data science/ML/AI - it’s still early days for Nx/Axon/etc.) is better than Python in just about every way that matters; e.g. immutability by default eliminating whole classes (no pun intended) of bugs, having the full power of OTP available if you need a distributed system, more convergence around libraries and frameworks (to the point that José Valim also works on Phoenix, LiveView, Nx, etc., not to mention having Mix, EEx, and ExUnit built in).
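A tiny example of the class of bug that immutability-by-default removes; the function names are made up, but the aliasing pitfall is standard Python:

```python
def add_tag_mutating(tags, tag):
    # Mutates the caller's list in place: every alias of `tags`
    # observes the change, whether the caller expected it or not.
    tags.append(tag)
    return tags

def add_tag_pure(tags, tag):
    # Returns a fresh list; the caller's data is left untouched,
    # which is the only option in an immutable-by-default language.
    return [*tags, tag]
```

In Elixir, the mutating version simply cannot be written, so this entire category of spooky-action-at-a-distance bug is off the table.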
I do wish Elixir had significant whitespace rather than do/end, but that’s not a hill I’m willing to die on…
My one major gripe with Elixir’s ecosystem compared to Python’s is that, IMO, Django would have been a better source of inspiration for a dominant Web framework compared to Rails; Django’s model layer is top-notch, with all your basic model information for an app contained in one models.py file. In Rails and Phoenix, you don’t get auto-migrations out of the box for simple use cases, and your model layer ends up being distributed across schema/ActiveRecord files, a “structure.sql” or “schema.rb” file, and the migrations. In my real-world use, this has been… “sub-optimal” compared to Django, to the point that I sometimes dream about building my own Django-ish framework for Elixir. Django+Django REST Framework are that good - if you stick with Python and are not using them already, do yourself a favor and check them out.
Also, Django’s admin is old, crusty tech, but it does what it set out to do really well.
I will admit Django’s implicit queries can cause DB performance issues, I wish there was an explicit analogue to Ecto’s Repo.* functions to force devs to think about when to make the DB call.
If you don’t mind, I have some questions as I would like to learn more. Are you aware of a large models.py for reference and learning purposes? Is it expected to define the schema of all my models in a single place but none of the logic?
Also, don’t you run into scenarios in Django where the automatic migration is not enough and you need to provide custom commands? In such cases, how do you provide them?
And can you have scenarios where you have two models pointing to the same table, but perhaps to a subset of fields (this is specially useful in read-only cases)? How is that handled?
The sibling to this post (by “traverseda”) is more informed and informative than I could hope to write. I will add that Django has Manager classes that have default behavior included for interacting with the query API, and they can be customized to your needs. You can even have multiple managers for a given model.
It’s been a few years since I used Django, but I remember its model layer fondly, and still follow the framework’s release processes.
One of the areas where Django lags is being fully async (Python imposed a lot of limitations on async operations for most of its life). The upcoming 4.2 release of Django is laying the groundwork for async DB operations.
>Are you aware of a large models.py for reference and learning purposes? Is it expected to define the schema of all my models in a single place but none of the logic?
Django models are normal Python classes. It depends on exactly what logic you're dealing with, but generally you can make the logic a method on that class. Try to avoid logic that spans multiple tables in general, and if you do have logic that spans multiple tables, you probably want it to be a function rather than a model method.
There are also signals that get sent on different things like model delete/create/etc, but they're to be used even more sparingly.
Logic for querying data should probably go wherever you're going to use it, and you should just pass the model objects directly to your views. To start, don't worry about optimizing queries or the N+1 problem; as you get more experience you can use `prefetch_related` to avoid it.
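A minimal sketch of both points, with a hypothetical Order/LineItem pair (the model names and fields are mine, not from the thread):

```python
from datetime import datetime, timedelta, timezone

from django.db import models

class Order(models.Model):
    placed_at = models.DateTimeField()

    def is_recent(self):
        # Single-table logic sits naturally on the model as a method.
        return datetime.now(timezone.utc) - self.placed_at < timedelta(days=7)

class LineItem(models.Model):
    order = models.ForeignKey(Order, related_name="items", on_delete=models.CASCADE)
    sku = models.CharField(max_length=32)

# Two queries total (orders, then all their items) instead of N+1:
orders = Order.objects.prefetch_related("items")
```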
>Also, don’t you run into scenarios in Django where the automatic migration is not enough and you need to provide custom commands? In such cases, how do you provide them?
You shouldn't generally need to. That said django's migration system is very powerful and you can hand-write a migration if you need to.
>And can you have scenarios where you have two models pointing to the same table, but perhaps to a subset of fields (this is especially useful in read-only cases)? How is that handled?
I mean, you can using proxy models, but that's not really a thing with Django. Models are for developers, and generally developers have all the permissions anyway, so the solution to making a model read-only is to just not write to it. If you want to present a read-only model to an end user, you can reference it in a view or make a read-only serializer with django-rest-framework. It's Python, not Java: there's no such thing as private/protected members, just the convention of putting an "_" in front of things other developers probably shouldn't be messing with.
Thanks for the insights, I appreciate it! I will take some time to go through the docs and learn more. One last question for now (I hope): after the migrations are auto-generated, do you check them into version control?
> Models are for developers and generally developers have all the permissions anyway
To clarify, you would want read-only models mainly in complex data-model cases or for performance reasons. For example, if you have a large model with 50 fields, defining a subset with 10 fields can be beneficial to performance in several queries. Other than that, I agree with you.
Yep, you do check the migrations into version control.
>For example, if you have a large model with 50 fields, defining a subset with 10 fields can be beneficial to performance in several queries. Other than that, I agree with you.
Nope, the django ORM has several helpers here. What you do in the most basic case is do something like
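Presumably the snippet was along these lines; the model and field names follow the surrounding discussion, but the exact call is my guess:

```python
from django.db import models

class MyTable(models.Model):
    key = models.IntegerField()
    field1 = models.TextField()
    field2 = models.TextField()
    field3 = models.TextField()
    field4 = models.TextField()

# Restrict the initial SELECT to three columns:
rows = MyTable.objects.filter(key__gte=12).only("field1", "field2", "field3")
```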
Which will (during the initial request) only fetch the specified fields from the DB. There's an inverse (`defer`) for telling it which fields not to fetch during the first query.
Of course, if someone then tries to access "field4" in a template/view, Django will make an additional query to get that extra data.
It's a bit more complicated to make a nice chainable filter so you can call `MyTable.objects.only_foo_rows()`; you'd need to define a method on a custom "manager" object, but even that generally wouldn't be handled at the model level.
There are also things like select_related, prefetch_related, and other tools to make the ORM layer more performant. Of course, for really exotic things like tree-based data structures, you can also perform raw SQL queries on an ORM object with `MyTable.objects.raw("SQL goes here")` and have it still return ORM objects. Please don't do that, though.
For a long time I didn't realize why people were so down on ORMs until I tried using a non-django ORM, it really does set the bar.
from t in MyTable, where: t.key >= 12, select: ["field1", "field2", "field3"]
However, that will still allocate a "MyTable" struct in Ecto. And if that struct is large (say 50 fields), slots for it are allocated (but none of the actual data on the fields).
I am not familiar enough with the Python object model but, if your Python example still allocates an object with slots for all 47 other fields besides the three selected, then you may still find yourself in circumstances where you would rather have a "slim" version of the model and reduce memory allocation. This may be needed when you need to load tens of thousands of entries into memory for processing or similar.
I am not saying this is a must have but I am just trying to clarify what my initial comment was about. :)
>I am not familiar enough with the Python object model but, if your Python example still allocates an object with slots for all 47 other fields besides the three selected, then you may still find yourself in circumstances where you would rather have a "slim" version of the model and reduce memory allocation.
Honestly, at that point you probably shouldn't be using Python. ORM objects don't use slots; they use a hashmap to look up everything every time, and that hashmap probably points to a method that points to another hashmap that points to some kind of data structure.
If you're worried about that, then Django probably is not for you. That being said, it scales out very nicely; you're probably not loading tens of thousands of entries into memory at once, but are instead using a Celery distributed task queue. If you did need to load that much into memory for some reason (presumably some kind of data science?), you can use the `values(...)` / `values_list(...)` query methods to return raw data as dictionaries/tuples instead of ORM objects. You don't get the benefit of ORM methods, but you're not instantiating an ORM object for every row, and for that kind of thing you're probably better off taking a more functional approach anyway.
Basically, if you're worried about how much memory your data is going to take up, you probably shouldn't be using Django; hacking __slots__ or other memory-efficient stuff into the Django ORM is going to be tricky. In that case, ignore the ORM and construct your own objects directly, or use something better suited to the job.
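A sketch of that dictionary-returning path, assuming a hypothetical MyTable Django model (names are mine):

```python
from django.db import models

class MyTable(models.Model):
    key = models.IntegerField()
    field1 = models.TextField()
    field2 = models.TextField()

# values() returns plain dicts and skips per-row ORM object construction:
for row in MyTable.objects.filter(key__gte=12).values("field1", "field2"):
    print(row["field1"])  # each row is a dict, not a MyTable instance
```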
> I am not familiar enough with the Python object model but, if your Python example still allocates an object with slots for all 47 other fields besides the three selected, then you may still find yourself in circumstances where you would rather have a "slim" version of the model and reduce memory allocation. This may be needed when you need to load tens of thousands of entries into memory for processing or similar.
Please note that nothing stops you from defining an unmanaged model (i.e., one Django migrations won't touch) pointing to the same table, with only three of the original model's fields declared.
> For a long time I didn't realize why people were so down on ORMs until I tried using a non-django ORM, it really does set the bar.
This. Very much this.
I have given up on finding one. The closest and nicest one that comes to mind is SQLAlchemy. Other languages and frameworks really have nothing comparable to the Django ORM and SQLAlchemy; it's not even fair to compare them.
My experience is exactly the same. I have a project in Java/Spring that uses Hibernate as its ORM, and I can't overstate how much better the Django ORM is than Hibernate. When people express hatred for ORMs, I understand where it comes from.
If I ever need to do another Java project, I'd rather use raw JDBC SQL than Hibernate.
Just wanted to say thank you for all the great work you've been doing. I'm a recent Elixir/Phoenix convert coming from a couple of years of full time Django and it makes me so happy to see you being interested in the way Django handles models. Imo it's one of the last things I'm struggling leaving behind (together with its tight integration of the Django admin, which you get for free).
I haven't looked closely into Ash, but their approach seems interesting and does offer a solution; not the way Django does it, but a solution, though a bit hard to wrap your head around at first. Django models are simpler and more straightforward.
Yes. Migrations are checked into source control. They're still the source of truth for database changes.
Regarding read-only models pointing to a table: it's possible to use proxy models or even unmanaged models, which with some mix of Managers can provide what you're looking for.
For getting a subset of fields from a database table, Django provides ".only()" on the QuerySet, where you can list the fields explicitly and only those will be retrieved. It can even span foreign-key relations.
The Django ORM is indeed powerful and very flexible once you learn it. A novice developer can pick it up quickly and it won't get in the way. An advanced developer can do very complex stuff with it without touching any raw SQL. Still, it's possible to write custom SQL commands directly.
You can use `managed = False` and `db_table` to have another model pointing to the same table. `managed = False` will stop Django automatically creating schema migrations.
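A sketch of that combination, with hypothetical names (assuming the full model's table in the database is called "app_thing"):

```python
from django.db import models

# A slim, effectively read-only view onto an existing table: declare only
# the columns you need and tell Django not to manage the schema.
class SlimThing(models.Model):
    field1 = models.TextField()
    field2 = models.TextField()

    class Meta:
        managed = False          # makemigrations will not touch this model
        db_table = "app_thing"   # hypothetical name of the existing table
```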
> I mean you can using proxy models but that's not really a thing with django.
I think this is over-opinionated. You can use Proxy models, and this is a thing in Django. It’s not “Django 101” sure, but it’s not discouraged.
One reason you might want proxy models is if you want to only fetch certain columns for a specific type of usage. Another common use for proxy models is as a hack to get multiple Django Admin pages for the same model (again, perhaps pulling different fields or offering different views).
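A minimal proxy-model sketch (Ticket and the manager are hypothetical names of mine):

```python
from django.db import models

class Ticket(models.Model):
    title = models.CharField(max_length=200)
    resolved = models.BooleanField(default=False)

class ResolvedManager(models.Manager):
    def get_queryset(self):
        return super().get_queryset().filter(resolved=True)

# Same table, no schema change: a proxy only alters Python-side behavior,
# e.g. a filtered default manager or a second admin registration.
class ResolvedTicket(Ticket):
    objects = ResolvedManager()

    class Meta:
        proxy = True
```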
> Also, don’t you run into scenarios in Django where the automatic migration is not enough and you need to provide custom commands? In such cases, how do you provide them?
Data migrations are the obvious case where you need to write migration code manually. It comes up in schema migrations too on big projects. It's easy to modify the auto-generated migration; it's just Python that gets run by the migration tool.
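A sketch of a hand-edited data migration; the app, model, and field names are hypothetical:

```python
from django.db import migrations

def backfill_slugs(apps, schema_editor):
    # Use the historical model from the app registry, not a direct import,
    # so the migration stays replayable as models.py evolves.
    Article = apps.get_model("blog", "Article")
    for article in Article.objects.filter(slug=""):
        article.slug = str(article.pk)
        article.save(update_fields=["slug"])

class Migration(migrations.Migration):
    dependencies = [("blog", "0002_article_slug")]
    operations = [
        # The second argument is the reverse operation; noop keeps it reversible.
        migrations.RunPython(backfill_slugs, migrations.RunPython.noop),
    ]
```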
I don't think this is something that will come down to technical merits.
Unfortunately it is a popularity contest, where entrenched network effects are working against Elixir.
As a Python programmer, I truly believe Elixir/Phoenix is the best stack out there from a technical perspective.
However the incredible breadth of libs and resources ... and most importantly the mind share (both available devs, but more importantly available jobs for seniors who commit to Elixir) means that it is still just not a competitive choice.
Almost all commercial software dev is not about technical excellence, but rather about applying the available tech to a particular business domain. And there Python (and .Net, Java, PHP, Ruby ... even Go) are so far ahead that sadly Elixir doesn't look like it will make it.
I don't know that anything can be done about this. For all that I think Elixir is better, I, like most devs, am not about to sacrifice (or even impair) my career for the cause. Personal remuneration wins out.
That does seem like the same argument that was made prior to just about anything becoming mainstream, though. I'm not saying it means nothing that Python has so much mindshare, but the same could have been said of Python's predecessors as well. You're right that commercialization is not about technical excellence, in the short term. In the medium and long term, it's the advantages of recent technical excellence that drive a lot of commercialization.
I'm familiar with neither Rails nor Django, so I don't fully follow what you're describing. Are you just talking about the difficulty of keeping the schema/DB mapping in sync with the migrations run against the DB?
I’ve never used Ecto outside of a Phoenix app, so yes, in my mind the line between them was blurry. Thanks for pointing out my mistake!
A very recent real-world challenge I ran into was having to coordinate updates and diffs to all the files involved and keeping them in sync while doing local development (before deployment, but also while working on fixing a very subtle bug). Every time I changed the migration file (even to add an index), I had to remember to check out the previous version of “structure.sql” or the migration wouldn’t be run, even during a complete DB reset (structure.sql is used to know which migrations have been run and synced with the SQL dump). Also, the schema in Ecto does not have a complete validation API, so you’ll need to do those in “changeset” functions. Overall I like Ecto, and part of me regrets letting myself get spoiled by Django’s model layer and DB tools.
It is totally fine to get spoiled if you consider it is better :D
Your case about structure.sql is interesting. It would be nice if we could automate it somehow but, if we simply tried to re-run a changed migration, the migration would likely fail because the other operations in it (such as adding fields) have already been applied, no?
Do you have any suggestions/ideas on how to tackle this? We could have a "mix ecto.migration.fix" that reverts the last migration, wait until you close/save the file, and migrate again, but I am not sure how useful it would be. How would Django approach this? If you auto-migrate and then do further changes, does it change the existing migration or does it generate new ones?
PS: the changeset functions are very much on purpose, though. It is important to decouple the validation from the schema because a single schema can have several validation rules (and they can diverge over time!).
Django’s approach is to run all the migrations, from the migration files, if the database gets wiped clean. In other words, if you’re developing on your local machine, you can just drop the DB and start anew (or if using SQLite, just delete a file), and after running migrations, you never have to worry about whether all the necessary files are in sync with the database. As Neo might say: “there is no spoon…” Seriously, structure.sql is simply not needed, so you won’t find yourself making creative use of git reset/checkout on a specific file as you’re iterating.
To answer an earlier question you had, Django’s “makemigrations” command tries its best to help you avoid having to manually edit migration files; if you rename a field in models.py, it’s usually smart enough to ask you, at the command line, if you renamed it, and proceed to make the migration file for you. Of course, for advanced migrations the command can only take you so far, but again, if you’re trying to iterate quickly, you really need to experience Django’s ORM and DB tools to be able to appreciate them fully.
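The iteration loop being described looks roughly like this; the app name and migration numbers are placeholders:

```shell
python manage.py makemigrations blog   # diff models.py against existing migrations
python manage.py migrate               # apply anything pending
python manage.py migrate blog 0002     # roll the app back to migration 0002
python manage.py sqlmigrate blog 0003  # preview the SQL a migration will run
```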
Regarding changesets, I love the functional API, and at the same time I wish we could automatically derive a “default_changeset” function from the schema itself for straightforward cases, and then let devs run diverging validations in their own changeset functions. Maybe an interesting idea to explore with a macro?
By the way, I’m a huge fan of your work, and hope I get the opportunity to work with Elixir for a long time. Any criticisms or comparisons are to try to bring improvements to the ecosystem I love. :)
Generally speaking, it's inadvisable to 'discard' migrations with Django. If the migration set is getting too large, the general practice is to 'squash' migrations, which is a Django-supported function for merging an app's migrations down to a single one. You can do the less blessed but simpler thing of nuking the migrations, updating the migrations table, and remaking your migrations... but you have to coordinate that in every environment.
For an existing database, you can easily create django models for each table, etc. There's a --fake option to update the migration table to make it think you've applied these migrations, but not actually apply them. This convinces django you've brought the database in sync. May your deity or deities help you if you did not actually bring it in sync. I've used this quite a bit in some java ee->python migrations I've done in the past.
TIL you actually don’t need structure.sql for Ecto! In that case, discarding migrations on a large running app seems like a not-always-good practice, right? FWIW, my experience with Ecto has always been in joining existing codebases that are already big.
I do also want to add that Django can kind of do the reverse of auto-generating migrations: it can “inspect” an existing database and generate Python classes that allow you to use the Django API as if you wrote those classes yourself: https://docs.djangoproject.com/en/4.1/howto/legacy-databases...
Of course, the feature is not going to be perfect, especially if a team has been using odd naming conventions in the legacy database, but it seems helpful at least in theory.
Inspecting a database and coming up with models is a frequent pain point that I see new elixir devs experiencing. I’d love to see someone take a stab at generators for that. I wonder if someone would do an integration for https://ash-hq.org/ that does that.
Hmmm, if we’re willing to accept a dependency on git, then it might be interesting to write a `mix ecto.migration.rerun` that rolled back any migrations that are untracked or dirty according to git, then re-ran them. That would simplify a pain point I encounter when I’m iterating on a migration, because sometimes I accidentally roll back too far and finding the right argument to rollback feels like a chore. The nice thing is that this could be prototyped outside of Ecto quite easily.
I've been using Elixir with Phoenix and Liveview at my job for the past three months as part of a small team building a non-trivial web app (https://duffel.com/links). I'm mostly a front-end developer and was brought in to handle the UX side. Prior to this project I've spent the last 5+ years in the React world.
I found a lot of what the author says to be true. I'm used to the nightmare that is managing front-end dependencies, and it was refreshing to use something so 'batteries included' that comes with most of what I need.
It took me a while to get my head around the Phoenix + Liveview way of doing things, but when my mental model clicked into place and I stopped trying to do things the React way I became a lot more productive. When I had an autocomplete updating live as the user typed all going over the websocket without me writing any JS, it felt magical.
However I definitely found a lot of sharp edges that the author doesn't mention. We struggled a lot with any non-trivial UX, for example with the autosuggest mentioned above I had to add a lot of JS to handle things like being able to use the keyboard's arrow keys to select options. Whenever I jumped into the JS world it often felt like I was fighting against Phoenix, and had to resort to using 'phx-ignore' a lot. It was frustrating to continually struggle to do things I knew how to do easily in a pure JS environment.
Another area I struggled a lot is Elixir's syntax. To me, it feels like there are too many operators. The author touches upon it towards the end when they mention things like '\\' for default arguments, '<>' for joining strings and '++' for joining lists. It's a lot to wrap your head around at times.
Some of the fault for this lies with me; we were working to a tight deadline so I didn't have time to dedicate to learning Elixir, Phoenix and Liveview from first principles, I just jumped in out of necessity. Had I spent more time on the foundations first I may have been able to avoid some of these pitfalls, but I do think it illustrates that like many 'do everything' frameworks, there's a steep learning curve to doing non-trivial things. It's a powerful tool and I'm optimistic about its future, but I'm undecided if I'd choose it for a future project that has significant front-end requirements at this stage.
It sounds like you were able to be reasonably productive without even any time to learn the framework, so I’d say you did alright!
I’d say up until the recent focus on LiveView, Phoenix has been very easy for devs with experience with Rails or Rails clones in other languages to learn. Recently, with all the LiveView changes and the new components, it’s been harder, but I think it’s finally stabilizing a bit and I’ve got to say the newly-released Phoenix 1.7 is another significant step forward in terms of productivity for new apps.
Which part of this app is using LiveView? Since what you link to is using next.js. Are you guys mixing the two?
I really want to find a reason to build something with elixir, and it might be viable for my current work project. We won't be replacing our next.js frontend, though, that's for sure, though I wouldn't mind experimenting with it.
I'm mostly interested in deploying elixir on the backend, since we are using microservices, and there would definitely be a couple of services where elixir seems like a great fit. OTOH, we are also pushing some data through ML models, and it seems like Python is still the only real choice for this kind of stuff. I really wish it weren't the case, since the python interpreter with its GIL is just a piece of crap, to be frank.
Ah sorry, I should have clarified in my original message. The page I linked to is our public marketing site, which is a static site built with Next. The Phoenix application is the product that the site is talking about (Duffel Links).
> Whenever I jumped into the JS world it often felt like I was fighting against Phoenix
I've had the same experience and this is something I've tried to tackle recently. I've started working on LiveSvelte which allows you to plug in Svelte components directly into your LiveView, while still being able to push events to the server with a `pushEvent` function on the client. I've only started working on it this week but I think it's a promising idea. It's not React though but you could develop a similar package with React, I just much prefer Svelte :)
It's different, I've looked into Svonix for my usecase but it wasn't sufficient. Svonix does not support LiveView. It gets you a Svelte component in your Phoenix app without any interaction with the server. It also doesn't support Server Side Rendering which means on your first page load the Svelte component is not visible.
LiveSvelte does allow you to communicate back to the server, which in turn updates the Svelte component from the server, getting you E2E reactivity.
Serious question—if you were on a tight deadline, why did you choose a tech stack that you still had to learn? Leveraging existing knowledge is exactly what anyone in a rush should be doing.
I wasn't personally involved in making this decision, I was brought onto the project after it was taken, but there was a perception that we'd be faster if we did everything as a full-stack application. In fairness, we did deliver what we set out to do so it was a success, even if there were some rough edges to deal with.
Also bear in mind that I'm only covering my front-end perspective on this. I think the back-end developers involved on the project found it much easier to work with as they were already experienced with Elixir and Phoenix. For them, a tightly-coupled front and back end meant that they could change the way things worked and it was straightforward to update a few functions in Elixir and then the HEEX template, as opposed to a decoupled setup where we'd probably have been communicating using a JSON API and changes would have had more of a barrier to adopt.
I was randomly watching YouTube videos about Elixir ("The Soul of Erlang and Elixir" by Sasa Juric), and I had never really been interested in it... I was completely wrong. BEAM/Erlang is an amazing piece of technology, and Elixir does an excellent job of bringing new people in.
If you do web development, then just for the sake of professional curiosity, I urge you to go watch a couple of videos to form your own idea. I have found that some (many) features which I would go learn and use another tool for are already included in Elixir... Now I feel sad doing Django for my day job.
I work for an org that does a lot of Elixir (on other teams though, I’m actually a Django developer!). I’ve heard from those team leads that they’ve had trouble hiring people with FP / Elixir experience, which is not that surprising to me. Guess it’s a matter of right place right time.
I've been programming my web projects exclusively in Elixir for the last few years and I've been deeply enjoying the language.
Although at first, some unfamiliar design decisions annoyed me, for they were not well explained in the tutorials I was learning the language from – e.g `arg: value` as a shortcut for `[{:arg, value}]` in function arguments and other places – over time, I've grown to love and appreciate the language and I now more or less regard it as an elegant programming language put together by a thoughtful person with great programming taste.
The Phoenix framework is a great bonus point if you want increased productivity for writing web apps. I don't miss Python at all.
Elixir rests on the shoulders of giants though, and the Erlang/OTP is yet another elegantly engineered piece of technology, enabling one to develop concurrent, distributed and fault tolerant systems with ease, and in this regard I think the Erlang/Elixir ecosystem takes the crown. The actor model is simple, yet very powerful.
For reference, I've also been learning Nim and F#, and they're also pretty unique and well designed languages in their own ways. I love Nim's fast compilation times, efficiency and expressiveness and F#'s powerful type system which enables one to develop domain models with ease. I've also been occasionally taking stabs at Rust, but it always feels just a tad too complex for my brain. I'll continue playing with it though, maybe it will eventually make sense.
Ultimately, every design decision is also a tradeoff. I don't think we'll ever have a perfect, human designed language.
My dream language would probably be a combination of Elixir and F# though, a language with an advanced and powerful type system, functional programming and the actor model as the building blocks. I guess fast compilation times à la Nim wouldn’t hurt either. :)
> Although at first, some unfamiliar design decisions annoyed me, for they were not well explained in the tutorials I was learning the language from – e.g `arg: value` as a shortcut for `[{:arg, value}]` in function arguments and other places
What kinds of things have you been coding or reading in your Rust learning so far? I'm a pretty big fan of Herbert Wolverson's Pragprog Rust book and open source Roguelike project, as well as Tim McNamara's Rust in Action (which I've only gone through part of due to how each project leaves you wanting to keep building it after the book moves on to another).
I wish I had this when I learned the language. :) I'll take a look, as your playlist is quite comprehensive it appears. Thanks for sharing!
Re: Rust - I've been trying out basic examples, simple web servers, etc.; nothing too fancy. But it just seems too ceremonious to get even the most basic things done. I own the book "Rust in Action"; I just didn't have the proper time to go over it.
It takes you through the heavy lifting of writing a CHIP-8 emulator, a database and a whole bunch of things. It really feels like it could have been several books and he just had to cut back the scope of each part to fit everything in and ship it.
The great thing about that is that it leaves you with projects just begging for you to dot the "i"s and cross the "t"s.
Yeah, I've known about Gleam, but I don't recall exactly why I wasn't too compelled to use it. I installed it with brew a while back, I tried a few basic examples and since then it's just been sitting there. I guess I should revisit. But thanks for the reminder anyway.
What is the productivity advantage of Elixir/Phoenix over other frameworks when using it for implementing more mundane SaaS (some dashboards, some CRUD,...) rather than the game lobby feature mentioned in the post?
I am a developer with significant non-web development experience (Although I know pure JS and Erlang quite well), who is interested in learning some full-stack development to implement a SaaS on the side and would like to zone in on the most productive framework possible.
For me it's not just the framework (although that is nice) but really the VM that it runs under (BEAM/OTP). Running an app on the BEAM means removing the need for external servers like redis/memcache for cacheing and whatever queue management system you pick.
For a SaaS app this means that instead of trying to orchestrate a bunch of services via some complex system (k8s, etc.), you run everything under one VM that scales easily across both local CPU cores and distributed nodes in the same network.
As for dashboards and CRUD UI, that is improving in the latest Phoenix release. I have a 5-year-old Phoenix app that is still running in production, and I generated all of the backend admin and most of the frontend using Phoenix's generators. However, that did result in a lot of nearly duplicate view code. With 1.7's focus on shared common UI components, the sheer volume of template code is reduced (but not eliminated).
If I were you I'd wait for 1.7 to fully release and then try it out for a couple of hours and generate some simple app to get a feel for it. I've been programming professionally for 30+ years and I've written in a lot of languages and writing Elixir code just feels good. José Valim got a lot right with Elixir (and he sets a great tone - just read his comments in this very comment section).
Although, if I were an absolute beginner, I might still consider waiting a little bit for the various resources to be updated to 1.7. That seems to be happening fast, at least. Or just go for the excellent Phoenix guides, which are already updated: https://hexdocs.pm/phoenix/overview.html
As long as you don't have any dynamic parts in your app, it's pretty much the same; it's the dynamic areas that are very, very productive with LiveView.
I've implemented a simple multiplayer card game in about a day for example.
Assuming the author is in the comments: the paragraph about data immutability is wrong. Integers are immutable in Python; when you create the lambda, the name x is in its closure, and when you assign to it, you change where the name points. Elixir does something wild and completely different: when you assign x that second time, it creates a new variable behind the scenes, and you transparently start using that one when you refer to x in that scope. So the lambda has a reference to x@0 and your outer scope has a reference to x@1.
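A minimal pure-Python sketch of the distinction (function and variable names are mine):

```python
def make_counter():
    x = 1
    get_x = lambda: x   # the closure captures the *name* x, not the value 1
    x = 2               # rebinding the name is visible through the closure
    return get_x

print(make_counter()())  # prints 2: the lambda sees the rebound name
```

In Elixir, the second assignment would create a fresh variable (x@1), so a previously captured anonymous function would still see the original value.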
As someone who is deep into Python web development, I can see where this could be useful. However, about a third of the way through the article it gets harder and harder to wrap my brain around the concepts without really learning Elixir syntax. I feel like it would have been better to have 10 smaller articles than this long one, going deeper into the syntax and showing more examples of the Python way of doing it. I'm not sure I would write some of the examples in Python the way the author suggests. In that case I'd just build a string array, use join, and create the statement with an eval to run it. Weird, I know, but I've pulled that trick a couple of times now.
Author here. This blog post originally started out as just notes to myself as I was learning the language, so it's basically a brain-dump wrapped in a little prose to give people some exposure to the language. I still love Python, but I think Elixir is a great tool to have in the toolbox.
Hi there, commenter here. Thanks for writing the article; I will return to it in a year or so. I started learning Clojure this year and it's quite a journey. But I'm convinced functional programming is the future. Cheers.
I have been working on a library for Django that allows you to build reusable template components, and it is heavily inspired by Phoenix's Components. I just released it today, so if you are a Django user and find the way that Phoenix Components work interesting, give it a try!
As someone who first came to Elixir from Erlang background, but also someone who has to work a lot with Python and hates every second of it... I'd need a completely different set of arguments.
I don't like Ruby-like syntax. Not having something other than whitespace to separate expressions is a big turn-off for me. It's something I also hate about Python, and something a lot of reluctant Python programmers also see as a turn-off.
Another, and perhaps superficial impression I got from Elixir is that it's Web-centric. I don't care about Web and have a strong distaste for everything that touches it. Python first appeared under the spotlight due to its Web framework, Zope, which kind of set the stage for that. Web had and still has a huge influence on how Python ecosystem developed, and that wasn't a good influence. I'd need an argument that says that Elixir is good outside of Web.
I strongly believe that development speed is only in a very small part due to the language, and in a very big part due to how well you know the language. So, trying to convince me that the language is good because you could shave off a few minutes on a trivial task isn't going to convince me it's good. Working with large old programs is where productivity tools really matter, because that's where the bulk of my work goes.
But, really, I could never see the appeal of Elixir, when compared to Erlang. So, there's that. Also, I don't think it's worth selling new technologies to Python programmers. Overwhelmingly, Python programmers are there either because it's popular and they like it, or because it's popular and they don't like it. Even in my first steps in the Python world in the early 2000s, I already knew there were better things than that. Quality was never a concern here, unfortunately.
Interesting article. Having the built in Redis via ETS is slick.
I currently use Python but don't use Django; instead I wrote my own WSGI frameworks based on Flask-RESTX. I'm not a big believer in ORMs beyond their CRUD functionality, so there is not much benefit for me in using Django, plus the security burden it carries (like PHP) by being so popular. I can keep my designs lightweight, very secure, and fast; albeit not nearly as fast as Phoenix.
Note however, I don't build public-facing high-performance applications, so it's not critical to have max IO like that which Phoenix can handle.
In a way, I wish I had moved to Elixir, but with the technical debt invested in Python it's a very big jump. Maybe someday down the road...
> Interesting article. Having the built in Redis via ETS is slick.
This isn't really true. Although having a built-in Redis may be possible, the fact is that production Elixir projects do use Redis in a lot of cases (for example as a task queue). ETS is rarely used because of its complexity.
I love functional programming because it feels natural to me to have functions that always return a value (method chaining or postfix notation FTW!), do not mutate data (and therefore avoid side effects), and use clear naming conventions for any potentially destructive functions. However, Python's popularity, vast library, and strong community make it a more practical choice for solving problems - at least when programming is not my primary focus but rather a means to an end.
While I find Clojure, Elixir, and F# attractive, it doesn't make sense to use them for anything more than experimenting or playing around, given the resources available in Python.
I don't think it's nonsensical to use less popular/exotic languages. I prefer to write code in languages that I like to write in, as long as it's a good enough tool for the job.
The real problem with them is that while you and I might be interested in functional languages, our coworkers/managers aren't and we have to stick with the languages we might not find as immediately exciting, like python.
Side note: Python has Hy (https://hylang.org), which is essentially a Lisp implementation that compiles to Python's AST. The entire Python ecosystem and stdlib are available to Hy, so it's as batteries-included as Python is. It's a blast to use, but similarly I wouldn't expect my job to be excited about it.
As a Python programmer, I truly believe Elixir/Phoenix is the best stack out there from a technical perspective.
However the incredible breadth of libs and resources ... and most importantly the mind share (both available devs, but more importantly available jobs for seniors who commit to Elixir) means that it is still just not a competitive choice.
Almost all commercial software dev is not about technical excellence, but rather about applying the available tech to a particular business domain. And there Python (and .Net, Java, PHP, Ruby ... even Go) are so far ahead that sadly Elixir doesn't look like it will make it.
I guess it really depends on your definition of "make it". :)
Elixir today is used by startups, unicorns, Fortune 500 companies, and at least two of the FAANG (for whatever it is worth). It is used for web apps, embedded, distributed systems, data processing, and it is making inroads in AI and machine learning. It has a vibrant community with events around the world, several dozen books, and more.
It is on the [top quadrant of Redmonk][0] and [top 25 on most GitHub language stats][1] while being the second youngest language there (only older than Swift which was created by Apple).
Considering it is a language standing on the shoulders of giants but started by a 30-person company, I would say we definitely made it (I am obviously biased!). Of course there is _a lot_ to improve; realistically speaking it is unlikely we will cross the top 10 (maybe that's your definition of "make it") but I also want to be clear that we will be here for quite a while!
I could be wrong and I really hope I am ... I really think it's the best stack out there, at least until you need the sort of extreme performance that dictates Rust (and even there Rustler looks good ... particularly as Rust extensions shouldn't jeopardise the extreme stability of the BEAM).
But right now it looks on course to go the way of Clojure or Scala ... hanging in there for sure, but never becoming a major player. This inevitably affects the range of 3rd party APIs and libs, meaning many companies will consider it too risky. It's not about what will impress an engineer (Elixir does that) it's now about what will reassure an accountant.
I also notice that in the majority of job postings mentioning Elixir (i.e. just at the keyword search level) it is not actually the core language for the job, but rather a filter/attractor to try and hook top Ruby devs (who seem to be in short supply).
Why does it have to be a "major player"? :) Going the way of Scala, Clojure, etc is fine. If you have a community where people are innovating and/or enjoying the journey, companies feel productive but invest and understand the pros and cons of the ecosystem, then to me it is all good.
They estimate there are more than 20 million developers worldwide and, if a technology can reach 1% of that, I personally find that amazing (in the "super made it" kind of way).
I am not saying this in a self-congratulatory way (especially because we haven't reached it) but just generally. Anyone who has worked on any sort of moderately used open source project has likely gone through several rounds of "who is using it? anyone big?" and I wish people could appreciate the small victories rather than aiming at the highest of goals, which most projects will realistically (and understandably) not reach.
If you or your company needs to reassure the accountant, then those languages are not for you, and there is nothing wrong with that. Other than that, your perspective of companies hiring for Elixir is drastically different than mine. :)
I absolutely feel you have grounds for being self-congratulatory.
But the linked article, and much of the discussion here, is really about persuading more people to adopt Elixir. Essentially people are being asked to tie a period of their professional career to the fortunes of a technology.
Most of us code for money ... we have families to support etc. This is where a language becoming a "major player" becomes relevant, it's all about career investment. Part of the reason dev pays so well (for people who typically suck at salary negotiation) is the extreme mobility of programmers. However that does depend, to a degree, on the technologies you have experience with being widely used.
Niche technologies can leave you somewhat trapped in your current job ... and the people negotiating salaries and raises for employers will inevitably exploit this.
I liked it when Python had a smaller user base ... there was more of a "geek club" feel ... but there is no question that the remuneration for devs and the flexibility they enjoyed improved as adoption increased.
So ... there are financial risks to adopting Elixir ... but, if more people adopted Elixir this risk would fall away ... a classic dilemma. The problem is that everyone pushing the tech seems to be glossing over this dilemma, and the risk it presents to the people being pitched to. Now much of this can be attributed to classic geek enthusiasm over a superior technology (caught up in idealism ... not always pragmatic), but not all. For sure, the cheerleaders for every new technology will do this, but it is still somewhat disingenuous.
Right now I think this gap between tech idealism and career pragmatism may be the primary sticking point in increasing Elixir adoption ... but I feel it is wrong to try to bridge it by steering the next wave of adopters away from a realistic, informed assessment of the risks. Downplaying this is a step towards making fellow devs cannon fodder in a mind-share battle.
> Niche technologies can leave you somewhat trapped in your current job ... and the people negotiating salaries and raises for employers will inevitably exploit this.
I don't think the relationship is that simplistic. In the [StackOverflow Survey for 2022][0], the top salaries were for Clojure, Erlang, F#, LISP, and Ruby. Elixir comes in 6th and Python comes in 25th. Choosing a niche technology may be a way to increase your salary and gain more leverage, because you become a specialist in a niche (an increase of more than 40% when going from Python to Clojure by measure of average salaries).
Of course, it would be reductionist to say "learn Elixir and you will be paid more", but I also think it is inaccurate to say choosing a niche language is a certain trap. The best is to explore the companies, communities, and opportunities around you, especially in relation to which point you are in your career, instead of generalizing pros/cons as cheer-leading or blind optimism.
I think LiveView Native[0] could be very helpful in this regard; as per the project's goals, having the ability to write mobile apps in the same language would greatly incentivize using it for web etc.
Hopefully that project continues to make progress, I'm looking forward to it for Android development.
> Almost all commercial software dev is not about technical excellence, but rather about applying the available tech to a particular business domain
A manager focused on results will not make decisions based on beauty or excellence, unfortunately, but based on other criteria: What gets the job done, availability of resources, price. That’s why PHP is still the most used programming language in web dev in Germany (instead of Ruby/Rails or Python/Django), and Excel is the most used tool for data analysis in the business world (instead of Python + Pandas).
LiveView can’t handle spotty connections, I thought?
Career Python expert, not huge into Django/web dev in general…
I thought the whole thing with most HTML-over-the-wire approaches is that they're tied to the connection latency? Unless there's some JS component underlying the Elixir code that I don't know about or don't understand.
With the advent of things like flask+html, I find it a little hard to believe that Elixir hackers could whip up a MVP faster than I’ve seen it done with flask.
I’m not opposed to functional programming by any means, but just because the language embraces it doesn’t qualify all the constituent frameworks to replace world-class solutions like Django.
> LiveView can’t handle spotty connections, I thought?
LiveView handles it just fine. If the websocket disconnects for some reason it will automatically re-establish itself when the connection is restored.
You can also use the built-in JS module to do client-side things when you don't need backend support. I use Alpine.js (PETAL stack) and I like it a lot.
If you have a very complex client-side need, it's also pretty easy to use React.js for that.
> With the advent of things like flask+html, I find it a little hard to believe that Elixir hackers could whip up a MVP faster than I’ve seen it done with flask.
You'll have to get more specific if you want an accurate answer. Does this MVP require a database? Does it require a specific UI, or can you just massage the out-of-the-box UI that Phoenix gives you? You'd be surprised at how much of a head start the generators can provide. I've built working prototypes (with database persistence) in 30 minutes. If it's purely a static page, I would just use nginx directly, but you could easily do that in Phoenix as well in about the time it takes you to write the HTML, since the default new project comes with a page controller already set up for that.
>> LiveView handles it just fine. If the websocket disconnects for some reason it will automatically re-establish itself when the connection is restored.
What the parent means is that, because state is kept on the server with LiveView, and events that change that state are processed by the server, if your connection is lost, then you may find that even basic things, such as having a dialog open when you click a button, stop working. This means that unstable connections result in very poor UX.
This is in direct contrast to a typical SPA for example, where the browser keeps and handles most front-end state, and talks to the server only when it needs to (e.g. form submission).
With pure LiveView, I would agree, but you can easily supplement with a JS framework. As I mentioned above:
> You can also use the built-in JS module to do client-side things when you don't need backend support. I use Alpine.js (PETAL stack) and I like it a lot.
If you have a very complex client-side need, it's also pretty easy to use React.js for that.
While you could, I never use LiveView to open dialogs (unless the dialog state needs to be distributed across multiple clients, but I haven't had that use case come up yet). As you mention, that would lead to a poor UX.
The best approach is the best of both worlds: LiveView for data/state changes that need to go to the database, client-side script for state changes that are UI-related and don't get persisted.
Note on Ecto: the macro-based syntax was not exposed at first but ended up being public API because everyone used it to be able to do pipelines and complex composition.
I scarcely use it but it does clean up stuff sometimes.
I haven’t gotten through the whole article yet, and haven’t encountered Elixir before, but I really like the pipes idea. I am going to dig into that a little more. Are there other languages that have similar constructs (outside of shells!)?
PS, if you're still watching this space: Clojure is an obvious one. It's called the "thread" operator and is a bit more flexible in that you can thread into the first argument (->) or the last (->>). This definitely works for Clojure, and whether or not it is a good idea in other languages could be debated ad nauseam.
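For the curious, the pipe can even be emulated in plain Python with a small helper (a toy sketch, not any established library's API):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread `value` through each function in turn, like Elixir's |>."""
    return reduce(lambda acc, f: f(acc), funcs, value)

result = pipe(
    "a,b,c",
    lambda s: s.split(","),   # ["a", "b", "c"]
    lambda xs: " ".join(xs),  # "a b c"
)
print(result)  # a b c
```

It reads top-to-bottom like a shell pipeline, though without language support it stays a curiosity rather than an idiom.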
I am part of the interest group "AI's who know Python" and together with "Aliens who know Python" we would like to file a complaint that this article is exclusionary for no good reason.
Personally, I would take the Elixir job. Especially if you already know Python and have been employed previously. You'll learn a totally new way of thinking about problems. Many Elixir jobs are also aware that most people don't know Elixir, so you'll have plenty of time to learn on the job with plenty of support.
I am in a similar situation as well. The company at which I am working as a trainee engineer has a small team dedicated to Elixir and Phoenix. I have a choice between Ruby with Rails and Elixir with Phoenix after I am done with my training. Elixir intrigues me because it is so different from the programming I learned at college, but I am also anxious about shoehorning my initial work experience into a niche technology that might be difficult to find a job in, a few years down the line.
Elixir will present you new concepts that you will be unfamiliar with and the development experience is actually really fun! For becoming a better programmer long term I would go for an Elixir experience early on.
I like how programming language article/book title cliches are cross-pollinating now. In a decade, I hope we'll have "Bite size julia for morons who know how to stand on one leg and can read perl"
They wrote there that they were looking at Elixir because they did not like the direction Python had taken. I'm sure Python can do the same as JS but Elixir is just so much nicer. Which is also my experience.
Your point obviously stands, pretty much all of mentioned can do the job.
Seriously, I will not choose JS today if I have very specific requirements for choosing Elixir or Python (or any other stack here).
In reality, people choose the stack they're used to. But often it's their limited ability to see JS's strengths for solving their issue in the first place; it's not an informed choice.
At the very end of the day, I think some people just don't like JavaScript.
At the risk of speaking for people other than myself, a lot of people don't want to write code in JS and if they don't have to, they won't. Frameworks/tools like Phoenix and HTMX or the fact that many languages can compile to javascript/wasm mirrors this opinion
I would find such a principled approach to be unproductive. The same could be said about Python: the `.` is equally syntax sugar for passing the object as the first argument. So if we take this Pandas code:
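A sketch of the chained call under discussion, reconstructed from the variable-based rewrite later in the thread (the data here is made up for illustration):

```python
import pandas as pd

# toy data; column names taken from the rewrite discussed below
df = pd.DataFrame({
    "name":     ["a", "a", "b"],
    "dep_date": ["2024-01-02", "2024-01-01", "2024-01-01"],
    "duration": [1, 2, 3],
})

# the chained form: each `.` passes the result along, much like `|>`
result = (
    df.sort_values("dep_date")
      .groupby("name")["duration"]
      .transform("cumsum")
)
```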
If we were to pass df as first argument in nested calls, I would find it less readable. But I would also find using variables in this case to be more noise than helpful:
The variables are just repeating some of the information found on the right side and IMO they end up getting in the way of understanding the whole pipeline.
You could reuse df (as the methods presumably change df); in addition:
- you have more room to comment what the intention of the code/call is
- you have room to handle / check for errors
For example, the "sort_values" and "groupby" in your example are obvious to most readers. But "transform('cumsum')" is probably obvious to you, while I don't know the intention of the code.
By reassigning to df it makes it clear that the returned value from each of the functions is a `df` (some kind of query builder, I think). Actually, rewriting your code, I discovered your groupby did a groupby and then selected a result column, I think.
So we would get:
# sort result on dependency date
df = df.sort_values('dep_date')
# ... check if sort_values worked (i.e. `dep_date` is a valid column)
# group result by name and select duration column
durations = df.groupby('name')['duration']
# compute the cumulative sum of durations
sum = durations.transform('cumsum')
I agree it's much more verbose, so this won't work if you (just) want conciseness.
If you reuse the variable, then you are not getting any of the alleged benefits of using variables. The same name tells little except it is perhaps the same data type and in Elixir you would also get this information from the module you are invoking:
text
|> String.split(",")
|> Enum.join(" ")
You could also equally add comments between the lines in the dot example:
df.sort_values("dep_date")
# group and select
.groupby(…)[…]
Although most of the comments above are discardable (IMO) because they are restating the code.
My point is: sometimes I will break out into variables to get some of the benefits you mention. But forcing all intermediate steps to assign to variables is as harmful as using `|>` or `.` exclusively and forgetting about variables altogether. If you need to add error handling, code comments, etc., you can break out of the pipeline as needed.
I think Jose's example provides the best of both worlds, conciseness and readability.
> you have more room to comment what the intention of the code/call is
There's nothing stopping each pipe expression from having a comment of its own if you lean towards literate programming, or you feel that the pipe function + arguments begs further explanation to unfamiliar developers.
> you have room to handle / check for errors
You can pipe your results into a validation function. Ecto, the go-to database mapper, has exactly this pattern. Failing fast is idiomatic Erlang/Elixir (rather than let the process live on with corrupt state) which means if the validation fails, it ought to raise an exception so the external caller can fix their call/request, or if it's a bug, the developer can be alerted to fix the code.
> You could reuse df (as the methods presumably change df),
Then what's the point in assigning to intermediate variables (as your criticism about the pipe was)? You don't get any added clarity about the intermediate intentions that way...
No downvotes from me btw, but I think the author and just about most people using Elixir (which tends to not be a first language for most in the community) have used the classic-style of temporary variables for lack of any other option. Certainly in my own experience the pipe operator has eliminated those redundant expressions rendering the code more concise, expressive, and hence more readable. Many APIs in other languages emulate something similar with method chaining for this very reason (see Fluent APIs).
Extracting variables out of if-expressions can still be useful for improving readability of long boolean expressions though.
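That last point can be sketched in Python (the variable and set names here are made up for illustration):

```python
# hypothetical values for illustration
age, country, suspended = 20, "DE", False
EU_COUNTRIES = {"DE", "FR", "IT"}

# one long boolean expression...
eligible_inline = age >= 18 and country in EU_COUNTRIES and not suspended

# ...versus named sub-expressions that document intent
is_adult = age >= 18
is_eu_resident = country in EU_COUNTRIES
is_active = not suspended
eligible = is_adult and is_eu_resident and is_active
```

The two compute the same thing; the extracted form trades a few lines for self-documenting names.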