I shared this article internally and my peers were impressed by how similar it is to our final implementation. (It differs in that we use Redis as the queue.)
On one hand, I love the possibility of having millions of albums at your disposal via streaming services. On the other hand, I hate having to type or click to select them (voice recognition just doesn't work).
I am forced to use a MacBook M4 at work, but I have and love my Framework 13 Intel.
Battery management is superior on macOS, but I can leave my Framework suspended for at least a week (I've never measured how long it can last).
I run vanilla Ubuntu with TLP, though, which might be the trick.
TLP is definitely the trick. It makes a huge difference to battery life during suspend on Linux, especially since laptop makers stopped supporting S3 sleep.
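For reference, a minimal sketch of the /etc/tlp.conf settings I'd check first. The key names follow TLP's documented options, but treat the values as assumptions to validate against your own hardware; MEM_SLEEP_ON_BAT in particular needs TLP 1.6+ and firmware that still offers "deep" (S3) sleep:

```ini
# /etc/tlp.conf (excerpt) -- hedged defaults, verify with tlp-stat
TLP_ENABLE=1
# Prefer S3 ("deep") over s2idle on battery, if the platform still offers it
MEM_SLEEP_ON_BAT=deep
# Power down idle devices while on battery
RUNTIME_PM_ON_BAT=auto
USB_AUTOSUSPEND=1
```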
For all those hating the Twitter link: it's a practical way to convert traffic and likes. It doesn't have to be your favourite tool; it serves its purpose.
I wasn't aware of how bad the privacy situation was for devices in general until I installed Pi-hole on a Raspberry Pi and used it as the DNS server at home.
The amount of data that tries to leave the premises is really surprising, and that's just what I block with a mid-range blocklist.
I work daily with PHP, and honestly nearly all the code I write is synchronous.
The shared-nothing architecture of PHP makes that really a non-issue for me. Requests never share any state with each other. Something like RabbitMQ can handle communication between systems.
That's in no way lightweight though, and most languages can easily do the same: just launch multiple instances/VMs/processes. It means multiple separate OS processes, each carrying everything needed to run PHP, with no way to communicate with each other beyond what you implement yourself. No channels, no task distribution, no queue to sync on and take tasks from, no signal that all processes are done so you can accumulate the results. That is why you then need something like RabbitMQ, and it does not mitigate the heaviness of the approach.
It is kinda funny that you mention RabbitMQ, which is written in Erlang, a language famous for its lightweight processes. Also compare the approach with the thread pools built into other languages' standard libraries; even many of those are heavyweight next to Erlang's processes.
PHP's approach is simple though, and in my experience that simplicity pays off when you do start scaling the systems.
In other systems, once you get beyond a single machine you need that external communication mechanism anyway, and now you have multiple classes of comms, which introduces bugs, complexity, and performance cliffs.
In PHP you just throw another server at it, it'll act the same as if you just added another process. Nice linear scaling and simple to understand architectures.
The shared-nothing architecture is great for some scenarios.
Long-running processes and async I/O are a great tool to have, though. They have been present in PHP for almost two decades now, and despite many incarnations (select(), libevent, etc.) and frameworks (Amp, ReactPHP, etc.), the knowledge is highly transferable between them if you understand the fundamentals.
Don't worry, we're not comparing languages here. You are free to choose the language and community that best fits your approach to software development.
It's better to build your app in e.g. PHP, prove its worth, then identify the bottlenecks, THEN determine if you need multi-threading. And if so, determine if PHP would be the best language for it or if you'd be better off going for a different language - one with parallelism / multithreading built into its core architecture.
The first logical step after PHP is NodeJS, which has the fast iteration cycles of PHP without the low-level memory management or the enterprisey application server headaches of other languages, AND it has the advantages of a persistent service without needing to worry too much about parallelism because it's still single process.
If you still need more performance after that, you have a few options. But if you're at that point, you're already lucky: most people wish they needed the performance or throughput of a language/environment like Go.
By this logic, it seems like it would make more sense to just start with Node rather than PHP for the prototype and save the potential rewrite. Node does seem more popular than PHP nowadays to me as an outsider to both, so maybe that's exactly what did happen.
> Most people wish they needed the performance or throughput of a language/environment like Go.
Most people do need the performance and throughput offered by modern languages like Go, though. Time to market is the most important consideration for most. Maybe at Facebook scale you can spend eons crafting perfection in a slow-to-develop language/ecosystem like PHP or NodeJS, but most people have to get something out the door ASAP.
I still have flashbacks from working with the pthreads extension, which sometimes caused extremely hard-to-debug, non-reproducible segfaults. I realise Joe has probably started from scratch and improved a lot on that (and I know he's a generally awesome guy), but without a properly financed maintainer team to support him, I'm not sure I want to take that risk again before parallel has gained some maturity.
But why pick PHP then? Why not use NodeJS or similar, where the language, application stack, and community are already in agreement on the execution model?
I don’t use php but it’s extremely popular around the world. There’s also Laravel which seems a lot more productive than anything in js for full stack dev.
I just spent €1,250 for an oven (the only one I could find that was 45cm in height and could cook with steam).
It works very well, but I was horrified to see a message saying it "required an urgent security update" and that it would download it from the cloud.
Happy to exchange notes about our journey too.
Cheers