
Does this scheme give a way to progressively load slices of an array? What I want is something like this:

    ["foo", "bar", "$1"]
And then we can consume this by resolving the Promise for $1 and splatting it into the array (sort of). The Promise might resolve to this:

    ["baz", "gar", "$2"]
And so on.

And then a higher level is just iterating the array, and doesn't have to think about the promise. Like a Python generator or Ruby enumerator. I see that Javascript does have async generators, so I guess you'd be using that.

The "sort of" is that you can stream the array contents without literally splatting. The caller doesn't have to reify the whole array, but they could.

EDIT: To this not-really-a-proposal I propose adding a new spread syntax, ["foo", "bar", "...$1"]. Then your progressive JSON layer can just deal with it. That would be awesome.
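
Sketching it in Python (since that's the generator analogy above); the canned chunks and the resolve_ref fetcher here are stand-ins I made up, not any real protocol:

    import asyncio

    # Stand-in for the transport: "$n" placeholders resolve to further chunks.
    CHUNKS = {
        "$1": ["baz", "gar", "$2"],
        "$2": ["qux"],
    }

    async def resolve_ref(ref):
        await asyncio.sleep(0)  # pretend this is network latency
        return CHUNKS[ref]

    async def stream_array(chunk):
        # Yield elements one at a time, following "$n" continuations,
        # so the caller never has to reify the whole array.
        while chunk:
            *items, last = chunk
            for item in items:
                yield item
            if isinstance(last, str) and last.startswith("$"):
                chunk = await resolve_ref(last)  # "splat" the next slice in
            else:
                yield last
                chunk = []

    async def main():
        async for item in stream_array(["foo", "bar", "$1"]):
            print(item)  # foo, bar, baz, gar, qux

    asyncio.run(main())

The higher level just iterates; whether the tail arrived later over the wire is invisible to it.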


From what I understand of the RSC protocol which the post is based on (might be wrong since I haven't looked closely at this part), this is supported: https://github.com/facebook/react/pull/28847.

>The format is a leading row that indicates which type of stream it is. Then a new row with the same ID is emitted for every chunk. Followed by either an error or close row.


I've seen this a lot when someone wants to add "workflow automation" or "scripting" to their app. The most success I've had is embedding either Lua or Javascript (preferably Lua) with objects/functions from the business domain available to the user's script. This is what games do too. I think it's a great way to dodge most of the work. For free you can support flow control, arbitrary boolean expressions, math, etc.
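
For example, from a Python host it can be as little as this (a sketch assuming the lupa bindings; the domain functions and the user's script are made up):

    from lupa import LuaRuntime  # assumes the lupa package for embedding Lua

    lua = LuaRuntime()

    # Business-domain functions exposed to user scripts (made-up examples).
    def order_total(customer_id):
        return 160.0  # pretend this hits the database

    def send_email(to, subject):
        print(f"emailing {to}: {subject}")

    g = lua.globals()
    g.order_total = order_total
    g.send_email = send_email

    # The user's "workflow" script: flow control, boolean logic, and math
    # come for free because it's a real language.
    lua.execute("""
        local total = order_total(42)
        if total > 100 then
            send_email("sales@example.com", "big spender: " .. total)
        end
    """)

The host keeps control of exactly what is exposed, and users get a real language instead of a homegrown one.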


I find that the #1 reason people add those "simpler than Lua" homegrown languages in Enterprise is to allow non-programmers to program. This not only has a tremendous cost to develop (compared to something like embedding Lua) but it also creates the worst kind of spaghetti.

One of the most unhinged pieces of software I have ever seen was the one from a fintech I worked with. Visual programming, used by business specialists. Zero abstraction support, so lots of forced repetition. No support for synchronous function calls, so lots of duplication or partitioning to simulate them. Since there were two failed versions, there are three incompatible versions of this system running in parallel, and migration from one to the other must be done manually.

The problem is about 90% of the business rules were encoded into this system, because business people were in a hurry. People wanted a report but didn't want to wait for Business Intelligence? Let's add "tags" to records so they appear on certain screens, and then remove them when they shouldn't anymore.

In the end the solution was adding "experts" to use it, but the ones who actually knew or learned any programming would just end up escaping to other companies.


One pitfall that is so obvious it hurts (but I have seen people fall into it) goes a bit like this:

1. We have a python application

2. We need a configuration format, so we pick one of the usual ones (ini/toml/yaml/...)

3. We want to allow more than usual to be done in this config, so let's build some more complex stuff based on special strings etc.

Now the thing they should have considered in step 3 is: why not just use a python file for configuration? Sure, this comes with pitfalls, since you now allow whoever writes the config to do the same things as the application, but you are already using a programming language, so why not just use it for your overly complex configuration? For in-house stuff this can certainly be more viable than writing your own parser.
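
A minimal sketch of what that looks like (the config.py contents and setting names here are just placeholders):

    # app.py: treat config.py as the config file, same language as the app.
    #
    # config.py might contain (any Python is allowed, which is both the
    # appeal and the pitfall):
    #
    #     import os
    #     DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/app")
    #     WORKERS = 4
    #
    import runpy

    # Execute the file and keep only the uppercase names as settings,
    # Django-style.
    settings = {k: v for k, v in runpy.run_path("config.py").items()
                if k.isupper()}

    print(settings["WORKERS"], settings["DATABASE_URL"])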


Because now, anything that wants to read that config has to be written in Python. You've chained yourself to a stack just for a dynamic config. I ran into this issue at a previous job, but with a service that leaned heavily on hundreds of Django models. It made it impossible to use those models as a source of truth for anything unless you used Python and imported a heavyweight framework. It was the biggest blocker for a C++ rewrite of the service, which was really bad because we were having performance issues and were already reaching our scaling limits.

Declarative configs are preferable for keeping your options open as to who or what consumes them. For cases where config as code is truly necessary, the best option is to pick something that's built for exactly that, like Lua (or some other embedded scripting language+runtime with bindings for every language).
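
To make that concrete, here's a sketch with the config as a Lua table, read here from Python via lupa (assumed); a C, C++, or Go host could read the same file through its own Lua bindings. The settings are made up:

    from lupa import LuaRuntime  # assuming the lupa bindings on the Python side

    # config.lua is just a table; logic is allowed but stays inside Lua.
    CONFIG_LUA = """
    return {
        database_url = os.getenv("DATABASE_URL") or "postgres://localhost/app",
        workers = 4,
        feature_flags = { new_billing = true },
    }
    """

    lua = LuaRuntime()
    config = lua.execute(CONFIG_LUA)

    print(config.workers, config.database_url)
    print(config.feature_flags.new_billing)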


I would tread carefully around this (although you know the specifics!).

Simply being tied to one language is rarely a bad thing - at a certain point in a company's growth, having a common language and set of tools (logging, database wrappers, etc.) acts as a force multiplier beyond individual team leads' preferences.

I would be interested in exactly what scaling issues you hit, but if I were financing the company I would ask whether overcoming scaling problems in Python would cost less and lead to better cadence than a migration to C++.


I’ve worked in several python shops, and now work with Rust. Python’s performance can be a real cost problem at scale. Where this bit us in the past was with the sheer number of containers and nodes we had to spin up in k8s to support comparatively moderate traffic in a relatively simple web application.

It’s been a while, so take the numbers with a grain of salt, but where we might have needed 10 pods across several nodes to process a measly 100 req/s, we can easily handle that with a single pod running a web application written in rust, with plenty of room to spare. I suspect some of it is due to the GIL: you need to scale instances rather than threads to get more performance in Python.

Anyway, at some point the cost of all those extra nodes adds up, or your database can’t handle the absurd number of concurrent connections all your pods are establishing, or whatever.


> impossible to use those models as a source of truth for anything unless you used Python

Django models are just a SQL database under the hood. Is there a reason you couldn’t just connect to the database from C++?


This can sometimes be a good idea. But it isn't without downsides. Now your config file is Python and capable of doing anything Python can do (which isn't necessarily a good idea), it's no longer safe, you now have to deal with shitty Python tooling, you might have to debug crashes/lockups in your config files, you can no longer switch implementation languages, etc. etc.

It isn't a magic solution.


Not that this is a magic or even a good solution; I just wanted to mention that sometimes you already have the thing you are looking for directly under your nose.

I never had any project where a toml config wasn't enough.
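
For what it's worth, loading one is a couple of lines with the stdlib (assuming Python 3.11+ for tomllib, and a made-up config.toml):

    import tomllib  # stdlib since Python 3.11

    # config.toml (made up for illustration):
    #
    #     [database]
    #     url = "postgres://localhost/app"
    #
    with open("config.toml", "rb") as f:
        config = tomllib.load(f)

    print(config["database"]["url"])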


One improvement, though, is using Starlark instead of Python directly, since it offers a much more lightweight runtime and better support for parallelism.


I've done both. I embedded VBScript/JScript in an app via Microsoft's Active Scripting[1] interfaces and wrote a template language that grew to contain typical programming language constructs.

Looking back it was the VBScript/JScript functionality that caused me the most problems. Especially when I migrated the whole app from C++ to .Net.

[1] https://en.wikipedia.org/wiki/Active_Scripting


i yearn for an alternate reality where every unix command/service had the same syntax and a lua interpreter.


You should take a look at arcan, it's almost exactly that: http://arcan-fe.com

- https://arcan-fe.com/2022/10/15/whipping-up-a-new-shell-lash...

- https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...

I am not using it as a daily driver, because, emacs, but I keep an eye on it because, well, emacs.


I'm sorry you get our reality where it is nested quotes, parentheses, dollar signs and backslashes all the way down.


I lol'ed.


https://github.com/oasislinux/oasis

It's not perfect, but it's clean and moves in this lua direction.


I've used mmdc (https://github.com/mermaid-js/mermaid-cli) to generate mermaid images from a Makefile. It looks like it is implemented with puppeteer, so perhaps it doesn't quite fit your request. But if you just want something you can use at the cli, it is great.


Thanks. You're right, unfortunately this doesn't fit my needs, but I can see how it would be fine for CLI and casual use cases.


SEEKING WORK - Portland, OR or Remote

I'm a full-stack developer with 20+ years experience. My specialties are deep Postgres consulting, web development in Rails/Django, and devops with Kubernetes, AWS, Azure, Terraform, Ansible, etc. From time to time I've done projects in Javascript/Typescript (React, Vue, Angular, Node), Java, C#, C, Go, Rust, Elixir, Perl, etc.

I am reliable, easy to work with, quick to turn things around, and a good communicator. I can work solo or on a team, either as lead or a team member. I value client satisfaction as highly as technical excellence.

You can see some of my recent work here:

https://illuminatedcomputing.com/portfolio

https://commitfest.postgresql.org/49/4308/ (Adding SQL:2011 application-time to Postgres)

https://commitfest.postgresql.org/31/2112/ (Adding multiranges to Postgres)

https://github.com/pjungwir/aggs_for_arrays

https://github.com/pjungwir/aggs_for_vecs

https://github.com/pjungwir/active_model_serializers_pg

If you'd like to work together, I'd be happy to discuss your project: pj@illuminatedcomputing.com


Nero isn't from the late Roman Empire either.


And you're reading a popsci article by a freelance journalist. Stick to the actual important topic instead of going "akschually" _WHILE_ being incorrect.


Exactly, and Augustus was the first Roman emperor!


Great article! I'm looking forward to reading the rest of the series.

I noticed a couple details that seem wrong:

- You are passing `context` to `log_then_get` and `get`, but you never use it. Perhaps that is left over from a previous version of the post?

- In the fiber example you do this inside each fiber:

    responses << log_then_get(URI(url), Fiber.current)
and this outside each fiber:

    responses << get_http_fiber(...)
Something is not right there. It raised a few questions for me:

- Doesn't this leave `responses` with 8 elements instead of 4?

- What does `Fiber.schedule` return anyway? At best it can only be something like a promise, right? It can't be the result of the block. I don't see the answer in the docs: https://ruby-doc.org/3.3.4/Fiber.html#method-c-schedule

- When each fiber internally appends to `responses`, it is asynchronous, so are there concurrency problems? Array is not thread-safe I believe. So with fibers is this safe? If so, how/why? (I assume the answer is "because we are using a single-threaded scheduler", but that would be interesting to put in the post.)


How did JFrog know this github token was so powerful, compared to all the other ones I'm sure their scanner detects? What caused a human to get involved?


If you add contact info to your profile, I will reach out. Or feel free to send me a note.


Calagator has been dead for years, and I don't understand why. Way before the pandemic: agreed. Most of what I see there now is business networking or people selling something.

But there are still things happening, e.g.:

- pdxpug (Postgres)

- Database Reading Group (DBRG) at PSU

- pdx.rb

- pdxruby Slack channel

- pdxstartups Slack channel

- Portland Papers We Love

- Portland Linux Users Group

- Linux Kernel meetup

- Rose City Techies

That's from just a little bit of research. But I miss when Calagator was full of cool stuff. What happened there I wonder?

I've tried hosting things in my home/back yard before. One was just to come hack on your projects together. Another was to work on open source contributions. I'd be up for trying something like that again. Something about systems/databases would be right up my alley. Maybe a reading group. If anyone sees this and is interested, send me an email.


Indeed, if you read The Goal or The Phoenix Project, they call this "slack". There is a whole theory about why slack matters.

