I get where you're coming from, but does it really need to be a whole SQLite instance per actor? Wouldn't logical separation inside a larger database be more efficient?
It'd make better use of resources, and it would still allow a parent-style agent to run complex queries (e.g., intersecting two different actors' data wouldn't require fetching everything, copying it, and doing the join in buggy non-SQL code).
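For what it's worth, even with one file per actor, a parent process can still run cross-actor SQL via SQLite's `ATTACH`. A minimal sketch, with made-up paths and a made-up `tags` schema:

```python
import os
import sqlite3
import tempfile

def seed(path: str, tags: list[str]) -> None:
    """Create a tiny per-actor database with a single `tags` table."""
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS tags (tag TEXT)")
    conn.executemany("INSERT INTO tags VALUES (?)", [(t,) for t in tags])
    conn.commit()
    conn.close()

tmp = tempfile.mkdtemp()
db_a = os.path.join(tmp, "actor_a.db")
db_b = os.path.join(tmp, "actor_b.db")
seed(db_a, ["rust", "sqlite", "ai"])
seed(db_b, ["sqlite", "ai", "grpc"])

# The parent opens one actor's file and attaches the other's, so the
# intersection stays in SQL instead of application code.
parent = sqlite3.connect(db_a)
parent.execute(f"ATTACH DATABASE '{db_b}' AS b")
rows = parent.execute(
    "SELECT tag FROM tags INTERSECT SELECT tag FROM b.tags ORDER BY tag"
).fetchall()
print([r[0] for r in rows])  # ['ai', 'sqlite']
```

This only works when the parent can reach both files, so it fits a BI-style batch job more than a hot path.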
In our experience, most apps don't need cross-tenant queries outside of BI. For example, think about the apps you use on a daily basis: Linear, Slack, ChatGPT all fit well with an actor-per-workspace or actor-per-thread model.
To be clear, we're not trying to replace Postgres. We're focused on modern workloads like AI, realtime, and SaaS apps where per-tenant & per-agent databases are a natural fit.
Using SQLite for your per-tenant or per-agent databases has a lot of benefits:
- Compute + state: running the SQLite database embedded in the actor has performance benefits
- Security: solutions like row-level security (RLS) are a security nightmare; it's much easier to have peace of mind with full database isolation per tenant
- Per-tenant isolation: important for SaaS platforms, better for security & performance
- Noisy neighbors: limits the blast radius of a noisy neighbor or bad query to a single tenant's database
- Flexible schemas: enables a different schema for every tenant
- AI-generated backends: modern use cases often require AI-generated apps to have their own custom databases; this model makes that easy
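The model in the list above can be sketched in a few lines; the `tenant_db` helper and `notes` schema are hypothetical, but they show how isolation falls out of one-file-per-tenant:

```python
import sqlite3
from pathlib import Path

def tenant_db(tenant_id: str, root: Path = Path("/tmp/tenants")) -> sqlite3.Connection:
    """Open (or create) the dedicated SQLite file for one tenant."""
    root.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(root / f"{tenant_id}.db")
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    return conn

a = tenant_db("acme")
a.execute("INSERT INTO notes (body) VALUES ('hello from acme')")
a.commit()

# A bad query or schema change in one tenant's file cannot touch another's.
b = tenant_db("globex")
print(b.execute("SELECT COUNT(*) FROM notes").fetchone()[0])  # 0
```

Per-tenant schema divergence is just a different `CREATE TABLE` in a different file; there's no shared catalog to migrate in lockstep.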
A few other points of reference in the space:
- Cloudflare Durable Objects & Agents are built on this model, and much of Cloudflare's internal architecture is built on DO
I like a lot of the ideas behind such theorem provers; however, I always have issues with them producing code that's compatible with other languages.
This happened to me with Idris and many others: I took some time to learn the basics and wrote some examples, and then the FFI was a joke, or the code generators for JavaScript were absolutely useless.
Apart from prioritizing FFI (like Java/Scala or Erlang/Elixir), the other two easy ways to bootstrap an integration for a new, obscure, or relatively young programming language are to focus on RPC (FFI over the network) or on file input/output (parsing and producing well-known file formats to integrate with other tools at the shell level).
I find it very surprising that nobody has tried to make something like gRPC the interop story for a new language: an easy way to write impure "extensions" in other languages, while your pure/formal/dependently typed language implements the rest purely through immutable message passing over the gRPC boundary. Want file I/O? Implement a gRPC endpoint in Go and let your language send read/write messages to it, without having to deal with the antiquated, memory-unsafe POSIX layer.
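The shape of that idea can be sketched locally, with a pair of queues standing in for the gRPC transport and JSON strings standing in for protobuf messages (all names here are made up for illustration):

```python
import json
import os
import queue
import tempfile
import threading

def io_worker(requests: queue.Queue, responses: queue.Queue) -> None:
    """Impure side: performs the actual file I/O on behalf of the pure core."""
    while True:
        msg = requests.get()
        if msg is None:  # shutdown sentinel
            break
        req = json.loads(msg)
        if req["op"] == "write":
            with open(req["path"], "w") as f:
                f.write(req["data"])
            responses.put(json.dumps({"ok": True}))
        elif req["op"] == "read":
            with open(req["path"]) as f:
                responses.put(json.dumps({"ok": True, "data": f.read()}))

requests_q: queue.Queue = queue.Queue()
responses_q: queue.Queue = queue.Queue()
worker = threading.Thread(target=io_worker, args=(requests_q, responses_q))
worker.start()

# "Pure" side: only immutable serialized messages cross the boundary.
path = os.path.join(tempfile.gettempdir(), "pure_io_demo.txt")
requests_q.put(json.dumps({"op": "write", "path": path, "data": "hello"}))
ack = json.loads(responses_q.get())
requests_q.put(json.dumps({"op": "read", "path": path}))
reply = json.loads(responses_q.get())
print(reply["data"])  # hello

requests_q.put(None)  # shut the worker down
worker.join()
```

In the real version the worker would be a Go gRPC server and the message shapes would live in a `.proto` file, but the division of labor is the same: all effects behind a serialized boundary.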
“The current interface was designed for internal use in Lean and should be considered unstable. It will be refined and extended in the future.”
My point is that in order to use these theorem provers you really have to be sure you need them; otherwise, interacting with the external ecosystem can become a dependency/compilation nightmare, or a bridge over TCP just to use libraries.
If you look at their comment history it's quite clear that's what they are.
What's the HN stance on AI bots? To me it just seems rude - this is a space for people to discuss topics that interest them & AI contributions just add noise.
Serious question: why not just use SELinux on generated scripts?
The scripts would have access to the original runtimes and ecosystems, and SELinux can't be tampered with: it's well tested, and no amount of forking or tricky indirection bypasses the syscall checks.
Custom sandboxing runtimes come with a bill of technical debt: no support, niche documentation, and missing ecosystem features. And let's hope the project isn't abandoned in two years.
The same could be said for Docker, NixOS, isolated containers, etc. The level of security just needs to be good enough for LLMs; it doesn't even need to be secure against human-directed threats (specialist hackers).
It was better because it had no silent errors, like 1 + "1". Far from perfect, but the fact that it raised exceptions and enforced the "ask forgiveness, not permission" philosophy makes the difference.
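Concretely, Python refuses to conflate the types in that expression, where JavaScript silently coerces (`1 + "1"` evaluates to `"11"` there):

```python
# Mixing int and str in `+` is a loud TypeError, not a silent coercion.
try:
    result = 1 + "1"
except TypeError as e:
    result = None
    print(f"caught: {e}")
```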
IMHO the slightly better type system and runtime are totally irrelevant nowadays.
With AI doing most of the work, we should forget these past riddles. We should all be looking towards fail-safe systems, formal verification, and domain modeling.
Conflating types in binary operations hasn't been an issue for me since I started using TS in 2016. Even before that, it was just the result of domain modeling done badly, and I think software engineers got burned enough by using dynamic type systems at scale... but that's a discussion that should have been had 10 years ago. We all moved on from that, or at least I hope we did.
> Now we all should be looking towards fail-safe systems, formal verification and domain modeling.
We've been looking toward these things since the term "distributed computing" was coined, haven't we? Building fail-safe systems has been the goal for as long as long-running processes have existed.
Despite any "past riddles", the more expressive the type system, the better the domain modeling experience, and I'd guess formal methods would benefit immensely from a good type system. Is there any formal language usable as a general-purpose programming language that I don't know of? On the theorem proving side of things, I only ever see formal methods used for the verification of distributed algorithms or permission logic, but I have yet to see a single application written only in something like Lean[0] or LiquidHaskell[1]...
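For reference, Lean 4 does function as an ordinary programming language alongside its proofs. A minimal sketch, assuming a recent Lean 4 toolchain where the `omega` tactic is available:

```lean
-- An ordinary function definition...
def double (n : Nat) : Nat := n + n

-- ...next to a machine-checked fact about it.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

Whether anyone ships whole applications this way is, of course, exactly the open question above.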
I think the only people who got burned out on that discussion are the terminally online. In industry, there are people working in every paradigm, with every amount of prior experience you can think of.
> Is there any formal language that is usable as general-purpose programming language I don't know of?
That's sort of my point: the closest thing to a rich type system that's still pragmatic is, to me, F#, and it still falls short on formal verification and ecosystem integration.
I think we should eventually invest in this direction so that LLM output can be trusted, or, ideally, have LLMs produce or help with the specifications themselves. This is yet to be done.
I don't want to make a prophecy, but the day ergonomics and verification meet in an LLM-automated framework, that new development environment should take over everything that came before.
> With AI doing mostly everything we should forget these past riddles.
This is how I was finally able to build a large Rust project without sacrificing my free time to really, fully understand Rust. I've read through the Rust book several times, but I never have time to fully "practice" Rust; with Claude Code I was able to say screw it and build my own Rust software.
Compressing the kernel gets it into RAM faster even though it still has to execute the decompression step. Why?
Loading from disk to RAM is a bigger bottleneck than decompressing on the CPU.
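The arithmetic makes it concrete. A back-of-the-envelope sketch with made-up but plausible numbers:

```python
# Hypothetical but typical orders of magnitude: a 20 MB kernel, a disk
# reading at 200 MB/s, decompression at 1 GB/s, 2:1 compression ratio.
disk_mb_s = 200         # sequential read throughput
decompress_mb_s = 1000  # CPU decompression throughput
kernel_mb = 20          # uncompressed kernel size
ratio = 2               # compression ratio

# Uncompressed: pay full disk cost. Compressed: half the disk cost,
# plus the (much cheaper) decompression cost.
uncompressed_ms = kernel_mb * 1000 / disk_mb_s
compressed_ms = (kernel_mb / ratio) * 1000 / disk_mb_s + kernel_mb * 1000 / decompress_mb_s

print(f"raw load:        {uncompressed_ms:.0f} ms")  # 100 ms
print(f"compressed load: {compressed_ms:.0f} ms")    # 70 ms
```

As long as decompression throughput exceeds disk throughput times the ratio's savings, the compressed path wins.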
The same applies to algorithms: always find the largest bottleneck in your chain of dependent steps and make your changes there, since the rest of the pipeline is waiting on it.
Often picking the right algorithm "solves it", but the bottleneck may be something else, like waiting for I/O or coordinating across actors (mutexes, if concurrency is done the traditional way).
That's also part of the counterintuitive point that more concurrency brings more overhead, not necessarily faster execution (a topic widely discussed a few years ago around async concurrency and immutable structures).
No one thought juniors would benefit more than seniors. Still, some people said everything would become automatic and seniors would disappear, along with programming itself.
But that was just said by crappy influencers whose opinions don't matter, as they're impressed by examples that are the result of overfitting.
This might depend on where you live and the kind of business… last time I filed an Anmeldung online, I had to call after a week of waiting, and they literally told me that in person it would be resolved the same day. And it was.