> I like Rust's result-handling system, I don't think it works very well if you try to bring it to the entire ecosystem that already is standardized on error throwing.
I disagree; it's very useful even in languages that have exception-throwing conventions. It's good enough to be the return type of the Promise.allSettled API.
The problem is that when I don't have a result type, I end up approximating it anyway through other means. For a quick project I'd stick with exceptions, but depending on my codebase I usually use the Go-style (ok, err) tuple (it's usually clunkier in TS, though) or a Rust-style Result ok/err enum.
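As a rough illustration of what that approximation tends to look like, here's a minimal Rust-style Result sketch in TypeScript; the names (Result, ok, err, parseJson) are mine, not from any particular library:

```typescript
// A Rust-style Result modeled as a discriminated union.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// Wrap a throwing API at the boundary so the rest of the code
// can branch on a plain value instead of using try/catch everywhere.
function parseJson(raw: string): Result<unknown, Error> {
  try {
    return ok(JSON.parse(raw));
  } catch (e) {
    return err(e instanceof Error ? e : new Error(String(e)));
  }
}

const parsed = parseJson('{"a": 1}');
if (parsed.ok) {
  console.log(parsed.value); // { a: 1 }
} else {
  console.error(parsed.error.message);
}
```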
I have the same disagreement. TypeScript, with its structural and pseudo-dependent typing, somewhat functionally disposed language primitives (e.g. first-class functions as values, currying), standard library interfaces (filter, reduce, flatMap et al.), and ecosystem, makes propagating information using values extremely ergonomic.
Embracing a functional style in TypeScript is probably the most productive I've felt in any mainstream programming language. It's a shame that the language was defiled with try/catch, classes and other unnecessary cruft so third party libraries are still an annoying boundary you have to worry about, but oh well.
The language is so well-suited for this that you can even model side effects as values, do away with try/catch, if/else and mutation a la Haskell, if you want[1].
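For a flavor of what "side effects as values" can look like, here is a tiny hand-rolled sketch of a Task type; this is just the idea, not the API of whatever library [1] points to:

```typescript
// A side effect modeled as a value: a description of work that only
// runs when it is explicitly executed at the edge of the program.
type Task<A> = () => Promise<A>;

const delay =
  (ms: number): Task<void> =>
  () =>
    new Promise<void>((resolve) => setTimeout(resolve, ms));

// Combining effects yields another value; nothing has run yet.
const map =
  <A, B>(task: Task<A>, f: (a: A) => B): Task<B> =>
  () =>
    task().then(f);

const program: Task<string> = map(delay(100), () => "done");

// Only this call actually performs the effect.
program().then(console.log);
```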
I don't think it's specifically hard; it's more that it probably needed more plumbing in the language than the authors thought was worth the baggage, so they let the community solve it. Like the whole async runtime debates.
You have to chill with Rust. Just wrap your errors with anyhow and log them out. If you have a specific use case that relies on using that specific error, just handle it at the parent stack frame.
PLEASE DON'T DOWNVOTE ME TO HELL. THIS IS A DISCLAIMER: I AM JUST SHARING WHAT I'VE READ, I AM NOT CLAIMING THEM AS FACTS.
...ahem...
When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.
> When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.
And according to every ex-Amazonian I've met, the core of AWS is a bunch of Perl scripts glued together.
I think you know as well as I do that it very much does matter. Even if you have an army of engineers around to fix things when they break, things still break.
I think the point is that for Amazon it's their own code, and they pay full-time staff to be familiar with the codebase, make improvements, and fix bugs. OpenStack is a product. The people deploying it are expected to be knowledgeable about it as users / "system integrators" but not as developers. So when the abstraction leaks, and for OpenStack the pipe has all but burst, it becomes a mess. It's not expected that they'll be digging around in the internals when they have 5 other projects to work on.
The reason there were so many commercial distributions of OpenStack was that setting it up reliably end to end was nearly impossible for most mere mortals.
Companies like Metacloud or Mirantis made a ton of money with little more than OpenStack installers and a good out-of-the-box default config with some solid monitoring and management tooling.
CERN is the biggest scientific facility in the world, with a huge IT group and their own IXP. Most places are not like that.
Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
> Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.
I currently work in AI/ML HPC, and we use Proxmox for our non-compute infrastructure (LDAP, SMTP, SSH jump boxes). I used to work in cancer research with HPC, and we used OpenStack across several dozen hypervisors to run a lot of infra/service instances/VMs.
I think there are two things that determine which system should be looked at first: scale and (multi-)tenancy. Beyond one (maybe two) dozen hypervisors, I could really see scaling/management issues with Proxmox; I personally wouldn't want to do it (though I'm sure many have). Next, if you have a number of internal groups that need allocated/limited resource assignments, then OpenStack tenants are a good way to do this (especially if there are chargebacks, or just general tracking/accounting).
I'm happily running some Proxmox now, and wouldn't want to go past a dozen hypervisors or so. At least not in one cluster: that's partially what PDM 1.0 is probably about.
I have run OpenStack with many dozens of hypervisors (plus dedicated, non-hyperconverged Ceph servers) though.
Thank you for your work. I was in a position where I had to choose between MinIO and SeaweedFS, and though SeaweedFS was better in every way, the lack of an included dashboard or accessible UI was a huge factor for me back then. I don't expect or even want you to make any roadmap changes, but just wanted to let you know of a possible pain point.
I'm just as much of an avid LLM code-generation fan as you may be, but I do wonder about the practicality of spending time making projects anymore.
Why build them if others can just generate them too? Where is the value in making so many projects?
If the value is in who can sell it best to people who can't generate it, isn't it just a matter of time before someone else generates one and becomes better than you at selling it?
> Why build them if others can just generate them too? Where is the value in making so many projects?
No offence to anyone, but these generated projects are nothing ground-breaking. As soon as you venture outside the usual CRUD apps, into work where novelty and serious engineering are necessary, the value proposition of LLMs drops considerably.
For example, I'm exploring a novel design for a microkernel, and I have no need for machine-generated boilerplate. Most of the hard work is not implementing yet another JSON API; it's thinking very hard with pen and paper about something few have thought about before, that even fewer LLMs have been trained on, and that they have no intelligence to ponder.
To be fair, even for the dumbest side projects, like the notes app I wrote for myself, there is still a joy in doing things by hand, because I do not care about shipping early and getting VC money.
Weird, because I've created a webcam app that does segmentation so you can delete the background and put a new background in. I mean, I suppose that's not groundbreaking, but it's not just reading and writing to a database.
I've just added an ATA-over-Ethernet server in Rust; I thought of doing it in the car on the way home, and an hour later I had a working version.
I'm typing this comment using a voice-to-text system I built; admittedly it uses Whisper as the transcriber, but I've turned it into a personal assistant.
I make stuff every day that I just wouldn't bother to make if I had to do it myself. And on top of that, it does configuration. I've had it build full WireGuard configs taking on our pay addresses so that different destinations cause different routing. I don't know how to do that off the top of my head. I'm not going to spend weeks trying to find out how it works. It took me an evening of prompting.
> I make stuff every day I just wouldn't bother to make if I had to do it myself
> I'm not going to spend weeks trying to find out how it works.
Then what is the point? For some of us, programming is an art form. Creativity is an art form and an ideal to strive towards. Why have a machine create something we wouldn't care about?
The only result is a devaluation to zero of actual effort and passion, whose only beneficiaries are those that only care about creating more “product”. Sure, you can pump out products with little effort now, all the while making a few ultrabillionaires richer. Good for you, I guess.
The value is that we need a lot more software, and now that building software has gotten so much less time-consuming, you can sell it at a different price point to people who could not or would not have paid for it previously.
We don’t need more software, we need the right software implemented better. That’s not something LLMs can possibly give us because they’re fucking pachinko machines.
Here’s a hint: Nobody should ever write a CRUD app, because nobody should ever have to write a CRUD app; that’s something that can be generated fully and deterministically (i.e. by a set of locally-executable heuristics, not a goddamn ocean-boiling LLM) from a sufficiently detailed model of the data involved.
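As a toy sketch of that claim, here's how a CRUD layer can be derived mechanically from a model of the data; the names and shapes are invented for illustration, not any real framework's API:

```typescript
// Given a model of the data, derive the whole CRUD layer from it.
type FieldKind = "string" | "number" | "boolean";

interface ModelSpec {
  name: string;
  fields: Record<string, FieldKind>;
}

type Row = { id: number } & Record<string, unknown>;

function makeCrud(spec: ModelSpec) {
  const rows = new Map<number, Row>();
  let nextId = 1;

  // Validation is derived from the field spec rather than hand-written.
  const validate = (data: Record<string, unknown>) => {
    for (const [field, kind] of Object.entries(spec.fields)) {
      if (typeof data[field] !== kind) {
        throw new Error(`${spec.name}.${field} must be a ${kind}`);
      }
    }
  };

  return {
    create(data: Record<string, unknown>): Row {
      validate(data);
      const row = { ...data, id: nextId++ } as Row;
      rows.set(row.id, row);
      return row;
    },
    read: (id: number) => rows.get(id),
    update(id: number, data: Record<string, unknown>): Row | undefined {
      const existing = rows.get(id);
      if (!existing) return undefined;
      const updated = { ...existing, ...data, id } as Row;
      rows.set(id, updated);
      return updated;
    },
    remove: (id: number) => rows.delete(id),
    list: () => [...rows.values()],
  };
}

// Every "table" gets the same generated interface for free.
const users = makeCrud({ name: "user", fields: { email: "string", active: "boolean" } });
users.create({ email: "a@example.com", active: true });
```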
In the 1970s you could wire up an OS-level forms library to your database schema and then serve literally thousands of users from a system less powerful than the CPU in a modern peripheral or storage controller. And in less RAM, too.
People need to take a look at what was done before in order to truly have a proper degree of shame about how things are being done now.
Most CRUD software development is not really about the CRUD part. And for most frameworks, you can find packages that generate the UI and the glue code that ties it to the database.
When you're doing CRUD, you're spending most of the time on the extra constraints designed by product. It's dealing with the CRUD events, the IAM system, the notification system, ...
> That’s not something LLMs can possibly give us because they’re fucking pachinko machines.
I mostly agree, but I do find them useful for fuzzing out tests and finding issues with implementations. I have moved away from larger architectural sketches using LLMs because over larger time scales I no longer find they actually save time, but I do think they're useful for finding ways to improve correctness and safety in code.
It isn't the exciting and magical thing AI platforms want people to think it is, and it isn't indispensable, but I like having it handy sometimes.
The key is that it still requires an operator who knows something is missing, or that there are still improvements to be made, and how to suss them out. This is far less likely to occur in the hands of people who don't know, in which case I agree that it's essentially a pachinko machine.
I’m with you. Anyone writing in anything higher level than assembly, with anything less than the optimization work done by the demo scene, should feel great shame.
Down with force-multiplying abstractions! Down with intermediate languages and CPU agnostic binaries! Down with libraries!
Idk, I built a production system and ensured all data transfers, client to server and server to client, were protobuf, and it was a pain.
Technically, it sounds really good, but the actual act of managing it is hell. That, or I need a lot of practice to use them; at that point, shouldn't I just use JSON and get on with my life?
I think the it-just-works nature of JSON and its human readability for debugging cannot be overstated. Most people and projects are content to just use JSON even if protos offer some advantages, if only to save time and resources.
Whether the team saves time in the long run when using protos is a question of its own.
There are plenty of it-doesn't-just-work things about JSON though. Sending binary data or 64-bit integers is a huge pain. Or maps with non-string keys, or ordered maps. Plus JSON doesn't scale well with message size because it doesn't use TLV, so parsing any part of a message requires parsing all of it.
It's not some perfect format.
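The 64-bit integer problem in particular is easy to hit; a quick TypeScript illustration (the values here are just examples):

```typescript
// JSON numbers are IEEE-754 doubles, so 64-bit integers silently lose
// precision once they pass Number.MAX_SAFE_INTEGER (2^53 - 1).
const id = 9007199254740993n; // 2^53 + 1 as a BigInt

// Round-tripping through a plain number corrupts the value:
console.log(JSON.parse(JSON.stringify(Number(id)))); // 9007199254740992

// And BigInt isn't serializable at all without custom handling:
try {
  JSON.stringify({ id });
} catch (e) {
  console.log((e as Error).message); // e.g. "Do not know how to serialize a BigInt"
}

// The usual workaround is shipping 64-bit values as strings, which is
// exactly the sort of per-field convention a binary format avoids.
console.log(JSON.stringify({ id: id.toString() })); // {"id":"9007199254740993"}
```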
That said, I'm disappointed with Protobuf too. Especially the handling of optional/default fields. I believe they did eventually add an `optional` tag so you can at least distinguish missing vs default field values.
The lack of required fields makes it very annoying to work with, though. And no, there's no issue with required fields in general. The only reason it doesn't have them is that the implementation in Protobuf 2 caused issues and Google needed a smooth transition away from that.
If you're starting a greenfield project it seems silly to opt in to Google's tech debt.
If you're thinking "but schema evolution!!", well, yeah: all you need is a way to have versioned structs, and then you can mark fields as required for ranges of versions. So you can still remove required fields without breaking backwards compatibility.
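A rough sketch of that idea in TypeScript terms, using a discriminated union as the "versioned struct"; the message shapes are invented for illustration, not any real schema language:

```typescript
// "Required fields per version range" modeled as a discriminated union.
interface UserV1 {
  version: 1;
  name: string; // required from v1 onward
}

interface UserV2 {
  version: 2;
  name: string;  // still required
  email: string; // required starting in v2
  // A field dropped in v2 simply doesn't appear here, so removing a
  // formerly-required field doesn't break handling of old messages.
}

type UserMessage = UserV1 | UserV2;

function displayName(msg: UserMessage): string {
  // The compiler enforces exactly which fields exist for which version.
  switch (msg.version) {
    case 1:
      return msg.name;
    case 2:
      return `${msg.name} <${msg.email}>`;
    default: {
      // Exhaustiveness check: unreachable while every version is handled.
      const exhaustive: never = msg;
      return exhaustive;
    }
  }
}

console.log(displayName({ version: 2, name: "Ada", email: "ada@example.com" }));
```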
It supports 64-bit ints and a raw byte datatype, offers zero-copy parsing, does not require a schema, and can be converted to JSON for readability while retaining all field names.
What issues did you have? In my experience, most things that could be called painful with protobuf would be bigger pains with things like JSON.
Making changes to messages in a backwards-compatible way can be annoying, but JSON allowing you to shoot yourself in the foot will take more time and effort to fix when it's corrupting data in prod than protobuf giving you a compile error would.
Well, at the bare minimum, setting up proto files and knowing where they live across many projects.
If they live in their own project, making a single project buildable with a git clone gets progressively more complex.
You now need submodules to pull in your protobuf definitions.
You now also need the protobuf toolchain to be available in the environment you just cloned into. If that environment has the wrong version, the build fails; it starts to get frustrating pretty fast.
Compare that to JSON: yes, I don't get versioning and a bunch of other fancy features, but... I get to finish my work, build, and test pretty quickly.
Man, I really feel you; being a solo founder is tough. If it's possible for you to relocate temporarily, Antler has an accelerator-esque program. They'll pay you a tiny stipend to move to Austin or NYC, depending on which location you get into, for the duration of the program (5-6 weeks). You'll have like-minded people around you, giving you an opportunity to maybe find a co-founder and get some motivation.
At the end you'll get a chance to pitch and get $100k+ (varies by location) in seed funding, and then another shot at getting funding.
Tbh their deal is doodoo compared to YC but it's a lot less competitive and you'll be out of the rut you're in right now.
As for the MVP taking time... I think it's perfectly fine, probably better even. The bar to impress is very high now, especially since LLMs have made it pretty easy to add polish. I completely understand that feeling where you've already whittled your idea down to its bones but you still can't get it done. That happens; I've faced that same problem numerous times. Whenever we thought we'd done enough, we'd go out to testers and they'd point out the exact rough edges we'd intentionally tried to ignore to ship faster. Most of the time those edges put people off completely.
All you need to know is your target audience. If it's enterprise, having that added polish will make or break your deal with larger companies, specifically when they're customer 1-2. What won't matter is having 1-2 more metrics in a dashboard or maybe a customizable layout. But let's say your report generation is a CSV file that's generated client-side, with nothing asynchronous or email-based; that would raise some eyebrows about the expected quality of service. This isn't to say you won't have understanding and patient customers; however, you'll always have to earn it first.
Get the basics right, make sure it's all functioning well together, and hopefully it'll all go well. The hardest part is always BD; that's where you'll truly start to feel like Will Smith walking around with your bone-density scanner.
This madman had the courage to present Boa, a Rust project, at JSConf. The project had its spotlight taken by Bun and Deno. I also think the project was progressing more slowly than people were expecting.
Well the first two are runtimes built on top of JavaScriptCore and V8, respectively. So we're definitely in a different space.
QuickJS/QuickJS-NG might be a better comparison, but I think they are limited, or at least selective, in specification conformance in favor of remaining in a single file and fast. For instance, I'm not entirely sure whether they will support Temporal once it reaches Stage 4, because of the size of the feature, and I don't think they support Intl. But I also can't speak for QuickJS.