Hacker News | adriancooney's comments

You might find your answer with `zx`: https://google.github.io/zx/



Zod is installed in nearly every project I use. It's an essential part of my toolkit. I adore this library. It's near perfect as-is, and these additions make it even better. Thanks for all the hard work.


I used to use Zod until I realised it’s rather big (for many projects this isn’t an issue at all, but for some it is). Now I use Valibot which is basically the same thing but nice and small. I do slightly prefer Zod’s API and documentation, though.

Edit to add: aha, now I read further in the announcement, looks like @zod/mini may catch up with valibot -- it uses the same function-based design at least, so unused code can be stripped out.
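The tree-shaking benefit of a function-based design can be sketched in a few lines. This toy validator is not the real Zod or Valibot API; it just illustrates why standalone exported functions can be stripped by a bundler while a single class with chained methods cannot:

```javascript
// Each validator is its own function, so a bundler can drop any that
// are never imported. A chained API (z.string().min(3)) hangs every
// method off one object, which keeps all of them in the bundle.
function string() {
  return (value) => typeof value === 'string';
}

function minLength(n) {
  return (value) => typeof value === 'string' && value.length >= n;
}

function pipe(...checks) {
  return (value) => checks.every((check) => check(value));
}

const username = pipe(string(), minLength(3));
console.log(username('bob')); // true
console.log(username('ab'));  // false
```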


Though exciting, it looks like there's still room for shrinkage: the linked article puts a bare-minimum gzipped @zod/mini at 1.88 kB, while Valibot is at 0.71 kB [1].

[1] https://github.com/anatoo/zod-vs-valibot


I know this is a port, but I really hope the team builds in performance debugging tools from the outset. Being able to understand _why_ a build or typecheck is taking so long is sorely missing from today's TypeScript.


Yes, 100% agree. We've spent so much time chasing down what makes our build slow. Obviously that is less important now, but hopefully they've laid the foundation for when our code base grows another 10x.


Please do.


Agreed. I first saw it at Stripe (along with prefixing every ID). Whoever at Stripe (or wherever it was invented) deserves a good pat on the back. Its adoption has been huge for DX generally.


We can always work backwards, regardless of AI.


Sure, but with this new predictive model we will have better predictions to work backwards from.

OC was saying (I’m going to paraphrase) that this is the death of understanding in meteorology, but it’s not because we can always work backwards from accurate predictions.


Or we could wait 15 days and work backwards from what the weather actually turned out to be.

I guess there could be some value in analyzing what inputs have the most and least influence on the AI predictions.
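One crude way to probe input influence is a sensitivity analysis: nudge each input slightly and see how much the prediction moves. This toy sketch uses a made-up stand-in "model" (the weights are invented for illustration and have nothing to do with any real weather model):

```javascript
// Stand-in "model": a linear function with made-up weights.
function model([temp, pressure, humidity]) {
  return 0.7 * temp + 0.2 * pressure + 0.1 * humidity;
}

// Finite-difference sensitivity of the prediction to each input.
function sensitivities(inputs, eps = 1e-3) {
  const base = model(inputs);
  return inputs.map((x, i) => {
    const nudged = inputs.slice();
    nudged[i] = x + eps;
    return (model(nudged) - base) / eps;
  });
}

console.log(sensitivities([20, 1013, 0.5])); // ≈ [0.7, 0.2, 0.1]
```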


Comparing the difference between correct predictions and incorrect predictions, especially with a high accuracy predictive model, could give insight into both how statistical models work and how weather works.


Cool. I'm a bit unsettled that my camera's green dot didn't turn on for it though.


Lex Fridman has a long interview [1] with Marc Raibert, CEO of Boston Dynamics, which is really excellent. It might partially or wholly answer your question.

[1]: https://www.youtube.com/watch?v=5VnbBCm_ZyQ


This is fantastic and exactly what I’ve been looking for. Thank you.


With the pace of AI, that (large) investment into a custom toolchain could be obsolete in a year. It feels like ChatGPT is going to gobble up all AI applications. Data will be the only differentiator.


Not unless your toolchain is highly specialized.

There's not even a good way to benchmark language models at the moment.

