You could have a look at Avro (https://avro.apache.org/) and Yardl (https://microsoft.github.io/yardl/).


> When judging which alternative will succeed, lower perceived human cost beats lower machine cost every time.

Yup, this is it. No architect considers using protos unless there is an explicit need for them, and that need is most often gRPC.

Unless an alternative allows zero-cost startup and debugging by just doing `console.log()`, it won't replace JSON any time soon.

Edit: Just for context, I'm not the author. I found the article interesting and wanted to share.


Print debugging is fine and all, but I find it pays massive dividends to learn how to use a debugger and actually inspect the values in scope rather than guessing which ones are worth printing. Print debugging is also useless when you need to debug a currently running system and can't change the code.

And since you need to translate it anyway, there's not much benefit in my mind to JSON over something like msgpack, which is more compact and self-describing; you just need a decoder to convert to JSON when you display it.
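
A minimal sketch of that decode-for-display step (assuming the @msgpack/msgpack npm package and its encode/decode API; swap in whatever decoder you actually use):

    import { encode, decode } from "@msgpack/msgpack";

    // Compact, self-describing binary on the wire...
    const wireBytes = encode({ user: "ada", id: 42 });

    // ...decoded back to a plain object with no schema needed,
    // then pretty-printed as JSON for human eyes.
    const value = decode(wireBytes);
    console.log(JSON.stringify(value, null, 2));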


> rather than guessing

I'm not guessing. I'm using my knowledge of the program and the error together to decide what to print. I never find the process laborious and I almost always get the right set of variables in the first debug run.

The only time I use a debugger is when working on someone else's code.


That's just an educated guess. You can also do it with a debugger.


The debugger is fine, but it's not the key to some secret skill level that you make it out to be. https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/


I didn't say it's some arcane skill, just that it's a useful one. I would also agree that _reading the code_ to find a bug is the most useful debugging tool. Debuggers are second. Print debugging third.

And that lines up with some of the appeals to authority in that link, both the good ones and the bad ones. (Edited to be less toxic.)


Even though I'm using the second person, I don't actually care to convince you in particular. You sound pretty set in your ways, and that's perfectly fine. But there are other readers on HN who are already pretty efficient at log debugging, or who are developing the required analytical skills, and for those people I wanted to debunk the unsubstantiated and possibly misleading claim in your comments that using a debugger is somehow superior.

The logger vs. debugger debate is decades old, with no argument suggesting that the latter is a clear winner; on the contrary. An earlier comment explained the log-debugging process: thinking carefully about the code and choosing good spots at which to log the data structure under analysis. The link I posted confirms it as a valid methodology. Overall, code analysis is the general debugging skill you want to sharpen. If you have it and decide to work with a debugger, the session will look like log debugging, which is why many skilled programmers may choose to revert to just logging after a while. Usage of a debugger then tends to be reserved for situations where the code itself is escaping you (e.g. bad code, intricate code, foreign code, etc.).

If you're working on your own software and feel that you often need a debugger, maybe your analytical skills are atrophying and you should work on thinking more carefully about the code.


Debuggers are great when you can use them. Where I work (financial/insurance) we are not allowed to debug on production servers. I would guess that's true in a lot of high security environments.

So the skill of knowing how to "println" debug is still very useful.


I think the it-just-works nature and human readability of JSON for debugging cannot be overstated. Most people and projects are content to just use JSON even if protos offer some advantages, if only to save time and resources.

Whether a team saves time in the long run by using protos is a question in its own right.


There are plenty of it-doesn't-just-work things about JSON, though. Sending binary data or 64-bit integers is a huge pain. So are maps with non-string keys, or ordered maps. Plus, JSON doesn't scale well with message size: because it doesn't use TLV (type-length-value) encoding, parsing any part of a message requires parsing all of it.
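
To make the 64-bit pain concrete (plain TypeScript/Node; the ship-it-as-a-string workaround at the end is a common convention, not part of the JSON spec):

    // JSON numbers become IEEE-754 doubles, which hold only 53 bits of
    // integer precision, so large int64 values silently corrupt.
    const raw = '{"id": 9007199254740993}';   // 2^53 + 1
    console.log(JSON.parse(raw).id);          // 9007199254740992, off by one

    // Common workaround: ship the value as a string and convert explicitly.
    const safe = '{"id": "9007199254740993"}';
    console.log(BigInt(JSON.parse(safe).id)); // 9007199254740993n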

It's not some perfect format.

That said, I'm disappointed with Protobuf too, especially its handling of optional/default fields. I believe they did eventually add an `optional` keyword so you can at least distinguish missing from default field values.

The lack of required fields makes it very annoying to work with, though. And no, there's no issue with required fields in general. The only reason Protobuf doesn't have them is that the implementation in Protobuf 2 caused issues and Google needed a smooth transition away from it.

If you're starting a greenfield project it seems silly to opt in to Google's tech debt.

If you're thinking "but schema evolution!", well, yeah: all you need is a way to have versioned structs, so you can mark fields as required for ranges of versions. That way you can still remove required fields without breaking backwards compatibility.
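
A minimal sketch of that idea in TypeScript (every name here, FieldSpec, validate, and so on, is hypothetical, not from any real library):

    // Each field carries the version range in which it is required.
    interface FieldSpec {
      name: string;
      requiredFrom?: number;  // first version where the field is required
      requiredUntil?: number; // last version where it is required (inclusive)
    }

    const userSchema: FieldSpec[] = [
      { name: "id", requiredFrom: 1 },                    // required forever
      { name: "email", requiredFrom: 2 },                 // added in v2
      { name: "fax", requiredFrom: 1, requiredUntil: 3 }, // retired after v3
    ];

    // A message is valid at a given version if every field required at
    // that version is present; retired fields stay decodable in old data.
    function validate(msg: Record<string, unknown>, version: number): string[] {
      return userSchema
        .filter(f =>
          f.requiredFrom !== undefined &&
          version >= f.requiredFrom &&
          (f.requiredUntil === undefined || version <= f.requiredUntil) &&
          !(f.name in msg))
        .map(f => `missing required field: ${f.name}`);
    }

    console.log(validate({ id: 7, fax: "555" }, 2));
    // ["missing required field: email"]
    console.log(validate({ id: 7, email: "a@b.c" }, 4));
    // [] -- fax is no longer required in v4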


Check out Lite³, a schemaless binary format: https://github.com/fastserial/lite3

It supports 64-bit ints and a raw-byte datatype, offers zero-copy parsing, does not require a schema, and can be converted to JSON for readability while retaining all field names.


> it codes fully functioning Windows and Apple OS clones, 3D design software, Nintendo emulators, and productivity suites from single prompts

> As is so often the case with AI, that is exciting and frightening all at once

> we need to extrapolate from this small example to think more broadly: if this holds the models are about to make similar leaps in any field where visual precision and skilled reasoning must work together

> this will be a big deal when it’s released

> What appears to be happening here is a form of emergent, implicit reasoning, the spontaneous combination of perception, memory, and logic inside a statistical model

> model’s ability to make a correct, contextually grounded inference that requires several layers of symbolic reasoning suggests that something new may be happening inside these systems—an emergent form of abstract reasoning that arises not from explicit programming but from scale and complexity itself

Just another post with extremely hyperbolic wording to hype up another model release. How many times have we seen this kind of unrealistic build-up in the past couple of years?

