
The correct meaning has always been 1024 bytes where I’m from. Then I worked with more people like you.

Now, it depends.


The best wolves I know define their own process to find out if the idea is good and ignore all externally imposed processes that get in their way.

This. And the reason this is relevant today is that an excellent build and test system multiplies the effectiveness of coding agents, which in turn multiply the effectiveness of next-level engineers.

There aren’t many other things I can think of that have such a huge positive impact on the productivity of an engineering team.


The wolf and Rands are next level.

> Good work does not speak for itself, it needs a shitload of people to speak for it.

Good work does speak for itself. The work speaks directly to other like-minded engineers who "get it" and spread the word.

Nobody in authority needs to do any selling or mandating for the kind of work and the kind of person Rands is talking about here.


Have you never seen a good project or idea, that every engineer who saw it thought was awesome, die because it wasn't sold well to authority?

Authority may not have to sell or mandate for that person to get their work done but supporting good ideas and effective people is what authority should be doing.


No I haven’t. I’ve also deliberately avoided the types of places where that happens throughout my entire career.

Those types of ideas and projects have no need for the approval of authority. They are their own authority.

If a person in authority ends such a project or idea then the engineering culture at that organization is so completely broken the best path is to leave.


Understood, I think we are coming from different environments.

Edit: I wish I had access to ones more like yours.


On the other hand the incorrect values may drive architects to think more critically about what their tools are producing.

On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.

The ecosystem. Microservices are the most efficient way to integrate CNCF projects deeply with your platform at any size.

Edit: Genuinely curious about the downvotes here. The concept directly maps to all the reasons the article author cited.


I'm not following. What does CNCF have to do with it?


Hmm. Could you maybe elaborate on why integrating "CNCF projects deeply with your platform at any size" is so desirable?

The French railway service?

I know at least one of the companies behind a coding agent we've all heard of has called in human experts to clean up their vibe-coded IaC mess created in the last year.

TIL serializing a protobuf is only 5 times slower than copying memory, which is way faster than I thought it’d be. Impressive given all the other nice things protobuf offers to development teams.

I guess that number can be as good or as bad as you want with the right nesting.

Protobuf is likely really close to optimally fast for what it is designed to be, and the flaws and performance losses left are most likely all in the design space, which is why alternatives are a dime a dozen.


Now check this out:

> Protobuf performs up to 6 times faster than JSON. - https://auth0.com/blog/beating-json-performance-with-protobu... (2017)

That's 30x faster just by switching to a zero-copy data format that's suitable for both in-memory use and the network. JSON services spend 20-90% of their compute on serde. A zero-copy data format would essentially eliminate it.
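
As a rough sketch of what "zero copy" buys you, here is a comparison in Go using only the standard library; the fixed 12-byte wire layout (an 8-byte id followed by a 4-byte price, little-endian) is made up purely for illustration, not a real format. The JSON path has to tokenize, allocate, and copy before a field is usable; the fixed-layout path just reads the field in place out of the buffer it already received.

    package main

    import (
        "encoding/binary"
        "encoding/json"
        "fmt"
    )

    // Order is the JSON-path representation: every request pays for
    // tokenizing, allocation, and copying before these fields exist.
    type Order struct {
        ID    uint64 `json:"id"`
        Price uint32 `json:"price"`
    }

    func main() {
        // JSON path: parse + allocate + copy.
        raw := []byte(`{"id": 42, "price": 1999}`)
        var o Order
        _ = json.Unmarshal(raw, &o)
        fmt.Println(o.ID, o.Price)

        // Hypothetical zero-copy path: with a fixed binary layout, the
        // reader just reinterprets bytes in the buffer it already holds;
        // there is no parse step and nothing is copied out.
        wire := make([]byte, 12)
        binary.LittleEndian.PutUint64(wire[0:8], 42)
        binary.LittleEndian.PutUint32(wire[8:12], 1999)

        id := binary.LittleEndian.Uint64(wire[0:8])
        price := binary.LittleEndian.Uint32(wire[8:12])
        fmt.Println(id, price)
    }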


I wouldn't hold onto that number as any kind of fixed usable constant since the reality will depend entirely on things like cache locality and concurrency, and the memory bandwidth of the machine you're running on.

Going around doing this kind of pointless thing because "it's only 5x slower" is a bad assumption to make.


Serializing a protobuf can be significantly faster than memcpy, depending. If you have a giant vector of small numbers represented with wide types (4-8 bytes in the machine) then the cost of copying them as variable-length symbols can be less.
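
To make that concrete, here is a minimal varint sketch in Go (generic base-128 encoding in the protobuf style, hand-rolled rather than the official protobuf library): a uint64 holding a small value occupies 8 bytes in memory but only 1-2 bytes on the wire, so the serializer can end up writing less data than a memcpy of the raw in-memory representation would.

    package main

    import "fmt"

    // putUvarint appends the base-128 varint encoding of v to buf.
    // Each byte carries 7 bits of payload; the high bit means "more bytes follow".
    func putUvarint(buf []byte, v uint64) []byte {
        for v >= 0x80 {
            buf = append(buf, byte(v)|0x80)
            v >>= 7
        }
        return append(buf, byte(v))
    }

    func main() {
        var n uint64 = 300 // 8 bytes wide in memory

        // 300 encodes as two bytes (0xac 0x02), so serializing it moves
        // a quarter of the data a straight 8-byte copy would.
        encoded := putUvarint(nil, n)
        fmt.Printf("in-memory: 8 bytes, varint: %d bytes (% x)\n", len(encoded), encoded)
    }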

5x is pretty slow, honestly. Imagine anything happening 5x as slowly as you'd expect it to. I mean, for a recent project I had to inline Rust structs rather than parse JSON for specific fields, and that definitely sped it up.

That's actually crazy fast.

How much are you staking on that bet?

Well, I spent a good part of my career reverse engineering network protocols for the purpose of developing exploits against closed source software, so I'm pretty sure I could do this quickly. Not that it matters unless you're going to pay me.

So you are basically overqualified to tell other people how to do it, especially with the payment part.

What are you even trying to say? I suppose I'll clarify for you: Yes, I'm confident I could have identified the cause of the mysterious packets quickly. No, I'm not going to go through the motions because I have no particular inclination toward the work outside of banter on the internet. And what's more, it would be contrived since the answer has already been shared.

I think the point they're making is that "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance" is neither surprising nor interesting.

That'd be like saying "I, an emergency room doctor, do not need AI assistance to interpret an EKG"

Consider that your expertise is atypical.


Sure, but that is aside from my original point. If somebody:

a) Has the knowledge to run tcpdump or similar from the command line

b) Has the ambition to document and publish their effort on the internet

c) Has the ability to identify and patch the target behaviors in code

I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently, and would have learned more along the way. Forgive me for being so critical, but the LLM use here simply comes off as lazy. And not lazy in a good, efficiency-amplifying way, but lazy in a sloppy way. Ultimately this person achieved their goal, but this is a pattern I am seeing on a daily basis at this point, and I worry that heavy LLM users will see their skill sets stagnate and likely atrophy.


>I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently

Hard disagree. Asking an LLM is 1000% more efficient than reading docs, lots of which are poorly written and thus dense and time-consuming to wade through.


The problem is hallucinations. It's incredibly frustrating to have an LLM describe an API or piece of functionality that fulfills all requirements perfectly, only to find it was a hallucination. They are impressive sometimes though. Recently I had an issue with a regression in some of our test capabilities after a pivot to Microsoft Orleans. After trying everything I could think of, I asked Sonnet 4.5, and it came up with a solution to a problem I could not even find described on the internet, let alone solved. That was quite impressive, but I almost gave up on it because it hallucinated wildly before and after the workable solution.

The same stuff happens when summarizing documentation. In that regard, I would say that, at best, modern LLMs are only good for finding an entrypoint into the docs.


While my reply was snarky, I am prepared to take a reasonable bet with a reasonable test case. And pay out.

Why I think I'd win the bet is that I'm proficient with tcpdump and Wireshark, and I'm reasonably confident that running to a frontier model and dealing with any hallucinations is more efficient and faster than recalling the incantations and parsing the output myself.


> I argue that, had they not run to an LLM, they likely would have solved this problem more efficiently

This is just expert blindness, and objectively, measurably wrong.


Oh come on, the fact that the author was able to pull this off is surely indicative of some expertise. If the story had started off with, "I asked the LLM how to capture network traffic," then yeah, what I said would not be applicable. But that's not how this was presented. tcpdump was used, profiling tools were mentioned, etc. It is not a stretch to expect somebody who develops networked applications to know a thing or two about protocol analysis.

The specific point I was trying to make was along the lines of, "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance. And yet, I’d probably lose a bet on a race against someone like me using an LLM."

Sigh.

I'm still waiting for a systems engineering tool that can log every layer and handle SSL across the whole pipe.

I'm covering everything from strace and ltrace on the machine, to file reads, IO profiling, and bandwidth profiling. Like, the whole thing, from beginning to end.

There's no tool that does that.

Hell, I can't even see good network traces within a single Linux app. The closest you'll find is https://github.com/mozillazg/ptcpdump

But especially with Firefox, good luck.


Real talk though, how much would such a tool be worth to you? Would you pay, say, $3,000/license/year for it? Or, after someone puts in the work to develop it, would you wait for someone else to duct-tape together something approximately similar using regexps, open source but 10% as good, and then not pay for the good proprietary tool because we're all a bunch of cheap bastards?

We have only ourselves to blame that there aren't better tools (publicly) available. If I hypothetically (really!) had such a tool, it would be an advantage over every other SRE out there who could use it. Trying to sell it directly comes with more headaches than money, selling it to corporations has different headaches, open-sourcing it doesn't pay the bills, never mind the burnout (people don't donate for shit). So the way to do it is make a pitch deck, get VC funding so you're able to pay rent until it gets acquired by Oracle/RedHat/IBM (aka the greatest hits for Linux tool acquisition), or try to charge money for it when you run out of VC funding, leading to accusations of "rug pull" and the development of alternatives (see also: docker) just to spite you.

In the best case you sell like Hashimoto and your bank account has two (three!) commas, but in the worst case you don't make rent and go homeless when instead you could've gone to a FAANG and made $250k/yr, rather than getting paid $50k/yr as the founder while burning VC cash and eating ramen that you have to make yourself.

I agree, that would be an awesome tool! Best case scenario: a company pays for that tool to be developed internally, the company goes under, it gets sold as an asset, whoever buys it forms a company and tries to sell it directly, and then that company goes under too, but that buyer finally open sources it because they don't want it to slip into obscurity. It falls into obscurity anyway because it only works on Linux 5.x kernels and can't easily be ported to the 6.x series that we're on now.


Terrible for those laid off but perhaps not for Evernote customers if it means there isn’t unwelcome feature creep.

Been a paying Evernote customer since its launch. I unsubscribed at the beginning of 2025 after 7/8 years of shitty releases, old bugs going unfixed, and useless new features.

I don't use Evernote very often, but I have a bunch of stuff stored in there and use it basically in a read-only mode. For a long time I was able to get the $36/year plan, which I felt pretty good about. It was a great app and service which I didn't use very much, so that felt like a fair price and I felt good about supporting them at that level. Basically every time I opened Evernote, I was paying $2.

But then the price tripled and for me, it's too much. I'll pay $2 per session, but not $5.

I remember their CEO (Phil Libin, I think) on their podcast explaining how they were building a 100-year company. I really wanted to believe that.

I use Obsidian now and like it, but it feels like they are going down the same path. They keep adding features that don't really fit the original editor-for-a-folder-of-markdown-files. I wish they would stop.

It's a bummer but the feature treadmill seems inescapable. Bending Spoons will probably be able to buy Obsidian for a very nice price in a few years and the Obsidian founders will do very well.


If everyone gets salaries and equity is paid for then everyone's done great. And then we can build another one, or an open source equivalent once all the money's been spent researching useful features, and then we're done.

I wouldn't classify a small number of people with equity scoring big and millions of users losing out as everyone doing great.

The users will have done worse had the company simply closed, though. I think that's generally the alternative.

Just unmitigated bug creep and software rot.

It's worse. When a company like this is "mature", they don't try to appeal to new users. They instead squeeze what they can out of the existing user base, because that user base is probably already dying off. This isn't about attaining a steady-state business, it's about seeing how much of the toothpaste you can still squeeze out of the bottle before it crusts up.

This practice is derogatorily called "vulture capitalism" for a reason. I hope the remaining engineers are either lining up for retirement or networking around for their next gig.

