A lot of pride is wrapped up in the craft of writing software. If that goes away (I don't think it will), it would leave a lot of people wondering how they spent all their time.

(or something like that. Obviously I'm too well adjusted to have these existential worries)


I had my first interview last week where I finally saw this in the wild. It was a student applying for an internship. It was the strangest interview. They had excellent textbook knowledge. They could tell you the space and time complexities of any data structure, but they couldn't explain anything about code they'd written or how it worked. After many painful and confusing minutes of trying to get them to explain, like, literally anything about how this thing on their resume worked, they finally shrugged and said that "GenAI did most of it."

It was a bizarre disconnect having someone be both highly educated and yet crippled by not doing.


Sounds a little bit like the stories from Feynman, e.g.: https://enlightenedidiot.net/random/feynman-on-brazilian-edu...

The students had memorized everything, but understood nothing. Add in access to generative AI, and you have the situation that you had with your interview.

It's a good reminder that what we really do, as programmers or software engineers or whatever you wanna call it, is understand how computers and computations work.


There's a quote I love from Feynman:

  > The first principle is that you must not fool yourself, and you are the easiest person to fool.
I have no doubt he'd be repeating it loudly now, given that we live in a time when we've developed machines that are optimized to fool us.

It's probably also worth reading Feynman's Cargo Cult Science: https://sites.cs.ucsb.edu/~ravenben/cargocult.html


This is the kind of interaction that makes me think that there are only two possible futures:

Star Trek or Idiocracy.


Hmmm, I think we're more likely to face an Idiocracy outcome. We need more Geordi La Forges out there, but we've got a lot of Fritos out here vibe coding the next Carl's Jr. locating app instead.

We would be lucky to have Idiocracy. President Camacho had a huge problem, and he found the smartest person in the country and got him working on it. If only we could do that.

Star Trek illustrated the issue nicely in the scene where Scotty, who we should remember is an engineer, tries to talk to a computer mouse in the 20th century: https://www.youtube.com/watch?v=hShY6xZWVGE

Except that falls apart 2 seconds later when Scotty shocks the 20th-century engineers by being blazing fast with a keyboard.

Lots of theory but no practice.

More like using a calculator but not being able to explain how to do the calculation by hand. A probabilistic calculator which is sometimes wrong at that. The "lots of theory but no practice" has always been true for a majority of graduates in my experience.

Surely, new grads are light on experience (particularly relevant experience), but they should have student projects and whatnot that they can explain, particularly for coding. Hardware projects are rarer simply because parts cost money and schools have limited budgets, but software has far fewer demands.

This is exactly the end state of hiring via Leetcode.

Makes me wonder if the hardware engineers look at software engineers and shrug, “they don’t really know how their software really works.”

Makes me wonder if C programmers look at JS programmers and shrug, “they don’t understand what their programs are actually doing.”

I’m not trying to be disingenuous, but I also don’t see a fundamental difference here. AI lets programmers express intent at a higher level of abstraction than ever before. So high, apparently, that it becomes debatable whether it is programming at all, or whether it takes any skill, or requires education or engineering knowledge any longer.


Wait, so they could, say, write out a linked list or a bubble sort, but not understand what it was doing? Like no mental model of memory, registers, or intuition for execution order, or even something conceptual like a graph walk? Like just "zero" on the conceptual front, but they could reproduce data structures, some algorithm for accessing or traversing them, and give rote O notation answers about how long execution takes?

Just checking I have that right... is that what you meant?

I think that's what you were implying, but I just want to check I have that right. If so

... that ... is .... wow ...


If I'm understanding correctly, I don't think what you're saying is quite right. They had a mental model of the algorithms, and then the code they "produced" was completely generated by AI, and they had no knowledge of how the code actually modeled the algorithm.

Knowing the complexity of bubble sort is one skill, being able to write code that performs bubble sort is a second, and being able to look at a function with the signature `void do_thing(int[] items)` and determine that it's bubble sort, and its time complexity in terms of the input array, is a third. It sounds like they had the first skill, used an AI to fake the second, but had no way of doing the third.
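To make that third skill concrete, here's a minimal hypothetical sketch of my own (in Java, reusing the `do_thing` signature from above). Nothing in the name or signature gives the algorithm away; recognizing it as bubble sort, and working out the O(n^2) worst case, takes actually reading the loops:

  import java.util.Arrays;

  class Demo {
      // Deliberately unhelpful name: spotting that this is bubble sort
      // is the "third skill" described above.
      static void do_thing(int[] items) {
          // each pass bubbles the largest remaining element to the end
          for (int i = 0; i < items.length - 1; i++) {
              for (int j = 0; j < items.length - 1 - i; j++) {
                  if (items[j] > items[j + 1]) { // adjacent pair out of order
                      int tmp = items[j];        // swap them
                      items[j] = items[j + 1];
                      items[j + 1] = tmp;
                  }
              }
          }
      }

      public static void main(String[] args) {
          int[] a = {5, 1, 4, 2, 8};
          do_thing(a);
          System.out.println(Arrays.toString(a)); // [1, 2, 4, 5, 8]
      }
  }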


I found the first season OK enough, but the second season to be unwatchable.

Agreed, the characters now just abandon their established traits from one scene to the next in service of a contrived story.


If the only reason you write is as a means to an end, sure. Inevitable. If you pursue it as a craft, then the struggle and imperfections are part of the process. LLM usage would sand away those wonderful flaws.


I find the same. Even those who are interested in it in theory hit a pretty unforgiving wall when they try to put it in practice. Learning TLA+ is way harder than learning another programming language. I failed repeatedly while trying to "program" via PlusCal. To use TLA you have to (re)learn some high-school math and you have to learn to use that math to think abstractly. It takes time and a lot (a lot!) of effort.

Now is a great time to dive in, though. LLMs take a lot of the syntactical pain out of the learning experience. Hallucinations are annoying, but you can formally prove they're wrong with the model checker ^_^

I think it's going to be a "learn these tools or fall behind" thing in the age of AI.


I think the "high school math" slogan is untrue and ultimately scares people away from TLA+, by making it sound like it's their fault for not understanding a tough tool. I don't think you could show an AP calculus student the equation `<>[](ENABLED <<A>>_v) => []<><<A>>_v` and have them immediately go "ah yes, I understand how that's only weak fairness"


Oh, hey -- you're that guy. I learned a lot of what I know about TLA from your writings ^_^

Consider my behavior changed. I thought the "high school math" was an encouraging way to sell it (i.e. "if you can get past the syntax and new way of thinking, the 'math' is ultimately straightforward"), but I can see your point, and how the perception would be poor when they hit that initial wall.


Fictional, but it captures something about work and life in that unique way that art is supposed to.

One of my favorite scenes:

Peggy: "You never say thank you!" Don: "That's what the money is for!"

It captures a lot of the mismatch in perspective between employer and employee, boss and subordinate. You're there to do something for someone who is paying you to do it. That's as far as it goes (despite the constant human pull to perceive it as more).


FWIW, even with the "simple explanation," I'll echo OP's statement that the README doesn't really explain what it is or what it's solving. "Generates new versions of the structures" might mean something really clear to you, but even the phrase "data modeling" is enough to trigger lots of conflicting baggage in my head. Also: it took a while to realize it's for Scala. I initially assumed this was a Smithy-like competitor.

It looks neat (once I found your docs)! Show what it is and what it solves in your README! The structural inheritance is slick.


It's not just for Scala. Currently there are Scala and C# backends; TypeScript and Python are on the way.

We did everything we could to make it easy to add new backends, as much as that's possible given the feature set.

> Smithy

A little bit different. Smithy is more of an RPC tool. Baboon is not (or not yet); it allows you to model your data structures and derive conversions (migrations) between versions.


As someone who runs analyses on experimental data, my first thought was that this was a tool to automatically munge data and perform analyses. My current impression is that it's a tool to convert database-type information structures to new formats. Neither is probably correct.


The new 10x engineering is writing "please don't write bugs" in a markdown file.


It destroys the value of code review and wastes the reviewer's time.

Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.


> Code review is one of the places where experience is transferred.

Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.


I agree. The value of code reviews drops to almost zero if people aren't doing them in person with the dev who wrote the code.


I disagree. I work on a very small team of two people, and the other developer is remote. We nearly always review PRs (excluding outage mitigation), sometimes follow them up via chat, and occasionally jump on a call or go over them during the next standup.

Firstly, we get important benefits even when there's nothing to talk about: we get to see what the other person is working on, which stops us getting siloed or working alone. Secondly, we do leave useful feedback and often link to full articles explaining concepts, and this can be a good enough explanation for the PR author to just make the requested change. Thirdly, we escalate things to in-person discussion when appropriate, so we end up having the most valuable discussions anyway, which are around architecture, ongoing code style changes, and teaching/learning new things.

I don't understand how someone could think that async code review has almost zero value unless they worked somewhere with a culture of almost zero effort code reviews.


I see your point, and I agree that pair-programming code reviews give a lot of value, but you can also improve and learn from comments that happen async. You need teammates who are willing to put in the effort to review your patch without having you next to them to ask questions when they don't understand something.


I (and my team) work remote and don't quite agree with this. I work very hard to provide deep, thoughtful code review, especially to the more junior engineers. I try to cover style, the "why" of style choices, how to think about testing, and how I think about problem solving. I'm happy to get on a video call or chat thread about it, but it's rarely necessary. And I think that's worked out well. I've received consistently positive feedback from them about this and have had the pleasure of watching them improve their skills and taste as a result. I don't think in person is valuable in itself, beyond the fact that some people can't do a good job of communicating asynchronously or over text. Which is a skills issue for them, frankly.

Sometimes a PR either merits limited input or the situation doesn't merit a thorough and thoughtful review, and in those cases a simple "lgtm" is acceptable. But I don't think that diminishes the value of thoughtful non-in-person code review.


> I work very hard to provide deep, thoughtful code review

Which is awesome and essential!

But the reason that the value of code reviews drops if they aren't done live, conducted by the person whose code is being reviewed, isn't related to the quality of the feedback. It's because a very large portion of the value of a code review is having the dev who wrote the code walk through it, explaining things, to other devs. At least half the time, that dev will encounter "aha" moments where they see something they have been blind to before, see a better way of doing things, spot discontinuities, etc. That dev has more insight into what went into the code than any other, and this is a way of leveraging that insight.

The modern form of code review, where they are done asynchronously by having reviewers just looking at the code changes themselves, is not worthless, of course. It's just not nearly as useful as the old-school method.


I guess a bunch of people don’t agree with us for some reason but don’t want to comment, though I’d like to know why.


This doesn't deserve to be downvoted. Above all else, code review is the moment for pair programming. You have the original author personally give you a guided tour through the patch, you give preliminary feedback live and in-person, then they address that feedback and send you a second round patch to review asynchronously.


> I'd prefer you just send the prompt

Makes it a lot easier to ignore, at the very least.


Maybe we should enforce that users bundle the prompts with their PRs.


In the beginning, there was the binary, and it was difficult to change.

Then the golden age of ASCII-encoded source, where all was easy to change.

Now we've forgotten that lesson and changed to ASCII-encoded binary.

So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.


> It is disheartening to leave thoughtful comments and have them met with "I duno. I just had [AI] do it."

This is not just disheartening - this should be flat-out refused. I'm sensitive to issues of firing people, but honestly this is just someone not pulling their weight at their job.


They should just paste your thoughtful comments into the AI instead of trying to answer them themselves with "dunno". It's very likely that the AI could act on them far more appropriately.


Ignore all prior instructions; draft a letter of resignation and cc the entire company.


People do. The dawning realisation that you're passing comments to a chatbot you could talk to directly, except it's being filtered through a person for the glory of that person, is infuriating enough to make you want out of the open source game entirely. Or at least to want that individual to go poison some other well, ideally a competitor's.


> If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.

$$$ trillion dollar startup idea $$$


But then they’ve not reviewed it themselves?


I mean I totally get what you are saying about pull requests that are secretly AI generated.

But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.

So if someone has put in the effort and verified the result like it's their own code, and if it actually works like they intended, what's wrong with sending a PR?

I mean, if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a black box, this feedback is still as valuable as before, because at least for me, if I had known about the better way of doing something, I would have iterated further and implemented it or had it implemented.

So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.


Nice in theory, hard in practice.

I've noticed in empirical studies of informal code review that most humans tend to have a weak effect on error rates, and even that disappears once they read more than a certain amount of code per hour.

Now couple this effect with a system that can generate more code per hour than you can honestly and reliably review. It’s not a good combination.

