Hacker News | awesan's comments

It's nice that people are taking this up; it's one of the main benefits of open source in the first place. I have my doubts that this will succeed if it's just one guy, but maybe it takes on new life this way, and I would never discourage people from trying to add value to this world.

That said, I have an increasingly strong distaste for these AI-generated articles. They are long and tedious to read, and it really makes me doubt that what is written there is actually true at all. I much prefer a worse-written but to-the-point article.


I agree completely. I know everyone is tired of AI accusations but this article has all of the telltale signs of LLM writing over and over again.

It’s not encouraging for the future of a project when the maintainer can’t even announce it without having AI do the work.

It would be great if this turns into a high effort, carefully maintained fork. At the moment I’m highly skeptical of new forks from maintainers who are keen on using a lot of AI.


>I agree completely. I know everyone is tired of AI accusations but this article has all of the telltale signs of LLM writing over and over again.

I mean, I'm more worried about the AI writing itself than people calling it out.

The AI articles on HN are an absolute disease. Just write your own damn articles if you're asking the rest of us to read them.


@dang what do you think? Is it a disease? People who make interesting conversation will tune out.

I'm not Dang, but I agree AI articles are a disease - but with reservations.

In this case, a Chinese developer who's not a native English speaker is, I feel, _adding_ to "interesting conversations", not detracting from them, by using AI assistance to publish an article like this in readable, understandable English.

I know HN and Y Combinator are _hugely_ US-focused and secondarily English-speaking-focused. But there's more and more interest in non-US-based "intellectual curiosity" where the original source material is not in English. From YC's capitalism-driven focus, they largely don't care. From my personal hacker-ethic curiosity, I'd hate to miss out on articles like this just because of a prejudice against non-English speakers who use AI to provide me with understandable versions.

Having said that, AI hype in general certainly feels like a disease to me. I was noting recently how the percentage of homepage links/discussions I click has gone way down. I remember the days when I'd click and read 80 or 90% of the things that made it to the homepage. These days I eyeroll my way past probably two-thirds of them because they look at first glance (and from recent experience) to just be AI hype in one form or another. (I've actually considered building myself a tool that'd grab the first three or so pages and then filter out everything AI-related, but the other option is just to visit less often...)


I'm all for people who aren't native English speakers publishing their thoughts and opinions. But I would much prefer they still wrote down their own thoughts in their own words in their native language and machine translated it. It would be much more authentic and much more interesting--and much more worth reading.

I'm not sure; based on past experience, people complain a lot about automatically translated text...

I just get my agent to read them for me and present a few options for comments as derived from the vibes of any existing comments. If I time out, it posts a random option, then at the end of the week I get it to summarise all the content I (royal) read and distill it into a take-aways note in my (royal) journal. It's been a huge productivity boost. Whenever I think I might want to think about something, I just ask the agent to find a topic I (royal) read within some timeframe and have it synthesise a few new dot points in my (royal) journal. I'm hoping to reach 10,000 salient points by the end of the year.

An app that basically reimplements a well-documented and tested API is the best possible use case for AI development.

I have nothing against a skilled maintainer with attention to detail using AI tools for assistance.

The important part is the human who will do more than just try to get the LLM to do the hard work for them, though. Once software matures the bugs and edge cases become more obscure and require more thoughtful input. AI is great at getting things to some high percentage of completeness, but it takes a skilled human to keep it all moving in the right direction.

I would cite this blog post as an example of lazy LLM use: It's over-dramatic, long, retains all of the poor LLM output styling that most human editors remove, and suggests that the maintainer isn't afraid to outsource everything to the LLM.


> it really makes me doubt that what is written there is actually true at all

Indeed, the whole "Ironically, switching from Apache 2.0 to AGPL irrevocably makes the project forkable" section seems misguided. Apache 2.0-licensed software is just as forkable.


The point being that we can simply tell our agents to start at the rug-pull point and implement the same features and bug fixes on the Apache fork, referring to the AGPL implementation.

> I have my doubts that this will succeed if it's just one guy

Normally, I'd agree with you 100%.

But there are some interesting mitigating circumstances here.

1) It's "just one guy" who's running a fairly complex open source project already, one which uses minio.

2) The stated intention is that the software is considered "finished" with no plans to add any features, so the maintenance burden is arguably way lower than for typical open source projects (or forks).

3) They're quite open about using AI to maintain it, and like it or hate it, this "finding and helping fix bugs in complex codebases" seems to be an area where current AI is pretty good.

I'm sure a lot of people will be put off by the forker being Chinese, but honestly, from outside the US right now, it's unclear if Chinese or American software is a more existential risk.

I'll admit I'd never heard of their Pigsty project before, but a quick peek at their GitHub shows a project that's been around for 5 years already and has pull requests from over a dozen contributors. That's no guarantee this isn't just a better-prepared Jia Tan xz-utils supply chain attack, but at least it's clearly not just something that's all been created by one person over 2 or 12 months.


At this point the complaints about AI-written articles are worse than the articles. It's like nit-picking about bad kerning. Focus on the content.

I am sorry about that. What I am saying is that it's hard to trust the content given the context. What's more, these articles are extremely verbose, with a lot of BS in them, so it makes getting to the "content" a lot more work for me.

In any case I had one paragraph about the content and one side-note about the writing style. Every single reply except one focused on the side-note, including you.


I have no reason to trust that the fork itself is competently maintained when the author did not even bother to write the announcement.

I'm generally fully in agreement that AI writing is bad.

But this is one of the few cases where it might be acceptable.

The author is not a native speaker; in an announcement that a known project is being forked for maintenance, the occasional odd phrasing and possible errors in grammar could sound unprofessional.

I wonder if in such cases a better use of AI would be to write it yourself and just ask an LLM to revise instead? Maybe with some directive like "just point out errors in syntax and grammar, and factual mistakes. No suggestions on style"?


The author is Chinese and not a native English speaker. I will happily give them a pass on using GenAI to "write the announcement".

In general if you have the (IMO sensible) approach of taking as few dependencies as possible and not treating them like a black box, then for any error you can simply look at the call stack and figure out the problem from reading the code during development.

Outside of that, error codes are useful for debugging code that is running on other people's machines (i.e. in production) and for reporting reasons.


I'm kind of on the same journey, a bit less far along. One thing I have observed is that I am constantly running out of tokens in Claude. I guess this is not an issue for a wealthy person like Mitchell, but it does significantly hamper my ability to experiment.


I have done the Monte Carlo thing in practice with a team and it works well under some conditions.

The most important is that the team needs to actually use the task board (or whatever data source you use to get your inputs) to track their work actively. It cannot be an afterthought that gets looked at every now and then, it actually needs to be something the team uses.

My current team kind of doesn't like task boards because people tend to work in small groups on projects where they can keep that stuff in their own heads. This requires some more communication but that happens naturally anyway. They are still productive, but this kind of forecasting doesn't work then.
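For readers unfamiliar with the technique being discussed, here is a minimal sketch of Monte Carlo throughput forecasting (all names and numbers are hypothetical): resample the team's historical weekly throughput many times, simulate burning down a backlog in each run, and read a forecast off a percentile of the resulting distribution.

```typescript
// Hypothetical sketch: forecast how many weeks a backlog of tasks will take,
// given historical weekly throughput pulled from a task board.
function forecastWeeks(throughput: number[], backlog: number, runs = 10000): number[] {
  const results: number[] = [];
  for (let i = 0; i < runs; i++) {
    let remaining = backlog;
    let weeks = 0;
    while (remaining > 0) {
      // Resample a random historical week's throughput.
      remaining -= throughput[Math.floor(Math.random() * throughput.length)];
      weeks++;
    }
    results.push(weeks);
  }
  return results.sort((a, b) => a - b);
}

// Example: 30 remaining tasks, six weeks of historical throughput data.
const samples = forecastWeeks([3, 5, 2, 4, 6, 1], 30);
// A high percentile (e.g. the 85th) gives a conservative forecast.
const p85 = samples[Math.floor(samples.length * 0.85)];
```

This only works when the historical throughput data is real, which is exactly the parent's point: if the team doesn't actively track work on the board, the inputs (and therefore the forecast) are garbage.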


I hate this whole thing with me having to use some tool to track the work (usually Jira which is a PoS). My entire output is data, why can't a tool automatically summarise what I'm doing? It seems an ideal task for an AI actually.


Jira is fine.

Jira is Excel for task management. The OOTB setup works absolutely great, and then someone comes along who wants a custom field on tasks to support <something that they read about elsewhere>, and now you have to fill in that custom field. They leave, and someone else comes in and adds a new one. 5 years later you have 11 new fields that partially overlap, some needed for some views, some needed for others, but you can't use the default boards because person Y decided that they wanted to call Epics "Feats" and made a custom issue type.

And in the end, the people who actually use those boards just export a filter to Excel and work there...

People. The problem is people.


I don't think things have changed that much in the time I've been doing it (roughly 20 years). Tools have evolved and new things were added but the core workflow of a developer has more or less stayed the same.


I also wonder what those people have been doing all this time... I also have been mostly working as a developer for about 20 years and I don't think much has changed at all.

I also don't feel less productive or lacking in anything compared to the newer developers I know (including some LLM users) so I don't think I am obsolete either.


At some point I could straight-up call functions from the Visual Studio debugger Watch window instead of editing and recompiling. That was pretty sick.

Yes I know, Lisp could do this the whole time. Feel free to offer me a Lisp job drive-by Lisp person.


Isn’t there a whole ton of memes about the increase in complexity and full stack everything and having to take in devops, like nothing has changed at all?


I don't think that's true, at least for everywhere I've worked.

Agile has completely changed things, for better or for worse.

Being a SWE today is nothing like 30 years ago, for me. I much preferred the earlier days as well, as it felt far more engineered and considered as opposed to much of the MVP 'productivity' of today.


MVP is not necessarily opposed to engineered and considered. It's just that many people who throw that term around have little regard for engineering, which they hide behind buzzwords like "agile".


It does make sense to highlight, because this kind of statistic is a very strong indicator that the market is not competitive. This is not a normal kind of profit margin and basically everyone except for Apple would benefit from them lowering the margins.

In normal markets there are competitors who force each other to keep reasonable profit margins and to improve their product as opposed to milking other people's hard work at the expense of the consumer.


Might not be competitive but it’s totally voluntary. No one needs app, it’s not food or shelter, so clearly consumers are willing and able to pay this.

The consumer is willing to pay the price based on the perceived value from the App Store.


The relevant market here is the creators not the consumers. As a creator you have no choice but to accept whatever fees Apple, Google, Steam etc set. Or whatever rates Spotify pays you per stream. The fact you "could" host your own website is irrelevant when the reality is nobody will visit it.


> The relevant market here is the creators not the consumers. As a creator you have no choice but to accept whatever fees Apple, Google, Steam etc set. Or whatever rates Spotify pays you per stream. The fact you "could" host your own website is irrelevant when the reality is nobody will visit it.

Collective action by the creators would help.

All they have to do is dual-host (a fairly trivial matter, compared to organised collective action). What would make things even better is if they dual-hosted on a competing platform and specified in their content that the competing platform charges lower fees. If even 10% of the creators did this:

1. Many of the consumers would switch.

2. Many of the creators not on the competing platform would also offer dual-hosting.

The problem is not "As a creator you have no choice but to accept whatever fees Apple, Google, Steam etc set". The problem is the mindset that their content is not their own.

I say it's their mindset, because they certainly don't act as if they own the content - when your content is available only via a single channel, you don't own your content, you are simply a supplier for that channel.


> specify in their content that the competing platform charges lower fees.

Apple will ban you for this.


> Apple will ban you for this.

How? I thought it was a Patreon thing - the "competing platform" would be competing with the Patreon app.

I'm not familiar with Patreon, but I thought the way it worked was that you could tip content creators via the Patreon app. I'm pretty certain that Apple cannot tell Patreon (a third party) that they are only allowed to offer exclusive content.


Apple doesn’t allow you to mention that you have alternate payment channels on other platforms. Can’t even allude to it.

To me this is the thing that should be outlawed. Let people pay the Apple tax if they want, but don’t prevent people from making other arrangements. Most people are lazy and will pay the tax, if it isn’t excessive.


What is also totally voluntary is our decision to let Apple exist as an entity, to give them a government-enforced monopoly over certain things, to make it illegal to break their technical protections of their monopoly, etc.


> No one needs app, it’s not food or shelter

"No one needs app" is not the same as "No one has biological mandatory need to have an app"


If the AI generated most of the code based on these prompts, it's definitely valuable to review the prompts before even looking at the code. Especially in the case where contributions come from a wide range of devs at different experience levels.

At a minimum it will help you to be skeptical about specific parts of the diff so you can look at those more closely in your review. It can also inform test scenarios, etc.


A lot of Dutch government and government-adjacent services run on Microsoft Azure as well. That is not the same level of concern, but it does mean the US government has access to that data.


Even if they don't have access to the actual data, the US government has the option to order Microsoft to switch these essential government services off. For example, as a means of pressuring the Dutch government into supporting the American annexation of Greenland.

Or even, post-Greenland, to force the Dutch to give Trump the Dutch Caribbean islands off the Venezuelan coast as well (Aruba, Bonaire, Curaçao).

If I were a Dutch member of parliament, I would be insisting this particular vulnerability to extortion be addressed as soon as possible. Of course, the US can still threaten to, at worst, nuke us all to smithereens but let's hope they're not willing to go that far.


Which has happened before, and is the reason why the International Criminal Court is moving away from MS365. [0]

This prompted me to try OnlyOffice, and man is that nice. I do like LibreOffice, but two things bug me: first, it just looks old. And second, I have, since the dawn of time (and Sun's StarOffice), had issues just telling the software "this is a Dutch doc; apply Dutch spelling and grammar checks". It has never worked well; even Firefox text fields work better. But with OnlyOffice it seems to work well so far, and it will also be much more recognizable to ex-MS Office users. I hear the interop with MS formats is also better.

[0] https://www.techspot.com/news/110095-international-criminal-...


> the US government has the option to order Microsoft to switch these essential government services off

They can also order MS and Amazon and Google and Apple to switch off services on which most of the economy relies, and which most devices require to function.


But if they do that, the Dutch government has the option to pull ASML and its services (like maintenance, parts) from the US, which will cripple its chip industry. I wouldn't be surprised if there's a remote shutdown built into their devices.


The prime minister in waiting has said that there will be a cabinet post for digital security, and Parliament has expressed in the same motion that it is worried about dependence on foreign cloud services as well.


Note: legally, the Netherlands can't give Aruba or Curaçao to the US, as in the constitutional framework of the Dutch kingdom they are seen as sovereign entities.


Legality is meaningless unless it's backed by force, as people are finding out all over the place.


I'm aware. I just think the Trump administration would say "Do it anyway".


Bonaire then?


Bonaire is a special municipality of the Netherlands, so I think they could give that away.


Don't they have responsibilities to ensure basic rights for their citizens?

Not sure they can transfer them while the US practices the death penalty or penal slavery.


It happened a bit differently: Atwood and friends simply came out with a standard document and called it "Standard Markdown", which Gruber then refused to endorse. Eventually, after the series of blog posts and some back and forth, they renamed the project "CommonMark", which it is still called today.

I am not sure (of course), but I think Atwood simply thought standardizing this format was so obviously valuable that he didn't consider Gruber might not want to work with him. In retrospect it's kind of nice that it didn't happen, it really keeps everyone incentivized to keep the format simple.


The linked post contains three cases of Markdown syntax (underscores) leaking into the text, where actual italics were likely intended. This is the most basic Markdown syntax element failing to work. The problem CommonMark is trying to solve is not adding new features (the only one they added to Gruber Markdown is fenced code blocks), but rather specifying how to interpret edge cases to ensure the same Markdown code produces the same HTML everywhere.


I understand the goal of the spec. In my experience, once some spec document gets adopted widely enough, there's a strong incentive to add new features to it, which renderers are then compelled to implement. Before you know it, MD is a complicated spec that doesn't serve its original purpose.

In this case a few minor edge cases is really not a big deal compared to that (in my opinion).


Here is a post from Atwood about it:

https://blog.codinghorror.com/standard-markdown-is-now-commo...

And an interesting discussion on hn about it: https://news.ycombinator.com/item?id=4700383


I feel like no one serious uses the Uncle Bob style of programming anymore (where each line is extracted into its own method). This was a thing for a while, but anyone who's tried to fix bugs in a codebase like that knows exactly what this article is talking about. It's a constant frustration of pressing the "go to definition" key over and over, and going back and forth between separate pieces that run in sequence.

I don't know how that book ever got as big as it did; all you have to do is try it to know that it's very annoying and does not help readability at all.


For an example of what happens when he runs into a real programmer, see:

https://github.com/johnousterhout/aposd-vs-clean-code

_A Philosophy of Software Design_ is an amazing and under-rated book:

https://www.goodreads.com/en/book/show/39996759-a-philosophy...

and one which I highly recommend and which markedly improved my code --- the other book made me question my boss's competence when it showed up on his desk, but then it was placed under a monitor as a riser, which reflected his opinion of it...


That entire conversation on comments is just wildly insane. Uncle Bob outright admits that he couldn't understand the code he had written when he looked back on it for the discussion, which should be an automatic failure. But he tries to justify the failure as merely the algorithm being sooooo complex that there's no way it can be done simply. (Which, compared to the numerics routines I've been staring at, no, this is among the easiest kind of algorithm to understand.)


The whole thing is really uncomfortable; it's as if, after attempting the sudoku solver, Ron Jeffries sat down for a discussion with Peter Norvig, who was not especially diplomatic about the outcome of the experiment. The section before that, where they're talking about the "decomposition" of Knuth's simple primality tester, is brutal. "Looping over odd numbers is one concern; determining primality is another".


That's a really interesting read. I found myself being closer to John on the small-method part, but closer to UB on the TDD part, even if in both cases I was somewhere in between.

At the very least, you convinced me to add John's book to my ever-growing reading list.


Each page in that book serves its purpose. That purpose is raising the monitor 0.1mm.


Turns out writing a book and getting it published with the title "Clean Code" is great marketing.

I have had so many discussions about that style where I tried to argue it wasn't actually simpler and the other side just pointed at the book.


It's like with goto. Goto is useful and readable in quite a few situations, but people will write arrow-like if/else trees with 8 levels of indentation just to avoid it, because someone somewhere said goto is evil.


Funny how my Python code doesn't have those arrow issues. In C code, I understand some standard idioms, but I haven't really ever seen a goto I liked. (Those few people who are trying to outsmart the compiler would make a better impression on me by just showing the assembly.)

IMX, people mainly defend goto in C because of memory management and other forms of resource-acquisition/cleanup problems. But really it comes across to me that they just don't want to pay more function-call overhead (risk the compiler not inlining things). Otherwise you can easily have patterns like:

  int get_resources_and_do_thing() {
      RESOURCE_A* a = acquire_a();
      int result = a ? get_other_resource_and_do_thing(a) : -1;
      cleanup_a(a);
      return result;
  }

  int get_other_resource_and_do_thing(RESOURCE_A* a) {
      RESOURCE_B* b = acquire_b();
      int result = b ? do_thing_with(a, b) : -2;
      cleanup_b(b);
      return result;
  }
(I prefer for handling NULL to be the cleanup function's responsibility, as with `free()`.)

Maybe sometimes you'd inline the two acquisitions; since all the business logic is elsewhere (in `do_thing_with`), the cleanup stuff is simple enough that you don't really benefit from using `goto` to express it.

In the really interesting cases, `do_thing_with` could be a passed-in function pointer:

  int get_resources_and_do(int(*thing_to_do)(RESOURCE_A*, RESOURCE_B*)) {
      RESOURCE_A* a;
      RESOURCE_B* b;
      int result;
      a = acquire_a();
      if (!a) return -1;
      b = acquire_b();
      if (!b) { cleanup_a(a); return -2; }
      result = thing_to_do(a, b);
      cleanup_b(b); cleanup_a(a);
      return result;
  }
And then you only write that pattern once for all the functions that need the resources.

Of course, this is a contrived example, but the common uses I've seen do seem to be fairly similar. Yeah, people sometimes don't like this kind of pattern because `cleanup_a` appears twice — so don't go crazy with it. But I really think that `result = 2; goto a_cleanup;` (and introducing that label) is not better than `cleanup_a(a); return 2;`. Only at three or four steps of resource acquisition does that really save any effort, and that's a code smell anyway.

(And, of course, in C++ you get all the nice RAII idioms instead.)


> if (!b) { cleanup_a(a); return -2; }

This rings alarm bells for me that a cleanup_c(c) has maybe been forgotten somewhere, since the happy and unhappy paths clean up different numbers of things.

I imagine your Python code escapes the giant tree by using exceptions, though? That skips it by renaming and restructuring the goto, rather than leaving out the ability to jump to a common error-handling spot.


> this rings alarm bells for me reading that a cleanup_c(c) has maybe been forgotten somewhere, since the happy and unhappy paths clean up different amounts of things.

The exact point of taking the main work to a separate function is so that you can see all the paths right there. Of course there is no `c` to worry about; the wrapper is so short that it doesn't have room for that to have happened.

The Python code doesn't have to deal with stuff like this because it has higher-level constructs like context managers, and because there's garbage collection.

    def get_resources_and_do(action):
        with get_a() as a, get_b() as b:
            return action(a, b)


You're assuming function calls or other constructs are more readable and better programming. I don't agree. Having a clear clean-up or common return block is a good readable pattern that puts all the logic right there in one place.

Jumping out of a loop with a goto is also more readable than what Python has to offer. Refactoring things into functions just because you need to control the flow of the program is an anti-pattern. Those functions add indirection and might never be reused. Why would you do that even if it was free, performance-wise?

This is why new low level languages offer alternatives to goto (defer, labelled break/continue, labelled switch/case) that cover most of the use cases.

IMO it's debatable whether those are better and more readable than goto. Defer might be. Labelled break probably isn't, although it doesn't matter that much.

Python meanwhile offers you adding more indirection, exceptions (wtf?) or flags (inefficient unrolling and additional noise instead of just goto ITEM_FOUND or something).


A colleague recently added a linter rule against nested ternary expressions. OK, I can see how those can be confusing, and there's probably a reason why that rule is an option.

Then they replaced a pretty simple one with an anonymous, immediately invoked function that contained a switch statement with a return for each case.

Um, can I have a linter rule against that?


I guess "anonymous IIFE" is the part that bothers you. If someone is nesting ternary expressions in order to distinguish three or more cases, I think the switch is generally going to be clearer. Writing `foo = ...` in each case, while it might seem redundant, is not really any worse than writing `return ...` in each case, sure. But I might very well use an explicit, separately written function if there's something obvious to call it. Just for the separation of concerns: working through the cases vs. doing something with the result of the case logic.


It just looked way more complex (and it's easy to miss the () at the end of the whole expression that makes it II). And the point of the rule was to make code more readable.

Basically, it's a shame that TypeScript doesn't have a switch-style construct that is an expression.

And that nowadays you can't make nested ternaries look obvious with formatting because automated formatters (that are great) undo it.
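To make the trade-off concrete, here is a hypothetical sketch of the three forms under discussion (all names invented): the nested ternary, the anonymous IIFE wrapping a switch, and a plain object lookup, which is one expression-style alternative TypeScript does offer.

```typescript
// 1. Nested ternary: compact, but formatters tend to reflow it unreadably.
const labelTernary = (code: number): string =>
  code === 0 ? "ok" : code === 1 ? "warning" : "error";

// 2. Anonymous IIFE around a switch: it is an expression, but the
//    trailing () that makes it immediately invoked is easy to miss.
const labelIife = (code: number): string =>
  ((): string => {
    switch (code) {
      case 0: return "ok";
      case 1: return "warning";
      default: return "error";
    }
  })();

// 3. Object lookup with a fallback: arguably the most readable
//    expression form for a simple case mapping.
const labels: Record<number, string> = { 0: "ok", 1: "warning" };
const labelLookup = (code: number): string => labels[code] ?? "error";
```

All three produce the same results; the linter debate is purely about which form reads best once the case list grows.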


> and the other side just pointed at the book

One of the most infuriating categories of engineers to work with is the one who's always citing books in code review. It's effectively effort amplification as a defense mechanism, now instead of having a discussion with you I have to go read a book first. No thanks.

I do not give a shit that this practice is in a book written by some well respected whoever, if you can't explain why you think it applies here then I'm not going to approve your PR.


Yeah, and any of these philosophies are always terrible when you take them to their limit. The ideas are always good in principle and built on a nugget of truth; it's when people take them as gospel that I have a problem. If they just read the book, drew inspiration for alternative, possibly better, coding styles, and could argue their case, that would be unequivocally good.


Cult think loves a tome


> I feel like no one serious uses the uncle Bob style of programming anymore (where each line is extracted into its own method)

Alas, there's a lot of Go people who enjoy that kind of thing (flashback to when I was looking at an interface calling an interface calling an interface calling an interface through 8 files ... which ended up in basically "set this cipher key" and y'know, it could just have been at the top.)


Hardcore proponents of this style often incant 'DRY' and talk about reuse, but in most cases, this reuse seems to be much more made available in principle than found useful in practice.


There's also the "it makes testing easier because you can just swap in another interface and you don't need mocks" argument - sure but half of the stuff I find like this doesn't even have tests and you still tend to need mocks for a whole bunch of other cases anyway.


I wonder how hard it would be to build an IDE "lens" extension that would automatically show you a recursively inlined version of the function you're hovering over when feasible and e.g. shorter than 20 lines.


>where each line is extracted into its own method

As John Carmack said: "if a lot of operations are supposed to happen in a sequential fashion, their code should follow sequentially" (https://cbarrete.com/carmack.html).

A single method with a few lines is easy to read, like the processor reading a single cache line, while having to jump around between methods is distracting and slow, like the processor having to read various RAM locations.

Depending on the language, you can also have very good reasons to have many lines; for example, in Java a method can't return multiple primitive values, so if you want to stick to primitives for performance you inline it and use curly braces to limit the scope of its internals.


It's an extreme position meant to get a strong reaction, but the takeaway is that you should aim, when possible, to make the code understandable without comments, and that a good programmer can make code more understandable than a newbie with comments.

But of course, understandable code with comments simply has much more bandwidth of expression, so it will get the best of both worlds.

I see writing commentless code as like practicing piano with only your left hand: it's a showoff, and you can get fascinatingly close to the original piece (see Godowsky's Chopin adaptations for the left hand), but of course when you are done showing off, you will play with both hands.


Great, that's exactly how I feel with any style that demands "each class in its own file" or "each function in its own file" or whatever. I'd rather have everything I need in front of my eyes as much as possible, rather than have it all over the place just to conform with an arbitrary requirement.

I said this at a company I worked at and got made fun of because "it's so much more organized". My takeaway is that the average person has zero ability to think critically.


If those demands made any sense they would be enforced by the languages themselves. It's mostly a way of claiming to be productive by renaming constants and moving code around.


I can assure you that I am very serious and I do cut things up almost as finely as Uncle Bob suggests. Where others balk at the suggestion that a function or method should never expand past 20 or so lines, I struggle to imagine a way I could ever justify having something that long in my own code.

But I definitely don't go about it the same way. Mr. Martin honestly just doesn't seem very good at implementing his ideas and getting the benefits that he anticipates from them. I think the core of this is that he doesn't appreciate how complex it is to create a class, at all, in the first place. (Especially when it doesn't model anything coherent or intuitive. But as Jeffries' Sudoku experience shows, also when it mistakenly models an object from the problem domain that is not especially relevant to the solution domain.)

The bit about parameters is also nonsense; pulling state from an implicit this-object is clearly worse than having it explicitly passed in, and is only pretending to have reduced dependencies. Similarly, in terms of cleanliness, mutating the this-object's state is worse than mutating a parameter, which of course is worse than returning a value. It's the sort of thing that you do as a concession to optimization, in languages (unlike the Haskell family) where you pay a steep cost for repeatedly creating similar objects that have a lot of state information in common but can't actually share it.
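A hypothetical sketch of the three styles ranked here, from least to most clean; all names are invented:

```java
public class StateStyles {
    private int total; // hidden dependency: readers must know this field exists

    // 1. Pulls from and mutates implicit this-state; the dependency on
    //    `total` is invisible at the call site.
    void addToTotal(int x) { this.total += x; }

    // 2. Mutates an explicit parameter; the dependency is at least visible.
    static void addInPlace(int[] acc, int x) { acc[0] += x; }

    // 3. Returns a value; no hidden or visible mutation at all.
    static int add(int acc, int x) { return acc + x; }

    public static void main(String[] args) {
        StateStyles s = new StateStyles();
        s.addToTotal(3);
        System.out.println(s.total);   // 3, but only if you know about `total`

        int[] acc = {0};
        addInPlace(acc, 3);
        System.out.println(acc[0]);    // 3, mutation visible at the call site

        System.out.println(add(0, 3)); // 3, pure data flow
    }
}
```

All three compute the same sum; the difference is in how much invisible state a reader has to track.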

As for single-line functions, I've found that usually it's better to inline them on a separate line, and name the result. The name for that value is about as... valuable as a function name would be. But there are always exceptions, I feel.
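A hypothetical sketch of that inlining move (names invented): the one-line helper disappears, and its name is reused as the name of a local value at the point of use.

```java
public class NamedLocal {
    // One-line function: the name lives far from its single call site.
    static boolean isEligible(int age) { return age >= 18; }

    static String greetWithHelper(int age) {
        return isEligible(age) ? "welcome" : "denied";
    }

    // Inlined: the local variable's name now does the function name's job,
    // right where the value is used.
    static String greetInlined(int age) {
        boolean isEligible = age >= 18;
        return isEligible ? "welcome" : "denied";
    }

    public static void main(String[] args) {
        System.out.println(greetWithHelper(20)); // welcome
        System.out.println(greetInlined(16));    // denied
    }
}
```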


It was written in different times, for different audiences (when variable names like t, p, lu were the norm).

It was useful for me and many others, though I never took such (any?) advice literally (even if the author meant it)

Based on other books, discussions, advice, and experience, I choose to remember it (and tell colleagues) as "long (e.g. multi-page) functions are bad".

I assume CS graduates know better now, because it became common knowledge in the field.


I know plenty of Java/C# developers who still suffer from this mind virus ;P


The Ruby ecosystem was particularly bad about "DRY"(vs WET) and indirection back in the day.

Things were pretty dire until Sandi Metz introduced Ruby developers to the rest of the programming world with "Practical Object-Oriented Design". I think that helped start a movement away from "clever", "artisanal", and "elegant" and towards more practicality that favors the future programmer.

Does anyone remember debugging Ruby code where lines in stack traces don't exist because the code was dynamically generated at run time to reduce boilerplate? Pepperidge Farm remembers.


Haskell enters the chat

Haskell (and OCaml too, I suppose) are outliers though, as one is supposed to write small functions for single cases. It's also super easy to find them, and haskell-language-server can even suggest which functions you want based on the signatures you have.

But in other languages I agree - it's an abomination and actually hurts developers with lower working memory (e.g. neuroatypical ones).


It's because math is the ultimate abstraction. It's timeless and its corner cases are (almost) fully understood. Ok, maybe not, but at least relative to whatever JavaScript developers are reinventing for the thousandth time.


> where each line is extracted into its own method

Never heard of "that style of programming" before, and I certainly know that Uncle Bob never advised people to break down their programs so each line has its own method/function. Are you perhaps mixing this up with someone else?


This is from page 37 of Clean Code:

> Even a switch statement with only two cases is larger than I'd like a single block or function to be.

His advice that follows, to leverage polymorphism to avoid switch statements, isn't bad per se, but his reasoning, that six lines is too long, was a reflection of his desire to get every function as short as possible.

In his own words (page 34):

> [functions] should be small. They should be smaller than that. That is not an assertion I can justify.

He then advocates for functions to be 2-3 lines each.
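For context, the refactor he advocates looks roughly like this hypothetical sketch (shape names and methods invented, not taken from the book); it requires Java 16+ for records:

```java
public class SwitchVsPolymorphism {
    enum Kind { CIRCLE, SQUARE }

    // Switch version: even this two-case form is, by his standard, too long.
    static double areaSwitch(Kind kind, double size) {
        switch (kind) {
            case CIRCLE: return Math.PI * size * size;
            case SQUARE: return size * size;
            default: throw new IllegalArgumentException();
        }
    }

    // Polymorphic version: each case becomes a one-line override, and the
    // dispatch disappears into the type system.
    interface Shape { double area(); }
    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }
    record Square(double s) implements Shape {
        public double area() { return s * s; }
    }

    public static void main(String[] args) {
        // both dispatch mechanisms compute the same area
        System.out.println(areaSwitch(Kind.SQUARE, 3) == new Square(3).area());
    }
}
```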


> to leverage polymorphism to avoid switch statements [...] was a reflection of his desire to get every function as short as possible.

That's true, but it's a long way from "every line should have its own method". I guess the parent exaggerated for effect and I misunderstood them - I took it literally when I shouldn't have.


I've edited my comment to add more context to that quote. He absolutely advocated for the most minimal of function lengths, beyond what is reasonable.


He has expressed admiration for Lisp, and he comes from a time before IDEs. These may color his desired level of complexity.


> I certainly know that Uncle Bob never adviced people to break down their programs so each line has it's own method/function

There's a literal link to a literal Uncle Bob post by the literal Uncle Bob from which the code has been taken verbatim.

