pottereric's comments | Hacker News

> This author starts with the implicit premise that change is bad and all conclusions are the result of that.

How did you come to that conclusion? The theme of what I was saying is that projects need to be given direction by the stakeholders. I never said anything to suggest that the direction couldn't change.


Is this actually a problem you've seen in real life -- programmers writing code without any direction? That seems impossible. A program exists to solve a problem. You might have a pretty vague idea of the details of that problem, but ultimately you know what it is.

If programmers are coding whatever they want based on wildly unfounded assumptions, then they're not doing Salmon Ladder development -- they're just terrible developers. If they're given some direction and are coding and iterating on the result, then that's great. Even if they make some wrong assumptions at first, I don't see anything wrong with that.


It is a problem I've seen in the real world.

For example, I've seen a project where upper management tasked a team of developers with writing software but gave the team no specific direction, nor communicated clear goals for the software. The team asked for more details, and management told them to "do it agile". The team wrote good software, but they solved the wrong problem.


This is not a software development process issue. If management is bad, doesn't understand the process, and there isn't buy-in, then the project will fail. Take that same team and same management, do it waterfall, and I doubt the outcome would be any different.

But it is not just management that is at fault. Those developers did not have enough backbone to push back. Instead of educating management about the process or telling them it couldn't be done, they just slunk back to their cubicles and coded. There is no surprise it turned out wrong.

I see where you're coming from on this; you cannot build any product without communicating with the stakeholders. That communication can be rough scribble diagrams, notes, detailed requirements, meetings showing off prototypes -- really anything. All software development processes are about communication.


> this is not a software development process issue... [followed by a list of software development process issues.]


I have seen quite a few cases where more effort to think through the specific implications and consequences of those "vague ideas of the details" would likely have identified problems quickly, instead of only after a bunch of code had been written.


I've seen many cases where the specific implications and consequences of vague ideas of the details could not possibly have emerged until a bunch of code had been written.

In principle those problems could have been found during planning, but in practice they never would have been. It's hard to get people to think in that much detail without anything concrete.

Edit: with the exception of literal rocket scientists -- they're very good at planning and getting the details right compared to the average person.


Coincidentally, the Cassini mission to Saturn comes to its scheduled end in a few hours. From what I have read about how NASA develops software, it is essentially a waterfall process, and I am fairly certain that its programmers have more than a vague idea of what they are doing when they write code.

Edit: Do you really want to be an evangelist for mediocrity? You should give thinking things through a try; none of us get it right even close to all the time, but I think you will be surprised by how effective you are at it.


I'm not saying we shouldn't be thinking things through but we aren't often blessed, as developers, with working with actual rocket scientists who will provide perfect models of the product such that it won't literally crash into another planet 1.2 billion kilometers away.

Sometimes you have to do the work and build something that isn't fully specified. Or when management says they want a Facebook clone and then walks away, you have to be willing to say "no, you have to come back to the table on that one".


I think we are in agreement here - I acknowledge that the waterfall approach doesn't often work and that spacecraft are a very special case. In fact, I was going to write that waterfall is a straw man in that no one tries or believes in it any more, but that made me think of NASA (and possibly aerospace in general), which demonstrates that it works (and is possibly the only thing that works) in certain circumstances.

I think the pendulum has swung too far the other way, however, and it is not hard to find opinions (and Stack Overflow comments and sometimes answers) stating, in effect, that the only valid way to develop code is to write some and try it. For the most part, I think these authors have not considered how much thinking ahead goes into making even that approach work, and if they did, they would not be so dogmatic about deprecating it.

Three areas of definite relevance to ordinary commercial applications where I think thinking ahead is more or less essential to success are security, concurrency, and performance.


My whole line of thought actually comes from misunderstanding the point of the article. My original reading was that it was dismissive of jumping in and coding as a method of working out requirements -- which I disagree with. I think jumping in and coding is one good way (but not the only way) to work out requirements with stakeholders.

Instead, the article is about programmers who code without the involvement of stakeholders at all. The inclusion of agile is a bit of a red herring.


Forgive my ignorance. What do Rust and Go call their data structures that are analogous to classes?


There really isn't a direct analog in Rust. The best approximation is actually a combination of two constructs in the language: structs and traits.

Structs are almost exactly like the C construct of the same name: they define the memory layout of a data structure with discretely typed fields, and there are various packing options available. In addition, a struct-specific implementation ("impl") can exist which defines static and instance-based methods. As there is no inheritance in Rust, a struct's impl is unique to itself.

Polymorphism is implemented via the trait system. A trait defines a set of methods that objects with said trait must implement (unless a default implementation for a given method exists in the trait). Any struct can have an implementation for a given trait, so traits can be considered a (rough) analog to abstract base classes/interfaces (e.g. C++ classes that are either pure virtual or have virtual functions with default implementations).
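
A minimal sketch of how the two constructs fit together (the names here are purely illustrative, not from any real codebase):

    // A plain data structure: fields with concrete types.
    struct Dog {
        name: String,
    }

    // The inherent (struct-specific) impl: static and instance methods.
    impl Dog {
        fn new(name: &str) -> Dog {
            Dog { name: name.to_string() }
        }
    }

    // A trait: the set of methods an implementor must provide.
    trait Speak {
        fn speak(&self) -> String;

        // A default implementation, which implementors may override.
        fn greet(&self) -> String {
            format!("Hello! {}", self.speak())
        }
    }

    // The trait implementation lives apart from the struct definition.
    impl Speak for Dog {
        fn speak(&self) -> String {
            format!("{} says woof", self.name)
        }
    }

    fn main() {
        let rex = Dog::new("Rex");
        println!("{}", rex.greet());
    }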


You say traits are a rough analogue for C++ base classes with virtual methods. What are the fundamental differences?


For one, traits are inherently static and their methods are statically dispatched, somewhat like a C++ concept. A trait that satisfies certain conditions making it "base-class-like" can be reified as a trait object, which is a vtable pointer + data pointer pair that dispatches dynamically through the vtable. Because the vtable pointer is separate from the object, each trait object gives you dynamic dispatch for one trait only, and you can have an unbounded number of trait objects -- in contrast to C++, where you get dynamic dispatch only through the finite set of base classes listed by the class author.
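
To make the static/dynamic distinction concrete, here is a minimal self-contained sketch (Speak and Dog are hypothetical names):

    trait Speak {
        fn speak(&self) -> String;
    }

    struct Dog;

    impl Speak for Dog {
        fn speak(&self) -> String { "woof".to_string() }
    }

    // Static dispatch: the compiler monomorphizes a copy of this function
    // for each concrete T; calls are direct, with no vtable involved.
    fn speak_static<T: Speak>(x: &T) -> String {
        x.speak()
    }

    // Dynamic dispatch: &dyn Speak is a trait object, a (data ptr, vtable ptr)
    // pair, and the call goes through the vtable at runtime.
    fn speak_dyn(x: &dyn Speak) -> String {
        x.speak()
    }

    fn main() {
        let d = Dog;
        assert_eq!(speak_static(&d), speak_dyn(&d));
    }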


I'm not familiar enough with the internals of how Rust handles v-tables and the like in light of its other features to answer that competently.

In practice, one defines an interface in C++ by writing a (hopefully) stateless class with pure virtual method declarations; classes derived from it must then implement those methods (in order to be instantiable, anyway). In Rust, one defines a data structure (either a struct or an enum) and then, separately, writes the impl of a trait for it.


The big difference is that when you have a trait object in Rust, it's a double pointer: a pointer to the vtable, and a pointer to the data. In C++, in my understanding, you'd have a single pointer to both the vtable and data laid out next to each other.
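
That layout is easy to verify; a tiny sketch (again with hypothetical Speak/Dog types):

    use std::mem::size_of;

    trait Speak { fn speak(&self) -> String; }
    struct Dog;
    impl Speak for Dog { fn speak(&self) -> String { "woof".into() } }

    fn main() {
        // A plain reference is a single pointer wide...
        assert_eq!(size_of::<&Dog>(), size_of::<usize>());
        // ...while a trait-object reference carries data ptr + vtable ptr.
        assert_eq!(size_of::<&dyn Speak>(), 2 * size_of::<usize>());
    }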


Thanks for that.

What you describe for C++ is typically the case (and I've written completely unsafe, non-portable code that exploits this fact before), but I'm unsure whether that's required by the standard.


So Rust traits are effectively/functionally abstract classes with restricted functionality and a different implementation?


Sort of. The closest thing in C++ would be concepts. Even using traits in this way is the minority case; they're more often used for monomorphized, statically dispatched code.

But, given the case where you want dynamic dispatch, then in a sense, they are, yes.


Right. I suppose in Rust you generally use variants and pattern matching when dynamic dispatch is required.


You could, but that requires that you know all of the variants ahead of time. The advantage of trait objects is that you don't have to.

It really just depends on what exactly you're trying to do.
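
For contrast, a sketch of the closed-set (enum + match) approach next to the open-set (trait object) one, using hypothetical types:

    trait Speak { fn speak(&self) -> String; }

    struct Dog;
    struct Cat;
    impl Speak for Dog { fn speak(&self) -> String { "woof".into() } }
    impl Speak for Cat { fn speak(&self) -> String { "meow".into() } }

    // Closed set: every variant must be known when the enum is defined,
    // and the match must cover all of them.
    enum Animal { Dog(Dog), Cat(Cat) }

    fn speak_closed(a: &Animal) -> String {
        match a {
            Animal::Dog(d) => d.speak(),
            Animal::Cat(c) => c.speak(),
        }
    }

    // Open set: any type implementing Speak can go in, including types
    // defined in downstream crates, with no enum to update.
    fn speak_open(animals: &[Box<dyn Speak>]) -> Vec<String> {
        animals.iter().map(|a| a.speak()).collect()
    }

    fn main() {
        println!("{}", speak_closed(&Animal::Cat(Cat)));
        let open: Vec<Box<dyn Speak>> = vec![Box::new(Dog), Box::new(Cat)];
        for line in speak_open(&open) {
            println!("{}", line);
        }
    }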


The whole point is to unbundle what classes are, so there is no direct analogy. Structs are one piece of traditional classes; traits (Rust) or interfaces (Go) are another piece, and delegation a third.
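
A rough sketch of the delegation piece in Rust (hypothetical names; with no inheritance, you wrap a value and forward to it by hand):

    trait Speak {
        fn speak(&self) -> String;
    }

    struct Dog;
    impl Speak for Dog {
        fn speak(&self) -> String { "woof".to_string() }
    }

    // No inheritance: to "extend" Dog, wrap it and delegate.
    struct LoudDog {
        inner: Dog,
    }

    impl Speak for LoudDog {
        fn speak(&self) -> String {
            // Forward to the wrapped value, then decorate the result.
            self.inner.speak().to_uppercase()
        }
    }

    fn main() {
        let loud = LoudDog { inner: Dog };
        println!("{}", loud.speak()); // WOOF
    }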


Well, in Go they are just types.


I agree with you in that I don't agree with the author. I felt that the strongest point of the paper was that it was critical of something widely assumed to be a good thing. Its value was that it challenged an assumption I had never questioned.


The other factor to consider here is that not all specialists are focused on a technology. Some specialists focus on a process or a business domain. If a startup is working on financial software, finding someone who has deep knowledge of finance software might be incredibly valuable. This is especially true if your software needs to meet regulatory requirements like Sarbanes-Oxley.


I assumed the context here was developers. Other kinds of specialists can be INSANELY valuable from the get-go. In fact, one of the first hires I'd recommend for anyone doing an enterprise startup is a domain specialist. These people are relatively inexpensive and add tremendous value by giving product feedback between early iterations. They can also talk to early customers with more credibility and thus help drive adoption of features the customer may otherwise not see much value in.


Actually, I was talking about developers. In my experience working on projects that needed FDA Part 11 compliance, it can be incredibly helpful to have developers who know how to write software that meets the compliance rules from process, validation, and auditability standpoints. I presume the same is true for SOX or HIPAA.


As someone who specialized in Palm OS programming for a few years, I concur that there is a risk in picking the wrong specialty. But I did learn a lot in my days as a Palm OS dev. I earned the right to design and implement some really interesting solutions. I got to solve hard problems. I wouldn't have had those chances if I hadn't proven myself to be very good at what I was doing. On the other hand, my vast knowledge of the API of an obsolete OS isn't helping me much these days. I still keep a copy of one of my Palm OS books as a reminder to never get too tied to one technology.


That's a good point. I might phrase it thusly: in order to build up expertise within a solution space (such as doing cool mobile device applications) you end up having to stick with a particular platform/tool-set long enough so that your knowledge of the platform isn't a barrier to solving complex problems.

It sounds to me like you more or less ended up doing the correct thing.


Agreed. The Ruby expert will probably also be proficient in HTML, JavaScript, and CSS. You need a range of skills to be an expert in something.


It's true that a neurologist wouldn't work on your knee, but to become a neurologist, you would have to study the knee in medical school. Even specialists need proficient knowledge of other areas.


There are many interesting ways that the Dreyfus Model of Skill Acquisition could be applied to the ideas in the blog post. There should be a few areas where you are an expert, a handful of things you know at a proficient level, and many things you know at a competent level. And you should always be expanding the number of things in which you are an advanced beginner.

I also like what you said about how your knowledge of music was useful to your programming. Being well-rounded individuals can make us better specialists.


I agree that being a specialist is where the money is. What I was trying to show in the blog post was that the only way to become a well-paid specialist is to improve your skills in the surrounding technologies. I doubt there are many developers making tons of money just because they know Ruby really well. If they are making good money as a specialist, it is because they know a lot about a framework (probably Rails), a communication protocol (HTTP), a design language (CSS), a plugin ecosystem (gems), and an editor. If you want to be a great specialist, you have to know the surrounding technologies as well.


Knowing a lot about TCP as well would make you even more of a generalist, and also more valuable.


Great points. One of the things that I tried to point out is that the breadth of your skills specifically supports your specialty. A company should appreciate your breadth of skills because it makes you better at your specialty.

