
Could you give me some examples of those cycles? Genuinely curious what you mean.

I started my career just a couple of years ago. In my first company, they used 10+ year old tooling and imo it was terrible. A very old legacy mess of a monolith that made adding features pure torture. Trunk-based development with a "who needs tests" mindset, resulting in horrendously buggy code and a 50:50 chance that pulling the newest version would break something. Several files had over 10k lines of code and deep nesting, an absolute nightmare. And absolutely no mindset for performance: they wrote quadratic-complexity code in the backend to fetch data, because hash-based data structures already seemed too advanced to many of my coworkers, and then they wondered why their frontend was so terribly slow. Not that they had a lot of pressure to deliver a great product, because they are/were the market leader in their B2B niche. I left within a year.
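
To make the quadratic-fetch point concrete, here is a minimal sketch in Java (the records and names are hypothetical, not the actual code from that company): a nested-loop lookup versus building a hash index first.

    import java.util.*;
    import java.util.stream.*;

    // Hypothetical domain objects; only the complexity argument matters.
    record Customer(long id, String name) {}
    record Order(long customerId, double total) {}

    class JoinExample {
        // Quadratic: for every order, scan the whole customer list.
        static Map<String, Double> totalsPerCustomerSlow(List<Customer> customers, List<Order> orders) {
            Map<String, Double> totals = new HashMap<>();
            for (Order o : orders) {
                for (Customer c : customers) {          // O(orders * customers)
                    if (c.id() == o.customerId()) {
                        totals.merge(c.name(), o.total(), Double::sum);
                    }
                }
            }
            return totals;
        }

        // Near-linear: index customers by id once, then each lookup is O(1).
        static Map<String, Double> totalsPerCustomerFast(List<Customer> customers, List<Order> orders) {
            Map<Long, Customer> byId = customers.stream()
                    .collect(Collectors.toMap(Customer::id, c -> c));
            Map<String, Double> totals = new HashMap<>();
            for (Order o : orders) {
                Customer c = byId.get(o.customerId());  // hash lookup instead of a scan
                if (c != null) {
                    totals.merge(c.name(), o.total(), Double::sum);
                }
            }
            return totals;
        }
    }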

Now I'm working in a company that uses all the latest GitLab CI/CD shenanigans, code reviews, and heavy use of unit, integration and end-to-end tests. Everything is hosted in the cloud with a microservice architecture. We actually need it, as we scale to millions of customers and have performance and reliability requirements.

The difference is not just the tech stack, which at the old place was horribly outdated and imo extremely tedious to work with; the mentality is also completely different.

In the first company, you couldn't change anything; there was a strict hierarchy and everything stayed as it was because "it works". You totally got the feeling that there were some older people up in the hierarchy who were way too lazy to learn new things and didn't want to endanger their standing in the company. When I left, I spoke with the Head of HR and he told me that basically everyone who leaves does so for the reasons I mentioned. So that company drives away motivated talent with its crap mentality. Pay wasn't very good either, but a first job is a first job after all.

Now mind you, both companies are a couple of decades old, but imo one always kept up and the other didn't. Both companies have 10+ year seniors. In my experience, the people in the current company are way more competent and excited about work. They are much more fun to work with, I learn more, and I absolutely don't get the feeling the tooling is reinvention for its own sake. It's improved in every aspect I could think of and makes the development experience much better.



I think the newbie coming to the fancy-pants Kubernetes cluster company in 5 years, when you and the other engineers have moved on, will have a completely new level of headache-inducing mess to deal with, compared to what you had at the boring company as a newbie.

I started out my career thinking best practices with agile, code review, CI and "shared ownership" and stuff were the way to go.

But in the end I like the old siloed do-your-stuff way more. It works and gives you actual ownership and freedom. It turns out that it is easier to cooperate when you can say no.


I was thinking the same. I feel like I've seen the cycle with my own eyes at this point. Projects almost always seem fresh and good at the beginning, and then they become monsters after a while, seemingly no matter what you do.


It is different this time!

I mean GP could have been at a genuinely bad place with bad practices.

It could also be that he was just an idealistic newbie trying to give advice to hardened experts who rightfully ignored it. The HR boss agreeing does not tell us anything; they are buzzword-driven.


> Could you give me some examples of those cycles? Genuinely curious what you mean.

A large part of the work surrounding the Docker ecosystem has simply been re-creating features that were already around in the JVM ecosystem 10 years ago. In the same decade, we also had the move from server-rendered webpages, to browser-rendering, back to server-rendering.


OK, just to be clear: even if the constant stack switching can be very tiring, you have to do it. The alternative is deep stagnation, which is much worse. I am also very much pro everything that raises quality, like CI, but remember that some groups were doing it in the 1970s.

I started programming with BASIC and then DOS and assembly. I have very fond feelings for, and deep knowledge of, both of them, but the UNIX generation rightly looked down on them as two Turing-tarpit hellholes.

Onward to C, with Mix, Watcom and DJGPP. Better programs, but you live with some stupid inefficiencies that wouldn't fly in x86 asm.

Onward to Win3.1 and Win32. The end user experience is much better, but as a programmer you now have to accept control by the OS over your work. You can't just e.g. write to VRAM anymore. First serious dark clouds appeared for me when I realized Microsoft cynically used us all to extinguish all competition. Politics had entered my IT life.

Then came the web. In one way it was glorious, but programming it in JavaScript was a serious hellhole. jQuery brought some sanity, but the user-friendliness of Windows was almost impossible to reach.

Server-side was Java, which was dog slow until HotSpot appeared, and it eats memory like there is no tomorrow. Sun dictated the very shape of your program. There was a war going on between EE, which was horribly verbose, and Spring, which was grassroots and looked down upon by the architects as if it smoked weed or something. Whichever camp you chose, pain would follow. Or you could go to the PHP camp and spend more time debugging than programming.

There was some Python in my life here too: good, but even more dog slow than Java.

Then Node.js. If you thought Java gobbled up memory, you'd just die working with that abomination. End-user usability had still not recovered from the Win95 days (it never did). You had no type safety with JavaScript. In fact, every decent tool and technique was sacrificed on the altar of having the same language on the back end and the front end. Then came frameworks like Angular, whose v2 managed to commit ecosystem suicide, and React. Meanwhile transpilers, bundlers, etc. managed to undo much of the no-need-to-compile appeal of JavaScript.

In the mobile world, two massive companies appeared, and their app stores killed any liberty of publication.

There is more, but I ran out of time ;-)

All of this is quite ranty, partially deserved, but there is also quite a lot of good in here. Even so, programming for me was most fun on DOS, and the user experience peaked from Win95 to XP.


> I am also very much pro everything that raises quality, like CI, but remember that some groups were doing it in the 1970s.

CI is risky, because it is a great micromanagement tool, just like ticket systems for non-bug tickets. I don't think it is strategic to lay traps for ourselves.

I believe one should have a setup such that "good" management won't mess our stuff up, rather than being dependent on having "great" management.

It is like agile, which only works with great programmers and managers but messes things up for the rest of us. But CI is not nearly as bad or dangerous, and it has benefits if kept simple.


Let me describe to you a system I've seen myself. I think it was created around 1985, in Cobol, by one company, for only that company. Afaik, it successfully runs today.

At the start, there are screens for what we today call issues: 80x25 terminals that input, edit, prioritize and assign changes. Nightly batches provide management views of what is being done where.

Other screens let you check code files in and out, tracking history, linking to issues and people, and managing which versions are on local dev, preprod and prod. Nightly batches run the compiler for changes.

Promotion to preprod requires quality checks, where e.g. no new deprecated calls are allowed. Promotion to prod requires a manager sign off, where the input from the test team is validated.

I had not seen this level of integration again until GitHub matured. In some ways GitHub is superior; in other ways the old system's deep integration with the company's procedures and tech choices is still superior.

That's more than three decades, maybe even four, that this system has paid off. It survived the departure and replacement of a whole generation. It survived all attempts at managerial reorgs, and thank god for that. It came from a time when computers were so expensive that having the manpower and long-term vision for this was a good choice, even for only one company. Unfortunately, it also makes new people relearn everything they know about version management.


Yes. CI systems can be beautiful. And in some companies you want some sort of formal sign-off process. I am not dogmatically against CI.

> It survived all attempts at managerial reorgs, and thank god for that

The problem comes when it is cargo-culted and forced, I guess.

The temptation for some manager to rewrite the system you describe in Groovy and use Jenkins or integrate it into Jira! Imagine the possibilities of unnecessary work and complexity. A big opportunity cost.


There is zero risk to CI as long as you have a proper branching process.


> I started my career just a couple of years ago. In my first company, they used 10+ year old tooling and imo it was terrible. A very old legacy mess of a monolith that made adding features pure torture. Trunk-based development with a "who needs tests" mindset, resulting in horrendously buggy code and a 50:50 chance that pulling the newest version would break something.

The monkey's paw curls.

My story is the complete opposite. Three years ago I joined a startup. We use relatively new Java, branches everywhere, a microservice architecture, 85%+ branch coverage, integration tests, end-to-end tests, performance tests, you name it. CI/CD integrated and self-hosted. Heaven, right?

It was an absolute shit show. Because of the microservice architecture, you had no way of running the 50+ necessary microservices on your machine.

Tests are mandated but brittle. Mocking libraries break whenever you refactor a method. Integration tests are flaky and inconsistent (they behave differently locally vs remotely). The end-to-end tests take hours to complete. There are 20 different environments, each with a different configuration, each divided into dev/qa/prod.

In the time I was there, we didn't have two successful deploys of the main branch. But you have to keep adding features because one customer said they might want them. Oh, and security found that a library is 20ms too old; it has to be replaced asap, despite the convoluted nest of dependencies between microservices.

It had good pay though. It taught me to really hate mocks, and that tests need to be applied at the right level.
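
To illustrate what "the right level" means to me, here is a minimal, entirely hypothetical sketch (JUnit 5 + Mockito, made-up class names, not our actual code). The first test pins implementation details with a mock, so harmless internal changes break it; the second checks observable behaviour through a small fake.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    import java.util.*;
    import org.junit.jupiter.api.Test;

    interface PriceRepository {
        Optional<Double> findPrice(String sku);
    }

    class CartService {
        private final PriceRepository prices;
        CartService(PriceRepository prices) { this.prices = prices; }

        double total(List<String> skus) {
            return skus.stream()
                       .mapToDouble(sku -> prices.findPrice(sku).orElse(0.0))
                       .sum();
        }
    }

    class CartServiceTest {
        @Test
        void brittleMockPinsTheImplementation() {
            PriceRepository repo = mock(PriceRepository.class);
            when(repo.findPrice("a")).thenReturn(Optional.of(2.0));
            when(repo.findPrice("b")).thenReturn(Optional.of(3.0));

            assertEquals(5.0, new CartService(repo).total(List.of("a", "b")));

            // This is the brittle part: the test now also asserts *how*
            // CartService talks to the repository. Any internal change
            // (caching, batching, retries) can break it even when the
            // returned total is unchanged.
            verify(repo, times(1)).findPrice("a");
        }

        @Test
        void behaviourLevelTestWithAFake() {
            // A tiny in-memory fake instead of a mock: it only cares about
            // the observable result, not which methods were called.
            PriceRepository fake = sku -> Optional.ofNullable(
                    Map.of("a", 2.0, "b", 3.0).get(sku));

            assertEquals(5.0, new CartService(fake).total(List.of("a", "b")));
        }
    }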


Microservices can't have dependencies between each other otherwise they aren't microservices.

I think the main issue is just the other engineers you are working with. If they are bad, they will screw anything up.


A microservice is a module. A module that got separated by a network layer, most often due to somebody's momentary lapse of judgement.

It's encouraging that you forbid the next person from falling into the identical trap (you effectively say: this kind of remote module must not use further remote modules). Alas... they can, and they will.


> Microservices can't have dependencies between each other otherwise they aren't microservices.

See Hyrum's law:

    Put succinctly, the observation is this:

    With a sufficient number of users of an API,
    it does not matter what you promise in the contract:
    all observable behaviors of your system
    will be depended on by somebody.

One example: we bumped Spring from 2.1 -> 2.4 (not actual version numbers). Harmless, no? What's the worst that can happen?

Failure when doing some but not all operations.

Why? Because some Python/Java microservices down the operation chain expected null fields to be omitted, and the default behavior had changed between Spring versions to write null fields instead. The failure only occurred in the services that relied on null fields being omitted. The fix was easy, but finding the bug was difficult.
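
For illustration, one way we could have avoided relying on the framework default is to state the null-handling contract explicitly on the payload. The annotation and property below are standard Jackson / Spring Boot; the DTO itself is made up.

    import com.fasterxml.jackson.annotation.JsonInclude;

    // Omit null fields for this payload regardless of the global default,
    // so a Spring/Jackson upgrade can't silently change the wire format.
    @JsonInclude(JsonInclude.Include.NON_NULL)
    public class CustomerDto {
        public String name;
        public String middleName; // often null; consumers treated "absent" and "null" differently
    }

    // Or set it globally in application.properties:
    //   spring.jackson.default-property-inclusion=non_null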


How would you design a microservice that does not depend on another?


Microservice 1 <-> on call engineer copy pasting <-> Microservice 2


Send a JSON package with some HTML and dimensions, get back a JSON package with links to that HTML rendered as JPEGs at the requested dimensions.
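
Roughly, the contract might look like this (purely illustrative names, sketched as Java records):

    import java.util.List;

    // Request: the HTML to render plus the sizes wanted.
    record RenderRequest(String html, List<Dimension> dimensions) {}
    record Dimension(int width, int height) {}

    // Response: one link per requested size, pointing at the rendered JPEG.
    record RenderResponse(List<RenderedJpeg> images) {}
    record RenderedJpeg(int width, int height, String jpegUrl) {}

The point being that the service needs nothing from any other service to do its job.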


At some point you are going to have another service that uses this HTML->JPEG service though. That would be a dependency, at least in my view (ie, if the HTML->JPEG service goes down, something else will break).

Or are all microservices user facing?


What you are describing at the old company is not a failure of old tools, but rather a failure of management/employee self-management at that company.

Any tool can be used to do good or evil. They were using old tools to do evil things-- namely, writing bad code.

The only caveat here is that if I had to maintain bad bash scripts or bad koobieboobie cicd automated shlalala, I'd always choose bad bash scripts, as the blast radius is smaller and easier to reason about.


> In my first company, they used 10+ year old tooling and imo it was terrible. A very old legacy mess of a monolith that made adding features pure torture.

Everything becomes like that, legacy, torture, mess. New things come along, clean and new, solving some problem. The mess disappears from one place but starts popping up somewhere else, though still better than before, you think. Wait 10 years and you've got a completely different mess, lots of the people who built it have now left, few know it all but have stopped caring. A new you joins the group. Sees a crazy, unwieldy legacy system. Sees new technology that solves these problems. Starts over.


>Everything becomes like that, legacy, torture, mess.

>few know it all but have stopped caring

If they had kept caring (and had been allowed to, by being listened to; maybe that's why they stopped), it might not have turned into a mess. I know (of) 15-year-old systems that only got better with time, thanks to lead devs playing both as conductors and as musicians.



