There are a bunch of comments saying that Google is just like any other big tech company and that the exciting engineering bit has gone. My experience is only from the last 2.5 years, but I've got a slightly different take.
Engineering from >10 years ago seems like it was a wild west. Some truly stunning pieces of technology, strung together with duct tape. Everything had its own configuration language, workflow engine, monitoring solution, etc. Deployments were infrequent, unreliable, and sometimes even done from a dev's machine. I don't want to disparage the engineers who worked there during that time; the systems were amazing. But everything around the edge seemed pretty disparate, and I suspect that gave rise to the "promo project" style of development meme.
Nowadays we've got Boq/Pod, the P2020 suite, Rollouts, the automated job sizing technologies, even BCID. None of these are perfect by any means, but the convergence is a really good thing. I switched team, PA, and discipline 6 months ago, and it was dead easy to get up and running because I already knew the majority of the tech, and most of that tech is pretty mature and capable.
Maybe Google has become more like other tech companies (although I doubt they have this level of convergence), but I think people glorify the old days at Google and miss that a lot of bad engineering was done. Just one example, but I suspect Google has some of the best internal security of any software company on the planet, and that's a very good thing, but it most certainly didn't have that back in the day.
I left over ten years ago and it's hard to understand that perspective. Back when I was an SRE (~2006 to 2009) there were only one or two monitoring systems (which didn't overlap, so you could argue there was one) and a handful of config languages. Compared to anywhere else Google had military levels of discipline and order.
> Deployments were infrequent, unreliable, and sometimes even done from a dev's machine.
Deployments were weekly and done from a dev machine because that way someone was watching it and could intervene in case of unexpected problems. Some teams didn't do that and tried to automate rollouts completely. I could always tell which products weren't doing enough manual work because I'd encounter obviously broken features live in production, do a bit of digging and discover end user complaints had been piling up in the support forums for months. But nobody was reading them, and the metrics didn't show any problem, and changes flowed into prod so the team just ... didn't realize their product wasn't working. There's no substitute for trying stuff out for yourself. I encounter clearly broken software that never seems to get fixed way too often these days and I'm sure it's partly because the teams in question don't use their own product much and don't even realize anything is wrong.
I think the state of the art has moved on quite a way from this. I understand the point of view that someone should be watching a release, but the alternative is not "no one watching a release"; it's that binary releases should be no-ops. With feature flagging, the binary release changes nothing by itself, so having no one watch it is not a problem.
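To make the no-op-release idea concrete, here is a minimal sketch of flag gating in TypeScript. This is a generic illustration, not any specific Google flag system; the FlagClient interface and the flag name are made up.

```typescript
// The new code path ships dark (compiled in but disabled), so the binary
// release itself changes no behavior. The later flag flip is then its own
// small, watched, reversible change.

interface FlagClient {
  // Reads the current value of a boolean flag, falling back to a safe default.
  isEnabled(name: string, defaultValue: boolean): boolean;
}

function handleRequest(flags: FlagClient, query: string): string {
  // Hypothetical flag name; defaults to the old path at release time.
  if (flags.isEnabled("use_new_ranking", false)) {
    return newRankingPath(query); // dark-launched until the flag flips
  }
  return oldRankingPath(query); // unchanged behavior when the binary rolls out
}

function oldRankingPath(query: string): string {
  return `old:${query}`;
}

function newRankingPath(query: string): string {
  return `new:${query}`;
}
```

The binary rollout and the behavior change then become two separate, individually observable (and individually revertible) events.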
Additionally, rolling out from a dev machine brings so many risks – security, reproducibility, human error, and so on.
I'm glad this is not the way things work anymore, and for the most part things are more reliable as a result.
Well, to be clear "rollout from a dev machine" meant just that the rollout controller ran locally, the actual software being released was built by a release pipeline, placed into signed packages and so on. So it was all auditable. The people doing the rollouts were those who had production administrator access anyway for on-call troubleshooting and debugging and permissions were enforced, so there was no security impact. And the same process was used for flag flips so just putting everything behind flags didn't make much difference.
It doesn't sound like what's done now is a whole lot different tbh.
Although there were some rough areas, Google has been considered to have pretty advanced software engineering for 20 years or more. There was plenty for a new engineer to boggle at even then.
On the other hand, someone who started in 2015 missed out on the years when Google was mostly considered to be the good guys and given the benefit of the doubt. That’s about when the culture started turning against “big tech” in general (rather than specific companies like Microsoft).
Serious question: what king did Google kill? They really didn't take down a big player in an existing industry, they started by being very good at search and displaced other search companies at a time when internet search wasn't a big business.
I don't have a great answer here, but one of the things that I think caused public opinion to shift is when people started realizing how truly massive they had gotten. It's possible to avoid Facebook, Apple, Amazon, and Netflix, but by 2015 it started to become nearly impossible to avoid Google or their products.
Not sure if that's the reason public opinion shifted. I'd started to get worried around the same time I dropped my Facebook account, so ~2011. I have successfully managed to avoid Facebook products and tracking, but I had to give up on avoiding Google; it's simply not possible unless one is willing to make extreme sacrifices. So I kinda gave up, since it was starting to become digital masochism. There's a very bitter irony in me typing this on a Pixel 8, but I've just accepted that there isn't any avoiding Google tracking my online life, so I've stopped caring as much as I used to.
Google was the company that kept making the web better in every way. Amazing search, endless services each with solid JSON APIs, a fast multi-process web browser, & investment in the web as a whole.
Google killed the king, and the king was the desktop. The king was apps. Microsoft seemed omnipotent & in total control, and the rise of the web isn't totally Google's but they sure did a lot and they sure rode that wave.
My personal feeling is that Google lost the ball in the G+ era. Up until then, it felt like Google understood their role was to help others create value: they had to offer APIs and platforms to let other developers onto the platform, let other people expand the value proposition. G+ was an about-face, a totalizing product push, and one that offered nothing to the world. Essentially no API was offered. Google wanted to make G+, they wanted to run it, and if you wanted to use it, you needed to use their client and your account with them.
Whereas in the past, with efforts like Buzz, they were trying to expand the protocols & value of the web as a whole. Once they gave up on being a platform & tried to be a product company, it was much harder to believe in the futures they were trying to sell.
I don't really understand this. Don't you think search, mail, docs, maps, adsense, were products? What were the API platforms they were making up until G+?
To my recollection, G+ was actually pretty good at launch. It just was killed (or hobbled, for future killing) incredibly early for a network-effects, non-first mover product.
I'd disagree with parent a smidge and say Google turned evil when it became a platform.
When it was a disparate group of products... incentives were generally aligned with the users of those products.
When they began to look at themselves as a platform company (Google search-on-everything, Android, Chrome), that fundamentally broke and they started making sound-platform-business but user-hostile decisions.
So I guess the moral of that story is that platforms will make you rich, but you have to be very careful to enunciate your value priorities clearly to users. (E.g. Apple: "privacy"; Google: "openness"?)
I don't see Google Cloud as "giving up on being a platform." It's a different kind of platform, though!
Early Google initiatives also had a high failure rate. (Buzz, for example.)
My guess is that there are still Googlers trying to improve the web. Young people are idealistic, so why wouldn't they? But nowadays it's unlikely to be successful unless it's relatively uncontroversial infrastructure. (Some examples might be things like certificate transparency and QUIC, which became HTTP/3.)
Higher-profile initiatives to really change things often fail because they raise deep suspicion and resistance. They're certain to be misinterpreted in the worst possible way.
Also, significant changes affect vested interests. Some of those vested interests are internal.
> one of the things that I think caused public opinion to shift is when people started realizing how truly massive they had gotten.
I'm personally more of the opinion that Google has caused enough serious issues for enough people - with famously no way to get the issues resolved - that they themselves seeded or caused the negative public opinion.
Combine that with Google's behaviour of clearly doing things in their own interest even when not to their users' benefit (the manifest v3 proposal is a good example), and many people are like "screw Google". ;)
I don't think people understand what manifest v3 is trying to solve. It's a good example of how a worthwhile effort to make the Internet less terrible can be misinterpreted.
Manifest v3 is an effort to make the browser slightly better at something it is, importantly, already fine at, at the cost of making it worse at serving the user. The internet is increasingly user-hostile, and manifest v3 makes it harder to fight back.
More explanation from you on this could help. I don’t even know where to begin - everything I search just details how v3 greatly limits the efficacy of ad blockers.
The problem they’re worried about is untrustworthy browser extensions that have broad permissions to do harm. There are over 100,000 extensions and from a security standpoint, not having good-enough sandboxing is a vulnerability.
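For concreteness, this is roughly what the MV3 model looks like using Chrome's declarativeNetRequest API (a real API, though this rule and the domain are made up for illustration). Instead of the extension running code on every request, as the MV2 blocking webRequest model allowed, it registers static rules that the browser applies itself, so the extension never sees the traffic:

```typescript
// In an MV3 extension's service worker; needs the "declarativeNetRequest"
// permission in manifest.json. The extension declares what to block up
// front instead of inspecting requests with its own code.
const dnr = chrome.declarativeNetRequest;

dnr.updateDynamicRules({
  removeRuleIds: [1], // replace any earlier version of this rule
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: dnr.RuleActionType.BLOCK },
      condition: {
        urlFilter: "||ads.example.com^", // made-up ad host for illustration
        resourceTypes: [dnr.ResourceType.SCRIPT, dnr.ResourceType.IMAGE],
      },
    },
  ],
});
```

The security upside is that an extension on this path can no longer read your traffic; the cost, and the source of the ad-blocker complaints, is that rule counts are capped and declarative rules are less expressive than arbitrary code.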
> In reality, Manifest V3 was meant to solve a real problem — and to do so for the right reasons. I know this because about eight years ago, we set out to conduct a survey of the privacy practices of popular browsers extensions. We were appalled by what we uncovered. From antivirus to “privacy” tools, a considerable number of extensions hoovered up data for no discernible reason. Some went as far as sending all the URLs visited by the user — including encrypted traffic — to endpoints served over plain text. Even for well-behaved extensions, their popularity, coupled with excessive permissions, opened the doors for abuse. The compromise of a single consumer account could have given the bad guys access to the digital lives of untold millions of users — exposing their banking, email, and more.
Maybe they could have avoided controversy by grandfathering in a few popular extensions and watching them closely?
Every bad thing has to have some nominal selling point as the way to get everyone to take it.
The mv3 sales pitch is to remove the ability for plugins to harm users.
It does do that, but:
1: Only by also removing plugins' ability to help users
2: and giving Google themselves, and anyone else Google approves of (entities who pay Google or who have other influence, like governments), the very same ability to work against the user that they took away from everyone else. I.e., they control the entire browser, let alone a plugin. They literally control what you can even see at all. You search and they choose whether something is in the results. You search with not-Google and they still control whether the DNS resolves anything. You use other DNS and they still control whether the SSL is valid. It doesn't matter that 11 techies know how to overcome all that; they still controlled what 7 billion people saw, and thus what they were allowed to even think, minus a handful of impervious super geniuses like you and me.
3: There are infinite possible ways to address the supposed problem of harmful plugins, just as there are infinite possible ways to attack any problem. Even if one decides to agree that it was necessary to do something about the problem, it was not necessary to do this about the problem.
There used to be a theory, which apparently doesn't exist any more, about the appearance of impropriety. The idea goes that in any situation where someone has power over another, especially over the public at large (like a judge or a politician), everyone would seemingly have to simply trust that they are acting with integrity. But in fact, no, they don't have to simply trust: the appearance of impropriety is damning enough all by itself. Since no one can prove what someone was thinking, and the position carries enough responsibility and consequence, the office holder doesn't get to say "it just looks like I awarded this contract to my brother because he's my brother; that's just a coincidence." That might be true in the absolute sense, like in physics where technically anything is literally possible. But since there is no way to disprove it, and the bad effects are bad enough, we don't have to prove it. The appearance of impropriety is enough, because anyone holding a position like that already knows they have a responsibility to act with integrity and not allow any possible question about it. They already know that they can't just give a contract to a family member. And so doing it anyway, expecting to be able to excuse it, is its own form of impropriety, regardless of what quality work the brother would do or what the alternatives were.
Google removing utility from the user and granting it to themselves is way way more than merely "the appearance of impropriety".
It doesn't matter what harms some plugins have done.
> Serious question: what king did Google kill? They really didn't take down a big player in an existing industry, they started by being very good at search and displaced other search companies at a time when internet search wasn't a big business.
The answer is easy - Microsoft. They didn't directly take on MS's business, but in the general understanding of what was the big tech company, the one that gobbled up the best engineers, the one that startups were afraid would decide to compete with them - that was Microsoft in the late 1980s and 1990s, and it started shifting to Google.
And even though they didn't directly compete with them on being everyone's OS, they indirectly competed with them by turning the browser itself into, effectively, the OS everyone uses. They killed IE with Chrome, they made which OS you use far less relevant, they took over Mobile OS (shared with Apple, of course), and they are now competing on Cloud.
There is no question that what Google was from 2005ish to 2020ish, Microsoft was before that. You can even read pg articles about this exact thing.
"kill" is not a good metaphor in tech/markets, because established things and companies actually goes away.
"Dethrone" is better, as the newer thing at some point becomes equal, and then dominant. But the old still exists, potentially staying in terms of absolute market value, but minority in terms of relative market value.
During the lifetime of Google: mobile has dethroned PC, video streaming has dethroned TV, SaaS over pay-once software, online advertising over physical, etc. Google has been a force in all of these (and more) - though not alone of course.
No, that culture started turning against big tech when Microsoft and Oracle started very brazenly abusing their market domination in the late 90s.
The culture turned on Google around the same time Google lost their innocence and dropped the "Don't be evil" clause (note the dropping of that clause was not the cause, just one of the symptoms).
Back in the 90s, at least in the UK, it wasn't called "big tech", and it wasn't a major part of our culture: a family home might have one personal computer if they were well off, but it wasn't all that important.
I suspect that even today you'd have a hard time finding a random person on the street who even knows that Oracle is a company and not a character from The Matrix or ancient Greek mythology. And if you tell them they bought Sun they'll think you mean the newspaper, if you talk about Java they'll either think you mean the island or are talking about a brand of coffee.
Back in 2018 [1], around the same time there was a lot of moving about and restructuring. I think this is about when Google search started going downhill as well.
Again, the phrase was not dropped. You might notice that your link is titled "Google removes nearly all mentions of don't be evil from its code of conduct". (By which they mean 3 mentions.)
But as that article notes, they didn't drop the phrase. It's there now. It's always been there. There was never a time when they dropped it.
From my experience crossing eras; the tools have improved, and there are still some amazingly brilliant people, but the amount of overhead and the ratio of top level, passionate builders has dropped meaningfully. I have had a priority project stuck in legal hell for six months because it’s just a morass of logistics. The project is benign but it doesn’t matter. We are ossified.
I look at it as somewhat inevitable considering the path the company has taken, but it is certainly different. We are cranking out money and that’s fine, but it is a change.
"Bad engineering" in adhoc ways tends to mean new ideas are being explored. Sophisticated deployments, logistics, procedures, tends to mean you're optimizing or extending existing system.
That's not to disparage the latter. making things work at scale is hard engineering. But when people praise the glory days, it may be a preference for working on new ideas in small projects.
There are most definitely trade-offs. Getting a small project off the ground now is hard because the size of the organisation puts so many requirements on you.
The way I like to look at it is that small things take a while, but big things happen remarkably quickly. For example, rolling out a "hello world" service might take a few days, but having that service serve 1M QPS is pretty much free (in terms of effort). At my previous place a new service might have taken an hour to set up a deployment for, but having it serve 1M QPS would have required overhauling several aspects of our infrastructure over months.
On the flip side, clueless execs are killing tools like Code Search because they don't understand the value. They're willing to lay off a team to save money even if it reduces the productivity of 20,000 employees by 1%.
What's the point if the capacity to deploy good products has been lost in the process? What if what you see as unreliable was at the heart of what enabled great things to happen? This is a typical pitfall of a programmer-centric organization: things get more reliable but more frozen, and innovation dies.
Boq/Pod: canonical service frameworks and configuration automation all-in-one system. Boq and Pod give you blueprints for server configs, deployment, server discovery, environments + release pipelines, monitoring, alerting, canary analysis, functional testing, integration testing, unit testing and presubmit, server throttling, etc etc all for free with automated setup.
P2020 + Rollouts: This is for intent-driven deployment, where deployment configuration for jobs, networks, pubsub topics, and other resources is declaratively checked into source, and the Annealing system automatically finds diffs against production state and resolves those diffs (aka a rollout); see the generic sketch after these descriptions.
Automated job sizing: load-trend driven automated capacity planning. Separate from autopilot, which is a more time-sensitive autoscaler. This will actually make configuration changes based on trends in traffic, and request quota for you automatically (with your approval).
BCID: this is for verifiable binaries and configs in prod. It requires two parties to make source changes, two parties to make config changes, and two parties to approve non-automated production changes, and only verified, checked-in binaries and configs can run in prod, not stuff you build on your desktop machine.
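To make the intent-driven idea concrete, here is a generic sketch of the reconcile pattern in TypeScript. It's the same loop popularized by Terraform and Kubernetes controllers, not Annealing's actual API; JobSpec, loadCheckedInIntent, and the other names are hypothetical.

```typescript
// Desired state lives in source control; a controller repeatedly diffs it
// against what's actually running and turns each diff into a rollout step.

interface JobSpec {
  name: string;
  replicas: number;
  binaryVersion: string;
}

function loadCheckedInIntent(): JobSpec[] {
  // In reality: parse the declarative configs checked into the repository.
  return [{ name: "frontend", replicas: 30, binaryVersion: "v42" }];
}

function readProductionState(): JobSpec[] {
  // In reality: ask the cluster manager what is actually running.
  return [{ name: "frontend", replicas: 30, binaryVersion: "v41" }];
}

function applyChange(desired: JobSpec): void {
  // In reality: kick off a canaried, rate-limited rollout, not a blind write.
  console.log(
    `rollout: ${desired.name} -> ${desired.binaryVersion} x${desired.replicas}`,
  );
}

function reconcile(): void {
  const desired = loadCheckedInIntent();
  const actual = new Map(
    readProductionState().map((j): [string, JobSpec] => [j.name, j]),
  );
  for (const spec of desired) {
    const live = actual.get(spec.name);
    const differs =
      !live ||
      live.replicas !== spec.replicas ||
      live.binaryVersion !== spec.binaryVersion;
    if (differs) applyChange(spec); // each diff becomes a rollout
  }
}

reconcile(); // prints: rollout: frontend -> v42 x30
```

The appeal is that "what should be running" is reviewable, versioned, and diffable like any other code, and drift gets corrected instead of accumulating.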
> Boq/Pod: canonical service frameworks and configuration automation all-in-one system. Boq and Pod give you blueprints for server configs, deployment, server discovery, environments + release pipelines, monitoring, alerting, canary analysis, functional testing, integration testing, unit testing and presubmit, server throttling, etc etc all for free with automated setup.
These are well-lit paths that try to make it easy to take a server you want to run in production and give you reasonable releases/monitoring/canarying for free (e.g. no one should have to configure the thing that stops pushes which are causing crashes of the updated tasks).
And even more importantly, the teams responsible have (at least in the past) done a pretty decent job of keeping things up-to-date and automatically moving you onto newer versions of systems. My services on Boq/Pod have been moved to new rollout-management platforms multiple times and I didn't have to do anything besides learn the new UIs.
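A minimal sketch of what an automated canary gate like that can look like, in the same hedged spirit (this is generic, not Google's actual canary-analysis system; all names and the threshold are made up):

```typescript
// The rollout pushes to a small canary slice first; if the canary's error
// rate is meaningfully worse than the stable baseline, the push is halted
// automatically, with no per-service configuration required.

interface TaskStats {
  requests: number;
  errors: number;
}

function errorRate(stats: TaskStats): number {
  // In practice these numbers would come from your monitoring system.
  return stats.requests === 0 ? 0 : stats.errors / stats.requests;
}

function canaryVerdict(canary: TaskStats, baseline: TaskStats): "proceed" | "halt" {
  const tolerance = 0.01; // allow 1% extra errors; an arbitrary illustrative choice
  return errorRate(canary) > errorRate(baseline) + tolerance ? "halt" : "proceed";
}

// Example: canary at 5% errors vs baseline at 0.5% -> "halt"
console.log(
  canaryVerdict({ requests: 1000, errors: 50 }, { requests: 100000, errors: 500 }),
);
```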
Can't speak for others, but BCID is this thingy: maybe your team has some petabytes of quota in the datacenter, but as a software engineer, you can no longer test your number-crunching data-processing program by running it there before checking it into the repository. Instead, you'll have to check it in, then run it, and then check in the fixes you found.
So sure they can very reliably and safely deploy... It just doesn't feel like they've actually deployed anything interesting in 5+ years (except maybe some AI stuff). From the outside it looks like the company is perpetually refactoring
history's biggest yakshave...? not surprising for a programmer-run company
I'm sure it still matters to bring down the power and server bills. But one can't help but feel like they could be doing much more.
Boq/PoD merely standardizes production deployments. You can choose not to use it, and end up having to redo everything yourself. The vast majority of services use it perfectly fine.
Unified rollouts, unified versioning, universal dashboards, security compliance, standardized quota management, standard alert configs. It's opinionated, but I can drop into any team and hit the ground running. I don't want to learn your custom dashboards doing the exact same thing with different names.
The issue with PoD is that it's a great concept and implementation that's tight on resources, and doesn't have much of a plan to expand beyond its current paradigm. The P2020 team deserves way more recognition for all the work they have done.
I've done the path from SWE to SRE and back to SWE. I was always happy to do production support and diagnose and fix production problems, so I naturally moved to SRE which is always looking for people.
It was a real mistake. SRE is hugely stressful and really unrewarding compared to SWE. Yes, you learn some skills and get some occasional glory, but year after year of fighting fires really didn't build any long-lasting career.
After switching back to SWE I've finally got promotions and pay rises again, as well as a good night's sleep and much less stress.
And for anyone still considering SRE: when interviewing, ask how incentives/bonuses/promotions work for software engineers and how that compares to SREs. A lot of promotable activities for SWEs (shipping new things constantly) have negative value for an SRE role, since continually changing infrastructure ensures on-call never develops mastery of those systems.
Back to the OP, I raise a glass to your sabbatical. Most SREs end up needing a healing period from repetitive stress injury (AKA burnout).
If I may offer some completely unsolicited advice, don't put too much pressure on yourself in the next few months. People who gravitate towards SRE work tend to thrive under short-term ambiguity and emergency/urgency. However, long/medium term ambiguity without a clear productive goal can quickly feel like a crisis. OP mentions this in closing, so I'm rooting for them to rest and "sit still" for a bit.
I've been working as an SRE for the last decade, running critical stuff like authentication (which almost all services depend on). I'm a software engineer.
I cannot disagree more: our team is healthy, oncall is quite a fine activity to do (and compensated, of course), we have plenty of engineering work to do.
I've had five promotions (and tripled my salary), and done so working on plenty of rewarding activities over time: deployment automation, capacity planning, distributed system design, large data migrations, designing IETF standards for auth protocols, writing client SDKs; now we even do AI for different things (including model development).
I'd recommend to not generalize from "I didn't like it / the experience wasn't a match for me" to "the role is shitty".
> oncall is quite a fine activity to do (and compensated, of course),
Overnight on call is never compensated. I know some tech companies pay but I've never seen it.
> deployment automation, capacity planning, distributed system design, large data migrations, designing IETF standards for auth protocols, writing client SDKs; now we even do AI for different things (including model development)
To me that is mostly SWE work (capacity planning and migrations are perhaps SRE). In regulated environments SREs are explicitly forbidden from making changes to the code base.
> I'd recommend to not generalize from "I didn't like it / the experience wasn't a match for me" to "the role is shitty".
I started in basic IT, worked into SRE, and once I had established an understanding of it, I immediately did whatever I could to transition into full-time SWE. I went in hoping it would be cool coding to automate infrastructure, but it felt mostly like being tier 4 support.
I miss working in software that also utilizes my skills in infrastructure, but I do not miss the constant escalations, terrible on-call schedules, and only about 20% of my time being spent on the rewarding parts of the job.
Thanks for recognizing what SRE/SysAdmins have to deal with all the time.
I think in general that type of experience would help Developers be more empathetic to the operations side of things. Those fires often come from trade-offs made in development.
Yeah, if you're an SRE or admin, good luck. I think some companies are much better at looking after you than others, so make sure you find somewhere good.
I think it made me a better developer because I've seen a lot of what can go wrong. It probably reduces my productivity, but ultimately my stuff is more likely to work.
Yeah, that's the essential mechanic of it. When "stuff is more likely to work" then everyone can build upon it.
Though at some point(s) that stable foundation needs updating too, so new stuff can be built upon it. That's where choosing the right balance for the right pieces needs figuring out.
Maybe you should clarify which company you worked at, because that's clearly not Google. At Google, SREs do 12-hour shifts. Night shifts (or rather 24-hour shifts) are (ironically enough) usually done by SWE teams. Almost every SWE team in Cloud has a 24-hour shift, and I agree, they are quite terrible.
I've had 12-hour shifts at Google with 200 pages over a week (shitty monitoring, shitty capacity constraints, shitty management). That was quite terrible.
SRE is an anti-pattern that Google is unwilling to admit to and is selling books on. Just like there should not be QA, release engineering, continuing engineering, or DBA as separate departments/job titles, because these critical parts of software development should not be considered optional and thrown over the wall to be taken care of by someone with no stake in developing the product.
I've been on the other side of this (i.e. companies that have no SREs or QA, or in one case a company that had QA and got rid of it) and it has always been an unmitigated disaster.
The root cause of this disaster is that, when writing software, interruptions are the death of productivity. Having a software engineer wear too many different hats at one time, especially when some of those hats are largely real-time interrupt driven, can absolutely kill productivity.
To emphasize, I'm not at all in favor of "throwing things over the wall". Software engineers are responsible, for example, for making software that is easy to test and has good observability in place for when production problems show up. But just because you listed a bunch of things that are "critical for software development" doesn't mean that one person or role should be responsible for all of these things.
At the very least, e.g. for smaller teams I recommend that there is a rotating role so devs working on feature development aren't constantly interrupted by production issues, and instead each dev just gets a week-long stint where all they're expected to do is work on support issues and production tooling improvements.
I agree very much with you that interruptions are the death of productivity. Your suggestions for weekly rotations are great.
However, I argue that if the engineers are interrupted by QA issues, they will be motivated to find ways to not have those QA issues. In absence of that, we end up with the familiar “feature complete, let QA find bugs” situation.
> However, I argue that if the engineers are interrupted by QA issues, they will be motivated to find ways to not have those QA issues.
There are institutional limitations that engineers cannot overcome, no matter how zealous or motivated. Moreover, companies also ought to remember that engineers can "find ways to not have those QA issues" by seeking employment elsewhere!
This is such a crazy take for me. Any profession that matures eventually specializes. I wouldn't expect the same person to pour my foundation, install the plumbing, and wire the building. Yet in an ever-expanding field we expect someone to be able to do it all. Also, saying people who don't code have no stake is so egocentric.
Pouring the foundation, installing plumbing, and wiring the building are specialized by the physical necessity of these activities, which cannot be redone after a mistake except at great cost. That justifies specialization. Unlike building a bridge, compiling software is essentially free. QA, release engineering, and database design can and should be repeated and iterated on by software engineers, because they are a necessary part of development, and removing them from the expected work distorts incentives.
Regardless of these fields being separate departments/job titles: people are not getting promoted for doing QA, release engineering, continuing engineering, or DBA work. It's a huge cultural problem in tech.
Only having full-stack engs taking care of everything works, but only up to a certain org size (like in a small startup). Once the org gets larger and systems get more complex, you usually need specialisation. It's natural, and Google didn't really invent anything here.
I've always wondered whether privacy-conscious engineers who work at Google actually use Google's services for their personal lives (Google Drive, Google Photos, Gmail, Google Calendar, Google Keep, Google Docs, etc.)? And if so, do they continue to use them after leaving the company?
I ask this question here because there seem to be quite some (ex-)Google employees in this thread.
I work at Google and I use Google products. Sure, some giant automated heap of code is processing your data and deciding how many Grammarly adverts to show you, but your data is about as safe as it can be from loss or from humans. There are so many controls and checks in place when working with user data that it's difficult to get things done sometimes (and quite rightly too).
I'm responding to makerofthings' comment that data is about as safe as it can be because there are so many controls and checks in place. If there were so many controls and checks in place, would the data loss of such a high-profile customer have occurred?
Note that the data loss occurred at least partly because the customer was provided with a very much pre-beta offering that essentially didn't have all of the control plane done yet.
That's honestly a very hard issue to track, because such legacy setups often can slip by later tooling; in this case, the environment was set to "auto expire" after a certain time, but instead it became a production environment.
Google is large enough that you will hear plenty of opinions on this. In my bubble (Zurich, Security) the overall sentiment seems to be "I trust Google handling my data way more now compared to before I joined".
Despite Google's recent pivot to suckling at the US defence department money teat, Google spends lots of money telling the US (and everyone else's) government to fuck off in the courts, and the discovery that the NSA had hacked Google's backbone led to a near-instantaneous decision to encrypt all traffic on it, at a pretty large cost to the company in energy and effort.
All these things make me envious of people who get to work at Google or any other FAANG company, where they are both paid well and validated for their intelligence.
These are great benefits for sure, but one of the reasons I left is a totally ineffective and wasteful management structure that makes it extremely hard to actually do anything. It is very hard to feel like your work means anything at a company of that size, since the chances of having the goals of your work changed (or being laid off) at a moments notice makes it hard to stay motivated.
If I take another software gig it will certainly be at a small company where my daily work contributes directly to the company’s central goals.
In the case of Google, it's not just the size of the company. It's the fact that the company makes its money from selling ads. No matter how well you do your job, the end result is only that more ads were sold.
Programming is a superpower that can change the world. Yet the best paying jobs for programmers are at FAANGs building systems to peddle ads.
> It's the fact that the company makes its money from selling ads.
This is true, but it did not at all affect how 90% of the company worked. The ads teams made money and everyone else did whatever, supported by that infinite firehose of cash.
A large part of the reason Google has sucked (internally and externally) for the last few years is that this changed.
Everything else also now needs to make money... since it's impossible to directly make any meaningful revenue compared to ads (and maaaybe cloud), the only thing remaining is to integrate ads, upsell cloud, downsize, etc.
There are so many middle managers at Apple that feel the need to justify their existence within the company, and as such they schedule endless meetings and/or send "urgent" emails that they expect you to respond to immediately.
It got to a point of being almost farcical, where they were scheduling meetings at 9:30pm multiple times a week. After two years of it I had to leave, I was coming home catatonic and depressed, to a point where my wife was getting concerned.
I've spent my career on the opposite side of the fence. I work for a tiny company, and I report to "mostly" no-one.
I add value, I choose (mostly) what value to add, and my division is profitable.
I recently did some consulting for a large company. It reaffirmed for me that my path was right for me. I haven't made as much money as my corporate brethren, but the endless treadmill of meaningless work, manager meetings, measurement-by-jira and so on would have spit me out early.
I enjoy the creativity of my work, the direct interaction with customers (especially when they like me :) - the intuition to see how things could be better, and the freedom to execute on it.
My path to joy is not for everyone; others get joy from bringing on a large team. That's OK - each person needs to find their own path.
can you please explain how these meetings got so out of hand? did you attend them? did you decline eventually? why, why not? how does it work? who was your actual boss? is there some kind of resource management? (ie. where your time is allocated?)
thanks in advance!
(I never worked at any FAANG thing, and I never worked for a US company, so this is extremely... interesting and strange.... because I am no stranger to long nights, had the occasional death march, some kind of startup momentum and expectations here or there, small teams and overtime, deadlines, but .. also headcount was less than 20 for us)
I'm not going to pretend that I really understand the psychology of a person that thinks that 4+ hours of meetings a day is a good idea.
It started when my team opened up a Singapore office. That's fine, but they are 12 hours ahead of New York, and the genius middle managers on my team thought it was very important that we synchronize on a lot of these meetings, and the only times that kind-of-sort-of-not-really "worked" for everyone were between 7:30pm and 9:30pm NYC time.
That was already bad enough, but this genius would bog the first 5-10 minutes of the meetings with small talk, giving his opinion on the latest keynote or something else. Small talk is generally fine, but not when everyone is looking to go to bed.
It got really out of hand once COVID started. Suddenly, since everyone was working from home and as such it could be assumed that they had access to their work computer, managers just decided that there's basically no time off limits for a useless meeting.
> did you attend them?
Yes, most of the time. We'd get in trouble if we just skipped them.
> did you decline eventually?
As many as I could, but if I did it too often I could reliably expect a phone call complaining about it.
> how does it work? who was your actual boss?
I don't want to give specific names. My direct manager was actually fine and generally only scheduled meetings that were reasonable. His boss was pretty stupid, and scheduled a few useless meetings a week. His boss was a complete moron and I think was completely incapable of scheduling a meeting that was actually useful. The chain goes up several more levels.
It was more or less like Office Space: if you made a mistake you'd get like six managers separately explaining your mistake to you.
> is there some kind of resource management? (ie. where your time is allocated?)
Apple has its own ticketing system called "Radar". It's kind of like Jira or something, but it's a GUI app instead of a web application. Tickets are more or less ranked in the same way they are in Jira; you just estimate the number of hours each will take.
A few points of fairness to Apple:
- I'm a very annoying and difficult person to manage, and I am extremely impatient, so maybe I overreacted to all this stuff (though I know that I was not the only person really annoyed by this stuff).
- Judging by the high turnover rate my team had, I suspect that I was on an exceptionally poorly run team. I did try transferring to another team, and actually did pretty well in the interview, but I was declined because I had received a poor performance review the year before [1]. I know other people who worked at Apple who really like it, so I think I just had some bad luck.
[1] Honestly, the bad review was kind of justified, much as I hate to admit it. I had become pretty frustrated over a lot of stuff happening in my life and it was reflected in my work output. I did get better but not before the review period was over.
It seems initial impressions in these huge corps are almost everything. If things are great people are willing to put in the hours, money is great after all, so one's trajectory quickly curves upward, promotions, yeey! But if it's bad, it's hard to go anywhere, even laterally, because of the baggage, so there's only down from there :(
And then, after all that bad leadership and bureaucracy, one of the top executives (Schmidt) blames Google's lost lead in AI on workers who don't want to work anymore and are just concerned with getting all the perks and work-life balance.
The validated for their intelligence thing is a problem. Googlers in the early days constantly told each other that they were the "smartest people in the world". I noticed this immediately after joining and found it quietly troubling, because there's all kinds of smart and the interview processes were really only selecting for skill with computers or maths. A lot of Googlers wouldn't survive long on the streets or alone in the wilderness, as these require different kinds of smart to what they had. Some colleagues were troubled by those statements for other reasons: they didn't personally feel like one of the smartest people in the world, and this led to imposter syndrome. But we said nothing because, hey, the company was doing great and people did seem pretty smart overall back then. Plus nobody wants to call themselves out as an imposter, and Google had a certain degree of institutional humility to it as well. The company was very much about empowering everyone, no matter who or how "smart" they were.
What you're seeing from Google in the last ten years is a maybe predictable consequence of that culture, where some Googlers really do seem to think they're generically much smarter than everyone else, about everything. You started to see mass scale social engineering via manipulation of search results and products, driven apparently by the immense faith they have in their own wisdom. Is there any claim Googlers cannot immediately resolve as true or false given nothing more than a few ML models and a team of contractors in LatAm? Apparently some of them think that's all it takes.
This quasi-misanthropic culture is miles away from the trusting "make it universally available and useful" culture the company once had, but the seeds of that culture's end were clearly visible even at the start. You can't constantly validate people by telling them they're super smart without some of them coming to actually believe it, and that leads naturally to the belief that if they're really the smartest people in the world, then surely they should be running it.
I think when you're hiring people like Russ Cox, Rob Pike, Van Jacobson, Jeffrey Mogul, Jennifer Rexford, Bram Moolenaar, Vint Cerf, etc. (and that's just a tiny fraction of their well-known amazing talent) it's hard not to think they're hiring the smartest & brightest.
It's the proximity, pedigree, and profile that you have to fit to get into Google.
I'm happy for those that made it. Not everybody gets to work for Google. But the work they do is no less challenging or important than what the rest of us do.
If anything, FAANG has contributed greatly to the American Firewall of Algorithms and has destroyed an entire generation's ability to reason and value common sense.
I remember a quote, though I can't remember who said it: "if they are paying you a large salary, what they take from you is far greater".
Is that what you tell yourself? Mine was aimed at the notion that salaried compensation, especially if it's very large, implies an asymmetrical return for the company.
For one, it would buy loyalty. Not many Googlers stood up when they realized their technology was enabling military drone strikes on children.
The traditional view is that young engineers should join startups in hope of a massive payoff if one goes big, but working for 10 years at BigCo with good salary and stock plans can set you up for life without any risk. One path to avoid is the one I took which was to work at a big tech co as a contractor. Good rates but nothing to show for it after 10 years other than the experience and whatever I had put in my 401K.
In big tech? One can pocket an average of 100k/year. Less in the first few years, more later on.
That isn't "quit immediately and coast" money, but several times a juicy down payment in the bank and a guaranteed MINIMUM 200k USD/year for the rest of your life IS set for life IMO.
It is not several times a juicy down payment if you live near any big campus. And a lifetime salary above 200k in tech is not a thing. Tech is riskier the older you get, unfortunately.
A compromise would be joining a Unicorn that is post IPO but is hiring like crazy and growing fast. Hang onto the company stock rather than diversifying into index funds hoping it will grow much faster than average/split etc. No risk and big rewards.
Another path to avoid is working "W2" for a BigCo via a contractor. You don't get any of the security or compensation, and you may not even get the benefit of the name for your resume.
I try not to be jaded, having failed to clear the dreaded Google interview multiple times, but I'm very envious of the high compensation, the amenities (which their recruiters argue are compensation), and probably getting paid to do very technically challenging work. For me in Australia, it seems to be a case of choose one.
That doesn't sound too unusual. I'm more impressed that they made L6 in 9 years. I've known Googlers 10+ years forever stuck in L5. The easiest way to promo at these companies is to leave and come back at a higher level.
My experience was that L5s always seemed like the happiest engineers. L5 often has the most interesting technical work, still great pay, and low stress (compared to higher levels at least). It seemed like when people got promoted to L6, their overall quality of life would go down. I knew a lot of people sticking at L5 on purpose (I once even talked to a CTO at a small company who said they wished they had never been promoted above technical lead).
L5 is a great terminal level (as is the equivalent at other companies). You are trusted to walk and chew gum at the same time, can cause meaningful damage (good and bad), and are largely "too lowly" to be in the perpetual pissing matches of mid/upper mgmt. Sure, you won't control your own destiny in broader projects, but it's a good life you often don't miss till you're past it.
It's easy to be underleveled at a BigCo job, and to not even realize what that means because the leveling system is obtuse from the outside. You can spend many years fighting the bureaucracy just getting your title/comp to catch up to your skills.
One nice thing about being <= L5 is that your vacations are actual vacations. I can disappear for months, totally ignore my email, and have nothing be on fire when I come back.
It's great that it is a terminal level, but both comp and the respect you get in the company are strongly affected by level, so it's hard to just sit at L5 while you watch your peers rise through the ranks.
> USA centric culture, if you are not in USA at Google and don’t have a big presence in a location it’s a bit like swimming upstream, it’s easy to feel isolated, sidelined or on the flipside overwhelmed with late meetings
It's like the fourth time I've read/heard this. I understand that it's a tricky one to address.
This is US companies in general! Never work for one in another country if you can help it. All decisions will be centralized to the US head office and require face-to-face. Communication is generally terrible.
I’ve worked for European and Asian owned companies, and they seem to be able to handle distributed authority much better. For the “land of the free”, it sure seems like US companies run on a feudal system.
My limited exposure in practice involved a huge Japanese company, the kind that survived the dissolution of the zaibatsu while still being huge, using the approach of many semi-independent divisions. Sometimes divisions would even be created in a different country specifically to provide a place for a team whose leader was getting extra independence, in hopes of delivering new products from scratch.
Sometimes you'd have people delegated out of one division/subcompany to provide help elsewhere. I personally experienced this when we needed a PostgreSQL expert on a project where we subcontracted/bodyshopped for one of those subcompanies, and headquarters delegated a PostgreSQL core team member from Japan, including having him visit for a time.
But day to day the decisions were local. The most feudal thing was that we knew one team (~50% of the company, and our direct client) was the most important one, and that its lead actually had more power than the CEO; the specific division was essentially a way to park his independent team so he didn't have to deal with administrative overhead.
As a matter of fact, yes, Japanese companies operate largely on a bottom-up consensus model. Approval from the top is still required, but they rarely refuse it if everybody in the ranks is aligned.
Thanks for writing this! I work on AppEngine / Serverless as a SWE. Nice to see that you worked on it as an SRE and I can totally relate to the cognitive complexity of the systems! :)
The money alone is good enough. I can probably just retire with all that money and stock, to say nothing of tons of tech and a shiny CV. Kudos for starting from a height most people can't reach in a lifetime. I guess you are going to start another company eventually?
Comp is higher at L6 and lower in London, so I don't know how that balances but…
If you spent the last 9y at Google in SF/NYC (the top comp regions), you'd have a million dollars in stock alone. It doesn't go as far as it did 9 years ago, but it's still ahead of a whole lot of people in this economy.
Exactly. Now at 40+ I can only hope for a lottery ticket to retire early and do what I really want. Anything else is just too slow or might reduce my wealth.
Having spent some time at Google as a SWE, I think Google was by far the best engineering company I have worked with. Even Amazon and Microsoft were terrible when it comes to software engineering.
I am one of those engineers who do not care about culture as long as I am getting paid for the efforts I put in. Google in that sense beat others by a HUGE margin.
The engineering work was, however, very different. We focused on the right engineering solutions instead of just the business aspect. While that kind of attitude hurts in the short term, it pays big in the long term.
“I am one of those engineers who do not care about culture as long as I am getting paid for the efforts I put in.”
Then large companies are not for you. Navigating the culture is key to advancement and long-term satisfaction. Otherwise you will feel like an outcast and likely let go during layoff rounds or kept around at lower compensation rates.
> working 60% or 80% were fantastic for my lifestyle and building relationships outside of work
Whoah, it seems fantastic! That alone seems like a good reason to work for Google. Unfortunately, none of the companies I worked for was interested in less than 100%. I told them many times, you can keep your money, I just want to spend 20% or 30% less time at work, but they always insisted on 100%. I have a feeling they would go for 120% if legally allowed.
I'd love to know how you can improve those in a fast-paced office environment; none of the trainings or anything are about this (even the leadership-type ones feel like standardized template stuff rather than an environment where you can actually practice social skills and get correct, timely feedback to improve them).
If you don't enjoy waking up at 4am, work on mobile. Once a new app release is on the phone, nothing is going to make it suddenly break, and there is no hurry to send the new release to all users: it goes through internal employee testing and then rolls out slowly, so that any breakage doesn't affect a lot of users.
Perhaps a condescending take but I think the author got a bit of a big head from getting promoted quickly, and the subtext is that it was due to their amazing technical competence. It’s a noteworthy feat to get recruited out of school, but SRE is a godawful position with high attrition, so it’s easier than SWE to get promoted. That they regret not moving to SWE sooner ignores that SRE is a talent sink and considered a separate ladder by most companies. At this point, the ship has sailed. Eat humble pie, embrace your skillset, and move onward and upward
One of the funniest things on HN that I love to do, even if I "lose" some karma points, is to join discussions about/with frustrated ex-Googlers. Google is shit, completely enshittified, and the engineers there are responsible for that, but still, when you get to this kind of thread, the only thing you see is them praising each other and patting each other on the back. Google is gone, my friends. Men in suits destroyed it. Gone is the time when people working there were considered smart; now only the greedy remain.
I recently left a job at a very different large company after a similar timeframe (a little under ten years). Pretty much everything this author states relates to my experience.
There is nothing all that special about Google. Maybe there was twenty years ago, but that ship has long since sailed. It’s just another large US tech company. Like Microsoft and IBM before it.
For a long time google had cachet as the most engineering friendly big tech firm (which was mostly deserved) and also the place with the highest level of talent (which is more team dependent but also somewhat deserved). You might end up working on ads or some other inane thing, but at least your engineer coworkers would be really good. They're still riding that wave to some degree because they haven't scared away all their top talent yet.
> It’s just another large US tech company. Like Microsoft and IBM before it.
This is just a hyperbolic statement that should not be taken seriously at all.
Look, Google isn't some fantasy land that some people might have lauded it as once upon a time, and it isn't unique in terms of pay or talent, but it is certainly at the top echelon.
I did an interview loop for a high-level IC role at both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.
IBM was never a company where engineers could rise to the same ranks as directors and higher on a solid IC track.
Is Google special compared to Apple/Netflix/Meta? No. Is it special compared to Microsoft, IBM, and any non-FAANG company, or one that isn't a top decacorn? Yes.
Microsoft and IBM used to have similarly extremely talented teams. IBM ran research centers full of the world's top PhDs. The innovation that happened at those places easily rivals Google's.
It's a similar trajectory, is what people are saying. When Google was small and everyone wanted to work there, they could take their pick of the top talent. When you run a huge company, you inevitably end up with something around the average. I.e. all those huge companies that pay similar wages and do similar work basically end up having similar talent, plus or minus, and within that there are likely some great people and some less-than-great people.
Yes! It’s sad how ignorant of IBM and US technology industry history some of these comments are. Then again, I suppose every generation does a lot of its own “this time we’re different” myth making. Not everyone has the wisdom to see the broader context.
Indeed. I think it's because for the younger generation it is physically impossible to have experienced it, while for the older generations it's complicated to get into a disruptive startup.
Obviously people could read about the past, but sometimes that's asking too much; they are busy creating "the future".
>I personally know people who moved up the ranks there to director and above,
I didn't mean that engineers can't become directors, I meant that IBM didn't have a track for top ICs to get paid more than directors and still not be on a manager track.
> ...both Azure and GCP simultaneously, and the difference in talent level (as well as pay) was astounding.
This is maybe the third time I've heard this mentioned here on HN, so now I'm curious: What specific kinds of differences?
I imagine there might be a certain kind of prejudice against Microsoft and its employees, especially for "using Windows" or whatever, which I've found often unfairly coloring the opinions of people from Silicon Valley that are used to Linux.
If you don't mind sharing, what specific differences did you notice that gave you a bad impression of the Microsoft team and such a good impression of the Google team?
Overall talent level. Almost everyone I've interviewed with at Google impressed me, as well as came across as thoughtful and kind.
I did interviews with many teams at Microsoft (9 technical interviews total) and the only person that impressed me is now at OpenAI.
Every single interview question I got at Microsoft was straight out of intro to CS / classic Leetcode.
They would straight up ask "find all anagrams", "string distance", "LCA of a tree".
Google instead disguises many classic CS questions, so it takes a lot more thinking. Microsoft seemed to just verify that you can quickly regurgitate classic algorithms in code.
I'm sure there are some great teams at Microsoft, but because each division/org is much more siloed, I think it's more likely that a team has a lower overall bar.
Google makes everyone pass through a hiring committee and you're interviewed by people that have nothing to do with the team you might end up on. Meta is similar. Amazon has the team interview you, but they also have bar raisers come from other teams.
Microsoft seems to be the outlier here, in that someone can get onto a team having interviewed only with people on said team.
Some 10 to 20 years ago I was seriously interested in joining Google for many of the reasons he lists. In the end I never applied, because at the time they had no development offices in any country I would have been interested in relocating to.
However, in recent years I have turned into a Google hater, and he does not mention any of those aspects. Google is an evil business IMHO. They are an advertising company. The great challenge for this planet is sustainability, and the goal of advertising is to waste resources. I can type this on a low-end phone that soon turns 10. It works perfectly, except that no recent Android version is supported. Google is in a business that needs cores and memory to have doubled several times since then, for no benefit to mankind. And phones are far from the only category; advertising is about selling a lot of stuff that brings no true improvement in quality of life. Video is one of the worst energy wasters in computing. 90% of YouTube is useless crap, not worth destroying the planet for. Nobody would pay a realistic price for it. They are an ugly oligopolist. The list could go on and on...
I can do fewer things with my computer than I could when computers were slower. Software has more than destroyed the gains of hardware. (The only exception is numerical simulation, which I need occasionally for academic research, but not in my day-to-day life.)
Video games, supposedly the Killer App for high-end computing hardware, are among the worst offenders: your average modern 2D side-scrolling platformer (Mario clone) locks up my computer (to say nothing of AAA games). Web browsers are the second-worst offenders. Chat clients (secretly web browsers) are third: they barely work, and interoperability has gone the way of the dodo. Operating systems are fourth: they (mostly) still let you use older versions of software, but they have mostly just grown new problems (e.g. built-in adware in Windows; GNOME… well, GNOME) without fixing long-standing ones (e.g. slow domain login in Windows; most systemd misfeatures).
That claim of environmental impact is just incorrect when it comes to the junk content.
The bad ones are barely watched; sitting on a hard drive after getting transcoded isn't an environmental disaster.
You could try to argue that popular culture is fundamentally bad; people said that about TV when I was a kid, about dance music before that, about novels in the 1800s, Christmas in Puritan New England, and the Olympics in ancient Rome.
That isn’t true: even just meeting people’s expectation of the same speed and quality for fringe videos, delivered to mobile devices in any corner of the world, requires mountains of technology.
It’s not that popular culture is bad, it’s that a wasteful way of living isn’t sustainable, and it’s beginning to erode the foundations of our lives.
Sounds like you're shifting the goalposts from "youtube is bad" to "the internet is bad" in the first paragraph, and then in the second paragraph to "everything after the start of the first industrial revolution is also sus".
> But my usage is so low that the current price feels horrendous per hour.
I don’t know how you can decry video streaming and advertising as destroying the planet, and thus hate one of the companies that provides these services, yet still use it.
Google is not even close to that kind of monopoly. Every one of their products has a viable alternative, and plenty of anti-Google people get by without using them.
That certainly isn't true of YouTube. It, exclusively, holds a lot of videos. It's not like I can just watch a random YouTuber on Vimeo or some kind of gnutube.
FWIW, about a third of the high-quality creators I subscribed to on YouTube initially, also maintain a presence on Nebula.
It's not enough to fully switch, but at the very least I get their stuff without mis-gendered advertising — as seen the other day on YouTube, where I was presented with an unskippable ad for sanitary pads.
None? Though looking at my subs list, the low-quality channels are mostly personal friends sharing stuff that won't have a broad reach, much like my own is now that I'm not making indie games.
(They also post so infrequently that I forgot I'd subscribed to them).
No idea what you mean by "that kind of monopoly". Google is a company which performs illegal acts to preserve its monopoly. Of course that will limit the viability of competitive products and services. That's why it's illegal.
I use well below 1 hour of YouTube per month, many months zero. And I never click any ads (nor are the products they advertise relevant to me; they cannot track me, and I am far from the average consumer). So if it were only users like me, Google would have been out of business long ago. Obviously I have an ecological footprint, and I try to keep it small, but without being a hunter-gatherer living in a cave.
When I worked at Google I immediately sold my stock and diversified. In hindsight that "cost" me at least $2MM. At a subsequent employer I held the stock, determined not to make the same mistake again. That "cost" me nearly $100k before I realized that the new employer's stock was not going to do what GOOG did.
Anyway, congratulations to those Googlers who "picked" GOOG by not selling their GSUs and who made out like bandits. You certainly got lucky, and nobody should expect their own employers' RSUs to do the same.
Googlers from the past ~15 years are comparatively filthy rich. Imagine having had an opportunity to miss out on 8 figures of stock returns and only be left with 7 figures because you followed sound financial advice. Bruce Willis is no doubt dabbing his eyes on Memegen.
Different big tech but similar strategy. Sell and diversify. I think it's a reasonable one. The people that made more money took on more risk and gambled. If you went into e.g. the S&P 500 or real estate you probably did ok.
The last 15 years were good in tech but the earlier Google employees did even better.
I'm guessing neither of us made money on bitcoin or gamestop... I don't lose sleep over that.
> Imagine having had an opportunity to miss out on 8 figures of stock returns and only be left with 7 figures because you followed sound financial advice.
A bit of perspective may be warranted. There are single mothers busting ass raising kids on tips and food stamps, while you're asking for sympathy for making out with only seven figures from your Google stock options.
I'm pretty sure the luck of being born with sufficient intelligence to work at Google is orders of magnitude greater than the luck of making big ROI on Google stock.
> because you followed sound financial advice
Most financial advice is there to help the clueless masses to not lose their savings to get-rich-quick schemes and then default on their debt during their next unemployment period. Anyone smart enough to do Silicon Valley work has probably outgrown those training wheels and can think for themselves when investing.
Isn’t this the whole point of working, particularly at a FAANG?
> It’s a shallow post-mortem
I respectfully disagree. It’s an 8 minute read. Sure, it’s mostly in dot-point form, but personally I’d rather that than some massive 80,000 word blog post that I’m going to drop 1/8 of the way through.
Since when does a personal blog post need to be a well constructed and lengthy document?
I think it's shallow and not because it's short. To me, it just sounds very typical: "I joined Google back when it was fun. Now it's more bureaucratic and less fun. But I made a ton of money on the stock." I think there have been countless blog posts from Ex-Googlers like this. It's fairly shallow.
And it is worth noting that a lot of the bullet point lists do start with "I made a ton of money" in as many words, which is also just not very interesting to most folks, though it is certainly very relevant and important to the writer.
The most interesting thing is the timeline at the end, which shows what they were successful at and promoted for (management-type roles) and what happened when they tried to transition from SRE management to SWE IC (they fell back into management).
I don't see that reflected in the rest of their postmortem learning - other than them being dissatisfied with doing what they were good at / promoted for - so that kind of helps me ignore the rest of the postmortem. :)
I would rather read an 80,000 word blog post. It would satisfy my curiosity and it’s exactly the sort of material I come to HN for. It’s Hacker News. Not Digest. It’s also why I don’t read the news often posted here from big media companies - they’re often just awful hit pieces against someone or something. Or are pushing a product.
But a well articulated, technically correct post that’s evenly paced? Heavenly. It’s a David Attenborough documentary for tech nerds. It’s exactly what I am after.
I wouldn’t say anything against having an upfront summary for people who don’t have time/patience.
9 years working at a top tech company of bleeding edge work reduced down to “I made money. Oh, I made money. My stocks did well, so I made even more money” is the pinnacle of intellectual laziness. That’s not a postmortem by any stretch of imagination.
Is Nagios still a thing? I used it 10 years ago and loved it. I feel like people would look at me funny if I suggested it now. Datadog is all the rage.
I think that's fair. I'd like to think I have some fairly strong ethical requirements for work, but I can see those evaporate in the face of 8 figures.
L6 ICs are pretty rare - it already is a top tier of seniority in engineering
Is Google really different from other companies? I talk to a lot of Amazonians (AWS Hero, FreeBSD/EC2 maintainer) and my general impression is that developers below L7 ought to be classified as "Junior" -- my mapping is basically L4-L6 = Junior Developer, L7/L8 = Developer, and L10 = Senior Developer. Anything which doesn't have L7+ involvement gives me major "these kids need adult supervision" vibes for all the newbie mistakes they make.
Like others mentioned, Amazon levels are one level below Google levels, and I think their higher ranks are also compressed. Also IMHO Google IC SWE levels of 2024 are about 1.5 levels below Google IC SWE levels of 2009 (i.e. a mid-L6 today is about as good as someone just promoted to L5 in 2010, an L8 promotee today is about a mid-L6 from 2010, a new L4 today is the equivalent of an intern back then). So with that mapping I'd put Google L3-L4 = Junior Developer, L5/early L6 = Developer, mid-L6+ = Senior Developer today.
Fifteen years ago L5 was actually senior, L4 was a developer, L3 was a junior developer. L6+ = you owned major user-visible features with hundreds of millions of users. L9 = you did something world-class like invent BigTable or Google News, and L10 didn't exist.
> i.e. a mid-L6 today is about as good as someone just promoted to L5 in 2010, an L8 promotee today is about a mid-L6 from 2010, a new L4 today is the equivalent of an intern back then
This seems extremely surprising. I can believe that the 2010-engineers were more technically capable, but there was also a lot less non-technical complexity involved in getting things done in 2010 than there is today.
I'm referring solely to technical skills. I think it is actually harder to be an L6 today because of the sheer number of (both political and technical) constraints you face, but L6s and even L5s in 2010 would do large-scale system design of a sort that basically doesn't exist anywhere within the company today.
Nah, you're pretty far off in terms of population numbers at FAANG. Amazon's levels are largely one level below most places in terms of comp (i.e. an AWS L7 gets paid like a Google L6), and L7+ at Meta is ~1% of the company's employees, and an even smaller share of its SWEs.
Amazon and Microsoft also have less "alignment" at various levels compared to silicon valley due to literal geographic and historical reasons. Principal SWE at AWS is probably ~L6 at G in my experience. Obviously there are always outliers in all directions.
L3 is early career, L4 is mid-career, L5 is senior. You can hit L5 on the strength of pure technical contributions regardless of business/org needs, usually.
L6+ is staff, and tends to involve a very different skillset. (If you're not looking to lead a team, you're probably not going to have the kind of impact that gets you to L6, let alone L7 or higher at Google.)
This is all to say that ICs in the L3/L4/L5 bucket generally show a clear progression in technical skills but beyond that it's fuzzier.
My definitions are basically: Junior developers need supervision because left to their own devices they'll screw things up horribly; normal developers can produce good code independently; and Senior developers are able to catch the mistakes the Junior developers are making and set them on the right path.
I see; that's L3, L4, and L5 progression in a nutshell at Google - although leaving L3s alone doesn't _guarantee_ something will go wrong; it's more that there was no way for them to figure out optimal solutions without help, thanks to the sheer complexity of Google.
I'd say the same held true at Amazon but I was in groups which were, at the time, at the periphery of the company's engineering efforts - we didn't have any associated principals to talk to, and maybe one SDE3/L6 to 10 SDE2/L5s mixed with SDE1/L4s.
I would say* that under these definitions L3 is junior; what the industry calls senior is somewhere between L4 and L5. L5s at Google are expected to mentor L3s and L4s, but also to design systems, break them down into tasks, and coordinate those tasks between teams and engineers.
If you were a senior engineer at a 50-person startup you would commonly get hired at L4.
* I left Google 18 months ago; also, Google is a large company, and while they strive for uniformity across teams, the levels aren't really quite the same company-wide.
What? Especially at Amazon a Principal engineer (equivalent to Google L6) is a pretty rare beast in some areas of the company. Sure there are some hip orgs with a lot of senior engineers, but in general it's not a common position at all. Isn't L10 a VP at Amazon? Never heard of one doing any real engineering work.
Principal Engineer is L7 at Amazon. So if that's the same as a Google L6, I guess the scales are a bit different.
As for L10, that's Distinguished Engineer. I think if they're managers they're also called VPs? I'm not exactly sure what the deal is there; but I know plenty of Amazon L10s who are fantastic engineers.
https://www.levels.fyi/?compare=Amazon,Google&track=Software... is largely accurate. An L7 at Amazon would have an easy time getting an L6 interview at Google; an L6 at Google would not have an easy time getting an L7 interview at Amazon, barring prior experience and other modifiers.
Of note, the person who wrote this article spent the vast majority of their tenure as an SRE TL/M, going by their timeline. That's not going to map cleanly onto any career track at Amazon, and when this person tried being an L6 SWE, they transitioned back into management.
At Google, I knew L6/L7/L8 managers who were fantastic engineers; I knew L6/L7/L8 managers who were pure-management and excellent at that but hadn't written code in a decade and change. It varied dramatically by what the org needed: the engineer-managers tended to have a lot of lower-leveled engineers, and the pure managers had more highly leveled engineers reporting to them.
Anyways, while I was at Google, L5 was the lowest level where you could officially have a direct report (not counting interns), so yeah, anything of cross-team note was generally led by an L6 or higher. (L5s routinely led things that were critical _inside_ of a given group, but if you were having cross-team impact, well, that's L6 work.)