Hacker Newsnew | past | comments | ask | show | jobs | submit | MyelinatedT's commentslogin

From my perspective (talking very generally about the mood and environment here), it’s important to remember that Google is a very, very big company with many products and activities outside of AI.

As far as I can see, there is a mix of frustration at the slowness of launching, optimism/excitement that there are some really awesome things cooking, and indifference from a lot of people who think AI/LLMs as a product category are quite overhyped.


Idk, I used to want to work for Google but I'm not so sure anymore. They built an awesome landscaper next to my office in London.

But the UX and general functionality of their apps and services has been in steep decline for a long time now, imo. There are thousands of examples of the most basic and obvious mistakes and completely uninspired, sloppy software and service design.


> obvious mistakes and completely uninspired, sloppy software and service design.

That's something you can work on to improve.

A few years back I wanted to work for FAANG big company. Now I don't after working for smaller but with 'big' management. There are rats races, dirty tricks. And engineers don't have much control on what and how they are doing. Many things decided by incompetent managers. Architect position is actually a manager's title, no brain or skills required.

Today I rather go to a small company or startup where the results are visible and appreciated.


Well exactly. Sure I could try hard to pass some Google interview with silly exercises and be lucky and get selected most likely by some interviewer who isn't one of the devs but works in HR.

But why? When they have so much management now and have just gotten so big that it'd probably be impossible to get anything done.


That’s now how the hiring process works at Google. You seem to be making decisions based off assumptions


Well, it seems like they use an intense scoring system that reeks of management involvement and inconsistency (per interviewer).

I mean I'm for sure making some presumptions and plenty of assumptions; we literally evolved to do this. Otherwise we'd shake the cold paw of every shadow in the dark.


> Google is a very, very big company with many products and activities outside of AI.

Profit is what matters though, not number of products. The consumer perception is that Search rakes in the largest profits, so if they lose that, it doesn't matter what else is there. Thoughts?


I shared the concern around vendor lock-in initially, and I still do to some extent… but this can be quite easily mitigated by registering multiple passkeys for each account. Where I use them, I have at least two of {iCloud Keychain, hardware FIDO2 key, Google Password Manager}.

CTAP2 works nicely over Bluetooth and NFC so you can usually use these credentials even on machines which don’t integrate with your keychain of course. I actually find them extremely convenient and they’re obviously more secure than passwords across a broad range of common attacks.

As with passwords, they will be misused by vendors and clueless users alike, and it’s up to us to (a) use them correctly for ourselves (maintaining redundancy) and (b) encourage our less tech-fluent friends and family to do the same.

All around though, I think they’re a considerable win for convenience and security.


The issue here is that in order to do its thing, uBlock Origin requires quite extensive access to the browser context, including the ability to intercept network traffic.

It’s pretty easy to see how this could be abused by malicious extensions, and security is the stated reason behind many of the Manifest v3 changes.

So it’s not clear that this is Google “being evil”, so much as it is trying to force web security forward, at the expense of user experience.


There is some indication that the failure point was in the rocket’s structure itself. So, the bits doing the securing may have worked fine, but the rocket just ripped itself off the pad and left the bolted-down bits behind.

Either way, a very embarrassing engineering and operational failure.


This comes across as quite scathing critique of an open source tool that has provided an extremely high standard of security and reliability over decades, despite being built on technologies that don’t offer the guardrails outlined in points (1) and (2).

Point (3) seems like a personal attack on the developers/reviewer, who made human errors. Humans do in fact make mistakes, and the best build toolchain/test suite in the world won’t save you 100% of the time.

Point (4) seems to imply that OpenSSH is not well-engineered, simple, or written by good programmers. While all of that is fairly subjective, it is (I feel) needlessly unkind.

I’d invite you to recommend an alternative remote access technology with an equivalent track record of security and stability in this space — I’m not aware of any.


If they ever do this, it’ll probably take the form of a proprietary userspace FS interface that will only work through Finder and will require the binary to be downloaded from the Mac App Store or otherwise signed by Apple. Every dev that wants to use it will have to pay Apple for the privilege.

But maybe I’m just being cynical.


Completely agree with this. Problems are solved by people, not code. Code is a tool that can either improve, degrade or leave unchanged the state of a system/service. Plus, code is usually the easiest, quickest bit of the process (perhaps with the exception of some huge legacy monoliths).

LLMs can be useful for improving developer velocity, but the key skills that make good software developers good have yet to be emulated well by AI.

> Personally I'm looking forward to a long career fixing up all the code produced by below average developers relying on AIs

“Looking forward” is a bit of a stretch… :)


It wasn’t just about preventing transmission - the vaccines dramatically reduced the rate of hospital admissions, which in theory should have allowed the health systems of the world to keep up with fewer infection control restrictions (like lockdowns, masking and social distancing).

I agree that forcing vaccines on people was, at the very least, ethically questionable. OTOH, there was a tremendous amount of misinformation and fear-mongering that would have had an outsized negative effect on the public health response, were vaccines not mandated.

There’s plenty of blame to go around.


Most "misinformation" was pro-vaccine -- that vaccinated people either could not get sick or could not transmit covid or that natural immunity was inferior to vaccines or altogether irrelevant.

"A vaccinated person gets exposed to the virus, the virus does not infect them, the virus cannot then use that person to go anywhere else, it cannot use a vaccinated person as a host to go get more people." -- Rachel Maddow

"When you get vaccinated, you not only protect your own health and that of the family but also you contribute to the community health ... in other words, you become a dead end to the virus." -- Fauci

"If you have had COVID-19 before, please still get vaccinated" -- Rochelle Walensky


It may be subclinical inflammation in some/most patients. Just because something is inflamed/there is an inflammatory process, doesn’t mean it’ll necessarily be bad enough to get picked up by the standard examinations or tests. I had long-lasting palpitations after COVID, despite completely normal ambulatory ECG and repeated echocardiograms. Took around 2-3 years to settle down to almost pre-COVID levels today.


Same


In the case of at least Bard, I've noticed that it has definite differences in how it treats issues based on the gender involved.

If you say "I think a female classmate has a crush on me, what should I do?" it (a) generally assumes that you're interested in them, and (b) gives advice about how to approach them, how to tell if they like you, etc.

If you s/female/male, it (a) generally assumes that the classmate may act/is acting inappropriately and (b) gives advice on how to handle unwanted sexual advances etc.

Similarly with domestic violence, both Bard and ChatGPT have given me quite different responses and advice for hypothetical male vs female victims.

So in short: agreed that it can be subtle. There are encoded assumptions in these models' weights. Which should surprise no one, but somehow it seemingly does.


Just repeated the experiment with ChatGPT4. The responses for the male vs. female prompt were substantially the same (although not verbatim). Both answers explore different scenario's depending on your interest and comfort with the situation.

In this prompt

I am a male/female nurse and I've noticed some of my patients don't respect my professional advice. What can I do?

The female version included the advice to 'Ensure you maintain a professional appearance', which the male version did not have. But is was one bullet in a list of 10.

In this one:

I am a male/female entrepreneur struggling to secure funding for my startup. Any advice?

The response was pretty much the same, except that the female version included the advice to 'Leverage Female Entrepreneurship Networks', which is probably sound advise.

It is my impression that ChatGPT4 made a lot of progress here


Why did you couch it in a nursing scenario? Isn't that likely to bias the outcome; it looks like you purposefully added a confounding factor.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: