
I work for Google Search. Apologies for this! It’s been fixed now and posted to our search status dashboard https://status.search.google.com/incidents/hySMmncEDZ7Xpaf9i...


Fortunately, the issue was reported in the proper / only place to get support from Google: the front page of Hacker News.


Thank god for procrastination!


And what approach would be effective to get any of the buttons working on the ChatGPT login page?

On iOS 14, the ‘login’ button simply does nothing, regardless of the browser.

It’s been reported, and yet the last time I checked it still didn’t work. It’s been more than two months already, and it seems nobody gives a sh….


LOL


Would love to hear what the bug and/or fix was


I'm more interested in why UA sniffing is considered acceptable for this.


On the server side, parsing the UA string is the best & fastest way to figure out which browser is on the other end of the connection. This may need to happen before you load any JS - it's commonly used to decide which JS bundles to load. When put under the microscope, browsers have inconsistent behaviors and occasional regressions from version to version (e.g. performance with sparse arrays).
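
To make the idea concrete, here's a minimal sketch of server-side UA sniffing used to pick a bundle. Nothing here is Google's actual code; the regexes, version cutoffs, and bundle names are made up for illustration, and real code would use a maintained parser:

    // Node/TypeScript sketch: decide which JS bundle to reference
    // before any client-side code has a chance to run.
    import http from "node:http";

    function pickBundle(ua: string): string {
      // Crude version checks -- purely illustrative.
      const chrome = ua.match(/Chrome\/(\d+)/);
      const firefox = ua.match(/Firefox\/(\d+)/);
      if ((chrome && Number(chrome[1]) >= 100) || (firefox && Number(firefox[1]) >= 100)) {
        return "/js/modern.js";
      }
      return "/js/legacy-with-polyfills.js"; // older or unknown browsers get the bigger bundle
    }

    http.createServer((req, res) => {
      const bundle = pickBundle(req.headers["user-agent"] ?? "");
      res.setHeader("Content-Type", "text/html");
      // No extra round trip: the decision is baked into the initial HTML.
      res.end(`<!doctype html><title>demo</title><script src="${bundle}"></script>`);
    }).listen(8080);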


How much JavaScript is needed to accept my text input and provide autocomplete options? Pretty wild that we need to worry about browser compatibility to do this.


> How much JavaScript is needed to accept my text input and provide autocomplete options?

If you're talking about Google's homepage, the answer is "a lot". You can check for yourself - go to google.com, select "view source" and compare the amount of Closure-compiled JavaScript against HTML markup.


I think you've missed the point. Google's primary web search feature could, in theory, be implemented without a line of JavaScript. That's how it was years and years ago anyway.


I use Firefox with the NoScript addon and google.com still works just fine.


I did not miss the point; I gave an answer based on the ground truth rather than theory.

> Google's primary web search feature could, in theory, be implemented without a line of JavaScript

...and yet, in practice, Google defaults to a JavaScript-heavy implementation. Search is Google's raison d'être and primary revenue driver; I posit it is therefore optimized up the wazoo. I wouldn't hastily assume incompetence given those priors.


The important word in the question you quoted is *needed*.

Google homepage is 2MB. Two fucking megabytes. Without JS, it's 200K.

I can't be the only person who remembers when Google was known for even omitting technically optional html tags on their homepage, to make it load fast - they even documented this as a formal suggestion: https://google.github.io/styleguide/htmlcssguide.html#Option...


> I can't be the only person who remembers when Google was known for even omitting technically optional html tags on their homepage, to make it load fast

This was back when a large fraction of search users were on 56k modems. Advances in broadband connectivity, caching, browser rendering, resource-loading scheduling, and front-end engineering practices may result in the non-intuitive scenario where the 2MB Google homepage in 2024 has the same (or better!) 99th-percentile First-Meaningful-Paint time as a stripped-down 2kb homepage in 2006.

The homepage size is no longer that important, because how much time do you save by shrinking a page from 2MB to 300kb on a 50 Mbps connection with a warm cache? Browser cache sizes are much larger than they were 10 years ago (thanks to growth in client storage). After all, page weight is mostly used as a proxy for loading time.
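
(Back-of-the-envelope, with my own illustrative assumptions of no compression and a cold cache: 2MB − 300kb ≈ 1.7MB ≈ 13.6 megabits, which at 50 Mbps is roughly 0.27 seconds of raw transfer time -- and the warm cache shrinks even that.)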


I'm sorry, but you're going to have to pick an argument and stick to it before I can possibly hope to respond.

Either performance is so critical that a few kb to do feature detection is too much, or line performance has improved so much that 2MB of JavaScript for a text box and two buttons is "acceptable".

You can't have it both ways.


> You can't have it both ways.

Your argument goes against empirical evidence in this instance. You can have it "both ways" when client-side feature detection is the slower choice on high bandwidth connections and you want to consistently render the UI within 200ms.

Performance goes beyond raw bandwidth, and as with all things engineering, involves tradeoffs: client-side feature detection has higher latency (server-client-server round trip and network connection overheads) and is therefore unsuitable for logic that executes before the first render of above-the-fold content. All of this is pragmatic, well-known and not controversial among people who work on optimizing FE performance. Your no-serverside-detection absolutism is disproved by the many instances of UA-string parsing in our present reality.
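
To put rough numbers on it (mine, not measured from anyone's production traffic): with ~100ms of round-trip latency, client-side detection that triggers one extra script fetch burns at least that 100ms before the dependent code can even start downloading -- already half of a 200ms render budget -- whereas a UA check on the server adds effectively nothing to the first response.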


> Your no-serverside-detection absolutism is disproved by the many instances of UA-string parsing in our present reality.

By your logic, because McDonald's is popular, it must be a healthy choice then.


I've definitely had to code up alternative front-ends routed through a server I own to access Google on slow connections. If it takes too long, your browser just gives up, and the site isn't just "unusable" (slow to the point of being painful), it's actually unusable.


I don't doubt your experience - but my mention of the 99th percentile was intentional.


The 99th percentile is fairly arbitrary. At Google's scale, that's a $2B yearly loss in customers they could have satisfied who went elsewhere. That's roughly 200 FTEs who could be dedicated to the problem more efficiently than working on their other business concerns. Is not delivering a shit website when the connections are garbage that hard of a problem?


Thank you for the link. I had no idea about most of the optional tags. It looks ugly when taken to extremes, though.


Google is an advertising company. I’m sure they’re collecting quite a bit more than just your text input.


Of course, that's why there's 1.8MB of compressed JavaScript for a text box and an image. My point being that it's silly and I'm exasperated with the state of the internet.


2002 called and they want their terrible development practices back.


It's wild to think that everything we've collectively learned as an industry is being forgotten, just 20 years later.

- We're on the verge of another browser monopoly, cheered on by developers embracing the single controlling vendor.

- We already have sites declaring that they "work best in Chrome" when what they really mean is "we only bothered to test in Chrome".

- People are not only using UA sniffing with inevitable disastrous results, they're proclaiming loudly that it's both necessary and "the best" solution.

- The amount of unnecessary JavaScript is truly gargantuan, because how else are you going to pad your resume?

I mean, really, what's next?

Are we going to start adopting image slice layouts again because browsers gained machine vision capabilities?


> People are not only using UA sniffing with inevitable disastrous results, they're proclaiming loudly that it's both necessary and "the best" solution.

Since you're replying to my comment and paraphrasing a sentence of mine, I'm guessing I'm "people".

I'm curious to hear from you what - if any - better alternative can be used to determine the browser's identity or characteristics (implied by name and version) on the server side. "Do not detect the browser on the server side" is not a valid answer, and suggests to me that the person proffering it isn't familiar with large-scale development of performant web apps or websites for heterogeneous browsers. A lot of browser inconsistencies have to be papered over (e.g. with polyfills or alternative algorithm implementations) without shipping unnecessary code to browsers that don't need it. If you have a technique faster and/or better than UA sniffing on the server side, I'll be happy to learn from you.

"Do feature JavaScript feature detection on the client" is terrible for performance if you're using it to dynamically load scripts on the critical path.


I'm sorry, but you're going to have to pick an argument and stick to it before I can possibly hope to respond. Either performance is so critical that a few kb to do feature detection is too much, or line performance has improved so much that 2MB of JavaScript for a text box and two buttons is "acceptable". You can't have it both ways.


We're also back to table layouts with grid, albeit a lot more usable this time around.


I don't recall that there was ever anything inherently wrong with using tables for layout, except that it was a misuse of tables, so we were told it wasn't "semantic". Thus you had years of people asking on forums how to emulate tables using a mess of floating divs, until flexbox/grid came around. In retrospect, tables are also clearly incompatible with phone screens, but that wasn't really a problem at the time.


One, it made the code unreadable and impossible to maintain properly, especially since most of those tables were generated straight out of Photoshop or whatever.

Two, it was an accessibility nightmare.

At least modern grid design fixes those.


It's OK; sure, it's stressful when something like this happens. Software is a hard field, and the number of things that can go wrong is immense. When I think about how few errors and how little downtime some of the larger services have, I realize how much work must be involved.


I'd divide "things going wrong" into forced and unforced errors. Your upstream service providers sending subtly malformed data is a forced error. It happens. Doing user agent sniffing for presentation, poorly, is an unforced error.

It's not amateurish to have problems. It is amateurish (or malicious) to have problems caused by this specific class of issue.


I absolutely don't think it's a laughing matter. I made a joke in the interview with the reporter as a kind of icebreaker (which clearly didn't seem to go well).

But as I also explained after doing that, "I’m pretty sure, went into what I thought was a more serious and thoughtful discussion, at least from my perspective."

I don't think making a joke about something is the same as someone then thinking an entire matter isn't serious. But I can appreciate you disagree.


It's because when I was interviewed, what I said was all speaking officially for Google. You can attribute anything there to the company directly.

My blog post -- I wrote that on my own. No one from the Google communications team reviewed it, approved it, vetted it and so on. That's what I was trying to explain.

That doesn't mean, of course, people won't think it somehow reflects on Google or what I do there. It no doubt will. But that's not quite the same thing as something being an official company statement.


I know it was a long post I made. But yes, I (and we) recognize people want the results to be better. I covered this at the end (along with some other parts):

"That said, there’s room to improve. There always is. Search and content can move through cycles. You can have a rise in unhelpful content, and search systems evolve to deal with it. We’re in one of those cycles. I fully recognize people would like to see better search results on Google. I know how hard people within Google Search are working to do this. I’m fortunate to be a part of that. To the degree I can help — which includes better communicating, ensuring that I reflect the humbleness that we — and I feel — I’ll keep improving on myself."


I get this view. But as my post explains, I'd quit writing about search. I was done. There wasn't going to be more criticism (or praise or whatever) from me because I'd retired from writing about search. I didn't have plans to go to Google when I retired. No one there even knew I was leaving. Which ... you or anyone can choose to believe or not, but that's how it is.

I was far from the only critic (or advocate) for Google or other search engines. There are plenty of others. New people, and with good views, continue to come into the space. The idea of "Google hired me to quiet me," again, while I get it, just wouldn't resolve that.

By the way, the alternative idea that I somehow had secret details of spamming techniques wouldn't make sense, either. Google had and still has an excellent spam team. They didn't need me to come in and somehow fill gaps.

What Google gained by me coming in, I hope, is someone that both tries to help people better understand how the search engine works from within the search quality team (that's where I work, in that team) and also brings back into that team advocacy and feedback from the outside world (which typically, I realize, isn't that clear to those people outside Google -- here's an example I shared of this last week when asked: https://twitter.com/searchliaison/status/1720491595420856329 )


I was on vacation when this came up, so I'm playing some catch-up. I work for Google Search. I've been very involved with the concerns raised about quoted searches last year, especially because they never stopped working. They do work.

We did make an update last year to better reflect where quoted content appears on a page in the snippets we show. We did this because sometimes it's hard to find the quoted material on the page itself, leading to the "quotes don't work" issues.

This post explains more about this: https://blog.google/products/search/how-were-improving-searc...

The post also explains things like how we ignore punctuation -- which leads to the "example.com" type of issue you might be having. If you're quoting a domain name, we're likely seeing that as "name com" rather than a request to search only within the domain. If you want to just search within the domain, that's what site: is for, such as [site:example.com whatever you want to search for]


I actually work for our search quality team, and my job is to foster two-way communication between the search quality team and those outside Google. When issues come up outside Google, I try to explain what's happened as best I can. I bring feedback into the search quality team and Google Search generally to help foster potential improvements we can make.


Yes. All this is saying that you do not write any code for the search algorithms. Do you know how to code? Do you have access to those repos internally? Do you read them regularly? Or are you only aware of what people tell you in meetings about it?

Your job is not to disseminate accurate information about how the algorithm works, but rather to disseminate information that Google has decided it wants people to know. Those are two extremely different things in this context.

I work on these kinds of vague "algorithm"-style products in my job, and I know that unless you are knee-deep in it day to day, you have zero understanding of what it ACTUALLY does, what it ACTUALLY rewards, and what it ACTUALLY punishes, which can be very different from what you were hoping it would reward and punish when you built and trained it. Machine learning still does not have the kind of explanatory power to do any better than that.


No. I don't code. I'm not an engineer. That doesn't mean I can't communicate how Google Search works. And our systems do not calculate how much "old" content is on a site to determine if it is "fresh" enough to rank better. The engineers I work with, reading about all this today, find it strange that anyone thinks this.


No. But it's also complicated, as Matt did things beyond web spam. Matt worked within the search quality team, and he communicated a lot from search quality to the outside world about how Search works. After Matt left, someone else took over web spam. Meanwhile, I'd retired from journalism writing about search. Google approached me about starting what became a new role of "public liaison of search," which I've done for about six years now. I work within the search quality team, just as Matt did, and that type of two-way communication role he had, I do. In addition, we have an amazing Search Relations team that also works within search quality, and they focus specifically on providing guidance to site owners and creators (my remit is a bit broader than that, so I deal with more than just creator issues).


thanks, Ernie!


I'm the source. I officially work for Google. The account is verified by X. It's followed by the official Google account. It links to my personal account; my personal account links back to it. I'm quoted in the Gizmodo story that links to the tweet. I'm real! Though now perhaps I doubt my own existence....

