That's a cynical take, so it will probably get upvoted, but what are you basing it on?
eBay is a pretty eclectic marketplace, and I can think of a number of possible reasons that have little to do with ads. For example, they may be worried about high error rates, and thus buyer and seller dissatisfaction. If I instruct an agent to buy X, eBay is almost never interchangeable with Amazon or Target.
They have no problem surfacing their listings on Google Shopping.
But ads directly correspond to a revenue stream, and a loss of ad "impressions" would reduce that revenue, so a "protect the advertising" motive is not at all unusual to consider as part of eBay's reasoning for this ban.
Given how hard they push sellers to purchase their "extra cost listing enhancements" (i.e., pay to have your listings show in the "advertisement" spots), it appears that they may make a decent revenue stream from these advertising angles. An AI agent could find listings without going through the advertising displays and thus cut into this revenue stream.
We've been talking about a "crisis of reproducibility" for years, and about the incentive to crank out high volumes of low-quality research. We now have a tool that brings the cost of producing plausible-looking research down to zero. So of course we're going to see that tool abused on a galactic scale.
But here's the thing: let's say you're a university or a research institution that wants to curtail it. You catch someone producing LLM slop, and you confirm it by analyzing their work and conducting internal interviews. You fire them. The fired researcher goes public saying that they were doing nothing of the sort and that this is a witch hunt. Their blog post makes it to the front page of HN, garnering tons of sympathy and prompting many angry calls to their ex-employer. It gets picked up by some mainstream outlets, too. This has happened a bunch of times.
In contrast, there are basically no consequences to institutions that let it slide. No one is angrily calling the employers of the authors of these 100 NeurIPS papers, right? If anything, there's the plausible deniability of "oh, I only asked ChatGPT to reformat the citations, the rest of the paper is 100% legit, my bad".
> AI, politics, and discussing how HN isn't what it used to be. That's all that's here now.

HN isn't what it used to be.
Are you spending your time patrolling /newest and upvoting good submissions, then? There are relatively few people doing this and it's easy to have an outsized impact.
All my social media feeds are filled with political rage bait. Yes, tech is political, and yes, techies implicitly take sides; but I really don't need another source for all the political headlines of the day.
That is not what I’m talking about. I’m talking about escaping that.
I’m frustrated with how narrow a view of politics people here are taking.
Partisan politics has grown into a nasty oppositional quagmire.
But politics in general is defined as “The art or science of government or governing, especially the governing of a political entity, such as a nation, and the administration and control of its internal and external affairs” (from a DuckDuckGo search). That is pretty broad.
Open your minds! There is more out there than you think.
But Hacker News is full of "hackers" and computer science grads. Why would you expect to find nuanced discussion of governing here? I don't come to Hacker News for discussion of surgical procedures either because the surgeons are not on here.
This might be heresy, but a CS background doesn't make you an expert on government, governance, or politics, just as politicians seem woefully uninformed on computer science topics. So a political discussion on Hacker News will naturally lean towards popular conceptions of politics: that is, partisanship, slogans, and the other stuff that makes social media politics so toxic. "The art or science of government or governing, especially the governing of a political entity, such as a nation" is not going to enter the picture.
I guess I’m making the mistake of assuming others have taken a similar intellectual path as I have.
I’m an Electrical and Computer Engineer (ECE) by schooling. But I did pay attention in my mandatory liberal arts classes. I took a Political Philosophy course, and a 400-level History of US Foreign Policy course, where I was the only non-history major.
People inevitably opine on government/politics. And because of that I think they should delve deeper. I think that delving deeper and having civil conversation are how we escape the toxic mess media currently dishes out.
I think the danger with political discussion is that the expression of an idea is as important as the idea itself. This means that to have a productive political discussion you either need:
1. Very very high verbal skills so that each person can communicate their idea in a way that doesn't leave (much) room for interpretation or a bad-faith reading.
2. A community that "steelmans" each other's ideas and consistently chooses the best-faith interpretation of what the other person is saying.
(1) is impossible in a forum that accepts folks from a range of backgrounds and abilities. (2) is generally impossible in a public forum on the internet. Even if everyone on Hacker News stuck to this principle, outsiders would not. You'd get posts on reddit about how "Hacker News is a haven for Nazis". Or posts on X about how "Communists are invading the tech community" and ultimately a lot of bad press for Y Combinator that I'm sure they'd rather not have.
Failure of 1 & 2 is why there are flame wars, yes, but I thought the motto here is often "don't let the perfect be the enemy of the good."
> (1) is impossible in a forum that accepts folks from a range of backgrounds and abilities.
This by itself doesn't account for why there are significantly fewer low-value political comments on here than on reddit, to which (1) also technically applies. For (2), taking the best-faith interpretation is already in the HN guidelines. I'm also guessing that the mods let many political posts flagged by users stay flagged because, from experience, they "know" which posts will trigger flame wars or low-value comments from the community, per the "past performance predicts future performance" thing (i.e., the unsaid part is that they don't trust the community to obey 1 & 2 on those posts, given the past track record).
I for one would love to read past discussions of historic political events as they happen live from a community that includes industrialists of the past and their well-paid or high-skilled employees as well as people from academia in related fields. So why limit posterity's ability to do the same?
> I guess I’m making the mistake of assuming others have taken a similar intellectual path as I have.
Oh, come on. I know a lot of people who are highly educated and intelligent but fall for the same outrage bait as everyone else... we're bombarded with so many political talking points that we don't carefully consider every headline, verify every source, and then publish nuanced takes on social media, where the stories change every hour.
The bottom line is that, with all respect, I absolutely don't care about the political hot takes of people on HN. And I'm sure they don't care about mine. I know where to go when I want to talk politics. If I want measured takes from scholars, I can read their columns or blogs. If I want to argue, I'll do it with family and real-world friends.
I agree, but I don't know of a better place to discuss current events on the internet. I can at least expect that the people are educated and intelligent (relative to the average internet user) and that there's a culture of thoughtful discussion.
Every Reddit thread I see on politics these days is just... rabid seething. At this point they remind me of how my elderly far-right relatives posted on Facebook circa 2010. I broadly agree with them (orange man bad), but there's so much misinformation and sloppy thinking that it's useless. There are probably some smaller and more thoughtful political subreddits out there, but if so, I haven't found them.
EDIT: Now that I think about it some more, I disagree with your sentiment that we should leave the politicking to the politicians. Democracy requires a population that has some idea of what's going on. I think discussion and disagreement are a great way to sharpen one's thinking.
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.
The problem is that bug bounty slop works. A lot of companies with second-tier bug bounties outsource triage to contractors (there's an entire industry built around that). If a report looks plausible, the contractor files a bug. The engineers who receive the report are often not qualified to debate exploitability, so they just make the suggested fix and move on. The reporter gets credit or a token payout. Everyone is happy.
Unless you have a top-notch security team with a lot of time on their hands, pushing back is not in your interest. If you keep getting into fights with reporters, you'll eventually get one wrong, and then you're gonna get derided on HN and get headlines about how you don't take security seriously.
In this model, it doesn't matter if you require a deposit, because on average, bogus reports still pay off. You also create an interesting problem: a sketchy vendor can hold the reporter's money hostage if the reporter doesn't agree to unreasonable terms.
I don’t think it works for curl though. You would guess that sloperators would figure out that their reports aren’t going through with curl specifically (because, well, people are actually looking into them and can call bullshit), and move on.
For some reason, they either didn’t notice (e.g., there are just too many people trying to get in on it), or did notice but decided they don’t care. A deposit should help here: most companies probably won’t require one, so when you see a project that does, you’ll probably stop and think about it.
Triage gets outsourced because the quality of reports is low.
If filing a bad report costs money, low-quality reports go down. Meanwhile, anyone still doing it is funding your top-notch security team: they can thoroughly investigate the report, and if it turns out to be nothing, the reporter ends up paying them for their time.
My point is that on average, filing bad but plausible-sounding reports makes the reporter money. Curl is the odd exception with naming-and-shaming, not the rule. Spamming H1 with AI-generated reports is lucrative. A modest deposit is unlikely to change that. A big deposit (thousands of dollars) would, but it would also discourage a lot of legitimate reports.
I'm not sure I like this method of accounting for it. The critics of LLMs tend to conflate the costs of training LLMs with the cost of generation. But this makes the opposite error: it pretends that training isn't happening as a consequence of consumer demand. There are enormous resources poured into it on an ongoing basis, so it feels like it needs to be amortized on top of the per-token generation costs.
At some point, we might end up in a steady state where the models are as good as they can be and the training arms race is over, but we're not there yet.
That's not really an error, that's a fundamental feature of unit economics.
Fixed costs can't be rolled into the unit economics because the divisor is continually growing. The marginal costs of each incremental token/query don't depend on the training cost.
You can absolutely have a stab at it. Estimate how long models last for, amortise over that time/number of calls. We've seen enough models go out of fashion for that to be reasonably done.
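For illustration, here's a rough back-of-the-envelope version of that kind of amortisation. Every figure below (training cost, model lifetime, token volume) is a made-up placeholder, not a real number for any actual model:

    # Back-of-the-envelope amortisation sketch; all numbers are hypothetical
    # placeholders, not figures for any real model.
    training_cost_usd = 100_000_000              # assumed one-off training cost
    model_lifetime_years = 2                     # assumed useful life before replacement
    tokens_served_per_year = 50_000_000_000_000  # assumed tokens generated per year

    total_tokens = tokens_served_per_year * model_lifetime_years
    amortised_usd_per_million_tokens = training_cost_usd / total_tokens * 1_000_000

    # Add this on top of the marginal (inference) cost per million tokens
    # to get a fuller per-token figure.
    print(f"amortised training cost: ${amortised_usd_per_million_tokens:.4f} per 1M tokens")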
My point is that it isn't, not really. Usage begets more training, and this will likely continue for many years. So it's not a vanishing fixed cost, but pretty much just an ongoing expenditure associated with LLMs.
No one doing this for money intends to train models that will never be amortized. Some will fail and some are niche, but the big ones must eventually pay for themselves or none of this works.
The economy will destroy inefficient actors in due course. The environmental and economic incentives are not entirely misaligned here.
> No one doing this for money intends to train models that will never be amortized.
Taken literally, this is just an agreement with the comment you're replying to.
Amortizing means that it is gradually written off over a period. That is completely consistent with the ability to average it over some usage. For example, if a printing company buys a big new printing machine every 5 years (because that's how long they last before they wear out), they would amortize its cost over the 5 years (technically it's depreciation, not amortization, because it's a physical asset, but the idea is the same). But it's 100% possible to look at the number of documents they print over that period and calculate the cost of the print machine per document. And that's still perfectly consistent with the machine paying for itself.
The challenge with no longer developing new models is keeping your model up to date, which as of today requires an entire training run. Maybe they can do that less often, or they’ll come up with a way to update a model after it’s trained. Maybe we’ll move on to something other than LLMs.
The training cost is a sunk cost for the current LLM, and unknown for the next-generation LLM. Seems like it would be useful information but doesn't go here?
The AI training data sets are also expensive... The cost is especially hard to estimate for data sets that are internal to businesses like Google. Especially if the model needs to be refreshed to deal with recent data.
I presume historical internal datasets remain high value, since they might be cleaner (no slop) or maybe unavailable (copyright takedowns) and companies are getting better at hiding their data from spidering.
I don't think people care all that much about phones. It's just that phones are power-constrained, so manufacturers wanted to move to OLEDs to save on backlight; and because the displays are small, the tech was easier to roll out there than on 6k 32-inch monitors.
But premium displays exist. IPS displays on higher-end laptops, such as ThinkPads, are great - we're talking stuff like 14" 3840x2160, 100% Adobe RGB. The main problem is just that people want to buy truly gigantic panels on the cheap, and there are trade-offs that come with that. But do you really need 2x32" to code?
The other thing about phones is that you have your old phone with you when you buy a new one, so without even really meaning to, you're probably doing a side-by-side comparison, and improvements to display technology are a much bigger sales motivator.
This is the insight that sold a billion iPhones. They were obsessed with what happens when you’re at the store, and you don’t need a new phone, and you pick one up, and…
Outside ThinkPads, IPS is basically the cheap/default option on laptops, with OLED being the premium choice. With ThinkPads, TN without sRGB coverage is the cheap/default option, with IPS being the premium choice.
As far as I know, Google never had a requirement to have a degree for any software engineering job. What they did pretty aggressively, though, was source candidates from universities with top-notch engineering programs (CMU, Stanford, etc.). So they ended up with a significant proportion of such hires not because they rejected everyone else, but because their intake process produced more leads of this sort and treated them preferentially. Basically, for applicants going through that funnel, they guaranteed an onsite interview.
But they always had a good number of people with no degrees or degrees wholly unrelated to computers.
Big Tech can afford to be selective, so if you don't have a degree, the basic answer is that you need to stand out in some other way. This can be several years of interesting industry experience or other publicly-visible work (open source code, winning some competition, or even having a good blog). It also helps to know someone who works there and can help you get the first interview.
I worked at Google. What you say is true for getting an interview, but the upside is that big tech cannot afford to be selective once you pass their interview, because very few can. At that point you are pretty much guaranteed an offer.
What kind of attitude? I never even had an interview at a big tech company. I am sincerely asking. Should I assume you meant their behavioral interviews are hard to pass? Then, what is it that they are looking for in those interviews? What kind of attitude are they expecting?
> The international value of the dollar as a reserve and trade currency is inherently tied to the behavior of the US Government and the Federal Reserve.
I think this oversimplifies things. The dominance of the dollar emerged chiefly because most of the alternatives were worse, for a combination of military, political, and economic reasons.
There is a positive feedback loop at the core of it, because the US economy benefits greatly from being able to issue foreign debt in its own currency. But that doesn't matter: as long as the US faces little risk of getting invaded by any of its neighbors or defaulting on its obligations, everyone is happy.
What's been changing - and it started long before Trump - is that the US is also increasingly willing to use its control of USD (and thus the Western banking system) to pursue sometimes petty policy goals. This is giving many of our partners second thoughts, not because of the fundamentals of USD, but because they can imagine finding themselves at odds with US policymakers at some point down the line.