Hacker News | burnhamup's comments

In practice it means Steam reviews the key requests you make for third party bundles and sales. If they decide the deal is too generous, they may deny the key requests until you've offered the game for a comparable price on Steam.


> Here is a fun tip: Just transfer or deposit money in small amounts, whenever you feel like it. Avoids the questioning like you're some kind of criminal.

Breaking up deposits into smaller amounts to avoid reporting requirements is a crime called structuring.

I wouldn't recommend doing this as an alternative to dealing with the reports and scrutiny on larger transactions.


Point taken. It was really annoying to try to read one of the slides of the carousel because it kept moving.


I found it very readable w/ my default JavaScript-disabled configuration. It wasn't until I viewed the page w/o my plugins loaded that I got the message.


People who are going to disable JavaScript are going to disable JavaScript.

People who won't, won't.

Neither camp needs to proselytize the other, nor is it ever very effective.

And bragging about which side you're on is weird.


"I disable JS" always felt to me like the "I don't own a TV" elitism-brag.


That's an amusing reference; it's been a while since I've seen one. I assume that's because computer screens make TVs pointless.

I wonder if there's a more current version? Not having a smartphone perhaps?


2005: I don't have a TV

2015: I don't have a smartphone

2020: I don't have social media

2025: I don't have friends


> computer screens make TVs pointless

...if you live alone in a dorm room?


To be fair, if you're going to be a genuinely superior person, you might as well brag about that superiority, since there's nothing else it's useful for.


I guess the rest of the sentence is "... if TVs were a fundamental prerequisite for modern life"

> Yeah, bro, I rub two sticks together to cook my own deer meat, because Big Grocery is tracking me


Clearly I hit a nerve. I was just trying to make the point that w/o JavaScript the example kinda fell flat. I didn't think I was bragging, but apparently I was. (I just prefer to turn on JavaScript when I need it -- I find a lot of sites a lot less distracting w/o it.)


I think options are limited on the Switch. You can't connect to arbitrary servers from the console version - just some curated public servers, or by paying for Realms. You can hack around this with a DNS server that redirects the curated servers, but that starts to get sketchy.

You also need the paid Switch Online subscription for any sort of network play.


Brickfest is more of a traveling attraction. The local convention was Bricks by the Bay - they had the GBC in 2023, but the convention is on an indefinite hiatus. I wish it were easier to see where upcoming GBCs will be.


The patch system I worked with generated signatures of each build. The signature had the hash of each block of the build. The client has the signature for its installed version (1.0), downloads the signature of the new version (1.2), and diffs the two. Then it downloads each block that has changed.
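
Roughly like this, if memory serves (a sketch with made-up helper names and a fixed block size; real signature formats differ):

    import hashlib

    BLOCK_SIZE = 64 * 1024  # assumed fixed block size

    def build_signature(path):
        # Hash every block of the build; the list of digests is the "signature".
        sig = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                sig.append(hashlib.sha256(block).hexdigest())
        return sig

    def blocks_to_download(old_sig, new_sig):
        # The client diffs its 1.0 signature against the 1.2 signature and
        # fetches only the blocks whose hashes differ (or are brand new).
        return [i for i, digest in enumerate(new_sig)
                if i >= len(old_sig) or old_sig[i] != digest]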

I think it was the `electron-updater` for my electron app, but I don't quite remember now.


It was just a couple years ago that I learned that the Unix compression libraries have a flag to make them “rsync friendly”. They do something with compression blocks to make them more stable across changes. Normally a small change in the middle of a file could change the rest of the output, due to bit packing.

I should really figure out how that works.
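
A sketch of the mechanism as I understand it (assuming gzip's --rsyncable behavior: flush and reset the compressor at boundaries chosen by a rolling checksum of the input, so an edit only disturbs nearby output blocks; the window and mask values here are illustrative):

    import zlib

    WINDOW = 4096          # rolling-sum window size (illustrative)
    BOUNDARY_MASK = 0xFFF  # on average, one boundary every ~4 KiB of input

    def rsyncable_compress(data: bytes) -> bytes:
        comp = zlib.compressobj()
        out, rolling, start = [], 0, 0
        for i, byte in enumerate(data):
            rolling += byte
            if i >= WINDOW:
                rolling -= data[i - WINDOW]
            if (rolling & BOUNDARY_MASK) == 0:
                # Flush and reset the compressor: output after this point no
                # longer depends on anything before the boundary.
                out.append(comp.compress(data[start:i + 1]))
                out.append(comp.flush(zlib.Z_FULL_FLUSH))
                start = i + 1
        out.append(comp.compress(data[start:]))
        out.append(comp.flush())
        return b"".join(out)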


The theory I've heard is related to 'crawl budget'. Google is only going to devote a finite amount of time to indexing your site. If the number of articles on your site exceeds that time, some portion of your site won't be indexed. So by 'pruning' undesirable pages, you might boost attention on the articles you want indexed. No clue how this ends up working in practice.

Google's suggestion isn't to delete pages, but to mark some pages with a noindex header.

https://developers.google.com/search/docs/crawling-indexing/...
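
For reference, that's an HTTP response header of the form:

    X-Robots-Tag: noindex

(or the equivalent robots meta tag in the page's <head>).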


But as that linked guide explains, that's only relevant for sites with e.g. over a million pages changing once a week.

That's for stuff like large e-commerce sites with constantly changing product info.

Google is clear that if your content doesn't change often (in the way that news articles don't), then crawl budget is irrelevant.


Google crawls the entire page, not just the subset of text that you, a human, recognize as the unchanged article.

It’s easy to change millions of pages once a week with on-load CMS features like content recommendations. Visit an old article and look at the related articles, most read, read this next, etc widgets around the page. They’ll be showing current content, which changes frequently even if the old article text itself does not.


I'm pretty sure Google is smart enough to recognize the main content of a page, and ignore things like widgets and navigation. That's Search Engine 101.


Yes, of course, but that analysis happens after the content has been visited by the bot. It’s still a visit, and still hits the “crawl budget.”


So they should stop doing this on pages that they are deleting now.


It’s possible they examined the server logs for requests from GoogleBot and found it wasting time on old content (this was not mentioned in the article but would be a very telling data point beyond just “engagement metrics”).

There’s some methodology to trying to direct Google crawls to certain sections of the site first - but typically Google already has a lot of your URLs indexed and it’s just refreshing from that list.


To determine whether content has changed, Google has to spend budget as well, doesn't it? So it has to fetch that 20-year-old article.


> So it has to fetch that 20-year-old article.

It doesn't have to fetch every article (statistical sampling can give confidence intervals), and it doesn't have to fetch the full article: doing a "HEAD /" instead of a "GET /" will save on bandwidth, and throwing in ETag / If-Modified-Since / whatever headers can get the status of an article (200 versus 304 response) without bothering with the full fetch.
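
A conditional revisit is cheap; something like this (a sketch with the requests library, made-up URL):

    import requests

    url = "https://example.com/2005/some-old-article"  # hypothetical

    # First crawl: keep the validators the server sent back.
    first = requests.get(url)
    etag = first.headers.get("ETag")
    last_modified = first.headers.get("Last-Modified")

    # Later revisit: ask "has this changed?" instead of re-downloading it.
    # (requests.head(url) would be the bare "HEAD /" variant.)
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified

    revisit = requests.get(url, headers=headers)
    if revisit.status_code == 304:
        print("Not modified -- nothing to re-index")
    else:
        print("Changed -- re-index")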


There’s an obvious way this can be exploited. Bait and switch.


If the content is literally the same, the crawler should be able to use If-Modified-Since, right? It still has to make a HTTP request, but not parse or index anything.


If the content is dynamic (e.g. a list of popular articles in a sidebar has changed), then the page will be considered "updated".


This is not correct. It's up to the server, controlled by the application, to send that or other headers, similar to sending a <title> tag. The headers take priority, and, as another commenter said, the crawler can do a HEAD request first and not bother with a GET request for the content.


> The theory I've heard is related to 'crawl budget'. Google is only going to devote a finite amount of time to indexing your site.

Once a site has been indexed once, should it really be crawled again? Perhaps Google should search for RSS/Atom feeds on sites and poll those regularly for updates: that way they don't waste time scraping a site multiple times.

Old(er) articles, once crawled, don't really have to be babysat. If Google wants to double-check that an already-crawled site hasn't changed too much, they can do a statistical sampling of random links on it using ETag / If-Modified-Since / whatever.
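
The feed-polling part could be as simple as this (a sketch with the third-party feedparser library; the feed URL is made up):

    import feedparser

    FEED_URL = "https://example.com/atom.xml"  # hypothetical feed found on the site

    # First poll: remember which entries we've already seen.
    feed = feedparser.parse(FEED_URL)
    seen = {entry.link for entry in feed.entries}

    # Later polls: feedparser can send If-None-Match / If-Modified-Since for us.
    later = feedparser.parse(FEED_URL,
                             etag=feed.get("etag"),
                             modified=feed.get("modified"))
    if later.get("status") == 304:
        print("Feed unchanged -- nothing to recrawl")
    else:
        print("Recrawl only:", [e.link for e in later.entries if e.link not in seen])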


The sitemap protocol, which was invented by Google and designed to give information to crawlers, already includes last-updated info.

No need to invent a new system based on RSS/Atom; there is already a system in place and in use, based on sitemaps.

So, what you suggest is already happening -- or at least, the system is already there for it to happen. It's possible Google does not trust the last-modified info given by site owners enough, or for other reasons does not use your suggested approach; I can't say.

https://developers.google.com/search/docs/crawling-indexing/...
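
For what it's worth, reading the lastmod values back out is trivial (a sketch; the sitemap URL is made up):

    import urllib.request
    import xml.etree.ElementTree as ET

    NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

    with urllib.request.urlopen("https://example.com/sitemap.xml") as resp:  # hypothetical
        root = ET.parse(resp).getroot()

    for url in root.iter(NS + "url"):
        loc = url.findtext(NS + "loc")
        lastmod = url.findtext(NS + "lastmod")  # optional per the spec
        print(loc, lastmod)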


I can imagine a malicious actor changing an SEO-friendly page to something spammy and not SEO-friendly. Since ETag and Last-Modified values are returned by the server, they can be manipulated.

Just a guess though.


This should be what sitemap.xml provides already.


Even if that rule were true, why wouldn't everything in, say, the top NNN internet sites get an exemption? It is the Internet's most-hit content; why would it not be exhaustively indexed?

Alternatively, other than ads, what is changing on a CNN article from 10 years ago? Why would that still be getting daily scans?


Probably bad technology detecting a change. Things like current news showing up beneath the article, which changes whenever a new article is added. I've seen this happen on quite a few large websites. It might be technologically easier to drop old articles than to spend the time fixing whatever they use to determine if a page has changed. You would think a site like CNET wouldn't have to deal with something like that, but sometimes these sites that have been around for a long time have some seriously outdated tech.


That's a good point about the static nature of some pages. Is there any way to tell a crawler: crawl this page, but after this date don't crawl it again, and keep anything you previously crawled?


The ads are different.

I am tracking RSS feeds of many sites, and on some I get notifications for old articles because something irrelevant on the page changed.


CNET* not CNN. But everything you say is still true.


How does Wikipedia manage to remain indexed?


Google is paying Wikipedia through "Wikimedia Enterprise." If Wikipedia weren't able to sucker people into thinking that they're poverty-stricken, Google would probably prop it up like they do Firefox.


Google search still prefers to give me at least 2-3 blogspam pages before the Wikipedia page with exactly the same keywords in the title as my query.


If I were establishing a "crawl budget", it would be adjusted by value. If you're consistently serving up hits as I crawl, I'll keep crawling. If it's a hundred pages that will basically never be a first page result, maybe not.

Wikipedia has a long tail of low-value content, but even the low-value content tends to be among the highest value for its given focus. E.g., I don't know how many people search "Danish trade monopoly in Iceland", and the Wikipedia article on it isn't fantastic, but it's a pretty good start[0]. Good enough to serve up as the main snippet on Google.

[0] https://en.wikipedia.org/wiki/Danish_trade_monopoly_in_Icela...


Wikipedia's strongest SEO weapon is how often its links get clicked on result pages, with no return to the results afterwards.

They’re just truly useful pages, and that is reflected in how people interact with them.


Purely speculating, Wikipedia has a huge number of inbound links (likely many more than CNet or even than more popular sites) which crawler allocation might be proportionate to. Even if it only crawled pages that had a specific link from an external site, that would be enough for Google to get pretty good coverage of Wikipedia.


Very likely Google special-cases Wikipedia.


Your site isn’t worthy of the same crawl budget as Wikipedia.


They could specify in the sitemap how often old articles change. Or set an indefinite caching header.


Google might not trust the sitemap because it sometimes is wrong.


It could be better to opt those articles out of the crawler. Unless that's more effort. If articles included the year and month in the URL prefix, I would disallow /201* instead.
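
E.g., assuming article URLs start with the year (like /2015/03/whatever), a couple of robots.txt lines would do it:

    User-agent: Googlebot
    Disallow: /201*

(Disallow rules are prefix matches, so the trailing * is optional; Googlebot understands the wildcard form either way.)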


I saw Colorado excluded in job postings in the immediate aftermath.

I'm starting to see more and more companies listing a Colorado salary range in recent postings.


I'm not so sure that's true. They might have come to the Google Play store as part of their plan to sue Google. Their Google lawsuit has complaints about Google interfering with their third party deals with phone manufacturers.

