
That only says that Google discourages such actions, not that such actions are not beneficial to SEO ranking (which is equal to the aforementioned economic incentive in this case).


So whose word do we have to go on that this is beneficial, besides anonymous "SEO experts" and CNET leadership (those paragons of journalistic savvy)?

Perhaps what CNET really means is that they're deleting old low quality content with high bounce rates. After all, the best SEO is actually having the thing users want.


In my experience, SEO experts are the most superstitious tech people I've ever met. One guy wanted me to reorder HTTP header fields to match another site's. He wanted our minified HTML to include a line break just after a certain meta element because some other site had it. I got requests to match variable names in our minified JS just because Google's own minified JS used those names.


> In my experience, SEO experts are the most superstitious tech people I've ever met.

And some are the most data-driven people you'll ever meet. As with most people who claim to be experts, the trick is to determine whether the person you're evaluating is a legitimate professional or a cargo-culting wanna-be.


I’ve always felt there is a similarity to day traders or people who overanalyze stock fundamentals. There comes a time when data analysis becomes astrology…


> There comes a time when data analysis becomes astrology.

Excellent quote. It's counterintuitive but looking at what is most likely to happen according to the datasets presented can often miss the bigger picture.


This. It is often the scope and context that determine the logic. It is easy to build bubbles and stay comfy inside them. Without revealing much: I asked a data scientist, whose job is to figure out bids on keywords and essentially control how much money is spent advertising something in a specific region, about negative criteria. As in: are you sure you wouldn't get this benefit even if you stopped spending the money? His response was "look at all this evidence that our spend caused this x% increase in traffic and y% more conversions", and that was two years ago. My follow-up question was: okay, now that the thing you advertised is popular, wouldn't it be the more organic choice in the market, and can we stop spending the money there? His answer was: look at what happened when we stopped the advertising in this small region in Germany 1.5 years ago! My common-sense validation question still stands. I still believe he built a shiny bubble two years ago and refuses to reason with wider context and second-degree effects.


The people who spend on marketing are not incentivised to spend less :)


> There comes a time when data analysis becomes astrology...

or just plain numerology


Leos are generally given the “heroic/action-y” tropes, so if you are, for example, trying to pick Major League Baseball players, astrology could help a bit.

Right for the wrong reasons is still right.


Right for the wrong reasons doesn't give confidence that it's a sustainable skill. Getting it right via randomness falls into the same category.


My data driven climate model indicates that we could combat climate change by hiring more pirates.


Some of the most superstitious people I've ever met were also some of the most data-driven people I've ever met. Being data-driven doesn't exclude unconscious manipulation of the data selection or interpretation, so it doesn't automatically equate to "objective".


The data analysis I've seen most SEO experts do is similar to sitting beside a highway, carefully timing the speed of each car, taking detailed notes on each car's appearance, then returning to the car factory and saying that all cars need to be red because the data says red cars are faster.


One SEO expert who consulted for a bank I worked at wanted us to change our URLs from e.g. /products/savings-accounts/apply by reversing them to /apply/savings-accounts/products on the grounds that the most specific thing about the page must be as close to the domain name as possible, according to them. I actually went ahead and changed our CMS to implement this (because I was told to). I'm sure the SEO expert got paid a lot more than I did as a dev. A sad day in my career. I left the company not long after...
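The reversal the consultant asked for can be sketched as a trivial path transform (purely illustrative; `reverse_path` is a made-up helper, not anything the bank's CMS actually used):

```python
# Hypothetical sketch of the requested URL change: reverse the path
# segments so the most specific one sits closest to the domain.
def reverse_path(path):
    segments = [s for s in path.strip("/").split("/") if s]
    return "/" + "/".join(reversed(segments))

print(reverse_path("/products/savings-accounts/apply"))
# -> /apply/savings-accounts/products
```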


Unfortunately though, this was likely good advice.

The Yandex source code leak revealed that keyword proximity to the root domain is a ranking factor. Of course, there are nearly a thousand factors, and “randomize result” is also one of them, but still.

SEO is unfortunately a zero sum game so it makes otherwise silly activities become positive ROI.


But that's wrong... Do breadcrumbs get larger as you move away from the loaf? No!


It's just a URL rewrite rule for an nginx proxy.


If you want all your canonical urls to be wrong and every navigation to include a redirect, sure.


Even if that measurably improves the ranking of your website, it would still be a bullshit job. It also invites side effects, especially on the web.


I think you're largely correct, but Google isn't one person, so there may be somewhat emergent patterns that work from an SEO standpoint without a solid answer as to why. If I were an SEO customer I would ask for some proof, but that isn't the market they're targeting. There was an old saying in the tennis instruction business that there was a bunch of 'bend your knees, fifty please'. So there are lots of snake-oil salesmen, but some salesmen sell stuff that works.


That's a bit out there, but Google has mentioned in several different ways that pages and sites have thousands of derived features and attributes they feed into their various ML pipelines.

I assume Google is turning all of a site's pages, JS, inbound/outbound links, traffic patterns, etc. into large numbers of sometimes obscure data points like "does it have a favicon?", "is it a unique favicon?", "do people scroll past the initial viewport?", "does it have this known uncommon attribute?".

Maybe those aren't the right guesses, but if a page has thousands of derived features and attributes, maybe they are on the list.

So some SEOs take the idea that they can identify sites that Google clearly showers with traffic, and try to recreate as close a list of those features/attributes as they can for the site they are being paid to boost.

I agree it's an odd approach, but I also can't prove it's wrong.


Considering their job can be done by literally anyone, they have to differentiate somehow.


>our minified HTML

Unreadable source code is a crime against humanity.


Is minified "code" still "source code"? I think I'd say the source is the original implementation pre-minification. I hate it too when working out how something is done on a site, but I'm wondering where we fall on that technicality. Is the output of a pre-processor still considered source code even if it's not machine code? These are not important questions but now I'm wondering.


Source code is what you write and read, but sometimes you write one thing and people can only read it after your pre-processing. Why not enable pretty output?

Plus, I suspect minifying HTML or JS is often cargo cult (for small sites that are frying the wrong fish) or compensating for page bloat.


It doesn't compensate for bloat, but it reduces the bytes sent over the wire, the bytes cached in between, and the bytes parsed in your browser, for _very_ little cost.

You can always open dev tools in your browser and have an interactive, nicely formatted HTML tree there with a ton of inspection and manipulation features.


In my experience, the bigger difference is usually made by not making the page bloated in the first place, as well as progressive enhancement, non-blocking loading, serving from a nearby geolocation, etc. I see projects minify all the things by default, when it should be literally the last measure, with the least impact on TTI.


It does stuff like tree shaking as well; it's quite good. If your page is bloated, it makes it better. If your page is not bloated, it makes it better.


Tree-shaking is orthogonal to minification, though.


That's true.


And does an LLM care? It feels like minification doesn't stop one from explaining the code at all.


The minified HTML (and, god forbid, JavaShit) is the source from which the browser ultimately renders a page, so yes that is source code.


"The ISA bytecode is the source from which the processor ultimately executes a program, so yes that is source code."


I suppose the difference is that someone debugging at that level will be offered some sort of "dump" command or similar, whereas someone debugging in a browser is offered a "View Source" command. It's just a matter of convention and expectation.

If we wanted browsers to be fed code that for performance reasons isn't human-readable, web servers ought to serve something that's processed way more than just gzipped minification. It could be more like bytecode.


I find myself using View Source sometimes, too, but more often I just use devtools, which shows DOM as a readable tree even if source is minified.

I'm actually all for binary HTML: not only is it smaller, it can also be easier to parse, and it makes more sense overall nowadays.


Let's be honest, a lot of non-minified JS code is barely legible either :)

For me, I guess what I was getting at is that I consider the source to be the stuff I'm working on; the minified output I won't touch, because it's output. But it is input for someone else, and available via View Source, so that does muddy the waters, just like decompilers produce "source" that no sane human would want to work on.

I think semantically I would consider the original source code the "real" source if that makes sense. The source is wherever it all comes from. The rest is various types of output from further down the toolchain tree. I don't know if the official definition agrees with that though.


>If we wanted browsers to be fed code that for performance reasons isn't human-readable,

Worth keeping in mind that "performance" here refers to saving bandwidth costs as the host. Every unnecessary whitespace character is a byte that didn't need to be uploaded, hence minify and save on that bandwidth, and thus $$$$.

The performance difference on the browser end between original and minified source code is negligible.


Last time I ran the numbers (which admittedly was quite a number of years ago now), the difference between minified and unminified code was negligible once you factored in compression because unminified code compresses better.

What really adds to the source code footprint is all of those trackers, adverts and, in a lot of cases, framework overhead.
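A quick way to sanity-check this claim yourself is to gzip both forms of the same markup and compare sizes (a minimal sketch with made-up HTML; real pages will show different ratios):

```python
import gzip

# Compare raw vs gzipped sizes for unminified and minified versions of
# the same (invented) HTML. Repeated whitespace and long runs compress
# very well, which narrows the gap between the two forms.
unminified = b"""
<html>
    <head>
        <title>Example page</title>
    </head>
    <body>
        <p class="intro">Hello, world!</p>
        <p class="intro">Hello, world!</p>
    </body>
</html>
"""
minified = (b'<html><head><title>Example page</title></head><body>'
            b'<p class="intro">Hello, world!</p>'
            b'<p class="intro">Hello, world!</p></body></html>')

for label, doc in [("unminified", unminified), ("minified", minified)]:
    print(label, len(doc), "raw ->", len(gzip.compress(doc)), "gzipped")
```

The absolute numbers depend entirely on the input; the point is that the gzipped sizes sit much closer together than the raw sizes do.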


I was thinking transfer speed, although even then, the difference is probably negligible if compressing regardless.


The way I see it, if someone needs to minify their JavaShit (and HTML?! CSS?!) to improve user download times, that download time was horseshit to start with and they need to rebuild everything properly from the ground up.


> It could be more like bytecode.

Isn’t this essentially what WebAssembly is doing? I’ll admit I haven’t looked into it much, as I’m crap with C/++, though I’d like to try Rust. Having “near native” performance in a browser sounds nice; curious to see how far it’s come.


If you need to run it through a prettifier to even have a chance of understanding the code, is it still source code?

About the byte code: You mean wasm? (Guess that's what you're alluding to.)


If you need syntax highlighting and an easy way to navigate between files to understand a large code base, is it still source code?


Turtles all the way down.


Nobody tell this guy about compilers.


Minifying HTML is basically just removing non-significant whitespace. Run it through a formatter and it will be readable.

If you dislike unreadable source code I would assume you would object to minifying JS, in which case you should ask people to include sourcemaps instead of objecting to minification.
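For HTML specifically, the bulk of minification really is whitespace collapsing, which a toy regex can approximate (a naive illustration, not a production minifier; real minifiers must respect `<pre>`, inline scripts, and so on):

```python
import re

# Naive sketch: HTML minification is mostly dropping the insignificant
# whitespace between tags. This deliberately ignores edge cases like
# <pre> blocks and whitespace-sensitive inline elements.
def minify_html(html):
    html = re.sub(r">\s+<", "><", html)  # collapse whitespace between tags
    return html.strip()

print(minify_html("<ul>\n  <li>a</li>\n  <li>b</li>\n</ul>"))
# -> <ul><li>a</li><li>b</li></ul>
```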


So I guess you think compiled code is even worse, right?


I mean, isn't that precisely why open source advocates advocate for open source?

Not to mention, there is no need to "minify" HTML, CSS, or JavaShit for a browser to render a page, unlike compiled code, which is more or less a necessity for such things.


Minifying code for browsers greatly reduces the amount of bandwidth needed to serve web traffic. There's a good reason it's done.

By your logic, there's actually no reason to use compiled code at all, for almost anything above the kernel. We can just use Python to do everything, including run browsers, play video games, etc. Sure, it'll be dog-slow, but you seem to care more about reading the code than performance or any other consideration.


I already alluded[1] to the incentives for the host to minify their JavaShit, et al., and you would have a point if it weren't for the fact that performance otherwise isn't significantly different between minified and full source code as far as the user is concerned.

[1]: https://news.ycombinator.com/item?id=37072473


I'm not talking about the browser's performance, I'm talking about the network bandwidth. All that extra JS code in every HTTP GET adds up. For a large site serving countless users, it adds up to a lot of bandwidth.


Somebody mentioned negligible/deleterious impacts on bandwidth for minified code in that thread, but they seemed to have low certainty. If you happen to have evidence otherwise, it might be informative for them.


>JavaShit

Glad to see the diversity of HN readers apparently includes twelve year olds.

Anyway, you do realise plenty of languages have compilers with JS as a compilation target, right? How readable do you think that is?


>Glad to see the diversity of HN readers apparently includes twelve year olds.

The abuse of JavaScript does not deserve the respect of being called by a proper name.

>Anyway, you do realise plenty of languages have compilers with JS as a compilation target, right? How readable do you think that is?

If you're going to run "compiled" code through an interpreter anyway, is that really compiled code?


>In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language).


Well, no, but it can speed up loading by reducing transfer.


Is that still true with modern compression?


If someone releases only a minified version of their code, and licenses it as free as can be, is it open source?


According to the Open Source Definition of the OSI it's not:

> The program must include source code [...] The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor [...] are not allowed.


The popular licenses for which this is a design concern are careful to define source code to mean "preferred form of the work for making modifications" or similar.


It’s also a crime against zorgons from Planet Zomblaki.


Google actually describes an entirely plausible mechanism of action here at [1]: old content slows down site crawling, which can cause new content to not be refreshed as often.

Sure, one page doesn’t matter, but thousands will.

[1] https://twitter.com/searchliaison/status/1689068723657904129...


It says it doesn't affect ranking, and their quote tweet is even more explicit:

https://twitter.com/searchliaison/status/1689297947740295168


This is the actual quote from Google PR:

>Removing it might mean if you have a massive site that we’re better able to crawl other content on the site. But it doesn’t mean we go “oh, now the whole site is so much better” because of what happens with an individual page.

Parsing this carefully, to me it sounds worded to give the impression removing old pages won’t help the ranking of other pages without explicitly saying so. In other words, if it turns out that deleting old pages helps your ranking (indirectly, by making Google crawl your new pages faster), this tweet is truthful on a technicality.

In the context of negative attention where some of the blame for old content being removed is directed toward Google, there is a clear motive for a PR strategy that deflects in this way.


The tweet is also clearly saying that deleting old content will increase the average page rank of your articles in the first N hours after they are published (because the time to first crawl will decrease, and the page rank is effectively zero before the first crawl).

CNet is big enough that I’d expect Google to ensure the crawler has fresh news articles from it, but that isn’t explicitly said anywhere.


And considering all the AI hype, one could have hoped that the leading search engine's crawler would be able to "smartly" detect new content based on a URL containing a timestamp.

Apparently not, if this SEO trick is really a thing...

EDIT: sorry, my bad, it's actually the opposite. One could expect that a site like CNET would include a timestamp and a unique ID in their URLs in 2023. This seems to be the "unpermalink" of a recent CNET article.

Maybe the SEO expert could have started there...

https://www.cnet.com/tech/mobile/samsung-galaxy-z-flip-5-rev...


I did the tweet. It is clearly not saying anything about the "average page rank" of your articles because those words don't appear in the tweet at all. And PageRank isn't the only factor we use in ranking pages. And it's not related to "gosh, we could crawl your page in X hours therefore you get more PageRank."


It's not from Google PR. It's from me. I'm the public liaison for Google Search. I work for our search quality team, not for our PR team.

It's not worded in any way intended to be parsed. I mean, I guess people can do that if they want. But there's no hidden meaning I put in there.

Indexing and ranking are two different things.

Indexing is about gathering content. The internet is big, so we don't index all the pages on it. We try, but there's a lot. If you have a huge site, similarly, we might not get all your pages. Potentially, if you remove some, we might get more to index. Or maybe not, because we also try to index pages as they seem to need to be indexed. If you have an old page that doesn't seem to change much, we probably aren't running back every hour to index it again.

Ranking is separate from indexing. It's how well a page performs after being indexed, based on a variety of different signals we look at.

People who believe in removing "old" content aren't generally thinking that's going to make the "new" pages get indexed faster. They might think that maybe it means more of their pages overall from a site could get indexed, but that can include "old" pages they're successful with, too.

The key thing is if you go to the CNET memo mentioned in Gizmodo article, it says this:

"it sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results."

Maybe CNET thinks getting rid of older content does this, but it doesn't. It's not a thing. We're not looking at a site, counting up all the older pages, and then somehow declaring the site overall as "old" and therefore that all content within it can't rank as well as if we thought it was somehow a "fresh" site.

That's also the context of my response. You can see from the memo that it's not about "and maybe we can get more pages indexed." It's about ranking.


Suppose CNET published an article about LK99 a week ago, then they published another article an hour ago. If Google hasn’t indexed the new article yet, won’t CNET rank lower on a search for “LK99” because the only matching page is a week old?

If by pruning old content, CNET can get its new articles in the results faster, it seems this would get CNET higher rankings and more traffic. Google doesn’t need to have a ranking system directly measuring the average age of content on the site for the net effect of Google’s systems to produce that effect. “Indexing and ranking are two different things” is an important implementation detail, but CNET cares about the outcome, which is whether they can show up at the top of the results page.

>If you have a huge site, similarly, we might not get all your pages. Potentially, if you remove some, we might get more to index. Or maybe not, because we also try to index pages as they seem to need to be indexed.

The answer is phrased like a denial, but it’s all caveated by the uncertainty communicated here. Which, like in the quote from CNET, could determine whether Google effectively considers the articles they are publishing “fresh, relevant and worthy of being placed higher than our competitors in search results”.


You're asking about freshness, not oldness. IE: we have systems that are designed to show fresh content, relatively speaking -- a matter of days. It's not the same as "this article is from 2005, so it's old, don't show it." And it's also not what is generally being discussed in getting rid of "old" content. Also, especially for sites publishing a lot of fresh content, we get that really fast already. It's an essential part of how we gather news links, for example. And even with freshness, it's not "newest article ranks first", because we have systems that try to show the original "fresh" content, or sometimes a slightly older piece is still more relevant. Here's a page that explains more about the ranking systems we have that deal with both original content and fresh content: https://developers.google.com/search/docs/appearance/ranking...


Dude, like, who is Google? The judicial system of the web?

No. Google has their own motivations here; they are a player, not a rule maker.

Don't trust SEOs, as no one actually knows what works, but certainly don't think Google is telling you the absolute truth.


Ha, I actually totally agree with you, apparently my comment gave the wrong impression. I was just arguing with the GP's comment which was trying to (fruitlessly, as you point out) read tea leaves that aren't even there.


While CNET might not be the most reliable source, Google telling content owners not to play SEO games is also too biased to be taken at face value.

It reminds me of Apple's "don't run to the press" advice when hitting bugs or app review issues. While we'd assume Apple knows best, going against their advice totally works and is by far the most efficient action for anyone with enough reach.


Considering how much paid-for unimportant and unrelated drivel I now have to wade through every time I google to get what I am asking for, I doubt very much that whatever is optimal for search-engine ranking has anything to do with what users want.


Wrong, the best SEO is having what users want and withholding it long enough to get a high average session time.


And I suppose a corollary is: "claim to have what the users want, and have them spend long enough to figure out that you don't have it"?


See: every recipe site in existence.


> That only says that Google discourages such actions

Nope. It says that Google does not ding you for old content.

"Are you deleting content from your site because you somehow believe Google doesn't like "old" content? That's not a thing!"


Do the engineers at Google even know how the Google algorithm actually works? Better than SEO experts who spend their time meticulously tracking the way the algorithm behaves under different circumstances?

My bet is that they don't. My bet is that there is so much old code, so many weird data edge cases, and so many opaque machine-learning models driving the search results that Google's engineers have lost the ability to predict what the search results would be, or should be, in the majority of cases.

SEO experts might not have insider knowledge, but they observe in detail how the algorithm behaves, in a wide variety of circumstances, over extended periods of time. And if they say that deleting old content improves search ranking, I'm inclined to believe them over Google.

Maybe the people at Google can tell us what they want their system to do. But does it do what they want it to do anymore? My sense is that they've lost control.

I invite someone from Google to put me in my place and tell me how wrong I am about this.


Yeah.

Once upon a time, Matt Cutts would come on HN and give a fairly knowledgeable and authoritative explanation of how Google worked. But those days are gone, and I'd say so are the days of standing behind any articulated principle.


I work for Google and do come into HN occasionally. See my profile and my comments here. I'd come more often if it were easier to know when there's something Google Search-related happening. There's no good "monitor HN for X terms" thing I've found. But I do try to check, and sometimes people ping me.

In addition, if you want an explanation of how Google works, we have an entire web site for that: https://www.google.com/search/howsearchworks/


Google Alerts come to mind.


The engineers at Google do know how our algorithmic systems work because they write them. And the engineers I work with at Google looking at the article about this found it strange anyone believes this. It's not our advice. We don't somehow add up all the "old" pages on a site to decide a site is too "old" to rank. There's plenty of "old" content that ranks; plenty of sites that have "old" content that rank. If you or anyone wants our advice on what we do look for, this is a good starting page: https://developers.google.com/search/docs/fundamentals/creat...


>The engineers at Google do know how our algorithmic systems work because they write them.

So there's zero machine learning or statistical modeling based functionality in your search algorithms?


There is. Which is why I specifically talked only about writing for algorithmic systems. Machine learning systems are different, and not everyone fully understands how they work, only that they do and can be influenced.


It's really hard to get a deep or solid understanding of something if you lack insider knowledge. The search algorithm is not something most Googlers have access to, but I assume they constantly observe what their algorithm does, in a lot of detail, to measure what their changes are doing.


"Are you deleting content from your site because you somehow believe Google doesn't like "old" content? That's not a thing!"

I guess that Googler never uses Google.

It's very hard to find anything on Google older than or more relevant than Taylor Swift's latest breakup.


I think in this context, saying that it's not a thing that google doesn't like old content just means that google doesn't penalize sites as a whole for including older pages, so deleting older pages won't help boost the site's ranking.

This is not the same as saying that it doesn't prioritize newer pages over older pages in the search results.

The way it's worded does sound like it could imply the latter thing, but that may have just been poor writing.


"poor writing" is the new "merely joking guys!"


That Googler here. I do use Google! And yeah, I get sometimes people want older content and we show fresher content. We have systems designed to show fresher content when it seems warranted. You can imagine a lot of people searching about Maui today (sadly) aren't wanting old pages but fresh content about the destruction there.

Our ranking system with freshness is explained more here: https://developers.google.com/search/docs/appearance/ranking...

But we do show older content, as well. I find often when people are frustrated they get newer content, it's because of that crossover where there's something fresh happening related to the query.

If you haven't tried, consider our before: and after: commands. I hope we'll finally get these out of beta status soon, but they work now. You can do something like before:2023 and we'd only show pages from before 2023 (to the best we can determine dates). They're explained more here: https://twitter.com/searchliaison/status/1115706765088182272


"Taylor Swift before:2010"

With archive search, the News section floats links like https://www.nytimes.com/2008/11/09/arts/music/09cara.html


Maybe not related to the age of the content but more content can definitely penalize you. I recently added a sitemap to my site, which increased the amount of indexed pages, but it caused a massive drop in search traffic (from 500 clicks/day to 10 clicks/day). I tried deleting the sitemap, but it didn't help unfortunately.


Yes. I am flabbergasted by people who are gaslighting others into not being "superstitious" about Google's ranking.


How many pages are we talking about here?


100K+. Mostly AI and user generated content. I guess the sudden increase in number of indexed pages prompted a human review or triggered an algorithm which flagged my site as AI generated? Not sure.


Just because someone says water isn't wet doesn't mean water isn't wet.


The contrived problem of trusting authority can be easily resolved by trusting authority


Claims made without evidence can be dismissed without evidence.


"The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command."

-George Orwell, 1984


It seems incredibly short-sighted to assume that just because these actions might possibly give you a small bump in SEO right now, they won't have long-term consequences.

If CNET deletes all their old articles, they're creating a situation where most links to CNET from other sites lead to error pages (or at least pages with no relevant content on them), and even if that isn't currently a signal used by Google, it could become one.


No doubt those links are redirected to the CNET homepage.


Isn’t mass redirecting 404s to the homepage problematic SEO-wise?


Technically, you're supposed to return a 410 or a 404, but when some of the pages being deleted have extremely valuable old high-reputation backlinks, that's just wasteful, so I'd say it's better to redirect to the "next best page", like a category page, or the homepage as a last resort. Why would it be problematic? Especially if you do a sweep and only redirect pages that have valuable backlinks.
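The policy described here can be sketched as a simple lookup table (the URLs and mapping below are invented for illustration; a real site would derive the table from its backlink data):

```python
# Hypothetical sketch: 301-redirect deleted URLs that have valuable
# backlinks to the closest surviving page, and serve 410 Gone for the
# rest. The mapping is made up.
REDIRECTS = {
    "/2009/old-review": "/reviews/",  # had strong backlinks
    "/2011/liveblog": "/news/",
}

def status_for(path):
    """Return (HTTP status, redirect target or None) for a deleted path."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 410, None

print(status_for("/2009/old-review"))  # -> (301, '/reviews/')
print(status_for("/2012/forgotten"))   # -> (410, None)
```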


I was only talking about mass redirecting 404s to the homepage, which I've heard is not great, I think what you're saying is fine -- but that sounds like more of a well thought out strategy.


Hi. So I'm the person at Google quoted in the article and also who shared about this myth here: https://twitter.com/searchliaison/status/1689018769782476800

It's not that we discourage it. It's not something we recommend at all. Not our guidance. Not something we've had a help page about saying "do this" or "don't do this" because it's just not something we've felt (until now) that people would somehow think they should do -- any more than "I'm going to delete all URLs with the letter Y in them because I think Google doesn't like the letter Y."

People are free to believe what they want, of course. But we really don't care if you have "old" pages on your site, and deleting content because you think it's "old" isn't likely to do anything for you.

Likely, this myth is fueled by people who update content on their site to make it more useful. For example, maybe you have a page about how to solve some common computer problem and a better solution comes along. Updating a page might make it more helpful and, in turn, it might perform better.

That's not the same as "delete because old" and "if you have a lot of old content on the site, the entire site is somehow seen as old and won't rank better."


Your recommendations are not magically a description of how your algorithm actually behaves. And when they contradict, people are going to follow the algorithm, not the recommendation.


Exactly this; it's no different from how they behave with YouTube. It seems deceptive at best.


Yeah, Google’s statement seems obviously wrong. They say they don’t tell people to delete old content, but then they say that old content does actually affect a site in terms of its average ranking and also what content gets indexed.


"They say that old content does actually affect a site in terms of it’s average ranking" -- We didn't say this. We said the exact opposite.


Sorry if I’m misconstruing what was said, but then it seems that what was said isn’t consistent with what actually happens.


What the Google algorithm encourages/discourages and what Google's blog or documentation encourages/discourages are COMPLETELY different things. Most people here are complaining about the former, and you keep responding about the latter.


No one has demonstrated that simply removing content that's "old" means we think a site is "fresh" and therefore should do better. There are people who perhaps updated older content reasonably to keep it up-to-date and find that making it more helpful that way can, in turn, do better in search. That's reasonable. And perhaps that's gotten confused with "remove old, rank better" which is a different thing. Hopefully, people may better understand the difference from some of this discussion.


I think you have misread the tweet. It says it does not work _and_ discourages the action.


Exactly. Google also discourages link building. But getting relevant links from authority sites 100% works.

