pittendrigh — 2013-06-06T12:45:20-04:00 — #1
I have a (test) domain name I use for development. Perhaps I should be using a sub-domain name. But I don't.
I know of no published link to this domain anywhere. Sometimes, while developing a new website, I forget to put a meta NOINDEX element in the headers of the new pages I'm generating.
Interestingly, keyword searches on the content of those new and not-yet-published websites often end up indexed on Google. How did Google ever know about that domain in the first place, when I never (ever) published a link to it?
I suppose it only has to be spidered once and then it's a known entity forever. But I'm still curious how they found out about my test domain in the first place. Is a meta NOINDEX the only way to keep a test domain out of the index? Or are sub-domains a better way to go?
smanaher — 2013-06-06T14:17:25-04:00 — #2
Very good question pittendrigh,
I was not able to find a definitive answer on this but have seen the same activity when I build websites. These sites are not linked to anything (that I know of or that I can find) and yet they appear in Google search results despite the claim that Googlebot finds pages based on other links.
Here is an interesting article that may point you in the right direction.
Would be interesting to hear what you find out.
pittendrigh — 2013-06-06T14:42:22-04:00 — #3
Ah. The Google toolbar. I have never used it, but some of my customers probably have. So when I sent a customer an email saying "look at my test domain and tell me what you think", their browser made Google aware of that URL. Once they know about it, they know about it forever.
force — 2013-06-06T18:46:55-04:00 — #4
Sometimes WHOIS records are replicated, so eventually Google's crawlers find the listed A records and follow them.
If you don't want them indexed or visited, the only sure way to prevent access is to put up an .htaccess username/password prompt. Most hosts offer this through the control panel under a name like "protected folder".
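For anyone setting this up by hand rather than through a control panel, a minimal sketch looks like the following. The file paths, username, and realm name are placeholders, and this assumes an Apache host where .htaccess overrides are enabled:

```apache
# .htaccess in the directory you want to protect
AuthType Basic
AuthName "Development Site"
# Full server path to the password file -- keep it outside the web root
AuthUserFile /home/example/.htpasswd
Require valid-user
```

The password file itself is created once on the server with the bundled htpasswd tool, e.g. `htpasswd -c /home/example/.htpasswd devuser`. Unlike robots.txt, this blocks every visitor and crawler, compliant or not, until they authenticate.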
pittendrigh — 2013-06-06T18:53:29-04:00 — #5
Hadn't thought about passwords. That would be pretty bombproof.
felgall — 2013-06-07T03:37:31-04:00 — #6
A simple Disallow rule in the robots.txt file will work with the legitimate search engines without requiring a password.
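For a test domain where nothing should be crawled, the whole file can be just:

```
# robots.txt at the domain root -- asks all crawlers to stay out of everything
User-agent: *
Disallow: /
```

One caveat: this only stops well-behaved crawlers from fetching pages. A URL that Google already knows about can still appear in results as a bare link, since robots.txt prevents crawling, not indexing of the URL itself.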
pittendrigh — 2013-06-07T07:50:43-04:00 — #7
This (robots.txt) is the best solution. A meta NOINDEX in the header is too hard to control with generated pages that rely on config files or database values, at least in the chaos of development time. Robots.txt tends to be stable throughout all of that.
belansus — 2013-08-24T17:21:37-04:00 — #8
Not sure if it's what you're looking for, but I redirect all IPs apart from my own.
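A minimal sketch of that idea in .htaccess, using Apache 2.4 syntax (the address below is a placeholder from the documentation IP range, to be replaced with your own):

```apache
# .htaccess -- allow only your own IP; everyone else gets a 403
Require ip 203.0.113.10
```

This denies with a 403 rather than redirecting, but the effect for crawlers is the same: they can never fetch the pages. The main drawback is that your IP has to stay fixed, or the list has to be updated whenever it changes.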