Google Webmaster Tools has flagged 343 occurrences of "Duplicate title tags", all caused by internal search requests.
So what is the best practice for setting Title, Description and Canonical Link for the search results?
Or is it best to use the "Remove URLs" tool and also to Disallow the URL in robots.txt?
You mean your internal search page is ranking? It may be best to set up a dynamic Title Tag, description, and URL based on the search phrase.
You don't want your internal search results pages appearing in Google's results – that just puts an extra barrier between your visitors and your website. They want to go straight to your page, not to a search results page where they then have to find the appropriate link (which may not be the top one). I would recommend 'noindex'ing your search results pages, as opposed to disallowing them, because that still allows Google to follow the links within them.
Yes, GWT is picking up the search results in its Optimization → HTML Improvements section.
So I thought about your suggestion: create a table, log all the searches, and create a URL such as mySite.com/search-what-is-the-meaning-of-life.html. This would find all the keywords and display the relevant links, and perhaps snippets of the content. I could also dynamically create a Sitemap-Searches.xml from the table entries.
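A minimal sketch of that sitemap idea, assuming the logged searches are available as an array of slugs (the function name, URL pattern, and data here are all assumptions, not working code from my site):

```php
<?php
// Illustrative sketch of generating Sitemap-Searches.xml from logged search
// slugs. The function name, URL pattern, and sample data are assumptions.
function buildSearchSitemap(array $slugs): string
{
    $xml  = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
    $xml .= "<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n";
    foreach ($slugs as $slug) {
        $loc  = 'http://www.mySite.com/search-' . rawurlencode($slug) . '.html';
        $xml .= "  <url><loc>" . htmlspecialchars($loc) . "</loc></url>\n";
    }
    return $xml . "</urlset>\n";
}

echo buildSearchSitemap(['what-is-the-meaning-of-life']);
```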
A bonus would be that new pages could easily be produced; the complication is that such a page would not be informative to anyone landing directly on it.
I could create the results page similar to the SitePoint Forum page, with the title and either displayed or hovered page snippets...
but I prefer your suggestion of noindex,nofollow and have implemented it with the following header script:
$lFollow = 'jotd'==$this->uri->segment(1) || 'search'==$this->uri->segment(1);
$sFollow = ($lFollow) ? 'noindex,nofollow' : 'follow';
<meta name="robots" content="<?php echo $sFollow;?>" />
I will monitor GWT's HTML Suggestions.
Whoa there! I said 'noindex'; I specifically did not say 'nofollow'. The reason I gave for doing it this way rather than through robots.txt was that you can allow Googlebot to follow the links if it does find itself on an internal SERP.
It would probably be easier to use PHP to dynamically insert the search phrase as the Title Tag. You could do the same with a generic META Description, just to make it unique.
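A minimal sketch of that approach, deriving a unique title and description from the search-phrase slug in the URL (all function and parameter names here are assumptions for illustration):

```php
<?php
// Hypothetical helpers: derive a unique Title Tag and META Description from
// the search-phrase slug in the URL. All names here are assumptions.
function titleFromSlug(string $slug): string
{
    // "what-is-the-meaning-of-life" -> "What Is The Meaning Of Life"
    return ucwords(str_replace('-', ' ', $slug));
}

function descriptionFromSlug(string $slug, string $siteName): string
{
    return 'Search results for "' . titleFromSlug($slug) . '" on ' . $siteName . '.';
}

echo titleFromSlug('what-is-the-meaning-of-life');
```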
Many thanks once again for the clarification; point noted about the nofollow meta tag, which is now removed.
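A sketch of the amended header logic with 'nofollow' dropped, so Googlebot can still follow links on an internal results page (the helper function is an assumption; the 'jotd' and 'search' segment values come from the original snippet):

```php
<?php
// Hypothetical rework of the earlier header snippet with 'nofollow' removed,
// so Googlebot can still follow links on an internal results page.
// The segment values 'jotd' and 'search' are taken from the original code.
function robotsDirective(string $firstSegment): string
{
    $isSearchPage = in_array($firstSegment, ['jotd', 'search'], true);
    return $isSearchPage ? 'noindex,follow' : 'index,follow';
}

echo '<meta name="robots" content="' . robotsDirective('search') . '" />';
```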
Yes, I use a PHP framework and, instead of a 404 page, route the URL to a search controller where the string is checked for an exact table-title match, falling through to a search for all the words in the URL string. The complication arose when GWT found the results pages.
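The two-stage lookup described above could be sketched like this, with an in-memory array of titles standing in for the database table (the function name and data are assumptions; a real version would query the DB):

```php
<?php
// Illustrative sketch of the two-stage lookup: try an exact title match
// first, then fall through to titles containing every word of the slug.
function findPages(string $slug, array $titles): array
{
    $words = array_filter(explode('-', strtolower($slug)));
    $exact = implode(' ', $words);

    // Stage 1: exact table-title match
    foreach ($titles as $title) {
        if (strcasecmp($title, $exact) === 0) {
            return [$title];
        }
    }

    // Stage 2: titles containing all of the words
    $matches = [];
    foreach ($titles as $title) {
        $haystack = strtolower($title);
        $hasAll = true;
        foreach ($words as $word) {
            if (strpos($haystack, $word) === false) {
                $hasAll = false;
                break;
            }
        }
        if ($hasAll) {
            $matches[] = $title;
        }
    }
    return $matches;
}
```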
So rather than heed @Stevie_D's advice about using the noindex meta tag, it would be better to have the search pages use a unique title and unique description? Sounds good, and I will endeavour to modify the controller's title and description.
Just had another thought: what would be the best canonical link for this search: "What is the meaning of life?"
- www.mySite.com/search.html
- www.mySite.com/search/what-is-the-meaning-of-life.html
- www.mySite.com/what/is/the/meaning/of/life.html
All three URLs would be routed to the Controller and then onto the same search page with a unique and/or relevant title and description.
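One way to collapse whichever form the visitor used into a single canonical URL could look like this (the /search/ form is used here purely as an illustration, and the function name is an assumption):

```php
<?php
// Illustrative sketch: whatever routed form was requested, emit one
// canonical URL (the /search/ form, chosen here purely for illustration).
function canonicalSearchUrl(string $phrase): string
{
    $slug = strtolower(trim($phrase));
    $slug = preg_replace('/[^a-z0-9\s-]/', '', $slug);   // drop punctuation
    $slug = preg_replace('/[\s-]+/', '-', $slug);        // hyphenate spaces
    return 'http://www.mySite.com/search/' . trim($slug, '-') . '.html';
}

echo canonicalSearchUrl('What is the meaning of life?');
```

The page head would then echo this value into its `<link rel="canonical">` tag, so all three routed forms point at the same URL.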
Looks like I have opened a can of worms, and all I wanted was to keep GWT happy!
That is probably what I would personally recommend, unless you have a good reason not to want people to see the page. It is unlikely to outrank any of your pages that rank for other keywords, so anything it ranks for would be new traffic. You may want to put a call to action or some ads somewhere on the search page template, just to give yourself a good shot at converting some of that traffic.
Of the links you listed: www.mySite.com/search/what-is-the-meaning-of-life.html is the most search engine friendly.
I hope these changes will appease the great Google.
@alabamaseo and @Stevie D,
Many thanks for your advice.
After quite a few modifications and waiting for another crawl, the "Duplicate title tags" count increased to 583, dropped to 12, and is now back up to 554.
So I tried simplifying the script once again to return a search result with a count, set Meta Robots to "noindex, follow", and will wait for another crawl.
The original controller was quite simple, but it has had numerous amendments to cater for all URL possibilities and is now a headache to maintain.
Now that the concept has proved successful, a complete controller and view rewrite is imminent.
@alabamaseo and @Stevie D,
Google Webmaster Tools has just crawled the site again, and now the "Duplicate Title Tags" count has increased to 2,256.
These are typical links, one with .html and one without.
Using Health → Fetch as Googlebot for the one without the .html extension shows:
Fetch as Google
This is how Googlebot fetched the page.
Date: Tuesday, April 23, 2013 at 9:45:49 AM PDT
Googlebot Type: Web
Download Time (in milliseconds): 267
HTTP/1.1 301 Moved Permanently
Date: Tue, 23 Apr 2013 16:45:50 GMT
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Set-Cookie: PHPSESSID=70b448b533d272edff1fb8df8537bb44; path=/
What more can I do? The canonical link is correct and the "HTTP/1.1 301 Moved Permanently" redirect is in place. Is Google incorrectly reporting these as "Duplicate title tags"?
The correct link shows "HTTP/1.1 200 OK" and the canonical link is correct.