Is it bad practice to use AJAX to load nav pages into your index?

Howdy, I'm back with another silly question. I'm just looking for reassurance that what I'm trying to do is not totally ridiculous.

I'm building my portfolio site, and since I'd just learned a bit about AJAX, I thought I'd play around with it (though maybe not in a meaningful way, as it turns out). I set it up so that when you click a link in the nav, it loads the contents of that page directly into the index's main section. For example, clicking Contact in the menu, instead of going to the contact page, loads a contact form between the header and footer of the already-loaded index.

Is there any particular reason I shouldn't be doing this? I've followed the rule of graceful degradation: if JS is disabled, the default link still goes to the proper page. The only downside I can see is that I effectively need two pages for each section. For instance, contact.html is the default link target and includes the nav, header, contact form and footer, while a second page (contact_main.html), needed for the AJAX load, contains ONLY the contact form.
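To make it concrete, here's roughly what my nav handler looks like (simplified; `#main` is just what I'm calling the main section here, and the `_main` suffix follows my naming above):

```js
// simplified version of my nav handler: the real hrefs stay in place,
// so with JS off the links still navigate normally (graceful degradation)
$('nav a').on('click', function (e) {
  e.preventDefault();                                  // stop the normal navigation
  var page = $(this).attr('href');                     // e.g. "contact.html"
  var fragment = page.replace('.html', '_main.html');  // e.g. "contact_main.html"
  $('#main').load(fragment);                           // swap only the main section
});
```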

I guess this could have SEO / search-indexing implications, but couldn't I just add a robots rule to tell Google which pages to index? In other words, so that contact.html gets indexed instead of the page used for the AJAX load (contact_main.html).
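Something like this in robots.txt is what I had in mind (a sketch; strictly speaking robots.txt blocks crawling rather than indexing, but it should keep the fragment pages out of search results):

```
# keep the AJAX-only fragment pages out of the crawl
User-agent: *
Disallow: /contact_main.html
```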

Is this really bad practice? And will a potential employer notice it and go, "what a stupid way to do things"?

The obvious question is: why do it at all? Well, I figured it'd be fun to play around and learn more AJAX, but once I'd done it, I was kind of amazed by how much faster the site felt. The header/nav and footer don't have to reload on every page, and it all felt really snappy and smooth to me.

I recognise it does feel like doing things the hard way, but since I have graceful degradation built in, is there any harm in it? Maybe I'm making this out to be more than it is, but since I'm new at this and don't know a lot of the common practices and do's and don'ts, I thought I'd ask. Opinions?

I've done it myself, just playing around, not for any project. As I understand it, it's a "single-page" approach. I've heard arguments both pro and con, but I don't really know enough about it to say whether or not it's good practice.

V/r,

:slight_smile:

My understanding is that the search engines are only going to index pages that they can get to by following links, so the ‘ajax’ versions of the pages that only get loaded in via JS are going to be invisible to the crawler.

Another problem with what you're doing, though, is usability. If you're loading in the content via AJAX, the page URL won't change, which means users won't be able to bookmark or link to specific pages. You can solve this by updating the URL the browser displays via JS, using the HTML5 History API.
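Something along these lines would do it (a sketch reusing your contact.html / contact_main.html naming; `#main` is just a stand-in for whatever element wraps your main section):

```js
// update the address bar when content is loaded via AJAX, and handle
// back/forward so the content stays in sync with the URL
$('nav a').on('click', function (e) {
  e.preventDefault();
  var page = $(this).attr('href');                 // e.g. "contact.html"
  $('#main').load(page.replace('.html', '_main.html'), function () {
    history.pushState({ page: page }, '', page);   // URL now reads contact.html
  });
});

// popstate fires on back/forward navigation
window.addEventListener('popstate', function (e) {
  if (e.state && e.state.page) {                   // state is null on the initial entry
    $('#main').load(e.state.page.replace('.html', '_main.html'));
  }
});
```

With that in place, visitors can bookmark and share the same URLs the non-JS version uses.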

> My understanding is that the search engines are only going to index pages that they can get to by following links, so the ‘ajax’ versions of the pages that only get loaded in via JS are going to be invisible to the crawler.

That suits me fine. I'd rather the full contact.html got indexed. That way, if someone jumped to it via Google, they'd get a full page, while the AJAX page would only have the contact form and not the nav, header, footer, etc.

Thanks! This is exactly the kind of feedback I was looking for, being new at this kind of thing. I'd definitely want people to be able to bookmark pages and use the browser's back and forward navigation, etc. I'll check out that link you provided. Part of the point of doing it this way was the chance to learn. I'm not quite up to building web apps yet, but it's certainly fun playing around with JS and jQuery to manipulate the DOM. Hopefully I can take something from these experiments and use it in a more meaningful way later on.

> My understanding is that the search engines are only going to index pages that they can get to by following links, so the ‘ajax’ versions of the pages that only get loaded in via JS are going to be invisible to the crawler.

Hasn't this changed, or am I misunderstanding their blog post? (I know very little about SEO.)

Google Blog:

> In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on.

Yeah, I had heard that Google was adding some JavaScript capabilities to its web crawler, but I've not been able to find much information about just what it can do (such as whether it will activate onclick handlers).

Here’s the official announcement.
It’s a bit thin on the details, though.

You shouldn't need two pages. Simply test whether the page is being requested via AJAX, and decide accordingly what to send to the browser.
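For example (a sketch assuming a Node/Express back end, since the thread hasn't said what server is in use; jQuery's AJAX helpers send an X-Requested-With: XMLHttpRequest header on same-origin requests, which the server can test; the fragments/ and pages/ directories are just illustrative):

```js
// one route serves both cases: the bare fragment for AJAX requests,
// the full page (header, nav, footer) for normal navigation
var express = require('express');
var path = require('path');
var app = express();

app.get('/contact.html', function (req, res) {
  if (req.get('X-Requested-With') === 'XMLHttpRequest') {
    res.sendFile(path.join(__dirname, 'fragments', 'contact.html'));
  } else {
    res.sendFile(path.join(__dirname, 'pages', 'contact.html'));
  }
});

app.listen(3000);
```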

Oh, duh. I didn't realize I could just load certain sections from other HTML documents until you said that. I googled and right away came across:

$( "#result" ).load( "ajax/test.html #container" );

How easy is that?
Thanks for the tip.
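So if I'm reading that right, the duplicate pages can go entirely: I can load the full page and pull out just its main section (assuming both pages wrap that content in `#main`):

```js
// fetch the full contact.html but inject only its #main section,
// so one page serves both normal navigation and the AJAX load
$('#main').load('contact.html #main');
```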

Google likes to keep everyone on their toes. I learned everything I know about SEO from here:
