Search Engine Optimization

There exists a common misconception in the web design world that GoogleBot (Google’s lovable pet spider) doesn’t process JavaScript, and that any good designer should therefore hide content with JavaScript rather than CSS to avoid a negative impact on SEO. This led to all sorts of fun workarounds, like using JavaScript to hide elements on page load instead of declaring “display:none;” in CSS. It also led to a breed of black hat SEOs who realized they could feed different information to GoogleBot using JavaScript redirects. Unfortunately, that method does not actually work. Per Matt Cutts of Google’s Webspam team:

“For a while, we were scanning within JavaScript, and we were looking for links. Google has gotten smarter about JavaScript and can execute some JavaScript. I wouldn’t say that we execute all JavaScript, so there are some conditions in which we don’t execute JavaScript.”

Moving forward, this raises one very important question:

How Much JavaScript Does GoogleBot Process?

In the process of building our most recent Thought Space redesign, we attempted a new trick for SEOing a one-page website. I planned on having panels on the single-page home screen, so I made a separate HTML file for each panel, complete with our site’s header and footer, and linked those files together like a normal site (home, contact, etc…). The single-page nav scrolled to other parts of the page instead of linking out like the separate files did. Essentially, we had a normal site structure as well as a single page that combined all of the other pages into a series of panels. No content was changed at all between the versions, only the structure.
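As a rough sketch of that dual structure (the file names and panel IDs here are hypothetical stand-ins, not our actual markup), each standalone page linked out normally, while the single-page version combined the panels and used in-page anchors:

```html
<!-- about.html: a standalone panel page with a normal nav -->
<nav>
  <a href="index.html">Home</a>
  <a href="about.html">About</a>
  <a href="contact.html">Contact</a>
</nav>

<!-- index.html: the single-page scroller; its nav scrolls to
     in-page anchors instead of linking out to separate files -->
<nav>
  <a href="#home">Home</a>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
</nav>
<section id="home">…</section>
<section id="about">…</section>
<section id="contact">…</section>
```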

We placed a JavaScript redirect at the top of each page and added some cookie and JavaScript scrolling foolery. Our intent was for Google to index the individual pages without ever seeing the single-page scroller, since its crawler would stop at the JavaScript. We would have all of our site indexed as nice separate pages on Google, perfectly set up and prepared for sitelinks. When users clicked one of the sitelinks in the results, if they had JavaScript enabled, they would be redirected to the single-page site and automatically scrolled to the panel associated with the page they clicked.
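Here is a minimal sketch of that setup (simplified, with hypothetical file names and IDs; the cookie logic we used is omitted):

```html
<!-- Top of about.html: JS-enabled visitors bounce to the scroller.
     A visitor without JavaScript (or, we assumed, GoogleBot) stays
     on the plain standalone page. -->
<script>
  window.location.replace('/index.html#about');
</script>
```

```html
<!-- On index.html: once loaded, scroll to the panel named in the hash -->
<script>
  window.addEventListener('load', function () {
    var panel = document.getElementById(window.location.hash.slice(1));
    if (panel) {
      panel.scrollIntoView();
    }
  });
</script>
```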

Unfortunately, everything did not work as well as planned. We soon realized that Google was indexing our main single-page scroller (which ultimately led to this blog post). All hope was not lost on the previous work, as it still serves as a no-JS version of the site. It did, however, lead me to one strong realization:

GoogleBot processes JavaScript that doesn’t require triggers

For example, the single-page version got indexed because I placed a redirect on page load, which did not require any user interaction to trigger. GoogleBot happily followed along straight to the single-page version and indexed it instead. The entire plan for sitelinks on Google had been shattered. I also happened to realize that some of the content on my home page wasn’t being indexed by Google. After looking into it further, the obvious came about:

Clicks, hovers, and other user-triggered interactions are not processed by GoogleBot

So since I had multiple blocks of content that were hidden until a certain button was clicked, GoogleBot overlooked them. The stupid simple solution turned out to be hiding the elements with JavaScript on page load rather than with “display:none;” in the CSS: that way the content sits fully visible in the raw HTML that GoogleBot reads, and only disappears once the script runs in a real browser.
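A minimal sketch of that fix (the class names are hypothetical): the content ships visible in the HTML, and the script hides it on load and reveals it again on click:

```html
<div class="extra-info">
  This content gets indexed even though users see it only after a click.
</div>
<button class="reveal">Read more</button>

<script>
  // Hide with JS on load instead of CSS display:none, so the raw HTML
  // that GoogleBot reads still contains the content in visible form.
  var extra = document.querySelector('.extra-info');
  extra.style.display = 'none';

  document.querySelector('.reveal').addEventListener('click', function () {
    extra.style.display = 'block';
  });
</script>
```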

This entire experiment has provided me with some great insight into just how far GoogleBot will reach into your JavaScript. Truly understanding how this spider crawls your code can help you set your site up accordingly, ensuring maximum spider visibility where it’s wanted and none where it isn’t. The effects of improperly used JavaScript can be devastating to SEO, so hopefully you’ll be able to use this knowledge to keep your site out of turmoil.


Comments On This Post

  • Connor

    Great article!

    You could also perform the feature-based forwarding in the opposite way: set a delayed meta-redirect for non-JS users, and employ JS to scrub the redirect from the page on load.
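    A rough sketch of what Connor describes (hypothetical markup; note that whether removing the tag actually cancels a pending refresh varies by browser, so this would need testing):

    ```html
    <!-- Delayed fallback redirect for visitors without JavaScript -->
    <meta id="noscript-redirect" http-equiv="refresh" content="3;url=/about.html">

    <script>
      // JS-enabled browsers scrub the redirect before it fires
      var meta = document.getElementById('noscript-redirect');
      meta.parentNode.removeChild(meta);
    </script>
    ```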

    • Jareth Tackett

      Good suggestion! Never thought of it that way. Regardless, this method still probably would not spoof GoogleBot, as the redirect logic would be processed on page load with no triggers required to set it off. It appears this method of page redirection spoofing may never work.

  • Tobias Mirwald

    Great article, Jareth! Maybe it would be worth repeating the experiment from time to time? Did you also check whether GoogleBot executes Ajax?

    • Jareth Tackett

      I definitely agree, but I’ve been too busy to retest this. Feel free to test it again and post your findings!

      And regarding Ajax, I did not test it; however, there is a good bit of documentation on how to set up Ajax so Google can properly read it. Try searching around a bit and I’m sure you’ll turn something up.

      • Tobias Mirwald

        OK, we will see… Maybe I will set up some test cases myself and get back to you to share my findings 😉 Regarding Ajax: many online shops use Ajax for their navigation, and often there are just too many links from an SEO perspective. Sometimes only part of the links are written into the code, too; the rest are put in via Ajax loading. If GoogleBot is processing Ajax, it will probably count the whole set of links, and that might affect the “flow of link juice” to important pages… That’s why I want to find out whether GoogleBot executes Ajax ;)
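        For reference, a minimal sketch of the navigation pattern Tobias describes (hypothetical URLs): only a few links ship in the initial HTML, and the rest arrive via Ajax, so a crawler that doesn’t execute the request would count far fewer links:

        ```html
        <nav id="main-nav">
          <!-- Only a handful of links exist in the initial HTML -->
          <a href="/category/shoes">Shoes</a>
          <a href="/category/shirts">Shirts</a>
        </nav>

        <script>
          // The remaining category links are injected via Ajax after load,
          // invisible to any crawler that does not execute this request
          var xhr = new XMLHttpRequest();
          xhr.open('GET', '/nav-links.html');
          xhr.onload = function () {
            document.getElementById('main-nav').innerHTML += xhr.responseText;
          };
          xhr.send();
        </script>
        ```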