Effect of Client Side Redirects on Analytics and SEO: Meta Refresh and JavaScript

Posted on August 28, 2012

Note: this article is written from the perspective of tracking analytics using a client-side script.

There are two main ways to redirect a user agent client-side:

  • meta refresh
  • JavaScript redirect

For those who are unfamiliar with these two, Wikipedia has a good article on the topic. However, as many of us know, using a meta element to redirect is strongly discouraged by the W3C. Alternatively, there are at least five ways to redirect the browser using JavaScript, though each has its own use cases.
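As a sketch of the JavaScript side, the two most common calls differ in how they treat browser history. The helper below is my own illustration, not a standard API; `window` is passed in as a parameter only so the logic can be exercised outside a browser.

```javascript
// Hypothetical helper (for illustration): the two most common
// JavaScript redirect calls, with the history behavior made explicit.
function redirect(win, url, keepHistoryEntry) {
  if (keepHistoryEntry) {
    // Behaves like clicking a link: the current page stays in
    // history, so the back button returns to it.
    win.location.assign(url);
  } else {
    // Swaps out the current history entry: the back button
    // skips the redirecting page entirely.
    win.location.replace(url);
  }
}
```

Plain assignment (`window.location = url;` or `window.location.href = url;`) behaves like `assign()`; `replace()` is usually the better fit for a pure redirect page, since it avoids the "sticky back button" problem discussed later.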

Either method of client-side redirection has potential for causing significant issues with both analytics and SEO.

Client Side Redirects and Analytics

First, regarding analytics, I could not sum up the issue better than yahlec does in his or her answer on Stack Overflow. Essentially there are two issues:

  1. page view bloat
  2. unclean referrer

The first issue with analytics, page view bloat, happens when the client-side redirect fires after the analytics code has already tracked the page view, so the interstitial page gets counted. There are ways around this, such as using a meta refresh of 0, but think twice before committing: there are many related factors besides analytics, performance for example.
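For reference, a meta refresh of 0 is a single element in the document head; the destination URL here is a placeholder:

```html
<!-- Redirects immediately (after 0 seconds). Because the browser
     navigates away at once, a tracking script on this page may never
     get a chance to record the view. -->
<meta http-equiv="refresh" content="0; url=https://example.com/destination/">
```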

The second issue with analytics, unclean referrer, means that the final destination sees the wrong referrer. For example, if a user at page A clicks a link to page B, which then redirects to page C, the referrer at page C will be page B when a client-side redirect is used. A server-side redirect, by contrast, would result in page A being the referrer. Like the first issue, there are workarounds, such as utm_nooverride for Google Analytics, as a Stack Overflow thread discusses.
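The utm_nooverride workaround amounts to tagging the redirect target's URL so Google Analytics keeps the original campaign attribution. A minimal sketch (the function name is mine, not part of any library):

```javascript
// Hypothetical helper: append utm_nooverride=1 to a redirect target
// so Google Analytics does not overwrite the session's original
// campaign/referrer attribution.
function withNoOverride(url) {
  // Use '?' if the URL has no query string yet, '&' otherwise.
  const sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'utm_nooverride=1';
}
```

The tagged URL would then be used as the target of the meta refresh or JavaScript redirect.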

Client Side Redirects and SEO

Second, regarding SEO, the issues are more extreme. Without going into detail, there are at least two key issues:

  1. page rank devaluation/reset
  2. content mismatch

The classic article written by Sebastian regarding the use of a meta refresh and how crawlers see it is still a worthwhile read, though it is dated. In short, most crawlers support a meta refresh of 0 and treat it as a server-side redirect, while a JavaScript redirect is simply ignored, since crawlers don't execute JavaScript.

However, regarding the first issue, in the case of Google any meta refresh value will be treated as a 302 redirect. A 302 has disadvantages: it is seen as only a temporary redirect, so the page rank of each location will be tracked separately. In most cases this is not desirable. This is why, though Google still supports the meta refresh, it is highly recommended to use a server-side 301 instead.

Regarding the second issue, there is the risk of a crawler landing on a page and indexing it instead of the page your client-side redirect goes to. Imagine seeing a key page on your site show as a blank page in Google's search preview window.

Pros of Client Side Redirects

  1. flexibility (you can choose when, where, and whether to redirect after the page is rendered)
  2. portability (redirect code can be server-side agnostic, e.g. portable between ASP and PHP)

Cons of Client Side Redirects

  1. complexity in tracking analytics (must use workarounds, e.g. utm_nooverride)
  2. potential SEO penalties
  3. usability problems (e.g. a sticky back button, infinite redirect loops)
  4. defies standards (the HTTP protocol specifies that a 3XX status code be returned for a redirect)

