Is the Web heading toward redirect hell?
Google is doing it. Facebook is doing it. Yahoo is doing it. Microsoft is doing it. And soon Twitter will be doing it.
We’re talking about the apparent need of every web service out there to add intermediate steps that sample what we click on before sending us on to our real destination. This has been going on for a long time and is slowly building into something of a redirect hell on the Web.
And it has a price.
The overhead that’s already here
There’s already plenty of redirect overhead in places where you don’t really think about it. For example:
Every time you click on a search result in Google or Bing, there’s an intermediate step via Google’s (or Bing’s) servers before you’re redirected to the real target site.
Every time you click on a FeedBurner RSS headline, you’re also redirected before arriving at the real target.
Every time you click on an outgoing link on Facebook, there’s an in-between step via a Facebook server before you’re redirected to where you want to go.
And so on, and so on, and so on. If you’re curious, the sketch below lets you watch one of these redirect chains unfold for yourself.
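Here is a minimal Python sketch that follows a redirect chain one HEAD request at a time, so every intermediate stop becomes visible. The bit.ly link at the bottom is just a placeholder; substitute any shortened or tracking URL you want to inspect.

import http.client
from urllib.parse import urlparse, urljoin

def fetch_one_hop(url):
    """Issue one HEAD request without following redirects.
    Returns (status_code, next_url or None)."""
    parts = urlparse(url)
    Conn = (http.client.HTTPSConnection if parts.scheme == "https"
            else http.client.HTTPConnection)
    conn = Conn(parts.netloc, timeout=10)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    conn.request("HEAD", path)
    response = conn.getresponse()
    location = response.getheader("Location")
    conn.close()
    # A Location header may be relative, so resolve it against the current URL.
    next_url = urljoin(url, location) if location else None
    return response.status, next_url

def trace_redirects(url, max_hops=10):
    """Print every stop in a redirect chain until the final target (or max_hops)."""
    for hop in range(1, max_hops + 1):
        status, next_url = fetch_one_hop(url)
        print(f"hop {hop}: HTTP {status}  {url}")
        if next_url is None or not 300 <= status < 400:
            return  # final destination reached
        url = next_url

# Placeholder link; replace with any shortened or tracking URL you want to trace.
trace_redirects("http://bit.ly/example")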
This is, of course, because Google, Facebook and other online companies like to keep track of clicks and how their users behave. Knowledge is a true resource for these companies. It can help them improve their service, it can help them monetize the service more efficiently, and in many cases the actual data itself is worth money. Ultimately this click tracking can also be good for end users, especially if it allows a service to improve its quality.
But…
Things are getting out of hand
If it were just one extra intermediary step, that might have been all right. But look around and you’ll start to discover more and more layering of these redirects, with different services taking a bite of the click data on the way to the real target. You know, the one the user actually wants to get to.
It can quickly get out of hand. We’ve seen scenarios where an outgoing link on, for example, Facebook will first redirect you via a Facebook server, then through a URL shortener (for example bit.ly), which resolves to a longer URL that triggers several additional redirects before you FINALLY reach the target. It’s not uncommon to see three or more layers of redirects via different sites that, from the perspective of the user, are pure overhead.
The problem is that this overhead isn’t free. It adds time before you reach your target, and it adds more links (literally!) to the chain, any of which can break or slow down. It can even make sites appear to be down when they aren’t, because something along the way broke.
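To put a rough number on that cost, the fetch_one_hop helper from the earlier sketch can be reused to time each hop. The numbers will vary wildly with your network and the servers involved; the point is simply that every redirect layer adds a full extra round trip.

import time

def time_redirects(url, max_hops=10):
    """Like trace_redirects above, but also times each hop in the chain."""
    total = 0.0
    for hop in range(1, max_hops + 1):
        start = time.perf_counter()
        status, next_url = fetch_one_hop(url)  # defined in the earlier sketch
        elapsed = time.perf_counter() - start
        total += elapsed
        print(f"hop {hop}: HTTP {status}  {url}  ({elapsed * 1000:.0f} ms)")
        if next_url is None or not 300 <= status < 400:
            break
        url = next_url
    print(f"total time spent on redirects: {total * 1000:.0f} ms")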
And it looks like this practice is only getting more and more prevalent on the Web.
A recent case study of the “redirect trend”: Twitter
Do you remember that wave of URL shorteners that came when Twitter started to get popular? That’s where our story begins.
Twitter first used the already established TinyURL.com as its default URL shortener. It was an ideal match for Twitter and its 140-character message limit.
Then came Bit.ly and a host of other URL shorteners that also wanted to ride on the coattails of Twitter’s growing success. Bit.ly soon succeeded in replacing TinyURL as the default URL shortener for Twitter. As a result, Bit.ly got its hands on a wealth of data: a big share of all outgoing links on Twitter, and how popular those links were, since it could track every single click.
It was only a matter of time before Twitter wanted that data for itself. And why wouldn’t it? By doing so, it gains full control over a key piece of its own infrastructure and learns more about what its users like to click on. So, not long ago, Twitter created its own URL shortener, t.co. In Twitter’s case this makes perfect sense.
That is all well and good, but now comes the really interesting part, the one most relevant to this article: by the end of the year, Twitter will start to funnel ALL links through its URL shortener, even links already shortened by other services like Bit.ly or Google’s Goo.gl. By funneling all clicks through its own servers first, Twitter will gain intimate knowledge of how its service is used and about its users, and it gets full control over the quality of its service. This is a good thing for Twitter.
But what happens when everyone wants a piece of the pie? Redirect after redirect after redirect before we arrive at our destination? Yes, that’s exactly what happens, and you’ll have to live with the overhead.
Here’s an example of what link sharing could look like once Twitter starts to funnel all clicks through its own service (a traced version of a chain like this follows the list):
Someone shares a goo.gl link on Twitter, which automatically gets turned into a t.co link.
When someone clicks on the t.co link, they will first be directed to Twitter’s servers, which resolve the t.co link to the goo.gl link and redirect the user there.
The goo.gl link then sends the user to Google’s servers, which resolve it and finally redirect the user to the real, intended target.
This target may then in turn redirect the user even further.
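For concreteness, here is roughly what the trace_redirects sketch from earlier might print for such a click. All of the short links and the target site below are invented placeholders; what matters is the shape of the chain, not the specific URLs.

# All URLs below are invented placeholders illustrating the chain above.
trace_redirects("http://t.co/abc123")

# Hypothetical output:
# hop 1: HTTP 301  http://t.co/abc123        (Twitter's servers log the click)
# hop 2: HTTP 301  http://goo.gl/xyz789      (Google's servers log the click)
# hop 3: HTTP 301  http://example.com/promo  (the target site redirects once more)
# hop 4: HTTP 200  http://example.com/promo/final-page
#
# Four round trips to four different servers before the user sees any content.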
It makes your head spin, doesn’t it?
More redirect layers to come?
About a year ago we wrote an article about the potential drawbacks of URL shorteners, and it applies perfectly to this more general scenario with multiple redirects between sites. The performance, security and privacy implications of those redirects are the same.
We strongly suspect that the path we currently see Twitter going down is a sign of things to come from many other web services out there who may not already be doing this. (I.e., sampling and logging the clicks before sending them on, not necessarily using URL shorteners.)
And even when the main services don’t do this, more in-between, third-party services like various URL shorteners show up all the time. Just the other day, anti-virus maker McAfee announced the beta of McAf.ee, a “safe” URL shortener. It may be great, who knows, but in light of what we’ve told you in this article it’s difficult not to think: yet another layer of redirects.
Is this really where the Web is headed? Do we want it to be?