Faruk Ateş


Lest We Forget (Or How I Learned What’s So Bad About Browser Sniffing)

When I first started as a full-time web developer, a large part of my job revolved around making sites work in both Internet Explorer and Netscape Navigator, which meant lots of menial sifting through messy HTML. The frustration came with the job; browsers implemented entirely different sets of HTML and CSS features, and even the features they did have in common were often implemented slightly differently from one another. It was the era wherein the Web Standards Project was gaining traction with their efforts to convince browser manufacturers to support these “web standards” as outlined by the W3C. And slowly, as the years went by, browser makers heeded their call.

Yet unless you were there, you mightn’t realize that web developers weren’t collectively doing everything right, either. On the other side of the Web Standards Project’s coin were their efforts at educating web designers & developers about the best practices for building websites. A lot of bad practices ruled across the digital lands: table-based designs, splash pages, a complete lack of accessibility, and userAgent (UA) sniffing.

Most, but not all, of those bad practices emerged out of necessity: prior to the first semblances of CSS support across multiple browsers, table-based designs were the only way to do any kind of effective column-based design at all. And UA sniffing was often the easiest way to determine whether we were in Internet Explorer 4, 5, 5.x for Mac, 5.5 or 6, or using some version of Netscape or Mozilla. As a web developer, your job wasn’t to tell clients “No” simply because good means to an end were not available; your job was to build sites using whatever tools and techniques were. The knowledge gleaned from UA sniffing and other, similar hacks was vital to building the best site you could make.

But that never meant UA sniffing was a good idea.

In fact, from the History of the browser user-agent string we learn that UA sniffing caused tremendous problems for, initially, browser vendors, who were quick to start mimicking each other’s UA strings in order to bypass the incomplete UA-sniffing routines deployed on countless websites. It was because of us web developers doing so much UA sniffing everywhere that browser vendors were forced to include each other’s strings, leading to today’s situation, where my browser’s UA string is this:

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.6 Safari/534.16

I count: four different browsers, two different rendering engines, and two variations of one Operating System string. Oh, and a lot of cruft. Tell me again how this is considered progress on the Web?
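
To make the problem concrete, here’s a quick and purely illustrative sketch of the kind of naive substring checks that UA-sniffing code performs, run against that very string. Every single check matches, even though only one of them describes the browser actually in use:

var ua = "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) " +
         "AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.6 Safari/534.16";

ua.indexOf("Mozilla") !== -1; // true, yet this is not Mozilla
ua.indexOf("Safari") !== -1;  // true, yet this is not Safari
ua.indexOf("Gecko") !== -1;   // true, yet the engine is WebKit, not Gecko
ua.indexOf("Chrome") !== -1;  // true; the only honest token of the bunch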

Lies, Damn Lies, and Web Browsers

When I set out to create what eventually became Modernizr in late 2008 and early 2009, I had three specific goals in mind:

  1. To give web designers and developers the opportunity to reliably use new CSS3 and HTML5 features without the fear of losing control in browsers that don’t support those features yet (a simplified sketch of such a feature test follows this list);
  2. To discourage and diminish the practice of sniffing UserAgent strings, still quite prevalent among web developers at that time;
  3. To encourage broad adoption of these new features, which, in turn, would give browser vendors more feedback to improve their support and help establish these upcoming web standards.
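
To illustrate that first goal, here’s a simplified sketch of such a feature test. This is illustrative only, not Modernizr’s actual source; the property and class names are just examples:

// Test whether the browser understands the CSS box-shadow property by
// checking for it (and its vendor-prefixed variants) on a style object.
function supportsBoxShadow() {
  var el = document.createElement("div");
  return "boxShadow" in el.style ||
         "WebkitBoxShadow" in el.style ||
         "MozBoxShadow" in el.style;
}

// Expose the result as a class on <html>, so stylesheets can branch on it:
//   .boxshadow .card    { box-shadow: 0 1px 3px rgba(0, 0, 0, 0.4); }
//   .no-boxshadow .card { border: 1px solid #ccc; }
document.documentElement.className +=
  supportsBoxShadow() ? " boxshadow" : " no-boxshadow";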

Around that same time, jQuery 1.3 was released: the first major JavaScript library to completely drop all forms of browser/UserAgent sniffing internally. The jQuery people had come to the same conclusions as I had: that the many assumptions inherent in UA sniffing, combined with the proliferation of different userAgent strings on the web (particularly on mobile), would only create greater challenges going forward, and make our code increasingly difficult to maintain. Or, to quote their release notes:

Browser sniffing is a technique in which you make assumptions about how a piece of code will work in the future. Generally this means making an assumption that a specific browser bug will always be there - which frequently leads to code breaking when browsers make changes and fix bugs.
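
The alternative is to test for the behavior itself rather than assume it from the browser’s name. As a rough sketch, paraphrasing one of the classic support tests rather than quoting jQuery’s actual source: older versions of IE returned a resolved URL from getAttribute("href") instead of the literal attribute value, so you check the actual behavior once, up front:

// Does getAttribute("href") return the literal attribute value?
// Older IE resolved it to a full URL, so we test the behavior directly.
var div = document.createElement("div");
div.innerHTML = "<a href='/a'>a</a>";
var hrefNormalized =
    div.getElementsByTagName("a")[0].getAttribute("href") === "/a";

// The rest of the code branches on hrefNormalized, never on "is this IE?"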

The biggest problem with UA sniffing is the “UA” part, because browsers lie. A lot. They started lying with the release of Microsoft Internet Explorer 2.0, and they continue to lie to this very day. Browsers lie about who they are and what they can do all the time. Sometimes it’s not even the browsers themselves doing the lying, but proxies adjusting UserAgent strings along the way, without the browser’s or the user’s knowledge.

In the past few years, more often than not the lying hasn’t been intentional, but that makes no difference to a website doing UA sniffing. It’s especially noticeable when new browsers show up on the scene, built on an open-source rendering engine that has been around for a while. While egregious inconsistencies between a browser’s claims of supported features and the reality of those claims are often quickly fixed in subsequent updates, the UA-sniffing routines on websites are updated far less frequently. And then there’s the legacy.

Remember IE6?

That browser that we all loathed but had to continue supporting because a huge portion of Internet users still used it?

It wasn’t just Microsoft’s fault for freezing development on IE after version 6 reached a staggering 88% share of the world’s browser market. The fault lay just as strongly with people who had built corporate intranets and content management systems with such poor web development techniques, enabled only by (often equally poor) UserAgent sniffing, that those systems broke in myriad ways the moment you threw a more modern browser at them. Too many people using such systems were prevented from upgrading their browsers, ever, by their corporate IT departments—even Microsoft couldn’t convince them otherwise—and as a result, all of us on the outside Web, the open, public Web, had to continue supporting IE6 for a very long time.

UA sniffing enabled the use and continuation of bad practices and outdated techniques, and as a result, played a very real part in keeping the Web from progressing as fast as it could have.

It ain’t all bad, son

The world of browsers today is almost nothing like it was just five years ago: not a single browser has more than 35% market share, worldwide or in any major market; IE6 has only about 5% market share worldwide; mobile browsers are on the rise and are inspiring far better techniques for building great, future-proof websites; the landscape today is really quite exciting, all things considered. More importantly, though: we have largely consistent support for web standards across every major browser in use today, and we have a huge number of shims and polyfills to bring those features to the browsers that don’t yet support them.

There are still somewhat-legitimate circumstances wherein the combined power of CSS3 Media Queries and feature detection cannot produce a specific enough subset of browsers for certain needs and use cases. The legitimacy of those circumstances or needs aside, these are the situations where doing some UA sniffing can make sense. However, we must not misconstrue the existence of these situations as being an argument for the practice in general.
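
For the vast majority of cases, though, that combination is more than specific enough. As a quick, illustrative sketch (the breakpoint and the feature tested here are arbitrary examples, not recommendations):

// Target capabilities and viewport conditions, not browser names.
// window.matchMedia itself may be absent in older browsers, so test for it.
var smallScreen = window.matchMedia &&
                  window.matchMedia("(max-width: 480px)").matches;
var touchCapable = "ontouchstart" in window;

if (smallScreen && touchCapable) {
  // Serve the touch-friendly, small-screen behavior here.
}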

The big problem with advocating UA sniffing as a practice is that it adds a certain air of credibility to it; one which could very easily tempt a budding young web gun to employ it. But UA sniffing is not suitable for those who aren’t intimately familiar with the intricacies of browsers and their features, and the scope of user agent strings in use on the Web today. And I really do mean intimately familiar. As in, you know that there are some 637 userAgent strings used on the Internet (desktop & mobile), and you know regular expressions really well, and you know off the top of your head which browser versions support what features, and you know all the complications involved in supporting fourteen flavors of mobile WebKit browsers. And so forth.

This stuff is far from easy to understand; even just the basics of feature detection versus browser detection are quite confusing to some people. That’s why we make libraries for this stuff (and use browser inference instead of UA sniffing). These are the kinds of efforts we need to help move the Web forward as a platform; what we don’t need is more encouragement for UA sniffing as a general technique, merely to save a couple of milliseconds. Because I can assure you that the Web never quite suffered, technologically, from taking a fraction of a second longer to load.
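
For clarity: browser inference deduces the browser from the objects and behaviors it exposes, rather than from the string it reports. A rough, historical sketch; these particular checks are well-known examples, not a complete or current solution:

// Infer the browser from what it actually exposes, not from what it claims:
var isOldIE = !!document.all && !document.addEventListener; // IE 8 or older
var isClassicOpera = !!window.opera;                        // Presto-era Opera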

So if you really know what you’re doing, then you may have a legitimate case for doing UA sniffing. You may even be skilled enough to write UA-sniffing code robust enough not to be wrong half the time. But even if that’s the case, I ask you to reconsider advocating the practice, lest we forget the damage the Web has already suffered at the hands of UA sniffing.
