Faruk Ateş


Archive for 2011


Showing 35 posts from February 2011

The Cost of Feature Testing

In three quick screencasts, JD Dalton investigates Alex Russell’s poorly-backed claims about the cost of feature detection & feature testing, doing some great research and producing actual data (instead of, as I did, explaining why his proposed alternative is such a bad idea). Also check out JD’s closing screencast about the cost of pre-DOM load reflow.

Spoiler: these costs are negligible compared to pretty much anything that makes your website or app actually slow.
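
For clarity, here’s a rough sketch of my own (not from the screencasts) of the difference between the two techniques being measured:

// Feature *detection*: does the API exist at all?
var canvasSupported = !!document.createElement('canvas').getContext;

// Feature *testing*: does the API actually behave as expected?
var ctx = canvasSupported && document.createElement('canvas').getContext('2d');
var canvasWorks = !!(ctx && typeof ctx.fillRect === 'function');

// Both checks are cheap compared to anything that actually makes a page slow.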

Don't Let Your Menu Take Over

Stu Robson with a valid complaint regarding CSS3 Media Queries-optimized site designs showing only navigation and no content at the top.

Amount of Profanity in git Commit Messages Per Programming Language

Andrew Vos:

Last weekend I really needed to write some code. Any code. I ended up ripping just under a million commit messages from GitHub.

The plan was to find out how much profanity I could find in commit messages, and then show the stats by language. These are my findings.

Tab Closing in Google Chrome and Safari

Basil Safwat with an excellently detailed analysis of why the UI for closing tabs is superior on Chrome:

What has happened here? Well, while Safari resizes the width of the remaining tabs to fill the newly available space each time a tab is closed – Chrome does not. So when a tab is closed from the left in Chrome, no resizing takes place, the rest of the tabs move along one, lining up the next close button directly under the mouse pointer, ready for the next click.

Now, Chrome will resize the tabs to fit the remaining space, but only after the mouse has left the functional area at the top of the browser; that is, after the user has finished interacting with the tabs and has moved their attention elsewhere.

When I switched to Chrome some time ago, it was for several reasons, but its better tab management is a big one.

How Hashbangs, Poor Execution And Practices Break The Web

[Screenshot: Gawker is loading an article…]

Earlier today I tweeted a link to a Sitemeter page showing a massive drop in Gawker traffic. At first I thought the Sitemeter system was just incapable of accurately measuring traffic in a hashbang world, but then Mike Davies noticed that it’s probably just down to them dropping Sitemeter in their redesign and leaving it in place only on their UK site.

Then I went to visit Gawker.com itself to verify this, during which I spotted an article whose headline caught my interest. I clicked on it in the right sidebar. I saw it load up AJAX-style in the left pane, with the above-shown loading spinner still on top of it, blanking out the article by making it only 10% opaque. As I write this, many minutes and a couple of discussions later, the Gawker site is still waiting to “load” the article. The article had loaded up just fine, but was never revealed because of…

A simple JavaScript error.

I then copied the URL from the address bar and emailed it to myself, opening it up on my iPhone to see what happened there. Well, not much happened: I was simply redirected to the main page of the Gawker mobile site. Not the article I requested, just the main page. Had it been an article no longer near the front page, I would never have been able to find it again and open it on my iPhone.

To verify that bold assessment, I looked up an old Gawker article from May 2010, searched (on my iPhone) for the exact title string, found the exact article, tapped on it, and thanks to Gawker using a combination of the generally wrong practice of UserAgent sniffing and a broken hashbang implementation, I was again returned to m.gawker.com, not seeing the article I tapped to see. The mobile site also doesn’t allow you to search their archives, so my bold assessment turned out to be entirely correct.

As Tim Bray explained with his 5-step run-through of what happens when using hashbangs, a web server is incapable of accessing the fragment identifier (the part after “#”) and thus can only be told what content to show if the client-side JavaScript tells it something. And the discussion over at Aaron Gustafson’s blog points out that third-party advertising scripts can break that client-side JavaScript well beyond anything you could ever do on your site to make it work.
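
To illustrate that dependency, here’s a rough sketch of my own (not Gawker’s actual code) of what a hashbang page has to do on every single load:

// For a URL like http://example.com/#!/2011/02/some-article the server
// only ever sees a request for "/"; the part after the "#" never leaves
// the browser, so client-side JavaScript has to do all the work:
var path = window.location.hash.replace(/^#!/, ''); // "/2011/02/some-article"
if (path) {
  // '/ajax' is a hypothetical content endpoint, purely for illustration
  jQuery.get('/ajax' + path, function (html) {
    jQuery('#content').html(html);
  });
}
// If any script error occurs before this code runs, the content never appears.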

Hashbangs may not be quite the most evil thing on the Web today, but since you simply can never do a hashbang implementation perfectly right, you probably should not be doing one at all. There are situations where it could be considered acceptable, e.g. if you’re building a sophisticated web app that wouldn’t make sense without JavaScript in the first place, but for any general content-driven website, the fragility of the hashbang approach makes it a disproportionately huge barrier to people wanting to access that content. And if you then combine it with bad practices to boot, you’re really just doing yourself and your audience a disservice.

And if you’re a publication like Gawker, with many writers working for you producing the content on these sites, you’re not just doing yourself and your audience a disservice, you’re actually hurting the Web and, by extension, our society. As Jaron Lanier argues:

“The crucial question,” he said, “ultimately has to do with power. If the future is one in which writers are not paid, then it also is one in which writers lack clout. And if it’s a future in which writers lack clout, then what we have is a lack, basically, of an intellectual middle class. Instead we have a sort of volunteer intellectual class, which in terms of clout starts to resemble peasants.”

The more fragile access to your content becomes, the less likely you are to make money off of it, and the less likely your writers are to get paid what they should. The implications of extending that are significant. Sooner or later, Gawker will reverse their decision on using hashbangs—I guarantee it. I just hope that they’ll do so before more sites follow their poor example and start eroding the foundation of their own business in an attempt at being “hip, modern and cool” as well.

What is a Modern Browser?

The other day, Mozilla Tech Evangelist Paul Rouget explained why he doesn’t consider IE9 to be a modern browser. The IE team at Microsoft responded fairly to Paul’s critiques, albeit with a bit more marketing fluff than needed. Superficially, the whole thing comes across as a teenagers’ spat, but who’s right? Or rather, and more importantly, what should we consider a “modern browser” in today’s landscape, anyway?

As the Project Lead and original creator of Modernizr, and in general as a web developer who lives on the cutting edge of web-based technologies, I have a pretty specific opinion of what I consider to be a modern browser. In a nutshell, I would describe it as this:

A modern browser is any browser that: successfully renders a site that you just built using web standards, testing only in your browser of choice along the way, with all the essentials functioning well; without you having written any browser-specific hacks, forks or workarounds; and shows great performance as you navigate it.

That may not explain it clearly enough, so let’s break it down:

Successfully rendering a site

This is an obvious one. The layout needs to look right for everything that is essential. Rounded corners are not essential (though a pretty big nice-to-have these days). No, what I mean is that columns, grids, headers and footers all need to be in the right place, not overflowing or off by many pixels due to incorrect CSS handling. If I put in a detail using rgba(), then as a good web developer I either 1) double it up in CSS with a non-rgba() property declaration first, 2) use Modernizr for a workaround, or 3) acknowledge and accept that it won’t be rendered by a non-rgba() supporting browser. This kind of granularity does not a modern browser make, but if something like rgba() is implemented, it should render exactly as I specified it.

The above (and below) assumes that my markup and CSS are correct, which is not always the case, of course; when it’s not, I may be at fault (and so won’t hold it against the browser), but I still expect the browser to do its best rendering it anyway. Error handling is one of those things that really highlighted the differences between browsers in the early days; fortunately, today’s browsers are all pretty great and consistent in this area.

When I finish building a site’s first pass and start opening it up in other browsers to begin my testing, a modern browser will render it so close to my main-use browser that it is either completely correct, or the differences are very hard to spot at first glance. All matters of positioning and flow should be as specified, with no deviation.

A big part of what makes this happen in modern browsers is support for CSS3 Selectors. IE9 does quite well in this area, meaning most everything I do won’t require the use of Selectivizr.

Testing in your browser of choice

My main-use browser is Chrome (version 10), but feel free to substitute whichever latest-release browser you prefer. A good web developer builds & tests with the most standards-compliant browser available, compromising on small details only for the sake of the development tools they prefer. Disputes (over the aforementioned small details) will always persist about which browser is the most standards-compliant, but it’s pretty irrelevant as long as you’re using the latest release, and your browser is capable of all the features you’re implementing in the site.

Platform independence is a consideration I have to acknowledge, which is where IE falls short (and Safari a little bit, but given that Chrome and Webkit nightlies are available on Linux, I consider this negligible). That said, among really cutting-edge web developers the overwhelming majority seem to use Macs. That’s not to say you can’t be a really cutting-edge web developer if you use Linux or Windows (e.g., some of the best I know run Linux on their Mac hardware), but odds are you’re using a Mac, and of the Windows-using segment, odds are you use Chrome or Firefox.

This is not some sort of roundabout evidence that IE9 is not a modern browser, or not good enough for cutting-edge developers to use as their main browser. Rather, it simply reflects how long IE has historically lagged behind other browsers in terms of standards support and, very importantly, developer tools. IE9 could well be the first of the IE range to be modern enough to compete again, in fact.

All essentials functioning well

I take my craft seriously, so I make sure my sites’ content remains accessible with JavaScript disabled. I’d almost say “don’t break with JavaScript disabled,” but then I’d have to make a pretty strong exception, as I no longer care for browsers that don’t understand new (HTML5) elements without JavaScript. I’m now very close to the point where browsers, not just IE8 and below, will get a near-unstyled page if they have JavaScript disabled. Content can—and should!—always be functional and accessible, but style and behavior are “luxuries” whose absence we cannot keep accommodating in the Web of tomorrow. With HTML5 and CSS3 having so much integration with, and reliance on, JavaScript for decent (event) handling and processing, I feel it is not reasonable to rely on one but not the other when building for tomorrow’s Web.
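
For the record, the JavaScript dependency I’m referring to is the well-known element-creation workaround that the various HTML5 “shiv” scripts perform for legacy IE; here’s a minimal sketch of my own, not a full shiv:

// Legacy IE won't style unknown elements such as <header> or <article>
// unless they have been created via script at least once:
var tags = 'header nav article section aside footer'.split(' ');
for (var i = 0; i < tags.length; i++) {
  document.createElement(tags[i]);
}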

So presuming we have a modern browser with JavaScript enabled, all of my progressively-enhanced interactions and behaviors should function as expected. This doesn’t mean niceties like avoiding UI blocking by passing large operations to Web Workers, or dragging & dropping files from the desktop into the site. It does mean I don’t have to resort to polyfills and shims just to make the site usable for its audience in this modern browser.

No browser-specific hacks, forks or workarounds

The last item already touched upon it, but to be more precise: I should not have had to write any browser-specific code, or use an IE-only stylesheet or anything like that, for this modern browser to do all the above.

There is a component here that is a little bit difficult to quantify, which is forking code based on Modernizr’s feature detections. I frequently use the if/else construct that Modernizr enables me to use in both JavaScript and CSS, and one might be inclined to think that this is browser-specific forking—except it’s not. It is very much feature-specific; just because we may internally have the knowledge to map certain features as being present or absent in certain browsers, doesn’t mean that forking code based on features is, in and of itself, browser-specific forking. Hence, I don’t consider a browser “not modern” just because it uses some of my Modernizr-directed forks.
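
For example, a typical feature-specific fork in JavaScript looks like this (a sketch of my own, using Modernizr’s localstorage detect):

// The branch is chosen by what the browser can do,
// not by which browser it claims to be:
if (Modernizr.localstorage) {
  localStorage.setItem('visited', new Date().getTime()); // persist client-side
} else {
  document.cookie = 'visited=' + new Date().getTime() + '; path=/'; // cookie fallback
}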

Great performance

Provided I’m not doing something dramatically complex with multiple background images and/or CSS gradients, or obvious rendering challenges like an over-abundant use of text- and/or box-shadow, opacity or fixed-position backgrounds, scrolling around the site should be smooth. Interactions with buttons, forms and JavaScript-handled events should be snappy. Animations should be fluid. This is all in the context of the afore-mentioned complexities being handled and rendered properly, and a modern browser should perform well on all of these.

GPU hardware acceleration is great, especially if or when it powers everything that animates, but battery consumption on mobile devices is a real concern that puts this solidly outside the realm of being a requirement for a modern browser, to me. There isn’t much yet that we do on the Web that is so dependent on being GPU-powered that we as developers can legitimately expect every modern browser to have it. Those types of features are what I consider the cutting edge today, not the modern standard. Which brings me to my closing point…

Cutting Edge vs. Modern

Rouget’s criticism includes a list of features that IE9 doesn’t support, but what he blatantly omits to disclose is that most of those features are not supported in Firefox 3.6 either—still the latest stable, public release of that browser. Sure, IE9 is in beta and so is Firefox 4, but that doesn’t mean they are on equal footing. In today’s browser landscape, there is an almost madman-like obsession with being the first to implement every latest concoction the WHATWG and W3C come up with that seems useful. While that’s great for us web developers who want to use these features and let our imaginations and creativity run wild, it puts inordinate amounts of pressure on browser vendors to do rush jobs with perfect execution. This is unreasonable, and the bigger the browser’s installed base is, the more unreasonable it is.

Case in point: Firefox 4, which is itself catching up to the baseline set by Safari and Chrome, in some cases two years ago already. And in some (if rarer and less widely used) cases, Opera leads the pack of them all.

I’m really glad that Firefox 4 is adding support for so many things I consider essential components in my arsenal for building a fast, lean, clean and richly semantic website with a great user experience, I really am. But the stuff I’m referring to, I consider to be more cutting-edge than a baseline for making a modern website. Modern, to me, is supporting all the things I explained in great detail above. Web Workers, File API, Drag & Drop, CSS3 Flexbox, HTML5 ApplicationCache… these things are all cutting edge. That doesn’t mean we should be hesitant to use them, or that we have no legitimate use cases for using them; it means that when we use them, we’re building cutting-edge sites (and apps), not just modern sites. And that’s great! But we mustn’t conflate things.

Features that aren’t yet supported in at least the latest public release and one nearly-done beta release of a majority of today’s five main browsers (IE, Firefox, Safari, Chrome, Opera) are, plain and simple, cutting-edge features. If you use them daily, that’s great, but you’re in a very small minority of people who do. With that in mind, I came up with a semi-serious “Modernity Rule of Thumb” for browser vendors:

When three or more different browsers support a feature in their latest public release, your next version should start supporting it as well.

Using that rule of thumb, we can eliminate most things from Paul Rouget’s list because, WebKit aside, almost all of it is new in Firefox 4 or Opera 11, although a few features were in Firefox 3.6. Still: how, then, does IE9 stack up against the other browsers?

Pretty well, in fact. But it’s not quite on par yet. For instance, it curiously supports the new box-shadow property, but still does not support text-shadow, even though that originally emerged in the CSS2 specification and was later punted (for only marginally valid reasons) to CSS3. Nonetheless, text-shadow is one of those features that greatly aid us in making rich type really powerful, and it baffles me that so much time and effort was spent implementing and highlighting IE9’s typography support when such a simple yet nearly-crucial aspect of it was left behind. Also, IE aside, the last holdout on text-shadow support was Firefox, which now has two versions supporting it.

Features like CSS Transitions are obviously a great thing to have available to us in every major browser, but even there, Firefox itself is only adding it in its next release, the still-in-beta 4. So while long-available in Webkit browsers, CSS Transitions are still very much a cutting edge feature.

Same with CSS Gradients, which I think it is extra fair to be lenient about, because that spec has four different implementations in browsers today (three if you only count non-proprietary syntaxes).

An exception might’ve had to be made for the HTML5 History API, however. Especially in light of the recent challenges major websites are discovering inherent to the use of hashbangs, broad support for the fantastic new HTML5 History API would put that web-breaking practice to a swift end.
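
For comparison, here’s a minimal sketch of my own of the History API approach (real URLs, no “#!”), where loadArticle() is a hypothetical function that fetches and injects the content:

function openArticle(url) {
  history.pushState({ url: url }, '', url); // the address bar gets a real, server-resolvable URL
  loadArticle(url);                         // hypothetical: fetch and inject the article
}

window.onpopstate = function (e) {
  if (e.state) loadArticle(e.state.url);    // back/forward buttons keep working
};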

All those features that Rouget listed are nice to have available to us as web developers, but few are so widely supported across the public releases of multiple browsers that it’s folly to consider them the modern baseline. Some may argue I place that baseline too low, too basic; I would argue that theirs is unrealistic, even misguided.

It’s good for there to be a difference in what is modern and what is cutting edge; it’s okay if features from the latter are volatile and occasionally break; it’s not okay if something we consider “modern” suddenly breaks or becomes inaccurate because the specification changed. I for one am quite happy with the progress IE9 has made. There may be a couple of things I think should really be added before it becomes final—HTML5 History and text-shadow in particular—but I won’t cry foul much when (if?) that doesn’t happen.

Websites needn’t look the same nor be experienced equally in every different browser. If you’re looking for a world wherein the stuff you build will look and work the exact same in each browser, I wish you good luck because you’ll never live in one. Our web of today, and the web of tomorrow, is a place where the constraints are not defined by screens, browsers, features or devices, but our own (in)ability to build adaptive, flexible and naturally-evolving products that live online.

Motorola Xoom To Cost $1,199?

Seems the Motorola Xoom is available for pre-order at Best Buy for the insane-but-unconfirmed price of $1,199. You can buy an 11" MacBook Air with 128GB SSD drive and 2GB RAM for that money. Or, of course, two iPads. I hope for Motorola’s sake this price is a mistake.

UPDATE: As I mentioned on Twitter, I suspect this is a fake price to gauge market reactions and “surprise and delight” crowds with the actual price being far less. That’s sort of what happened with the iPad (whether Apple started that rumor or not is irrelevant in hindsight); Motorola might just be doing the exact same thing here, and then offer it at the very real price of $699 or $799.

Streamlines: The Social Network iPhone Client That Never Was

Steve Streza demos his iPhone Twitter client and explains why it almost certainly won’t ever be. It sounds quite interesting, though, and I think it has huge potential. For example:

On top of that, you can merge multiple timelines together, across all types of timelines, accounts, and services. For example, if you use Lists on Twitter and Facebook to organize, say, your family members, you can create one contiguous timeline which combines both those lists and shows you what your family members are doing, regardless of where they posted it to. Or, you can combine your Twitter followers, mentions, and direct messages together, similar to how Twitterrific works. This saves you time, as it lets you create your own timelines which show you new perspectives on your social networks that you simply can't get with most Twitter clients.

In memoriam: Microsoft’s previous strategic mobile partners

Horace Dediu goes over Microsoft’s previous strategic mobile partners. It’s embarrassing how utterly catastrophic the results have been with all of them, but it’s not surprising if you look at Windows Mobile: it’s a lousy, outdated OS and always has been. I think the key difference in today’s new Nokia/Microsoft announcement is Windows Phone 7: most people I know who’ve used it prefer it over Android. Great User Experiences win customers, and whilst I have no personal experience with WP7, it looks to me like a proper contender—unlike Windows Mobile (or Symbian).

Should be interesting to see how this will play out in the market. Looks like quite a few Nokia employees aren’t too pleased with it, though.

Parrotfish: A Browser Extension to Embed More Content on Twitter

If you’re like me and quite enjoy the new Twitter.com layout, with its embedding of content straight into the site allowing you to see pictures and videos from various sources without having to open new pages, you’re going to love Embed.ly’s new Parrotfish extension. It’s like content embedding on steroids, with really smart algorithms to determine the most likely content from a referenced page. Images, videos, text—their supported content sources are really quite extensive.

I installed it this morning and I’m already loving it. Works in Chrome, Safari and Firefox (3.6; 4.0 not tested).

Going Postel

In response to the Gawker website debacle, Jeremy Keith has some keen thoughts on what makes the Web so great:

Though it may not seem like it at times, we’re actually in a pretty great position when it comes to front-end development on the web. As long as we use progressive enhancement, the front-end stack of HTML, CSS, and JavaScript is remarkably resilient. Remove JavaScript and some behavioural enhancements will no longer function, but everything will still be addressable and accessible. Remove CSS and your lovely visual design will evaporate, but your content will still be addressable and accessible. There aren’t many other platforms that can offer such a robust level of loose coupling.

Javascript: Breaking the Web with hash-bangs

Mike Davies on Gawker’s new “website” concoction:

Gawker, like Twitter before it, built their new site to be totally dependent on JavaScript, even down to the page URLs. The JavaScript failed to load, so no content appeared, and every URL on the page was broken. In terms of site brittleness, Gawker’s new implementation got turned up to 11.

Don't Make Me Steal

I don’t necessarily agree with all the criteria listed, nor do I feel qualified to say whether those price suggestions are fair for the amount of work involved on the studios’ part, but this is basically the point a lot of people, myself included, have tried to explain over the years.

Update: I should perhaps disclose that I neither support this exact manifesto, nor condemn its goals. I’m aware of copyright laws being different across countries around the world, but my own point is, bits and bytes aren’t—so go the extra mile.

Isotope: jQuery Masonry + CoreAnimation

David DeSandro’s new Isotope plugin is a must-see thing of beauty. While it doesn’t quite give us the full richness of CoreAnimation as Cocoa developers know it, it does give us some of the UI slickness.

NoteSlate

Horrible copy and design of the page aside, the NoteSlate looks like it could be a very useful gadget: a simple $99 electronic note-taking tablet, also useful for doodling. Expected to hit markets this June.

A Short History of the Earth

Fascinating history summary by physicist John Baez:

During the Late Heavy Bombardment, the Moon was hit by 1700 meteors that made craters over 100 kilometers across. The Earth could easily have received 10 times as many impacts of this size, with some being much larger. To get a sense of the intensity of this pummeling, recall the meteor impact that may have killed off the dinosaurs at the end of the Cretaceous period 65 million years ago. This left a crater 180 kilometers across. Impacts of this size would have been routine during the Late Heavy Bombardment.

(via Kottke)

New Creation: jQuery Runloop Plugin

Earlier this week I was working on a project that involved one larger animation during which several separate animations would trigger, but not all at the same time, and not all on the same elements. As jQuery provided insufficient functionality to suit my needs, I went on a hunt across the Web in search of a runloop plugin, with these requirements:

  1. It should allow for executing custom code at specified keyframes, not just run a jQuery effect chain like .animate();
  2. It should be easy to rearrange keyframes or adjust keyframe locations without having to deal with adjusting callbacks;
  3. It should play well with jQuery’s Effects Queue (ruling out doing a custom setInterval thing);
  4. It should support triggering animations that run independently from the main animation runloop, so that it's possible to have a main loop of 5 seconds with a keyframe at 90% triggering a 2-second animation, and not having that cut off; in other words: separate timings of the main runloop to animations triggered at specified keyframes.

Alas, my traipsing across the Web yielded no results. However, after deciding to write my own plugin, I came across this concept by Ben Nadel, which inspired me and gave me a clear idea of what to do. I then adopted Ben’s idea of using the .animate() function’s step: callback and turned it into a full-fledged runloop system, and am now making it available as a plugin for jQuery. I’m pleased to give you: jQuery Runloop. Suffice it to say, it satisfies all of my requirements listed above.

Want a demo before reading on? That’s cool, go check out the demo.

Basic usage

Runloop is not a typical jQuery plugin, in that it is not chainable. This is very much on purpose: it was designed to support customized animations running across many different elements, so chaining it to one element made no sense.

There is one very important thing to keep in mind: jQuery Runloop only supports keyframes at 5% intervals, and only at 10% intervals if you give it a duration of < 500ms!

The reason for this is that it runs one .animate() call on a div in nodespace, triggering at every step. However, steps are not round integers by nature, and animation timings will often cause certain single integers to be skipped over when rounded. That’s why it reduces each step to its nearest mod-5 value, and in the case of < 500ms (main) animations, to its nearest mod-10 value.
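
For illustration, the rounding amounts to roughly this (a sketch of the described behavior, not the plugin’s actual source):

// Snap an animation step value to the nearest keyframe interval:
function nearestKey(step, duration) {
  var mod = duration < 500 ? 10 : 5;
  return Math.round(step / mod) * mod; // e.g. 23.7 -> 25, 48.2 -> 50
}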

Now, making a custom animation stack with jQuery Runloop is quite easy. Here’s a quick example:

<!-- After including jQuery, include the plugin: -->
<script src="jquery.runloop.1.0.js"></script>
<script>
var loop = jQuery.runloop();

// Note: only use 5% intervals (10% for <500ms durations)!
loop.addKey('25%', function(){ /* Some code or animations here */ });
loop.addKey('50%', function(){ /* Different code/animations */ });
loop.addKey('75%', function(){ /* Even more different code/animations! */ });

loop.play(1000); // duration set in milliseconds
</script>

Full documentation of all the features will be available on the Runloop GitHub page soon. In the meantime, you can learn much from the source of the demo and the plugin code itself; I’ve documented it pretty extensively.

Now go play with jQuery Runloop.

Update: more sophisticated frameworks such as Sproutcore have pretty robust runloop and events queues already, so you might be asking yourself, why did I not just use something like that? Two main reasons: simplicity & accessibility. With jQuery and Runloop it’s easy to layer in complex animations on top of a nice, clean, semantic page that would still work just fine with JavaScript disabled. Not so much with any of those frameworks.

Growing is Forever

A beautiful little film by Jesse Rosten. Any time I visit these Redwoods, I share Jesse’s sense of reverence for them. He captured it beautifully.

How Much Does Bing Borrow From Google?

Nate Silver:

Imagine that you opened an Italian restaurant across the street from Mario Batali’s Lupa. It would be one thing if you merely took inspiration from Lupa’s spaghetti carbonara — if you tried to use some of the same ingredients and some of the same techniques. Maybe you’d even go so far as to track down the butcher who sells Mr. Batali his pancetta. All of this would be in the spirit, most of us would think, of good ol’ American competition.

But this is more like, when a customer orders the carbonara, sending a runner across the street to order a plate of it at Lupa, reheating it, and then maybe adding some mushrooms or snow peas.

Limitations of layering HTML5 Audio

Alexander Chen, on his discoveries made while making MTA.me:

The first thing I tested was the ability to multishot a single <audio> sample. Doesn’t work. The currently playing sample gets cut off. Next, while Flash only seems concerned with how many sounds are actually currently playing, HTML5 doesn’t even like mere existence of too many <audio> elements. (I also tried using pure Audio objects instead – same is true.) In Chrome, starting around the 9th <audio> element, they simply won’t play().
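
In other words (a minimal sketch of my own, with a hypothetical hit.mp3 sample):

var hit = new Audio('hit.mp3');

function shoot() {
  // Re-triggering the same element doesn't layer a second sound on top;
  // it rewinds and cuts off the one that's already playing.
  hit.currentTime = 0;
  hit.play();
}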

“The next chapter in digital books”

Mike Matas’ new company Push Pop Press aims to reinvent books. It’s a subject I find very interesting as I’ve had startup ideas/urges in this very same realm for a long time now, but have opted to focus on other things first. If you have any interest at all in (the future of) digital publishing, keep an eye on these guys.

On Market Share As Metric of Success

Lessien (who you should follow on Twitter):

The lesson of the PC era has been misremembered; it was always to earn the disproportionate share of industry profits. Market share, in and of itself, does not matter.

Via Gruber, who follows it up with some good comments, too.

Boilerplate

Dave Shea shares his own personal HTML5 boilerplate template, which focuses more on using the HTML5 semantics and does away with all the extras. Great if you’re looking for just a small, very light-weight template to start out with.

Bing copies results from Google searches

Danny Sullivan, for Search Engine Land:

Google has run a sting operation that it says proves Bing has been watching what people search for on Google, the sites they select from Google’s results, then uses that information to improve Bing’s own search listings. Bing doesn’t deny this.

Lest We Forget (Or How I Learned What’s So Bad About Browser Sniffing)

When I first started as a full-time web developer, a large part of my job revolved around making sites work between Internet Explorer and Netscape Navigator, which meant lots of menial sifting through messy HTML. The frustration came with the job: browsers implemented entirely different features of HTML and CSS, and of the features they did have in common, a lot were still implemented slightly differently from one another. It was the era wherein the Web Standards Project was gaining traction with its efforts to convince browser manufacturers to support these “web standards” as outlined by the W3C. And slowly, as the years went by, browser makers heeded the call.

Yet unless you were there, you mightn’t realize that web developers weren’t collectively doing everything right, either. On the other side of the Web Standards Project’s coin were their efforts at educating web designers & developers about the best practices for building websites. A lot of bad practices ruled across the digital lands: table-based designs, splash pages, a complete lack of accessibility, and userAgent (UA) sniffing.

Most, but not all, of those bad practices emerged out of necessity: prior to the first semblances of CSS support across multiple browsers, table-based designs were the only way to do any kind of effective column-based design at all. And UA sniffing was often the easiest way to determine whether we were in Internet Explorer 4, 5, 5.x Mac, 5.5 or 6, or using some version of Netscape or Mozilla. As a web developer, your job wasn’t to tell clients “No” simply because good means to an end were not available; your job was to build sites using whatever tools and techniques were. The knowledge gleaned from UA sniffing and other, similar hacks, was vital to building the best site you could make.

But that never meant UA sniffing was a good idea.

In fact, from the History of the browser user-agent string we learn that UA sniffing was the cause of tremendous problems for, initially, browser vendors, who were quick to start mimicking each other’s UA strings in order to bypass the incomplete UA-sniffing routines deployed on countless websites. It was because of us web developers doing so much UA sniffing everywhere that browser vendors were forced to include each other’s strings, leading us to the situation of today, where my browser’s UA string is this:

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.6 Safari/534.16

I count: four different browsers, two different rendering engines, and two variations of one Operating System string. Oh, and a lot of cruft. Tell me again how this is considered progress on the Web?
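
To see how easily naive sniffing goes wrong with a string like that, consider this sketch (my own illustration):

var ua = navigator.userAgent; // the Chrome string quoted above

/Mozilla/.test(ua); // true, but this isn't Netscape
/Safari/.test(ua);  // true, but this isn't Safari
/Gecko/.test(ua);   // true ("like Gecko"), but the engine is WebKit
/Chrome/.test(ua);  // true: only by checking for Chrome *before* Safari
                    // does a sniffing routine identify this browser correctly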

Lies, Damn Lies, and Web Browsers

When I set out creating what eventually became Modernizr in late 2008 and early 2009, I had three specific goals in mind:

  1. To give web designers and developers the opportunity to reliably use new CSS3 and HTML5 features without the fear of losing control in browsers that don't support those features yet;
  2. To discourage and diminish the practice of sniffing User Agent strings, still quite prevalent among web developers at that time;
  3. To encourage broad adoption of these new features which, in turn, gives browser vendors more feedback to improve their support and establish these upcoming Web Standards.

Around that same time, jQuery 1.3 was released: the first major JavaScript library to completely drop all forms of browser/UserAgent sniffing internally. The jQuery people had come to the same conclusions as I had: that the many assumptions inherent to doing UA sniffing, combined with the proliferation of different userAgent strings on the web (in particular on mobile), would only create increased challenges going forward, and make our code increasingly difficult to maintain. Or, to quote their release notes:

Browser sniffing is a technique in which you make assumptions about how a piece of code will work in the future. Generally this means making an assumption that a specific browser bug will always be there - which frequently leads to code breaking when browsers make changes and fix bugs.

The biggest problem with UA sniffing is the “UA” part, because browsers lie. A lot. They started lying with the release of Microsoft Internet Explorer 2.0, and they continue to lie to this very day. Browsers lie about who they are and what they can do all the time. Sometimes it’s not even the browsers themselves who do the lying, but proxies adjusting UserAgent strings along the way without the browser’s or the user’s knowledge.

In the past few years, more often than not the lying isn’t intentional, but that makes no difference to a website doing UA sniffing. It’s especially noticeable when new browsers show up on the scene, based on an open-source rendering engine that’s been around for a while. While egregious inconsistencies between a browser’s claims of supported features and the reality of those claims are often quickly fixed in subsequent updates, the UA-sniffing routines on websites are updated far less frequently. And then there’s the legacy.

Remember IE6?

That browser that we all loathed but had to continue supporting because a huge portion of Internet users still used it?

It wasn’t just Microsoft’s fault for freezing development on IE after version 6 reached a staggering 88% share of the world’s browser market. The fault lay just as strongly with people who had built corporate intranets and content management systems with such poor web development techniques, enabled only by (often equally poor) UserAgent sniffing, that those systems broke in myriad ways the moment you threw a more modern browser at them. Too many people using such systems were prevented from upgrading their browsers, ever, by their corporate IT departments—even Microsoft couldn’t convince them otherwise—and as a result, all of us on the outside Web, the open, public Web, had to continue supporting IE6 for a very long time.

UA sniffing enabled the use and continuation of bad practices and outdated techniques, and as a result, played a very real part in keeping the Web from progressing as fast as it could have.

It ain’t all bad, son

The world of browsers today is almost nothing like it was just five years ago: not a single browser has more than 35% market share—worldwide or in any major market; IE6 only has about 5% market share worldwide; mobile browsers are on the rise and inspiring far better techniques for building great, future-proof websites; the landscape today is really quite exciting, all things considered. More importantly, though: we have largely consistent support for web standards across every major browser in use today, and we have a huge number of shims and polyfills to bring some of that support back to the browsers that lack it.

There are still somewhat-legitimate circumstances wherein the combined power of CSS3 Media Queries and feature detection cannot produce a specific enough subset of browsers for certain needs and use cases. The legitimacy of those circumstances or needs aside, these are the situations where doing some UA sniffing can make sense. However, we must not misconstrue the existence of these situations as being an argument for the practice in general.

The big problem with advocating UA sniffing as a practice is that it adds a certain air of credibility to it; one which could very easily tempt a budding young web gun to employ it. But UA sniffing is not suitable for those who aren’t intimately familiar with the intricacies of browsers and their features, and the scope of user agent strings in use on the Web today. And I really do mean intimately familiar. As in, you know that there are some 637 userAgent strings used on the Internet (desktop & mobile), and you know regular expressions really well, and you know off the top of your head which browser versions support what features, and you know all the complications involved in supporting fourteen flavors of mobile Webkit browsers. And so forth.

This stuff is far from easy to understand; even just the basics of feature detection versus browser detection are quite confusing to some people. That’s why we make libraries for this stuff (and use browser inference instead of UA sniffing). These are the kinds of efforts we need to help move the web forward as a platform; what we don’t need is more encouragement for UA sniffing as a general technique, just to save a couple of milliseconds. Because I can assure you that the Web never quite suffered, technologically, from taking a fraction of a second longer to load.

So if you really know what you’re doing, then you may have a legitimate case for doing UA sniffing. You may even be skilled enough to make a robust bit of UA-sniffing code for it not to be wrong half the time. But even if that’s the case, I ask you to reconsider advocating the practice, lest we forget the damage the Web has suffered at the hands of UA sniffing already.

