Faruk At.eş


Archive for 2009


Further Thoughts on Web Apps

In the ongoing saga over (iPhone) native apps versus (iPhone and cross-platform) Web apps—read parts one, two, three, four, five and six—there are a few more things I want to point out. First, however, let’s detour quickly to Martin Pilkington’s rebuttal to Peter-Paul Koch’s original rant:

It also supports JavaScript geolocation, which is (I hope) only the first step towards true device APIs that will give JavaScript developers access to phone functionality such as the camera, text messaging, the address book, and more. I’m assuming Apple is working on all that because it’s the next logical step.

That is a very poor assumption for one very important reason. Most people don't want to have their address book read in and text messages sent (at their expense) to all their contacts asking them to visit a website, after they [themselves] visited that website. There's a very good reason websites don't get access to system APIs and this is it.

Ironically, this is a poor assumption on Martin’s part. Privacy and security apply just as much to a desktop or native iPhone app as they do to websites and web apps. Most people don’t want to have their address book read in and text messages sent to all their contacts no matter what platform the cause is on. There were a couple of iPhone apps that took advantage of the Address Book API and those were revoked as soon as that was found out. Yes, it would be harder for a company like Apple to suddenly block a single website than it is for them to pull an app from the AppStore, but this isn’t why such APIs don’t exist in the browser. They don’t exist because it’s all just too new.

The web as a platform for building apps on has only really been around for half a dozen years, and most of that time was spent focused on AJAX and preventing page loads between each action. Compare that to the desktop, where system APIs have been around for decades, and it makes perfect sense that browser vendors are only now starting to explore opening up system APIs to the browser. The security aspect makes this more challenging to implement, and that undoubtedly plays a part in there not being an Address Book API just yet, but to claim there never will be one is shortsighted.

Another thing to keep in mind is that web technologies tend to be open standards that a lot of competitors need to agree on before they become W3C Recommendations. This makes progress happen very, very slowly. On the other hand, the Geolocation API has gone from initial Working Draft to its current Last Call Working Draft status in less than a year.
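
For those who haven’t used it, this is roughly what that API looks like from JavaScript. A minimal sketch; note that mobileSafari prompts the user for permission before handing any coordinates to the page:

    if (navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(
        function (position) {
          // Success: the coordinates arrive asynchronously.
          alert('You are at ' + position.coords.latitude + ', ' +
                position.coords.longitude);
        },
        function (error) {
          // The user declined, or no position could be determined.
          alert('No position available (error code ' + error.code + ')');
        }
      );
    }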

Safari’s support of appcache makes it possible to store the Web app’s core files on the iPhone itself, so that it only has to download the data. Thus mobile connection problems can be avoided.

But then you need a cache of the data stored locally and the ability to modify the data locally, meaning your web app needs to be run locally. What we have there is a mobile app built with web technology.

Which is kind of what PPK is calling for, so I’m not sure what Martin’s point is here.

A web app that runs entirely locally, even a web app game like Neven Mrgan’s Pie Guy, is a perfectly fine app in its own right. There’s nothing inherently wrong with doing that, and native developers should definitely not frown upon it. In fact, Neven’s Pie Guy is a great example of an app that alleviates the big hosting cost argument (which was Martin’s next argument): once installed, all of Pie Guy’s game code is on the device, and it only checks the source website to see if there is newer code to update itself with. That’s hardly a hosting concern, so again: native developers should not frown upon this.
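
For the curious, that update-only-when-needed behavior is exactly what the cache manifest gives you. A minimal sketch (the file names are illustrative, not Pie Guy’s actual manifest):

    CACHE MANIFEST
    # version 2009-11-01 -- bumping this comment is enough to trigger a re-download
    index.html
    game.js
    style.css
    sprites.png

The page opts in with <html manifest="app.manifest">, the manifest has to be served with the text/cache-manifest MIME type, and changing anything in it (even the version comment) makes the browser re-download the listed files.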

For the rest, though, Martin’s post has a lot of good points, but my favorite is this:

Most X developers (for any non-Web value of X) live in mortal fear of the browser as a development platform.

That depends. Can I offer the best user experience while supporting these multiple platforms? Building a general web app that will run well in any browser is much like building a desktop app using Java, it often leads to an inferior user experience as you can't provide a consistent feature set and use advanced features of each OS it runs on. Personally I'd rather develop for one platform and build a kick ass application than develop for a wide audience on multiple platforms and build a mediocre application.

In my experience simply as a user of many applications (desktop, web and iPhone), this holds true for every great developer out there. They aren’t the kind of people to blindly shun one platform in favor of another; they simply choose the platform(s) on which they can offer the best user experience and also make money. For mobile apps, the iPhone and its native apps platform is currently the best at offering developers that.

Some further thoughts

Returning briefly to Neven’s Pie Guy: one argument that PPK makes is that web apps can provide the same quality as native apps. Pie Guy, whilst only a first release, doesn’t exactly perform well on a non-3GS phone. In fact, on my 3G it is so slow that it’s pretty much unplayable. I downloaded Pac-Man Lite, the closest native equivalent of a game with no heavy graphics, and that performs perfectly fine on my 3G phone and has sound.

But then, the same is also true of all the visually stellar, audio-rich—and native—games I’ve played on my 3G. My point is, performance plays a big part in quality and user experience, and the performance of web apps is simply not yet close enough to compare them casually to native apps.

One reader wrote in to say that the JavaScript performance article I linked to in my first response to PPK was two years old, and that “JavaScript perf has come a long way since then.” Yes, it has—but not so much that it compares to native, compiled Objective-C code. Not even close.

Then there’s the tools argument. PPK’s latest post finally mentions the Cocoa Touch framework, but:

John Gruber wants me to mention the Cocoa Touch framework. He feels that its excellence is an important factor in the success of native iPhone apps.

Point is, although Gruber’s probably right, he ought to be wrong.

My take-away from PPK’s three posts on this subject is that he has never spent so much as five minutes looking into the iPhone SDK and Cocoa Touch. It seems that PPK suspects that iPhone developers have never bothered looking into HTML, CSS and JavaScript—and since he himself has never explored Cocoa or Cocoa Touch, that would explain why. Case in point:

If all you have is a hammer, every problem becomes a nail. If all you know is the Cocoa Touch framework, every app you make will become a native iPhone one, whether that’s a good idea or not.

The reality is, I don’t know of a single iPhone developer who isn’t at least familiar with the core web technologies. They may not be Web Standards experts like PPK or yours truly, but they certainly aren’t oblivious to them. They’ve explored them, if only rudimentarily, and clearly haven’t seen much of interest there so far. This makes sense, as there isn’t much of interest in the Web’s three main technologies compared to Cocoa and Cocoa Touch.

My personal experience with the iPhone SDK has been only very superficial, but already I can tell how great a gap exists between it and the Web as a platform. My plan for the remainder of this calendar year is to explore the SDK and start building some apps to get more first-hand experience in that area. My progress will be documented here, of course.

Getting back to the tools argument: Martin Pilkington claimed that “web idealists deplore [technologies like] Silverlight and Flash”, which offer tooling comparable to desktop app frameworks and far superior to that of basic HTML, CSS and JavaScript. The problem is that Silverlight and Flash aren’t exactly hallmarks of great User Experience; their performance is quite horrendous, they are historically rife with security problems, and—a big issue for web idealists who care about an interoperable and open web—they are entirely proprietary. “Web idealists” don’t like undermining some of the best aspects of the Web by throwing their lot in with proprietary technologies like Silverlight and Flash; there’s a time and place for proprietary platforms, but the open Web isn’t one of ’em.

(I’m sure Aral will have some comments about that last paragraph, but I’m moving on for now)

iTunes matters too

In addition to all this, web apps have another distinctive disadvantage: no integration with iTunes. My apps from the AppStore get backed up automatically once I connect my phone, there’s a nice interface in iTunes that allows me to organize my home screens with the mouse, and when I beta test an app that the phone I just connected isn’t provisioned for, iTunes knows not to remove the installed (public) version. The latter may not be applicable to most iPhone owners out there, but it is one more example of how seamless the experience is, thanks in part to iTunes.

Web apps, until or unless they get a different, better treatment from Apple, won’t have those kinds of benefits. Especially when it comes to paying for an app, the web loses out tremendously in offering a quick, simple and seamless experience.

Where web apps would win out, in an ideal world, is device independence and interoperability, but again: that’s not the real situation we have right now. Devices differ too much, even those that all run a Webkit browser, for any web app of great significance to be immediately interoperable across all platforms. In fact, the past eight years of my life have been spent trying to make the browser situation more consistent; I can assure you that we are far from a world where that is the case.

Lastly, there is this:

What do your users want you to pick, superior user experience or vastly bigger reach? Do you need device APIs, and is there a way to get paid? Those are the questions that matter right now.

I don’t know what numbers PPK has in his head, but if games on the iPhone platform outsell their Android counterparts by as much as 400 to 1, that “vastly bigger reach” argument falls flat on its face.

I’d like to propose something to PPK here: before saying anything else about this entire issue, either man up and try porting a native app as a web app to back up your claims, or try building a native app with the free iPhone SDK (you don’t have to join the paid program to download the SDK, just a registered free Developer account will do).

Without doing either, you’re just increasingly embarrassing web developers here.

Project Diana

A beautiful project by Tony Delgrosso:

What would you shoot if I asked you to take a photograph of “summer”?

That’s a question I posed last June, when I solicited 12 random Twitter followers to participate in a photography assignment called “Project Diana”.

Web Apps vs Native, Continued

Peter-Paul Koch has written a great follow-up piece to his article of yesterday about iPhone apps, comparing native and Web apps. He admits his mistakes but urges us to continue the discussion over web apps and the mobile web platform, something I wholeheartedly support. In fact, I’ve been writing a separate article about that since the discussion started, and I’ll be publishing it later today.

Meanwhile, there are a couple of points in PPK’s follow-up that I want to address directly. He writes:

I feel that the mobile operators have the strongest cards when it comes to mobile payments because they are already billing everybody and are already able to identify people through their SIM card. Their system has to be extended in order to accommodate online payments, but that seems doable. In fact, Vodafone is already doing it.

This is not unique to Vodafone; all carriers that have some semblance of apps for sale for their phones do this. The issue is that, for developers, it’s not great: it means they have to deal with each carrier separately to get their money, and carriers don’t often play nice. Apple may not be perfect either, but I’ve heard more developers express satisfaction with them over the financial aspects than otherwise.

The real big problem here, though, is that if the payments of the hypothetical “WebAppStore” go through the carriers, then whoever owns and operates the WAS can’t guarantee any (timely) payment to developers unless they front it themselves. Additionally, dealing with carriers around the world—dozens of carriers—is a painful ordeal, whereas dealing with only a couple of credit card companies and/or Paypal is much more manageable (if not that much less painful).

Then comes the discoverability argument. Apparently, getting attention through the App Store is the superior way of disseminating your app. I’d like some more data on that point.

In my response yesterday, I linked to this article on GigaOM which has charts from AdMob indicating that the primary way people find apps is through browsing the AppStore rankings. The second-most common way is people searching for a type of app, which is good news for a possible WebAppStore. The caveat is that having two stores means that searching for a type of app becomes twice the work.

Geolocation is accessible from the browser already. That’s a start, but it’s not enough.

Indeed it’s not enough, as you can’t easily do maps in conjunction with Geolocation in Safari. That’s a Big Deal™.
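
To illustrate: about the best a web app can do today, as far as I know, is hand the coordinates off to a maps URL, which on the iPhone bounces the user out of Safari and into the native Maps app, killing the in-app experience entirely. A sketch:

    navigator.geolocation.getCurrentPosition(function (position) {
      var coords = position.coords.latitude + ',' + position.coords.longitude;
      // Leaving the web app entirely, just to show a map:
      window.location = 'http://maps.google.com/?q=' + coords;
    });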

Let’s chalk up one inevitable point for Web apps. They beat native apps hands down when it comes to interoperability.

To my way of thinking this is an extremely important point. A large part of my previous post was born out of exasperation that I had to explain interoperability yet again.

Interoperability is definitely a big plus for Web apps, but ironically, PPK himself did the research showing that (mobile) WebKits are not the same across platforms. The argument of interoperability is strong, but not without flaws: there are still many kinks the developer will need to work out on their own for real interoperability. More importantly, if/when device APIs are added to mobileSafari, there’s little chance that they’ll be implemented exactly the same by the Palm Pre, Android phones, etc.

Then, on the topic of UX disadvantages that Web apps have, PPK writes:

But if it can’t be solved, would that matter? Shouldn’t we treat it as the equivalent of the dotted outline; a sure-fire way of letting the user know he has in fact clicked on something? Should we deliberately decide to leave the effect alone, because it’s a platform usability and/or accessibility feature?

Yes, that does matter. As Gruber pointed out, the User Experience of apps on the iPhone matters a lot; it is the defining feature separating the iPhone from other phones. Apple, for their part, will always try to leverage that better UX to set themselves apart from the cross-platform Web apps.

Which, by the way, will be a tricky hurdle for the hypothetical WebAppStore to cross.

Nonetheless, the trade-off between interoperability and UX is definitely worth consideration for each iPhone app developer:

But there’s a trade-off involved here. Do you want perfect UX, or do you want decent interoperability? Does it make sense for every single app to choose UX over interoperability? As I said above, I feel there’s a category of apps where the latter might be more important.

On this, I agree. Whilst PPK and I will probably disagree on the specifics of that “category of apps”, I can easily imagine a slew of apps where the trade-off can be worthwhile for the developer.

Lastly, there’s this:

But the thinking bit is what I have my doubts about. Chris Heilmann tweeted:

I'm just saying, I've been to the iphone developer camp and 1 of 40 hacks used web standards. It is just not on the radar.

That’s what I’m afraid of: iPhone developers not even considering Web apps.

It’s an iPhone DevCamp. I suspect that many of the 39 hacks’ developers did consider Web apps, but they did so prior to going to the DevCamp. After all, you tend to do your research before you start hacking away.

Either way, the discussion will continue and more iPhone developers will pick up on it and investigate. If they don’t start using the Web as a platform for apps, then that really just means it isn’t ready yet.

Time will change that.

iPhone Developers Aren't Stupid, PPK

Today, PPK posted a rather ill-informed and childish rant titled Apple is not evil. iPhone developers are stupid. Across the bitterness-laden 1,519 words he argues that iPhone developers are stupid for choosing to stick with Apple’s AppStore review & approval process, when instead they should choose web technologies to write their apps with.

Along the way, PPK claims that web technologies today, certainly those supported by the iPhone’s Safari browser, are just as capable of producing most of the apps on the AppStore as the native Cocoa Touch framework that developers actually used. Unlike web technologies, the native framework and the AppStore force any developer to go through Apple’s process and, as any technologist who keeps up to date will know, that process has created a lot of disgruntlement, leading a lot of iPhone app developers to openly complain or to talk about leaving the iPhone behind altogether.

One of PPK’s arguments is that a lot of iPhone apps are of mediocre quality anyway, and thus wouldn’t lose much if done as web apps. Whilst surely not his point, he consequently implies that an application done as a web app will almost certainly be of lesser quality than an equivalent native app. This, however, is decidedly true, as we’ll see in a moment.

It’s All In The Framework

I mentioned the Cocoa Touch framework before not just because that’s the name of the framework in the iPhone SDK—the one that developers use in some capacity to develop the applications that PPK claims can be done with web technologies—but also because PPK’s entire post does not even mention this framework. Why not? Because he arrogantly presumes that this framework is irrelevant.

Throughout his post, PPK makes claims left and right about iPhone developers being “too arrogant” or “too stupid” to look into web technologies, which would allow them to sidestep Apple’s review process, but he never acknowledges the real reason iPhone developers stick with the platform: the SDK.

Over the past couple of years I’ve become good friends with a lot of people in the Mac OS X and iPhone app development community, as well as becoming more familiar with a large variety of JavaScript frameworks out there. I can tell you quite simply: ask any iPhone app developer who has looked into web technologies why they stick to the native SDK, and the nearly-unanimous answer will be: the SDK is just that much better. Even long-time Mac OS X developers were thoroughly impressed by the quality of the iPhone SDK, and developers who work across platforms are often found complaining about other SDKs being (very) inferior.

When it comes to comparing the iPhone SDK to web technologies, there just is no comparison. You can’t properly compare the two without first explaining that the “web SDK”, if you will, was designed originally for an entirely different purpose. As a result, HTML is not that well-suited for applications—hence why the WhatWG is shoehorning a lot of application-oriented stuff into HTML5; CSS is just not that great for laying out interfaces, and JavaScript is just not that performant compared to native Objective-C code.

So purely from a development perspective, there is a reason that many developers stick with the iPhone SDK even despite Apple’s app review process making it a painful experience, and that reason is the superior SDK. Web technologies also cannot compete whatsoever with OpenGL for the sophisticated audio and visuals pretty much required for every single game on the iPhone, and games are a Big Deal™ on the AppStore.

Additionally, there is the fact that an open web application is cloned a lot more easily than a compiled application binary found on the AppStore. One reason a lot of developers—in general, not just iPhone ones—have been hesitant over the years about web apps is that they fear someone will come in, clone their work and somehow make (more) money with it. The openness of web technologies is just too apparent compared to the perceived closedness of binary apps.

So iPhone developers aren’t arrogant?

Oh, for sure they are. Many of the Mac and iPhone developers I know are also some of the most arrogant people I know[1], but you know what else they are? Really, really picky about the tools they use. Far pickier than we web developers are, who whine about CSS being a lousy tool for doing layouts with until the moment someone else criticizes it.

But as arrogant as iPhone developers may be, they certainly aren’t stupid. There are, of course, multiple reasons that so many of them stick with the iPhone SDK despite the AppStore process. Superior tools are just one; money is another.

When you develop an application for the iPhone and you get it on the AppStore, you get some incredible benefits:

  • Your app is immediately available to every iPhone and iPod Touch customer out there, for not a single marketing penny spent;
  • Your app is found where most users look for apps: in the AppStore;
  • You don’t have to set up a payment interface for customers of your app to pay you, as Apple has taken care of that for you;
  • You get a free marketing boost by Apple whenever they run another ad promoting the apps available on the iPhone;
  • If your app is good enough, you might even get featured by Apple, in which case, you’ve really got it made.

The big difference between publishing an app on the AppStore and going the “indie route” of doing a web application and marketing it yourself is money. Money, money, money.

One of the greatest values of the AppStore is how Apple has made it so easy for a lot of apps to get noticed immediately by the people looking for a specific type of app. Not all apps, certainly, as the AppStore is grossly over-saturated in some ways, but nonetheless this is not to be discounted. Apple has made it possible, almost easy, for anyone with a bit of talent and a willingness to put in the work to make a lot of money. And most iPhone developers long to be the next one that gets featured in Wired for their amazing success story.

For developers, even those who get burned by Apple’s review process, it’s just good business to buck up and sit through it.

Good business… for now

I should point out that the above applies to the situation right now, but not necessarily to the situation in the future. As it stands, a lot of developers are disgruntled about the review process and they’re looking for alternatives. One thing that may happen is Apple allowing the submission of web apps to the AppStore, though this seems unlikely to me. Much more likely, I would say, is a group of individuals building a “WebAppStore” (full disclosure: I’ve been asked to participate) to live side by side with Apple’s official AppStore. The specifics of that would involve a lot of tricky work in making sure Apple doesn’t shut it down, but I think it’s very possible.

Until things change, though, here’s a tip for those thinking about alternatives to the standard-fare iPhone app process: make a native app that serves as an overlay UI to a back-end that runs as a web app, leveraging the Offline Web Applications spec that’s implemented (partially) in mobileSafari. You get the benefit of the AppStore goldmine, are unlikely to suffer much from the review process, and you still have the benefit of a web app allowing you easy service updates.
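
On the web side of that hybrid, the offline plumbing is straightforward enough. A minimal sketch, assuming your files are listed in a cache manifest as described earlier, and bearing in mind that mobileSafari’s support is only partial, so test on the device:

    // When a newer version of the cached files has been downloaded,
    // swap it in and restart the app.
    window.applicationCache.addEventListener('updateready', function () {
      window.applicationCache.swapCache();
      window.location.reload();
    }, false);

    // Degrade gracefully when there is no network at all.
    if (!navigator.onLine) {
      document.body.className += ' offline';
    }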

It’s a thought, in any case.

Update: Dion Almaer also wrote a response to PPK's post, and it is worth a read.

Full Disclosure: yours truly is a Web Designer & Web App developer first and foremost.

  1. I say this lovingly, as I’m plenty arrogant myself and harbor no illusions about that.

Internet Vices

Patrick Moberg compares Web 2.0 social networks/services to drugs and alcohol.

An Early Look At IE9 for Developers

Notable: significant increases in JavaScript performance (the chart shows it lagging only slightly behind Safari 4 and Firefox 3.5), border-radius support (but no mention of almost anything else on the Modernizr list), CSS3 selectors, and diverting graphics processing to DirectX for improved rendering speed and quality. That last one, coincidentally, reminded me of just how ugly font rendering in IE currently is.
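
For reference, detecting something like border-radius is the kind of check Modernizr performs. Roughly along these lines, though this sketch is mine and not Modernizr’s actual code:

    // Does the browser understand border-radius, prefixed or otherwise?
    function supportsBorderRadius() {
      var style = document.createElement('div').style;
      var props = ['borderRadius', 'WebkitBorderRadius', 'MozBorderRadius'];
      for (var i = 0; i < props.length; i++) {
        if (style[props[i]] !== undefined) {
          return true;
        }
      }
      return false;
    }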

Statistical significance & other A/B test pitfalls

Cennydd Bowles with words of caution about A/B test results. Great intro:

Last week I tossed a coin a hundred times. 49 heads. Then I changed into a red t-shirt and tossed the same coin another hundred times. 51 heads. From this, I conclude that wearing a red shirt gives a 4.1% increase in conversion in throwing heads.
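
The joke lands because those numbers are pure noise. A quick back-of-the-envelope check, using the normal approximation to the binomial:

    var n = 100, p = 0.5;
    var sigma = Math.sqrt(n * p * (1 - p)); // standard deviation of the head count: 5
    var z = (51 - n * p) / sigma;           // z-score: 0.2, nowhere near significance

Both 49 and 51 sit well within a single standard deviation of the expected 50 heads, so the red t-shirt gets credit for nothing but chance.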

In Defense of Sequels

“But people like sequels!” they say, time and time again, when the complaints hit Hollywood over the increasing number of sequels they churn out, year over year. Film critics certainly don’t, but then Hollywood is not in the business of pleasing film critics—they’re in the business of making money, just like every other business. How do they make money? By making movies that sell tickets and DVDs. And what sells tickets and DVDs?

Well, sequels. Apparently.

Last weekend the Guardian reported on research done by academics from the Cass Business School in London, which produced a straightforward business formula for the likelihood of a good Return On Investment for movie sequels. Meanwhile, there is still little to no research into why people en masse enjoy sequels as much as they do.

Even the aforementioned research, to be published later this month in the Journal of Marketing, does not explore the “why”s behind the public’s appetite for movie sequels. Mark Batey, chief executive of the Film Distributors' Association, is quoted in the Guardian as saying:

"There is clearly a public appetite for new stories taking favourite characters on new adventures and from an industry point of view, there is arguably less risk in investing in the production and release of a property which has a proven track record,"

Whilst I don’t aim to deny or disagree with Batey’s statement, the words “there is clearly” left me dissatisfied, wondering instead about the root cause behind this phenomenon. As I subscribe to the Kaizen school of thought for most things in my life, I explored this with some basic research in an attempt to unearth the reasons for the public’s appreciation of sequels.

I started out with this hypothesis and pseudo-formula:

People like sequels because they are an equal amount of entertainment for a less costly input value, compared to original movies.

This warrants some elaboration on the terms: by “equal amount of entertainment”, I’m referring to the fact that a decent movie—sequel or original—will deliver an average of two hours of entertainment to the viewer, with the amount of entertainment being no different between a sequel and an original movie. We’re not here to assess the quality of sequels versus original movies[1], so this part of the premise is sound.

The second part is where it gets interesting: “a less costly input value”—what is meant by that? In short, it refers to the investment that you, the viewer, make into the movie. You invest your time, energy and attention into watching the movie, and the product of your input is the entertainment you receive in return. This investment, however, is not equal between a movie sequel and an original movie.

With an original movie, you have to invest your attention into three factors: the characters, the setting (world, universe), and the plot. With a sequel movie, your investment for the first two—characters and setting—is almost nil: you already know them, after all. This leaves more room in your mental investment for the plot—but is that the reason why? Is it a desire for more plot that has people enjoying sequels?
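
Restated with that decomposition (my own notation, no more rigorous than the hypothesis itself):

    investment(original) = characters + setting + plot
    investment(sequel)   ≈ plot        (characters and setting are already known)

    entertainment being roughly equal, value = entertainment / investment,
    so value(sequel) > value(original)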

Given the traditionally mediocre quality of plots in sequels, even compared to their own original movie, I suspected that this was likely not the root cause after all. People like plot; people love plot, even. But sequels aren’t liked because of their somehow-presumed-superior plot.

I then turned my focus to the cognitive load of this movie investment instead. I’ve written about cognitive load before, then in the context of interfaces for software programs and devices, but cognitive load studies apply to all aspects of life. Originally, cognitive load theory was formulated in relation to human learning ability, but as entirely independent research such as Barry Schwartz’s Paradox of Choice (also known as the Tyranny of Choice [PDF]) effectively points out, there is a strong correlation in the real world between the availability of many options and the way humans interact with those options. Schwartz’s research applied to consumer choices for products, explaining that some choice is better than none, but that more is not always better than less. The sweet spot, he argues, is “somewhere in the middle”.

Applying these principles of cognitive load and the paradox of choice to movies and sequels, I researched the number of movie releases of the past 60 years versus the population growth in the same time period. These figures come from the IMDB Yearly Archive, which may not necessarily be 100% complete for each of the years, but is certainly the most complete archive available.

Starting with Table 1, we look at the movie releases by decade, with the last decade represented by both 2000 and 2009:

Table 1 shows the number of movies released for each ten-year mark starting at 1950 and ending with 2009.

Year    Movies    X
2009    27,927    12.0
2000    13,842     6.0
1990     7,123     3.1
1980     5,341     2.3
1970     4,884     2.1
1960     3,434     1.5
1950     2,320     1 (baseline)

The third column shows the growth factor of yearly movie releases relative to our 1950 baseline. As you can see, twelve times as many movies have come out (or are coming out) in 2009 as in 1950. This is not particularly surprising—the movie industry has grown immensely over the past decades, and especially in today’s YouTube-led Internet video landscape, where making and distributing a short film can be done on a tiny budget, this explosive growth will only continue.

That aside, twelve times as many movies also means twelve times as many options to choose from. This adds a serious amount of cognitive load for the average moviegoer, who now has to make a much larger pre-movie investment (i.e. choosing) before even getting to the movie they eventually pick.

This increase in cognitive load could theoretically be offset by an increase in the population, but as you can see from Table 2[2], the numbers don’t break down equally between movies and population:

Table 2 shows the world population marked in decades since 1950, the increase factor for each decade taking 1950 as the baseline, and the same statistic and metric using the U.S. population numbers in the last two columns.

* in billions
** in millions

Year    World Population*    X               U.S. Population**    X
2009    6.79                 2.7             305.529              2.0
2000    6.06                 2.4             282.171              1.9
1990    5.27                 2.1             249.438              1.6
1980    4.44                 1.8             227.224              1.5
1970    3.70                 1.5             205.052              1.3
1960    3.02                 1.2             180.671              1.2
1950    2.52                 1 (baseline)    152.271              1 (baseline)

The world’s population has grown by a factor of 2.7x since 1950, and in case that 2.7x seems like it might offset the growth in movies, consider that for U.S.-centric Hollywood—where the bulk of these movies still get made—the U.S. population has grown by a factor of only 2x in that period.

Combining these numbers, we’ve established that with only twice the number of people since 1950, we now have six times the number of movies to choose from per person[3]. Put another way, if you were to go see a movie once a week in 1950, you’d have a (theoretical) choice of 44 different movies each week. Now, in 2009, you have a choice of 537 different movies each week, or 76 different movies each day.
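
The arithmetic behind those figures, for the record:

    12.0x the movies ÷ 2.0x the people ≈ 6x the movies per person
    1950:  2,320 movies ÷ 52 weeks ≈  44 per week
    2009: 27,927 movies ÷ 52 weeks ≈ 537 per week, or ÷ 365 days ≈ 76 per day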

Worldwide movie production, distribution and differing film durations aside, it’s not even remotely possible to watch that many movies in a single day.

With so much more choice in movie offerings, the cognitive load for the average moviegoer increases, and the movie satisfaction goes down. Sequels, as established above, produce a smaller cognitive load on the moviegoer, thereby increasing movie satisfaction simply by demanding less of the viewer.

Coincidentally, this cognitive load analysis also explains why there is such a strong demand (among the public) for movies based on existing franchises like comic books or childhood toys: much like how you don’t have to invest heavily to enjoy a sequel, your investment is not as significant for the first movie if it is based on an existing set of characters and a universe that you’re already familiar with. Transformers, G.I. Joe, Spider-Man, Batman—all these require far less of a cognitive load (or investment) to watch if you’re already familiar with them from their original products.

This cognitive load/tyranny of choice aspect is of course not the only reason behind people’s predilection for sequels; the investment in the original movie’s characters has a continued influence on our decision-making process. The more we like the characters in a movie, the more likely we are to want to see them experience new adventures together. This, I believe, is one of the reasons the Pirates of the Caribbean sequels were so wildly successful at the box office: despite not living up to the first movie in terms of writing, plot or character development, people downright hungered for more adventures of Captain Jack Sparrow and his frenemies.

That’s not to say that sequels are always worse than their originals; that’s something that’s left entirely up to Hollywood. We as viewers can only hope that Hollywood produces more sequels like Empire Strikes Back, and fewer like Transformers 2.

Movies are enjoyed by millions of people daily, but the primary reason people watch movies is to be entertained. We care more about this entertainment than about the choices we have to make to get it, for as the number of choices grows, our willingness to invest in brand-new characters and worlds decreases. The sequel is not just an industry-efficient means for a movie studio to make more money; it is something that we, the public, are subliminally asking for by increasingly choosing to see the movies that don’t require as much of an investment on our part.

The sequel, in other words, is here to stay.

  1. Given that “quality” of a movie is a highly subjective characteristic to begin with, I suspect no amount of research would be able to produce satisfactory results for that.
  2. Population figures come from the following sources: UN publication on population of the planet [PDF], Wikipedia: World population, NPR: U.S. historical population and U.S. Census Bureau Statistics.
  3. Note that this ignores more granular statistics, such as the percentage of people who actually go see movies in theaters, who buy DVDs and who don’t, or the number of movies people watch per year. While those statistics would undoubtedly paint a slightly different picture overall, it’s a well-established fact that people watch more movies on average now than in 1950, not fewer.

Adobe's Open Government: Money Grab or Utter Incompetence?

It's rare that I'll make strong accusations like the ones in the title of this post without adding an extensive explanation, but as short on time as I may be, I can't help but express some of my outrage over Adobe's new "Open Government" advocacy website[1].

This "Open Government" website is made all in Flash and is utterly inaccessible from a Web Accessibility perspective. In fact, I couldn't even imagine a more diametrically opposed website implementation to that very same website's stated purpose.

Wait, that's a lie. It could have been built using both Microsoft Silverlight and Adobe Flash in some weird kind of "Schrödinger's Website" abomination. Anyway, not the point.

Chris Foresman, writing for Ars Technica, observed:

After just a cursory browsing, here are some of the usability and data accessibility issues we observed. You can't select, copy, or paste any text. Your browser's font override features won't work, so you can't adjust the font or its size to be more readable. Your browser's built-in in-page search won't work, and you can't use the keyboard to scroll through the text. You can't parse or scrape the data in any way; the design is fixed-width, so it's not going to work well on different screen sizes; and browser plugins, like Greasemonkey, can't adjust anything. Basically when it comes to text at all, if you don't like the style or are visually impaired, you're screwed.

What's so bizarre about this entire thing is that Flash is technically capable of many of these things. Done the right way, it is absolutely possible to make a Flash website that is a lot more user-friendly and accessible. You can even ensure that search engines can index the content if you do it right. Adobe, however, seems unaware of how their own flagship web product works.
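
Doing it right is well documented, too. The de facto standard approach at the time of writing uses the SWFObject library: put the real, indexable HTML content in the page and swap in the Flash movie only when the plugin is available. A sketch, with illustrative file names:

    <!-- Search engines and screen readers get this markup: -->
    <div id="content">
      ...the full, accessible HTML version of the page...
    </div>

    <script src="swfobject.js"></script>
    <script>
      // Replaces #content with the movie only when Flash 9+ is present;
      // everyone else keeps the plain HTML.
      swfobject.embedSWF('opengov.swf', 'content', '800', '600', '9.0.0');
    </script>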

This led me to wonder: is Adobe simply trying to con tech-unsavvy Government decision makers into signing a number of contracts tying future Government websites and/or data to Adobe's proprietary data formats and products, ensuring a healthy new stream of revenue for the next couple of years, or are they actually so incompetent that they couldn't make their own advocacy website adhere to the very principles it purportedly advocates?

Perhaps even worse is that this website really doesn't call for the use of Flash at all, except perhaps for all those superfluous little animations it's littered with. The interface, content and navigation are about as straightforward as it gets.

I also managed to break the first page's introduction paragraph simply by attempting to select some of the text; I've posted screenshots of the breakage on my Flickr.

When all is said and done, if the new Obama administration truly wants to create a more Open Government, they'd do well to stay away from Adobe's products and data formats; as Clay Johnson of Sunlight Labs wrote: if the data format has an ® by its name, it probably isn't great for transparency or open data.

  1. Hyperlink deliberately limited to the meaningless word "website". Adobe does not deserve any link-cred for this piece of inaccessible garbage.
