The Device vs Web Tug o’ War

If history teaches us anything, it’s this: we’ve seen it all before, we’ll see it all again. I’ve been pondering the “app vs Web” tug o’ war for quite some time, and since my early days with computers (some 30 years ago, as I write this) there have been constant cycles of centralisation and decentralisation.

Short history lesson

It started long before I got involved, and this is one case where we know which came first: the chicken or the egg. In the absence of anything like a working network, computing solutions were centralised. As late as the 1970s, computers were designed for just one task each (a single app on a single computer). You might look at this as “on-device applications”. Computers that could carry out multiple tasks still operated on-device. People only stopped coming to the computer when it could reach out to them, thanks to the introduction of reliable networks. Cost and reliability limited access to dumb terminals, while applications remained centralised.

The revolution of the workstation and personal computers of the 1980s redefined the idea of “on-device”. Now the device could be something closer to hand. Applications moved to the users’ side of the network, while centralisation was relegated to storage. Large legacy systems tended to remain with the models prevalent at the time of their inception, so mainframes were still around, but the “new thinking” was predominantly distributed, with an emphasis on the network edge.

The Web, in its initial incarnation, was little more than the dumb-terminal model with the central server replaced by a cloud of servers. Browsers merely followed the embedded pointers to find the next bundle of wonderful information. The dominant device was the PC, and on-device applications ruled.

That all changed in the mid-90s when the Web found a way to distribute the execution of applications. When applets, scripting support and plug-ins were introduced to browsers, people took notice. Here was a way to have centralised control and management while still pushing the processing load out to all those PCs. The on-device application had a serious challenger. Now we could speak of “in-browser” applications and eventually “Web Applications”. The distinction between an application executing mainly on the Web server and one executing mainly in the browser is easy to blur, so the term “Web Application” can be misleading. To add to the confusion, while a Web server can delegate certain processing to a script in the browser, that script can in turn delegate processing back to the server. This design pattern, whose name, Ajax, was coined in 2005, is a key factor behind the Web’s prominence in online services.
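That round trip is easy to picture in code. Here is a minimal sketch of the Ajax pattern, assuming a hypothetical `/api/stock` endpoint that returns JSON (the endpoint, element id and field names are illustrative, not from any real service): the page loads once, then the in-browser script asks the server for fresh data and updates the display without a full reload.

```javascript
// Pure helper: turn a server response into display text (runs anywhere).
function formatQuote(quote) {
  return quote.symbol + ": " + quote.price.toFixed(2);
}

// Browser-only wiring: the classic XMLHttpRequest round trip of 2005-era Ajax.
// "/api/stock" is a hypothetical endpoint, not a real service.
function refreshQuote(elementId) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/stock");
  xhr.onload = function () {
    var quote = JSON.parse(xhr.responseText);
    document.getElementById(elementId).textContent = formatQuote(quote);
  };
  xhr.send(); // asynchronous: the rest of the page stays responsive meanwhile
}
```

The key point is the asymmetry: the server delivered the script once, but the script keeps delegating work back to the server for as long as the page lives.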

By the time Ajax got its name, another device had already grabbed the limelight from the common PC: the “mobile”. The smartphone, PDA or whatever you want to call it had grown up from its bare-bones days and was now sporting mainstream features previously considered the sole domain of the desktop. Like the PC, many of the eye-catching features were initially accessible only to native code. Eventually, as the hardware improved, virtual environments enabled even more developers to target mobile devices. We saw a flurry of “cool apps”, but the Web was still struggling to deal with the diverse contexts presented by mobile devices. On-device apps were once again king. The mobile Web was still in the dumb-terminal space, but echoing the history of the desktop Web, browser scripting appeared along with many of the features already established on desktops. The evolution was rapid, achieving in three to four years what took the desktop Web nearly two decades.


We have reached the point where mobile Web features are as capable as their desktop counterparts. Mobile technology evolution is easily outpacing the desktop and there are now more mobile Web-enabled devices than desktops. Every day, more Web servers add support for mobile devices. We even see mobile-only sites.

Mobile applications have also evolved. In a bid to attract more customers, manufacturers have pulled out all the stops and offered ever-more-complex development tools to encourage more and more eye-popping applications. Witness the success of Apple’s app store for the iPhone and related devices, Google’s resources for Android/Chrome, RIM’s JDE and more.

The phenomenal success of mobile applications suggests that on-device is currently winning the age-old tug o’ war. Web browsers are cool, but not cool enough. Does this mean the battle is won? I don’t think so. Mobile on-device applications are facing a unique challenge, orders of magnitude greater than anything experienced in the desktop world: device diversity.

The typical eye-popping on-device application is limited to a specific make/model of device. Replicating the effects on different devices is laborious and expensive, and sometimes impossible. Installing and maintaining the application can be difficult, with version control a real nightmare.

Meanwhile, the Web community has been paying attention. They have observed the general characteristics of on-device applications and have noticed what makes them compelling. Slowly and steadily the community has added these features to the browser. Local storage, animation, sensor access, embedded media services, better security, background processing, visual styling, responsive interaction, dynamic display updates, integration with other applications, shared preferences, better visual controls, pan and zoom, input gestures, 3D effects, collapsible regions, text effects, sensible form inputs, audio effects and much more.
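One of those features, local storage, is simple to sketch. Assuming nothing more than the standard `setItem`/`getItem` key/value interface of the browser's `localStorage` (a plain object stands in for it below so the sketch runs outside a browser), a Web application can keep a user's draft on-device instead of round-tripping it to a server:

```javascript
// Stand-in key/value store so the sketch runs outside a browser; in a real
// page, window.localStorage provides the same setItem/getItem interface.
var store = (typeof localStorage !== "undefined") ? localStorage : (function () {
  var data = {};
  return {
    setItem: function (key, value) { data[key] = String(value); },
    getItem: function (key) { return (key in data) ? data[key] : null; }
  };
})();

// Keep a user's draft on-device, surviving page reloads (and, with a
// browser's real localStorage, surviving offline periods too).
function saveDraft(text) { store.setItem("draft", text); }
function loadDraft() { return store.getItem("draft") || ""; }
```

Small as it is, this is exactly the kind of capability that used to be the preserve of on-device applications.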

It took a few years for the browser to become the trusted common platform on every desktop. It had to deal with a handful of desktop OS options, compete with sophisticated native applications and rein in several browser vendors who each had their own ideas of the direction the Web should take. In the mobile world the challenges are different. There are many more OS options to be addressed, many more input modalities and seemingly endless varieties of device capabilities. On the positive side, the native applications are nowhere near the levels of sophistication of their desktop cousins, mainly down to device constraints (processing, memory, display, input options, etc.) and to the appropriateness (or more correctly, inappropriateness) of certain applications to the mobile context. Finally, in perhaps the biggest boost to the Web, the vendors appear to be agreeing on a way forward: HTML 5.


Just like the history of “On-Desktop vs On-Web”, I don’t expect a winner to be declared. Each solution will find its niche, and both will be considered essential to the overall experience. In time, just like the blurring of the Web Application concept, it is possible that the two deployment strategies will merge. Browser and OS will become one, and users will find less to distinguish between solutions that are server-side, client-side or drifting somewhere in the cloudy hybrid zone.

Google’s Chromium/Chrome OS might look like the next step, but it’s only a model that others might choose to copy. There will be plenty more to-ing and fro-ing before this particular battle settles down.

Over the next few years, the core features of HTML 5 will become stable, or at least more stable than they are today. Implementations will be everywhere, and each implementation will incorporate some update mechanism so that as new browser features are agreed, updates will happen automatically. We already see this in mainstream desktop browsers and a few mobile browsers too. We will also see more applications using the browser as their primary user interface, even when offline. The cost of developing mobile applications will be reduced because of the sharing of the development space (less fragmentation means that skills can be spread over more devices). Much of the development work will be based on Web technology, while certain device-specific features will still be available through browser-based APIs (geo-location is a current example).
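Geo-location illustrates how such a browser-based API bridges Web code and device hardware. A minimal sketch using the W3C Geolocation API (the `displayFn` callback is a placeholder for however the page shows results; real pages would also pass an error callback and options):

```javascript
// Feature-detect, then ask the device for its position via the W3C
// Geolocation API; displayFn is whatever the page uses to show the result.
function showPosition(displayFn) {
  if (typeof navigator === "undefined" || !navigator.geolocation) {
    displayFn("Geolocation not available");
    return;
  }
  navigator.geolocation.getCurrentPosition(function (pos) {
    displayFn("Lat " + pos.coords.latitude + ", Lon " + pos.coords.longitude);
  });
}
```

The same page works on any device: where the hardware and browser support it, the application gets the device-specific feature; elsewhere it degrades gracefully.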

Does this mean that content/application adaptation will no longer be required? Will we finally get write-once, use-everywhere? I don’t think so. Adaptation will drift into the architecture, where it will be assumed as much as we already assume the network, media codecs and many other things. HTML 5 will be what is delivered to any device worth supporting (potentially degrading to a few others such as XHTML 1.x/MP/Basic or some older variants of HTML). What is created by authors may be HTML 5, or more likely some authoring-specific language that either incorporates HTML 5 or readily generates it.

If the HTML 5 standardisation community agrees on an extension mechanism, then authoring practices may gravitate towards extending HTML 5 at the authoring stage, while still delivering fully compliant markup to the Web device. If not, then we may yet see some completely different authoring language, though the post-adaptation result delivered to the Web device will still be HTML 5.

As someone with a vested interest in the success of the mobile Web, I will be working behind the scenes to encourage the necessary framework support that will enable Web authors to work with HTML 5 (or some enhancement thereof) as an authoring language. I intend to ensure that the adaptation step is properly recognised. It will be, eventually, but I’d rather not wait too long. I’m impatient for more cool (Web) applications!

