by yannickmahe on 5/13/11, 2:27 PM with 31 comments
by bad_user on 5/13/11, 3:40 PM
That's not the only reason, nor the most important one: the integration of the browser with core iPhone functionality (location awareness, taking pictures, uploading those pictures, handling keyboard input) absolutely sucked and still sucks. Creating a web app on the iPhone that has to behave like a regular app, using some of the iPhone's standard functionality, falls somewhere between an exercise in frustration and outright impossible.
Another reason: Apple provided (useful) native apps with which you could never integrate, and where integration is available (like dialing a phone number or opening Google Maps), iOS doesn't return the user to the originating app.
Building something like Skype in the browser would clearly be doable, if only Apple provided the hooks. Otherwise the only web apps you're going to see on the iPhone are traditional web apps, minus important functionality available on the desktop (you can't upload freakin' images, or focus automatically on freakin' form inputs; how fucked up is that?).
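For instance, a minimal sketch of those two complaints (plain DOM calls written as TypeScript; the "search" element id is made up for illustration). Both are standard APIs, but on the mobile Safari of that era the file input was disabled outright, and programmatic focus outside a user-initiated event handler is simply ignored, so the keyboard never appears:

    // Hypothetical page script illustrating the two missing pieces.
    const uploader = document.createElement("input");
    uploader.type = "file";                  // rendered, but permanently greyed out on iOS
    document.body.appendChild(uploader);     // at the time, so no way to pick a photo

    const search = document.getElementById("search") as HTMLInputElement | null;
    if (search) {
      search.focus();                        // silently ignored on page load in mobile Safari
    }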
I'm fairly certain that Apple's suggestion (for building web apps) was tongue-in-cheek, more as in "get off my lawn".
by maratd on 5/13/11, 3:30 PM
You can now write software on the web that can do anything non-web software can do. The only issue for Google has been that it wasn't written yet ... so they wrote it. That's what Google Apps is about.
They also pushed Angry Birds on Chrome for a reason and made it free. Games are an integral part of the experience. Ro.me is also in that vein. Hardware acceleration was the last piece of the puzzle.
They are definitely in a position to succeed, although that is certainly no guarantee that they will.
by hamner on 5/13/11, 2:45 PM
by rl41 on 5/13/11, 3:02 PM
There's more to Google's new approach than just a browser-based OS. Hardware is completely interchangeable, and the prospect of accessing all your data anywhere has become increasingly popular in recent years.
by jorangreef on 5/13/11, 6:30 PM
Programming multi-user software that spans the network, i.e. software that is partition-tolerant, is different from the last few decades of single-user, single-node software we've been writing. It's a new paradigm. We've known about the fallacies of distributed computing for some time, but as developers we're taking our time coming to understand them.
For example, there are very few web apps today that can manage the synchronization/consistency/migration/offline-access/memory/authorization/versioning/database implications of, say, 4GB of text data per user on multiple devices, whilst facilitating sharing between hundreds of user accounts.
It's Distributed Systems 101. Up until now, these ideas have been tried out in privately managed data-centers. Now they must be rolled out to the web and the code must run in different browsers on devices of varying capability. The frameworks for doing that do not yet exist. They're being built.
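To make one slice of that concrete, here is a minimal sketch (TypeScript, with invented names; not any real framework's API) of the simplest possible merge policy for a record that two offline devices have edited independently. Even getting this much right is fiddly, and a naive last-writer-wins rule like this one silently loses one side's edit on a genuine conflict:

    // A note replicated across several devices, with a logical version counter.
    interface Note {
      id: string;
      version: number;   // incremented on every local edit
      updatedBy: string; // device that made the last edit
      body: string;
    }

    // Last-writer-wins merge: keep the higher version; break ties deterministically.
    function merge(local: Note, remote: Note): Note {
      if (remote.version > local.version) return remote;
      if (remote.version < local.version) return local;
      // Same version edited on two devices while offline: a genuine conflict.
      // Picking by device id is deterministic but throws one edit away;
      // a real app needs vector clocks or CRDTs, plus a way to surface this to the user.
      return remote.updatedBy > local.updatedBy ? remote : local;
    }

    // Both devices edited version 3 of the same note while offline.
    const phone: Note  = { id: "note-1", version: 3, updatedBy: "phone",  body: "draft A" };
    const laptop: Note = { id: "note-1", version: 3, updatedBy: "laptop", body: "draft B" };
    console.log(merge(phone, laptop).body); // "draft A"; the laptop's edit is silently lost

Multiply that by gigabytes of data per user and sharing across hundreds of accounts, and it becomes clear why those frameworks are still being built.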
by evilduck on 5/13/11, 4:21 PM
by bradly on 5/13/11, 2:50 PM
• It was cheap and even free for many of us
• It came with an internet connection included
• It only ran applications off the web
by zitterbewegung on 5/13/11, 2:47 PM
by dpapathanasiou on 5/13/11, 2:38 PM
by throwaway32 on 5/13/11, 3:08 PM
by zwieback on 5/13/11, 3:07 PM
by jsight on 5/13/11, 7:14 PM
It's funny how quickly things become ingrained in us, such that we barely remember not having them (Chrome was introduced in late 2008).
by joezydeco on 5/13/11, 2:59 PM