I had a play with an iPad a week or so ago whilst visiting a friend. Before I’d actually had the opportunity to try it out, I have to confess I’d been a bit sceptical; I mean, how much demand would there be for a device which seemed to me to have such limited use?
When the iPad launched I just assumed that it was a slightly larger iPhone, and so could see nothing particularly revelatory or ground-breaking about it. It wasn’t going to compete in the mobile market, and given the interface and ergonomics I couldn’t imagine people using it for anything functionally oriented (e.g. constructing long emails or copy, online banking, etc.). It’s fair to say that I was pretty underwhelmed.
But after spending a brief amount of time with the iPad (literally minutes) I realised that this device may have come the closest yet to providing a literal translation of our movements in the physical world into the non-physical (i.e. online). I experienced this most powerfully when skimming the pages of a virtual copy of Beatrix Potter’s ‘Peter Rabbit’ on the device. Indeed, the removal of the keyboard, mouse or any peripheral controller marks, for me, a merging of the physical and virtual worlds on a scale not yet experienced by the mainstream. It is this merging which will enable us to navigate the Internet in a more natural, haptic and less abstract way than ever before. In this way the iPad is revolutionary.
It’s not so long ago, when my parents’ generation were learning to use computers and the internet, that for many the concept of controlling your ‘movements’ on a screen using a ‘cursor’ and a ‘mouse’ was very abstract indeed. Consequently, learning to use computers and the Internet proficiently and with ease required not only that users bought into this new way of navigating ‘the world’ conceptually, but also that they mastered the physical skills necessary. Sounds simple, right? But it is the need to master these additional skills that has created a barrier to computer and Internet usage for many people, historically older generations in particular.
In 2003, when I was working for Sony PlayStation, they launched the EyeToy, a video camera peripheral which sits on top of a user’s television and allows users to control video game play using only their movements. The EyeToy was designed by Richard Marks in 1999 and employs computer vision and gesture recognition technology to facilitate a natural user interface. PlayStation’s research had, quite rightly, identified that the traditional gaming controller presented a barrier to game play for broader, less niche gaming audiences (e.g. women and older players), and the company realised that the EyeToy could help remove this barrier to usage.
At launch I found the EyeToy concept brilliant; for me it marked the beginning of truly controller-free human-computer interaction for the masses, an extension of which we may now experience using the iPhone and iPad. It’s worth noting that although the hugely successful Wii also employs gesture recognition to control game play, it is not truly controller-free, and so does not, in my mind, herald such an interesting shift in human-computer interaction.
Paradoxically, as device and interface design moves away from the abstract and towards the literal by facilitating natural, intuitive and ‘human’ movements, we seem to be moving both backwards and forwards simultaneously: backwards, because this marks a return to a simpler, more intuitive and less complex way of controlling web-enabled devices; and forwards, because at the moment this simplicity requires more advanced technology.
And as the iPad starts to make the way we navigate the Internet more natural, intuitive and human in a physical sense, I wonder how and to what extent it will be adopted, by whom and for what. Will the iPad really be responsible for the most significant step change in Internet and ‘computer’ usage to date?