Siri (a Semantic application) launches

As mentioned in this video at SemTech, Siri is a Virtual Personal Assistant that is launching soon (in the US only :<) and really picks up on the “Knowledge Navigator” video that Apple produced 20 years ago! Twenty years may not seem like much, but in 1987, Michael Jackson was still the King of Pop (he had just released the album Bad), and the President was Ronald Reagan. A lot has happened in the technology space since then, but we are still not quite where this video would have us believe the future of human-computer interaction lies. (If you haven’t seen the original, I encourage you to take a look at it.)

What Siri tries to do is make an interface that is as natural as talking, with gestures that correspond to how you’d want to communicate, rather than to the demands of a QWERTY keyboard or a desk-bound mouse. As Tom Gruber mentions, the vision Apple put forward in its video anticipated many of the technologies we are seeing today: it knows who is in your social network, it knows time (and how events interconnect), and it featured continuous speech recognition, which is a bit different from voice commands. This is where the computer knows the context of what is being said and can “intuit” what is meant from that context, something today’s systems still have trouble with.

But the big question everyone wants answered remains: “Is the vision of Apple’s Knowledge Navigator here today?” Tom answers, “Unfortunately no. (But we’re getting there.)” This highlights the promise (and reality) of Web 3.0 (or the Semantic Web, or Information Simplicity, or…). It is really difficult to do all of the things we take for granted in our daily interactions with other human beings. It takes children years to learn social cues, knowledge, and the basics of living in their environment, and current systems are nowhere near the complexity of the human brain.

But we do have some interfaces that point toward an easier way of interacting (besides the keyboard!). The iPhone’s multi-touch interface is one approach. Voice-based applications are another, but as I’ve written about before, it’s not just voice that will be the silver bullet for all our interface problems. As with other problems, there is a multitude of different interfaces we use without thinking (voice, vision, touch, etc.) that help us understand the world around us. For the past 20 years we have confined ourselves to one or two of these, because that was the simplest way we could get information into the cloud and interact with it. Now we really need to put the multi-dimensional aspect back in to fully experience the richness of information.

(PS Siri looks really cool and I wish it were available now!)
