For years, computer scientists and information technologists have been trying to make computers easy to use. While there has been some success, we’re still trying to teach people how to use computers. Many of us find this vexing. After all, shouldn’t computers be made to understand people, rather than having people understand computers?
Perhaps the biggest news with regard to a computer that can understand people was the debut of "Watson" on the TV game show "Jeopardy" a few weeks ago. Watson soundly trounced two of the most successful and prolific "Jeopardy" players in history.
Critical viewers of the show, however, noticed that Watson was less accurate on certain types of questions than on others. These were typically quirkier queries that, while seemingly simple to a human, stumped the computer.
Further, Watson was not able to interpret the other players' answers. In fact, on more than one occasion, Watson repeated an incorrect answer already given by another player. Chalk this up to the fact that Watson did not listen to and interpret the spoken word; rather, it was fed questions in a textual format.
So, while Watson was quite impressive, it demonstrated that we are still a ways away from computers being able to understand people.
How far? Well, from the perspective of processing power, not very. Watson was built with equipment that, while on the expensive side, is readily available and used by businesses and government organizations throughout the country. Rather, many opine that the gap is the result of too little attention being paid to developing software that understands people. Sure, there have been some efforts in this area, but not nearly as many as, say, in developing Windows-based software.
In recent months, though, we’ve seen several innovations related to this area. Already, we’re seeing devices that can read brain waves and act accordingly. Such devices measure a person’s thoughts, and when the brain waves corresponding to those thoughts are repeated, an action can be taken.
Other innovations include interpreting common human gestures, such as a wave of the hand, and reacting accordingly, such as by bringing a hologram closer. These differ from the current controls of smartphones, which are learned conventions: no one used a "touch and drag" gesture to make something bigger before it became common on smartphones. In fact, those gestures are born of innovations first pioneered at the Xerox Palo Alto Research Center and later popularized by Macintosh computers.
Many folks worry that, with such progress, computers will take over the world, à la the "Terminator" movies. Most technologists disagree with this premise. After all, even we humans don’t fully understand our own thought processes.