IBM’s supercomputer named Watson has completed its quest for Jeopardy! domination by defeating two of the best contestants on earth: Ken Jennings, holder of the longest winning streak in Jeopardy! history, and Brad Rutter, the all-time earnings leader. Time will tell if the Jeopardy! victory is a Truman/Dewey moment. The competition was scored as a combined total over two games (aired in three episodes Feb. 14-16), with the highest total at the end of the two games claiming victory. Watson won the $1 million prize, and it was not even close until the last round, when Ken Jennings seemed determined to put in a good showing for humans before bowing out by writing “I for one welcome our new computer overlords” on his Final Jeopardy! response. This delighted the IBM-heavy studio audience.
The technological feat of having a computer properly interpret answers intentionally filled with wordplay cannot be overstated. In addition to the impressive technology required to find the correct answer in less than 3 seconds, Watson had to 1) decide whether to buzz in, 2) understand the game theory behind its wagers, and 3) gauge its own confidence in each answer. There’s little doubt this was a showcase for the advancement of natural language computation. The project itself was dubbed DeepQA at IBM. Question answering (QA) is defined in the field of information retrieval as the task of answering a question posed in natural language.
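The buzz-in decision can be pictured as a confidence threshold: buzz only when the top-scoring answer is likely enough to be worth the risk. Here is a minimal sketch of that idea in Python; the function name, threshold value, and scores are all invented for illustration, and Watson’s real decision logic was far more sophisticated.

```python
# Toy buzz-in decision: buzz only when the best candidate answer's
# confidence clears a fixed cutoff. The threshold and all scores
# below are illustrative assumptions, not IBM's actual values.

BUZZ_THRESHOLD = 0.5  # assumed cutoff; the real system tuned this to game state

def decide_to_buzz(scored_answers):
    """scored_answers: list of (answer, confidence) pairs.

    Returns (should_buzz, best_answer, best_confidence).
    """
    best_answer, best_score = max(scored_answers, key=lambda pair: pair[1])
    return (best_score >= BUZZ_THRESHOLD, best_answer, best_score)

# With one strong candidate, the sketch buzzes in:
buzz, answer, score = decide_to_buzz(
    [("Toronto", 0.14), ("Chicago", 0.81), ("New York City", 0.05)]
)
```

In this toy run, `decide_to_buzz` returns `True` with “Chicago,” since 0.81 clears the 0.5 cutoff; with only weak candidates it would stay silent, which is how a machine avoids buzzing in on clues it is likely to miss.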
Many people will focus on the questions that Watson misinterpreted and got wrong, but those examples were impressively few and far between. There were a few instances of irony/humor. The category “Also on your Keyboard” befuddled Watson and showed that it had not been programmed with computer keyboard shortcuts, and it thought that Toronto was a U.S. city.
Outside of these moments, Watson methodically went through and answered most of the questions, leaving the human competitors watching with sometimes bemused looks on their faces. The looks were a result of the speed at which Watson was able to answer. Watson consistently showed that it was superior to the humans at buzzing in. In human-on-human competition, many people say that the hardest thing about playing Jeopardy! on TV is figuring out when to buzz in, and Watson had no such problems. While that speed may have been an unfair competitive advantage, remember that this was a showcase for IBM’s technology, and in the field of information retrieval, speed is of paramount importance.
According to Ken Jennings, “Jeopardy! devotees know that buzzer skill is crucial—games between humans are more often won by the fastest thumb than the fastest brain. This advantage is only magnified when one of the ‘thumbs’ is an electromagnetic solenoid triggered by a microsecond-precise jolt of current. I knew it would take some lucky breaks to keep up with the computer, since it couldn’t be beaten on speed.”
How They Did It
Over years of work, Watson was loaded with authoritative sources of information, and IBM aimed to “illustrate how the wide and growing accessibility of natural language content and the integration and advancement of Natural Language Processing, Information Retrieval, Machine Learning, Knowledge Representation and Reasoning—along with massively parallel computation—can drive open-domain automatic Question Answering technology to a point where it clearly and consistently rivals the best human performance.” Put another way, IBM filled Watson with quality information that would help it answer the kinds of questions typically found in Jeopardy!, spread across many fast computers. These sources included dictionaries, thesauri, Wikipedia, newspapers, and anything in text that would help it answer questions.
According to Sue Feldman, research vice president, search and discovery technologies at IDC, “Each query or hypothesis returns a confidence score, and is further iterated to gather more evidence. The highest scoring answer wins. IBM has combined not just NLP and statistical NLP, but machine learning, a voting algorithm, a method of interpreting the questions and assessing them by formulating parallel hypotheses, and Hadoop and UIMA for preprocessing, as well as the usual search (Lucene and INDRI), deep text analytics, fuzzy matching software, and of course an in-memory caching system to save time in retrieval.”
[Editor’s note: See Feldman’s article in the Jan./Feb. 2011 issue of Searcher, “The Answer Machine: Are We There Yet?”]
While none of these concepts is new, the combination shows just how powerful existing technology can be when aimed at a specific problem; in this instance, winning a Jeopardy! game.
How Is This Different From Google?
Watson is a system that IBM fed information into, versus one that proactively crawls the internet looking for new information and trying to make sense of it. While Watson and Google overlap in providing relevant responses to keywords and questions, the technology IBM developed for Watson differs greatly from the problems Google attempts to solve by crawling the internet. IBM has gone to great lengths to explain that Watson is a self-contained system rather than one that is continually growing and finding new information.
Watson will put more wind in the sails of the red-hot question-and-answer segment of information retrieval. Quora, a “social” question-and-answer startup created by former Facebook employees, is attempting similar natural language solutions, but its questions are answered by fellow humans. Watson and Quora seem more competitive than Watson and Google.
So How Can This Help Me?
While Watson will not be a “magic bullet” to help you with your job, Watson does have the ability to make our lives better. Any industry where people can load authoritative data and generate answers to questions through a polished user experience (such as Watson’s voice) will benefit. Science, healthcare, and academia are three areas that come to mind. How about instant translation services? It is not a long leap to have Watson play Jeopardy! in any language, or respond to questions in one language with answers in another—something that would help any non-French speaker traveling around France.
According to Feldman, “Face it, if we spend our days sorting through information piles, that leaves us little time to analyze, understand, and make informed decisions. Imagine an intelligent physician’s assistant, able to forage in seconds through all of medical literature, compare the research to a patient’s records, and suggest a treatment. Imagine being able to get an answer from a call center without being on hold for an hour.”
So when you can phone a call center, get an accurate answer to your question in 5 seconds, and hang up, then it may be time to worry about computers taking over. Until then, we’ll all just have to wait on hold…