‘Old Economy’ Info Retrieval Clashes with ‘New Economy’ Web Upstarts at the Fifth Annual Search Engine Conference
by Chris Sherman
Posted On April 24, 2000
A scaled-down version of the heroic struggle between the so-called old and new economies was played out to the delight and edification of the 300 attendees of the fifth annual Search Engine Meeting, held at the Fairmont Copley Plaza Hotel in Boston on April 10 and 11. Though the talks and panel discussions focused on familiar topics such as information retrieval research, search engine design, and usability issues, a major subtext of the conference was about valuation—specifically, the collision of values between researchers and academics insisting on upholding traditional, formal methods, and search engine entrepreneurs scrambling to keep their businesses alive and thriving at Internet speed.

Both camps stated their cases with compelling arguments and thoughtful persuasion. Yet as the meeting progressed, several emotionally charged exchanges erupted between participants on both sides. This heated jousting added an interesting and unexpected dimension to the meeting.

Speakers at the meeting, sponsored by Infonortics Ltd., were selected by Ev Brenner, self-described "noted raconteur" and long-time observer of the information retrieval industry. The depth and variety of information presented were generally first-rate, which of course presents a challenge to a writer retrospectively recording the proceedings. Here, then, are highlights from the meeting, leading off with the fireworks and following with brief descriptions of as many other noteworthy insights as space permits.

Old to New: Play by the Rules!
The "old economy" faction consisted primarily of participants in the Text REtrieval Conference (TREC), co-sponsored by the National Institute of Standards and Technology (NIST) and the Defense Advanced Research Projects Agency (DARPA). In his presentation "Secrets of TREC," Chris Buckley from SabIR Research said that TREC's purpose is to support research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies.

Each year, TREC focuses on several different problems of text retrieval. NIST establishes the testbed, providing a standardized set of documents and questions. Participants run their own retrieval systems on the data, and return to NIST a list of the retrieved top-ranked documents. NIST pools the individual results, judges the retrieved documents for correctness, and evaluates the results.
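To make the pooled evaluation concrete, here is a minimal sketch of how one system's ranked results for a single topic might be scored against assessors' relevance judgments. The documents, judgments, and function names are illustrative only, and the metrics shown (precision at k and average precision, two measures commonly reported in TREC-style evaluations) are stand-ins for NIST's actual tooling.

```python
# Minimal sketch of scoring one system's run for a single topic against
# pooled relevance judgments. Data and names are hypothetical.

def precision_at_k(ranked_docs, relevant_docs, k=10):
    """Fraction of the top-k retrieved documents judged relevant."""
    top_k = ranked_docs[:k]
    return sum(1 for doc in top_k if doc in relevant_docs) / k

def average_precision(ranked_docs, relevant_docs):
    """Mean of the precision values at each rank where a relevant document appears."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant_docs:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(relevant_docs), 1)

# Hypothetical pooled judgments for one topic: assessors mark which documents,
# drawn from all participants' top-ranked results, are actually relevant.
judged_relevant = {"doc3", "doc7", "doc12"}

# One participant's ranked result list for the same topic.
system_run = ["doc7", "doc1", "doc3", "doc9", "doc12", "doc4"]

print(precision_at_k(system_run, judged_relevant, k=5))  # 0.6
print(average_precision(system_run, judged_relevant))    # ~0.76
```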

Last year, 66 groups representing industry, academia, and governments in 16 countries participated. "The benefits of TREC are the blind, independent evaluation, and TREC allows scientific evaluation of which techniques work best," said David Hawking of CSIRO Mathematical and Information Sciences, Australia.

However, of the 66 groups participating, "the search engine companies didn't play," said Hawking. "We tried to entice them to come along with TREC. They didn't. So we evaluated them anyway," he said. The results weren't pretty. With the exception of Northern Light and Google, major search engines TREC evaluated fared relatively poorly compared to others participating in the test. Nonetheless, representatives from the major engines did not appear chastened by the results of the TREC evaluation.
 

New to Old: The Rules are Irrelevant!
On a panel hosted by TREC advocate David Evans (Claritech), search engine representatives spoke about improvements they've made to their services, and plans for the future. Evans also asked panelists to define relevance, and describe the metrics used to determine that a specific approach was working.

Citing studies of searcher behavior, Evans said, "We know that if people are willing to spend large amounts of time, eventually they will start to converge on perfect performance." Practice and persistence pay off—research shows that if a searcher spends just an hour working with an engine, performance improves dramatically. But most Web search engine users demand nearly instant results.

"Search engines are clearly caught in a corner because of the pragmatics of their enterprise. It's a difficult problem; we acknowledge that," said Evans.

The panelists' responses were forthright and pulled no punches, acknowledging that many design and implementation decisions were made quickly, often in response to user demand. "At the end of the day, it was seat of the pants," said Jan Pedersen, formerly head of search and directory for Go/Infoseek. "People had a hunch and it was implemented."

It gradually became clear why, despite diplomatic overtures, the major Web search engines didn't participate in the TREC evaluations. "We're constantly surprised by good ideas that don't help," said Marc Krellenstein of Northern Light.

Google CEO Larry Page had the most heated response to the TREC advocates, at one point calling the entire formal evaluation process "irrelevant." "I don't believe that binary relevance rankings are useful," said Page. He's convinced that surviving and thriving in the crucible of the Web is a sufficient measure of success. "All of us could think of things to do that would make things better if you gave us infinite resources," he said.

Indeed, the TREC testing process seems akin to standing in the midst of a stampeding herd of elephants, taking a snapshot, and trying to draw meaningful conclusions about the vitality of the herd. The camera might be top-notch, the photographer first-rate, and the interpreters brilliant. Nonetheless, by the time conclusions are reached the herd will have changed course numerous times, individuals in the herd will have grown stronger or weaker, and even the environment through which the herd is stampeding will have changed dramatically.

On balance, both the "old" and "new" economy participants made valid, thought-provoking observations, and scored meaningful points against their counterparts. The charged dialog only underscored both factions' passion and commitment to providing the best possible results for searchers.
 

Other Meeting Highlights
Danny Sullivan of Search Engine Watch led off the meeting with a lively overview of Web search engine trends and achievements during the previous year. Two notable trends Sullivan described were "Size Wars II," the efforts by the engines to claim title to the largest index of Web pages, and a renewed focus on improving the relevance of Web search results.

Going forward, Sullivan sees the emergence of more specialty portals, or "vortals" (vertical portals). Vortals are "very specific for certain topics," allowing searches to be conducted against much smaller, inherently more relevant databases than those compiled by the general search engines. Sullivan also sees many more personalization features being added to search services, a theme echoed by many of the speakers at the meeting.

And perhaps disappointing fans of Tim Berners-Lee's vision of the Semantic Web, Sullivan concluded by noting that "nothing is happening with metadata."

Carol Hert from Syracuse University talked about the importance of understanding user behavior and incorporating that understanding in the design of search engines. "Search engines have tons and tons of users, tons and tons of data—they're drowning in the stuff," said Hert. But is all this data useful for improving the search experience for users?

Hert suggested studying the behavior of both novice and expert searchers. "Expert users tend to figure out what's going on—it's the others you have to worry about," she said. Mapping terms used by experts to terms used by novices would help, as would the ability to save both query terms and valuable documents retrieved in a search. "People are very good at recognizing; poor at recalling," said Hert.
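One tiny sketch of the term-mapping idea Hert raises: expand a novice's query with the vocabulary that expert searchers use for the same concept. The mapping table and function below are purely hypothetical and stand in for whatever mapping a search service might learn from its logs.

```python
# Purely hypothetical mapping from everyday (novice) terms to the vocabulary
# expert searchers tend to use for the same concept.
EXPERT_TERMS = {
    "car": ["automobile", "vehicle"],
    "flu": ["influenza"],
}

def expand_query(query):
    """Return the original query terms plus any expert synonyms we know about."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(EXPERT_TERMS.get(term, []))
    return expanded

print(expand_query("flu shot"))  # ['flu', 'shot', 'influenza']
print(expand_query("used car"))  # ['used', 'car', 'automobile', 'vehicle']
```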

Wim Albus of the Port of Rotterdam Authority, Netherlands, discussed constructing effective filters for news feeds using known retrieval tools. Albus described a filtering process that sidesteps the limitations of Boolean and proximity operators that hamper many search engines.

Doran Howitt from Thunderstone Software spoke about the integration of text searching and relational databases. Search engines and relational databases use radically different paradigms for storing and retrieving information. "Grouping and sorting by some field in a database is very hard for search engines to do," said Howitt, adding, "Relational databases don't do text searching very well."

Thunderstone's answer is a "relational search engine" that combines the strengths of both approaches while avoiding the bottlenecks and performance problems that arise when using the two systems in parallel.
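As a rough illustration of the concept (and not Thunderstone's implementation), the sketch below uses SQLite's FTS5 full-text extension to combine a text match with relational grouping and sorting in a single query. It assumes an SQLite build that includes FTS5, which most modern Python distributions do.

```python
# Hedged sketch of the "relational search engine" idea: keep text searching and
# relational grouping/sorting in one system rather than running a search engine
# and a database in parallel. Illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumes the SQLite build includes the FTS5 full-text extension.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body, category)")
conn.executemany(
    "INSERT INTO docs (title, body, category) VALUES (?, ?, ?)",
    [
        ("Index growth", "search engines struggle to index the growing web", "news"),
        ("Ranking tips", "improving relevance of search results", "howto"),
        ("Port logistics", "shipping schedules for the harbor", "news"),
    ],
)

# A full-text match *and* relational grouping/sorting in one query: count the
# matching documents per category, ordered by count.
for row in conn.execute(
    "SELECT category, count(*) AS n FROM docs "
    "WHERE docs MATCH 'search' GROUP BY category ORDER BY n DESC"
):
    print(row)  # e.g. ('news', 1), ('howto', 1)
```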

Knut Magne Risvik of FAST Search & Transfer spoke on the challenges search engines face scaling with the growth of the Web. "Our best estimate is that the Web is doubling every 8 months," said Risvik. This growth rate should be taken with a grain of salt, however. Risvik said a challenge to the engines is the rate of duplication: duplicate documents, servers, even entire hostnames. To adjust for this duplication, FAST has culled only 340 million pages from its crawl of the Web that found more than 850 million documents.
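One simple way to picture the duplication problem: a crawler can fingerprint page content and keep only the first copy it encounters. The sketch below hashes normalized page text to cull exact duplicates; production engines use far more sophisticated near-duplicate detection (shingling and the like), so treat this purely as an illustration.

```python
# Minimal sketch of culling exact-duplicate pages from a crawl by fingerprinting
# normalized content. Illustration only; not FAST's actual pipeline.
import hashlib

def normalize(html):
    """Crude normalization so trivially different copies hash the same."""
    return " ".join(html.lower().split())

def cull_duplicates(pages):
    """Keep the first URL seen for each distinct content fingerprint."""
    seen, kept = set(), []
    for url, html in pages:
        fingerprint = hashlib.sha1(normalize(html).encode("utf-8")).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            kept.append(url)
    return kept

crawl = [
    ("http://a.example/page", "<p>Hello   World</p>"),
    ("http://mirror.example/page", "<p>hello world</p>"),  # duplicate content
    ("http://b.example/other", "<p>Something else</p>"),
]
print(cull_duplicates(crawl))  # ['http://a.example/page', 'http://b.example/other']
```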

And there's an additional challenge unrelated to keeping up with the sheer volume of new documents. "The Web is growing in multiple dimensions," Risvik said. "It has grown away from being a document media to being an interactive media, which is a challenge for search engines."

Bill Bliss, director of MSN Search at Microsoft, described MSN's hybrid approach that uses traditional information retrieval techniques against a high-quality, well-classified directory optimized by human editors. "Many people have given up on machines to make search engines better," said Bliss. However, "the biggest problem with the editorial approach is that it doesn't scale well with the growth of the Web."

On the other hand, classic information retrieval techniques have value for creating Web indexes, but rely on underlying assumptions that make adapting them to the Web difficult. "One implicit assumption is that the corpus being searched is of high quality," said Bliss. "The second implicit assumption is that users are reasonably skilled. Unfortunately, the Web violates all of these assumptions. Even for an expert it can be difficult to zero in on what you want. A lot of the content on the Web is just plain lousy, but from a document retrieval standpoint [the documents] are relevant."

Stephen Arnold treated participants to one of his trademark whirlwind overviews of new trends and important products we can expect to see in the coming year. "You don't search because it's fun, but because it's situational," said Arnold. "No one search engine can help you with all your situations."

Arnold sees several major trends, including the use of personalization to "expose" content where it can be easily accessed, the proliferation of recommendation engines and the use of "voting" to refine relevance, and the mixing and matching of "smart" technology, including clustering, natural language processing, data mining, and automatic ontology generation.

Eric Brewer, Inktomi's chief scientist, gave a brief history of Web search engines, noting that there have now been three distinct generations since the advent of the first engines in 1994-95. In the current third generation, Brewer said Inktomi and the other search engines need to focus their attention on improving relevance.

Brewer also announced that Inktomi has officially included 500 million pages in its index, establishing a new record in the "size wars." Interestingly, although Inktomi has crawled more than a billion pages on the Web, it was difficult to find 500 million legitimate pages after culling duplicates and spam. "We found 445 million, but had to go digging to get the index to 500 million," said Brewer.

Anders Haldahl, vice president of research and development for Mondosoft, provided a marked contrast to Brewer, stating that size isn't the most important factor for a search engine. Instead, the underlying meaning contained within indexed documents is crucial. "Meaning is created by throwing away information," said Haldahl.

David Evans of Claritech spoke about the increasing need for "micro-message management." Micro-messages are those small pieces of information that we rely on but that don't fit into any system—e-mail, voicemail, news snippets, and so on. As this information proliferates we need management tools that go beyond the limitations of search. Evans described adaptive filtering systems that are designed to handle micro-messages, noting that it's a difficult challenge. "Most people involved believe that adaptive filtering is the hardest problem," said Evans.
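As a rough illustration of what "adaptive" means here (and not a description of Claritech's system), the sketch below scores incoming messages against learned term weights, delivers those above a threshold, and nudges the weights according to user feedback.

```python
# Hedged sketch of an adaptive filter for micro-messages: deliver messages that
# score above a threshold and adjust term weights from user feedback.
from collections import defaultdict

class AdaptiveFilter:
    def __init__(self, threshold=1.0, learning_rate=0.5):
        self.weights = defaultdict(float)   # term -> learned weight
        self.threshold = threshold
        self.learning_rate = learning_rate

    def score(self, message):
        return sum(self.weights[t] for t in message.lower().split())

    def should_deliver(self, message):
        return self.score(message) >= self.threshold

    def feedback(self, message, relevant):
        """Nudge term weights up for messages the user marks relevant, down otherwise."""
        delta = self.learning_rate if relevant else -self.learning_rate
        for term in set(message.lower().split()):
            self.weights[term] += delta

f = AdaptiveFilter()
f.feedback("quarterly earnings report attached", relevant=True)
f.feedback("quarterly earnings call rescheduled", relevant=True)
print(f.should_deliver("earnings report due friday"))  # True: strong learned terms
print(f.should_deliver("lunch rescheduled to noon"))   # False: weak evidence
```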

Other presenters at the conference included Alison Huettner of Claritech on the role of natural language in question-answering systems; Claude Vogel of Semio Corp. on taxonomy building as an alternative corporate solution for metadata management; Lisa Braden-Harder of The Butler Hill Group on linguistics and precision; and a fascinating panel discussion on intelligent agents, moderated by Susan Feldman. For details on these talks, as well as expanded coverage of the presentations and panels noted above, please see the extensive "Search Engine Meeting 2000 Report" on the author's Web site. (See URLs, below.)
 

Conclusion
The fifth annual Search Engine Meeting was an intensive 2-day conference featuring speakers covering a wide range of topics. The depth of information presented, combined with the lively and spirited debate among industry leaders and researchers, made the conference time well-spent. I'm already looking forward to the next Search Engine Meeting, on April 9 and 10 next year, once again at the Fairmont Copley Plaza Hotel in Boston.


URLs

Search Engine Meeting 2000 Report
http://websearch.about.com/library/blsem.htm

Infonortics Search Engine Meeting—Information and Presentations
http://www.infonortics.com/searchengines/index.html

TREC
http://trec.nist.gov/


Chris Sherman is president of Searchwise, a Boulder, Colorado-based Web consulting firm, and associate editor of Search Engine Watch.
