Overview
Editors:
- W. Bruce Croft, Department of Computer Science, University of Massachusetts, Amherst, USA
- John Lafferty, Computer Science Department, Carnegie Mellon University, Pittsburgh, USA
Table of contents (10 chapters)
- Front Matter, Pages i-xiii
- John Lafferty, ChengXiang Zhai, Pages 1-10
- Victor Lavrenko, W. Bruce Croft, Pages 11-56
- Karen Sparck Jones, Stephen Robertson, Djoerd Hiemstra, Hugo Zaragoza, Pages 57-71
- Warren R. Greiff, William T. Morgan, Pages 73-93
- Wessel Kraaij, Martijn Spitters, Pages 95-123
- Jinxi Xu, Ralph Weischedel, Pages 125-140
- William J. Teahan, David J. Harper, Pages 141-165
- Vibhu O. Mittal, Michael J. Witbrock, Pages 219-244
- Back Matter, Pages 245-245
About this book
A statistical language model, or more simply a language model, is a probabilistic mechanism for generating text. Such a definition is general enough to include an endless variety of schemes. However, a distinction should be made between generative models, which can in principle be used to synthesize artificial text, and discriminative techniques to classify text into predefined categories. The first statistical language modeler was Claude Shannon. In exploring the application of his newly founded theory of information to human language, Shannon considered language as a statistical source, and measured how well simple n-gram models predicted or, equivalently, compressed natural text. To do this, he estimated the entropy of English through experiments with human subjects, and also estimated the cross-entropy of the n-gram models on natural text. The ability of language models to be quantitatively evaluated in this way is one of their important virtues. Of course, estimating the true entropy of language is an elusive goal, aiming at many moving targets, since language is so varied and evolves so quickly. Yet fifty years after Shannon's study, language models remain, by all measures, far from the Shannon entropy limit in terms of their predictive power. However, this has not kept them from being useful for a variety of text processing tasks, and moreover can be viewed as encouragement that there is still great room for improvement in statistical language modeling.
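The kind of evaluation described above can be illustrated with a minimal sketch, not taken from the book: a character-level bigram model is fit on a toy training string and its cross-entropy (bits per character) is measured on held-out text. The corpus strings, the bigram order, and the add-one smoothing are illustrative assumptions, chosen only to make the example self-contained.

```python
# Minimal sketch: estimate the cross-entropy of a character bigram model
# on held-out text, in the spirit of Shannon's n-gram experiments.
import math
from collections import Counter

def train_bigram(text):
    """Count character bigrams and the unigram contexts that precede them."""
    bigrams = Counter(zip(text, text[1:]))
    contexts = Counter(text[:-1])
    vocab = set(text)
    return bigrams, contexts, vocab

def cross_entropy(text, bigrams, contexts, vocab):
    """Average negative log2 probability per character under an
    add-one-smoothed bigram model (an upper bound on the source entropy)."""
    v = len(vocab)
    total, n = 0.0, 0
    for prev, cur in zip(text, text[1:]):
        p = (bigrams[(prev, cur)] + 1) / (contexts[prev] + v)
        total += -math.log2(p)
        n += 1
    return total / n

# Illustrative toy corpus; any training/test split of real text would do.
train = "the cat sat on the mat and the dog sat on the log"
test = "the dog sat on the mat"
model = train_bigram(train)
print(f"cross-entropy: {cross_entropy(test, *model):.2f} bits/char")
```

Lower cross-entropy means the model predicts (equivalently, compresses) the held-out text better; richer models narrow, but never close, the gap to the true entropy of the language.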
Editors and Affiliations
- W. Bruce Croft, Department of Computer Science, University of Massachusetts, Amherst, USA
- John Lafferty, Computer Science Department, Carnegie Mellon University, Pittsburgh, USA