Exploring Resources for Lexical Chaining: A Comparison of Automated Semantic Relatedness Measures and Human Judgments

Title: Exploring Resources for Lexical Chaining: A Comparison of Automated Semantic Relatedness Measures and Human Judgments
Authors: Irene Cramer, Tonio Wandmacher, and Ulli Waltinger
Pub/Conf: Modeling, Learning and Processing of Text Technological Data Structures. Alexander Mehler, Kai-Uwe Kühnberger, Henning Lobin, Harald Lüngen, Angelika Storrer, and Andreas Witt (Eds.); Studies in Computational Intelligence, Berlin/New York: Springer

Abstract:
In the past decade, various semantic relatedness, similarity, and distance measures have been proposed that play a crucial role in many NLP applications. Researchers compete for better algorithms (and resources on which to base them), and often only a few percentage points seem to suffice to prove a new measure (or resource) more accurate than an older one. However, it is still unclear which of them performs best under what conditions. In this work we therefore present a study comparing various relatedness measures. We evaluate them on the basis of a human judgment experiment and also examine several practical issues, such as run time and coverage. We show that the performance of all measures – as compared to human estimates – is still mediocre, and argue that the definition of a shared task might bring us considerably closer to results of high quality.

BibTeX:

@incollection{DBLP:series/sci/CramerWW12,
  author    = {Irene M. Cramer and
               Tonio Wandmacher and
               Ulli Waltinger},
  title     = {Exploring Resources for Lexical Chaining: A Comparison of
               Automated Semantic Relatedness Measures and Human Judgments},
  booktitle = {Modeling, Learning, and Processing of Text Technological
               Data Structures},
  year      = {2012},
  pages     = {377-396},
  ee        = {http://dx.doi.org/10.1007/978-3-642-22613-7_18},
  crossref  = {DBLP:series/sci/2012-370},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}
