
Measuring Parsing Difficulty Across Treebanks

Lavelli, Alberto;
2008

Abstract

One of the main difficulties in statistical parsing is the task of choosing the correct parse tree for an input sentence among all the parse trees allowed by the adopted grammar model. While this difficulty is usually evaluated by means of empirical performance measures, such as labeled precision and recall, several theoretical measures have also been proposed in the literature, mostly based on the notion of cross-entropy of a treebank. In this article we show how cross-entropy can be misleading to this end. We propose an alternative theoretical measure, called the expected conditional cross-entropy (ECC), which can be approximated through the inverse and normalized conditional log-likelihood of a treebank, relative to some model. We conjecture that the ECC provides a measure of the informativeness of a treebank, in such a way that more informative treebanks are easier to parse under the chosen model. We test our conjecture by comparing ECC values against standard performance measures across several treebanks for English, French, German and Italian, as well as other treebanks with different degrees of ambiguity and informativeness, obtained by means of artificial transformations of a source treebank. All of our experiments show the effectiveness of the ECC in characterizing parsing difficulty across different treebanks, making treebank comparison possible.
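The abstract describes the ECC as approximable through the inverse (negated) and normalized conditional log-likelihood of a treebank under some model. The sketch below illustrates that idea under assumptions not stated in the abstract: the model is represented by two hypothetical callables giving the joint probability P(tree, sentence) and the sentence probability P(sentence), the conditional probability is taken as their ratio, and normalization is done here by word count. This is an illustrative reading of the formula, not the paper's actual implementation.

```python
import math

def approx_ecc(treebank, joint_prob, sentence_prob):
    """Approximate the ECC of a treebank as the negated, normalized
    conditional log-likelihood under a model (illustrative sketch).

    treebank: list of (sentence, tree) pairs.
    joint_prob(sentence, tree): model probability P(tree, sentence).
    sentence_prob(sentence): model probability P(sentence).

    Returns -(1/N) * sum over pairs of log2 P(tree | sentence), where
    P(tree | sentence) = P(tree, sentence) / P(sentence) and N is the
    total word count (normalization by words is an assumption here).
    """
    total_log = 0.0
    total_words = 0
    for sentence, tree in treebank:
        p_cond = joint_prob(sentence, tree) / sentence_prob(sentence)
        total_log += math.log2(p_cond)
        total_words += len(sentence.split())
    return -total_log / total_words

# Toy usage with made-up probabilities: one two-word sentence whose
# gold tree has conditional probability 0.01 / 0.04 = 0.25, giving
# -log2(0.25) / 2 = 1.0 bit per word.
toy_treebank = [("the dog", "TREE_1")]
ecc = approx_ecc(toy_treebank, lambda s, t: 0.01, lambda s: 0.04)
print(ecc)
```

Under this reading, a lower ECC means the model concentrates more conditional probability mass on the gold trees, which matches the abstract's conjecture that more informative treebanks are easier to parse.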


Use this identifier to cite or link to this document: https://hdl.handle.net/11582/3354
