Note: This bibliographic page is archived and will no longer be updated. For an up-to-date list of publications from the Music Technology Group, see the Publications list.
Test Collection Reliability: A Study of Bias and Robustness to Statistical Assumptions via Stochastic Simulation
Title: Test Collection Reliability: A Study of Bias and Robustness to Statistical Assumptions via Stochastic Simulation
Publication Type: Journal Article
Year of Publication: 2016
Authors: Urbano, J.
Journal Title: Information Retrieval Journal
Volume: 19
Issue: 3
Pages: 313-350
Abstract: The number of topics that a test collection contains has a direct impact on how well the evaluation results reflect the true performance of systems. However, large collections can be prohibitively expensive, so researchers must balance reliability and cost. This issue arises when researchers have an existing collection and would like to know how much they can trust their results, and also when they are building a new collection and would like to know how many topics it should contain before they can trust the results. Several measures have been proposed in the literature to quantify how accurately a collection estimates the true scores, as well as different ways to estimate the expected accuracy of hypothetical collections with a certain number of topics. These include ad hoc measures such as Kendall tau correlation and swap rates, and statistical measures such as statistical power and indexes from generalizability theory. Each measure focuses on different aspects of evaluation, has a different theoretical basis, and makes a number of assumptions that are not met in practice, such as normality of distributions, homoscedasticity, uncorrelated effects and random sampling. However, how good these estimates are in practice remains a largely open question. In this paper we first compare measures and estimators of test collection accuracy and propose unbiased statistical estimators of the Kendall tau and tau AP correlation coefficients. Second, we detail a method for stochastic simulation of evaluation results under different statistical assumptions, which can be used for a variety of evaluation research where we need to know the true scores of systems. Third, through large-scale simulation from TREC data, we analyze the bias of a range of estimators of test collection accuracy. Fourth, we analyze the robustness of these estimators to statistical assumptions, in order to understand which aspects of an evaluation are affected by which assumptions and to guide the development of new collections and new measures. All the results in this paper are fully reproducible with data and code available online.
Final publication (DOI): 10.1007/s10791-015-9274-y
Additional material:
Data and code: https://github.com/julian-urbano/irj2015-reliability
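The Kendall tau and tau AP coefficients mentioned in the abstract measure how closely the ranking of systems produced by a test collection matches a reference ranking of their true scores, with tau AP weighing disagreements near the top of the ranking more heavily. Below is a minimal, illustrative Python sketch of both coefficients; it is not the paper's code (which is in the repository above), and the system scores used in the example are hypothetical.

```python
# Illustrative sketch: Kendall tau and tau AP between a "true" ranking of
# systems and the ranking induced by a test collection. Hypothetical data.

def kendall_tau(true_scores, observed_scores):
    """Kendall tau between the rankings induced by two score vectors (assumes no ties)."""
    n = len(true_scores)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            true_order = true_scores[i] - true_scores[j]
            obs_order = observed_scores[i] - observed_scores[j]
            if true_order * obs_order > 0:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def tau_ap(true_scores, observed_scores):
    """AP rank correlation (Yilmaz et al., 2008): swaps near the top weigh more.
    Systems are traversed in the order induced by the observed (collection) scores."""
    n = len(true_scores)
    # systems sorted from best to worst according to the observed scores
    by_observed = sorted(range(n), key=lambda s: -observed_scores[s])
    total = 0.0
    for i in range(1, n):  # skip the top-ranked system
        s_i = by_observed[i]
        # number of systems ranked above s_i that are also better in the true ranking
        correct = sum(1 for s_j in by_observed[:i] if true_scores[s_j] > true_scores[s_i])
        total += correct / i
    return 2 * total / (n - 1) - 1

if __name__ == "__main__":
    # hypothetical mean effectiveness of 5 systems: true vs. estimated from a small collection
    true_means = [0.31, 0.28, 0.25, 0.22, 0.18]
    observed_means = [0.30, 0.29, 0.21, 0.24, 0.17]  # two mid-ranked systems swapped
    print("Kendall tau:", kendall_tau(true_means, observed_means))
    print("tau AP:     ", tau_ap(true_means, observed_means))
```

With these hypothetical scores, a single swap between two mid-ranked systems gives tau = 0.8 and tau AP ≈ 0.83; had the swap involved the top two systems, tau would stay the same while tau AP would drop further, which is the property that makes tau AP attractive for system rankings.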