Evaluating Multimedia and Language Tasks
- PMID: 33733150
- PMCID: PMC7861343
- DOI: 10.3389/frai.2020.00032
Abstract
Evaluating information access tasks, including textual and multimedia search, question answering, and understanding, has been the core mission of NIST's Retrieval Group since 1989. The TRECVID evaluations of multimedia access began in 2001 with the goal of driving content-based search technology for multimedia, just as their progenitor, the Text Retrieval Conference (TREC), did for text and web search.
Keywords: annotation; evaluation; information retrieval (IR); metrics; multimedia.
Copyright © 2020 Soboroff, Awad, Butt and Curtis.