
Multilingual and Multimodal Information Access Evaluation

Second International Conference of the Cross-Language Evaluation Forum, CLEF 2011, Amsterdam, The Netherlands, September 19-22, 2011, Proceedings

  • Conference proceedings
  • © 2011


Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6941)



Table of contents (16 papers)

  1. Methodologies and Lessons

  2. Language and Processing

  3. Visual and Context


About this book

This book constitutes the refereed proceedings of the Second International Conference on Multilingual and Multimodal Information Access Evaluation, CLEF 2011, held in Amsterdam, The Netherlands, in September 2011, in continuation of the popular CLEF campaigns and workshops that have run for the past decade.
The 14 revised full papers presented together with 2 keynote talks were carefully reviewed and selected from numerous submissions. The accepted papers cover research on evaluation methods and settings, natural language processing across different domains and languages, multimedia, and reflections on CLEF. Two keynote speakers highlighted important developments in the field of evaluation: the role of users in evaluation, and a framework for the use of crowdsourcing experiments in retrieval evaluation.

Editors and Affiliations

  • Center for the Evaluation of Language and Communication Technologies (CELCT), Povo, Italy

    Pamela Forner

  • National University of Distance Education, E.T.S.I. Informática de la UNED, Madrid, Spain

    Julio Gonzalo

  • School of Information Sciences, University of Tampere, Tampere, Finland

    Jaana Kekäläinen

  • Yahoo! Research, Barcelona, Spain

    Mounia Lalmas

  • Intelligent Systems Laboratory, University of Amsterdam, Amsterdam, The Netherlands

    Maarten de Rijke
