
Multilingual and Multimodal Information Access Evaluation

International Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010, Proceedings

Agosti, M., Ferro, N., Peters, C., de Rijke, M., Smeaton, A. (Eds.)

2010, XIII, 145p.

eBook ISBN: 978-3-642-15998-5 (PDF, digitally watermarked, DRM-free)


Softcover ISBN: 978-3-642-15997-8

In its first ten years of activities (2000-2009), the Cross-Language Evaluation Forum (CLEF) played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain, such as cross-language question answering, image and geographic information retrieval, interactive retrieval, and many more. It also promoted the study and implementation of appropriate evaluation methodologies for these diverse types of tasks and media. As a result, CLEF has been extremely successful in building a wide, strong, and multidisciplinary research community, which covers and spans the different areas of expertise needed to deal with the spread of CLEF tracks and tasks. This constantly growing and almost completely voluntary community has dedicated an incredible amount of effort to making CLEF happen and is at the core of the CLEF achievements.

CLEF 2010 represented a radical innovation of the “classic CLEF” format and an experiment aimed at understanding how “next generation” evaluation campaigns might be structured. We had to face the problem of how to innovate CLEF while still preserving its traditional core business, namely the benchmarking activities carried out in the various tracks and tasks. The consensus, after lively and community-wide discussions, was to make CLEF an independent four-day event, no longer organized in conjunction with the European Conference on Research and Advanced Technology for Digital Libraries (ECDL), where CLEF had been run as a two-and-a-half-day workshop. CLEF 2010 thus consisted of two main parts: a peer-reviewed conference – the first two days – and a series of laboratories and workshops – the second two days.

Content Level » Research

Keywords » Design - MapReduce - comparable corpora - corpus - cross-language - cross-language queries - cross-lingual - data mining - evaluation - image retrieval - information retrieval - intrinsic plagiarism analysis - medical images - meta learning - natural language process

Related subjects » Artificial Intelligence - Database Management & Information Retrieval - HCI - Linguistics

Table of contents 

Keynote Addresses
- IR between Science and Engineering, and the Role of Experimentation
- Retrieval Evaluation in Practice
Resources, Tools, and Methods
- A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages
- A New Approach for Cross-Language Plagiarism Analysis
- Creating a Persian-English Comparable Corpus
Experimental Collections and Datasets (1)
- Validating Query Simulators: An Experiment Using Commercial Searches and Purchases
- Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation
Experimental Collections and Datasets (2)
- MapReduce for Information Retrieval Evaluation: “Let’s Quickly Test This on 12 TB of Data”
- Which Log for Which Information? Gathering Multilingual Data from Different Log File Types
Evaluation Methodologies and Metrics (1)
- Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements
- On the Evaluation of Entity Profiles
Evaluation Methodologies and Metrics (2)
- Evaluating Information Extraction
- Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation
- Automated Component-Level Evaluation: Present and Future
Panels
- The Four Ladies of Experimental Evaluation
- A PROMISE for Experimental Evaluation
