
Comparative Evaluation of Focused Retrieval


Table of Contents

  1. Book Overview
  2. Chapter 1 Overview of the INEX 2010 Ad Hoc Track
  3. Chapter 2 The Potential Benefit of Focused Retrieval in Relevant-in-Context Task
  4. Chapter 3 ENSM-SE and UJM at INEX 2010: Scoring with Proximity and Tag Weights
  5. Chapter 4 LIP6 at INEX’10: OWPC for Ad Hoc Track
  6. Chapter 5 A Useful Method for Producing Competitive Ad Hoc Task Results
  7. Chapter 6 Relaxed Global Term Weights for XML Element Search
  8. Chapter 7 Searching the Wikipedia with Public Online Search Engines
  9. Chapter 8 Extended Language Models for XML Element Retrieval
  10. Chapter 9 Overview of the INEX 2010 Book Track: Scaling Up the Evaluation Using Crowdsourcing
  11. Chapter 10 LIA at INEX 2010 Book Track
  12. Chapter 11 The Book Structure Extraction Competition with the Resurgence Software for Part and Chapter Detection at Caen University
  13. Chapter 12 Focus and Element Length for Book and Wikipedia Retrieval
  14. Chapter 13 Combining Page Scores for XML Book Retrieval
  15. Chapter 14 OUC’s Participation in the 2010 INEX Book Track
  16. Chapter 15 Overview of the INEX 2010 Data Centric Track
  17. Chapter 16 DCU and ISI@INEX 2010: Adhoc and Data-Centric Tracks
  18. Chapter 17 Automatically Generating Structured Queries in XML Keyword Search
  19. Chapter 18 UPF at INEX 2010: Towards Query-Type Based Focused Retrieval
  20. Chapter 19 BUAP: A First Approach to the Data-Centric Track of INEX 2010
  21. Chapter 20 Overview of the INEX 2010 Interactive Track
  22. Chapter 21 Using Eye-Tracking for the Evaluation of Interactive Information Retrieval
  23. Chapter 22 Overview of the INEX 2010 Link the Wiki Track
  24. Chapter 23 University of Otago at INEX 2010
  25. Chapter 24 Overview of the INEX 2010 Question Answering Track (QA@INEX)
  26. Chapter 25 The GIL Summarizers: Experiments in the Track QA@INEX’10
  27. Chapter 26 The Cortex Automatic Summarization System at the QA@INEX Track 2010
  28. Chapter 27 The REG Summarization System with Question Reformulation at QA@INEX Track 2010
  29. Chapter 28 Overview of the INEX 2010 Focused Relevance Feedback Track
  30. Chapter 29 Exploring Accumulative Query Expansion for Relevance Feedback
  31. Chapter 30 Combining Strategies for XML Retrieval
  32. Chapter 31 Overview of the INEX 2010 Web Service Discovery Track
  33. Chapter 32 Semantics-Based Web Service Discovery Using Information Retrieval Techniques
  34. Chapter 33 The BUAP Participation at the Web Service Discovery Track of INEX 2010
  35. Chapter 34 XML Retrieval More Efficient Using Double Scoring Scheme
  36. Chapter 35 Overview of the INEX 2010 XML Mining Track: Clustering and Classification of XML Documents
  37. Chapter 36 An Iterative Clustering Method for the XML-Mining Task of the INEX 2010
  38. Chapter 37 PKU at INEX 2010 XML Mining Track
Attention for Chapter 32: Semantics-Based Web Service Discovery Using Information Retrieval Techniques

Mentioned by

  Facebook: 1 Facebook page

Citations

  Dimensions: 4

Readers on

  Mendeley: 9
Chapter title: Semantics-Based Web Service Discovery Using Information Retrieval Techniques
Chapter number: 32
Book title: Comparative Evaluation of Focused Retrieval
Published in: Lecture Notes in Computer Science, January 2011
DOI: 10.1007/978-3-642-23577-1_32
Book ISBNs: 978-3-642-23576-4, 978-3-642-23577-1
Authors: Jun Hou, Jinglan Zhang, Richi Nayak, Aishwarya Bose

Mendeley readers

The data shown below were compiled from readership statistics for the 9 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    9        100%

Demographic breakdown

Readers by professional status     Count    As %
Student > Bachelor                 2        22%
Student > Doctoral Student         1        11%
Professor                          1        11%
Student > Ph. D. Student           1        11%
Professor > Associate Professor    1        11%
Other                              1        11%
Unknown                            2        22%

Readers by discipline                  Count    As %
Computer Science                       4        44%
Business, Management and Accounting    1        11%
Engineering                            1        11%
Unknown                                3        33%
Attention Score in Context

This research output has an Altmetric Attention Score of 1, a high-level measure of the quality and quantity of online attention it has received. This score, along with the rankings and output counts shown below, was calculated when the research output was last mentioned, on 11 March 2012.
All research outputs: #20,156,138 of 22,663,969 outputs
Outputs from Lecture Notes in Computer Science: #6,984 of 8,123 outputs
Outputs of similar age: #169,812 of 180,283 outputs
Outputs of similar age from Lecture Notes in Computer Science: #278 of 318 outputs
Altmetric has tracked 22,663,969 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 8,123 research outputs from this source. They receive a mean Attention Score of 5.0. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 180,283 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 318 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.