
Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data

Overview of attention for this book

Table of Contents

  Book Overview
  Chapter 1 Unsupervised Joint Monolingual Character Alignment and Word Segmentation
  Chapter 2 Improving Multi-pass Transition-Based Dependency Parsing Using Enhanced Shift Actions
  Chapter 3 Diachronic Deviation Features in Continuous Space Word Representations
  Chapter 4 Ontology Matching with Word Embeddings
  Chapter 5 Exploiting Multiple Resources for Word-Phrase Semantic Similarity Evaluation
  Chapter 6 Dependency Graph Based Chinese Semantic Parsing
  Chapter 7 A Joint Learning Approach to Explicit Discourse Parsing via Structured Perceptron
  Chapter 8 Chinese Textual Entailment Recognition Based on Syntactic Tree Clipping
  Chapter 9 Automatic Collection of the Parallel Corpus with Little Prior Knowledge
  Chapter 10 The Chinese-English Contrastive Language Knowledge Base and Its Applications
  Chapter 11 Clustering Product Aspects Using Two Effective Aspect Relations for Opinion Mining
  Chapter 12 Text Classification with Document Embeddings
  Chapter 13 Reasoning Over Relations Based on Chinese Knowledge Bases
  Chapter 14 Distant Supervision for Relation Extraction via Sparse Representation
  Chapter 15 Learning the Distinctive Pattern Space Features for Relation Extraction
  Chapter 16 An Investigation on Statistical Machine Translation with Neural Language Models
  Chapter 17 Using Semantic Structure to Improve Chinese-English Term Translation
  Chapter 18 Query Expansion for Mining Translation Knowledge from Comparable Data
  Chapter 19 A Comparative Study on Simplified-Traditional Chinese Translation
  Chapter 20 Combining Lexical Context with Pseudo-alignment for Bilingual Lexicon Extraction from Comparable Corpora
  Chapter 21 Chinese-English OOV Term Translation with Web Mining, Multiple Feature Fusion and Supervised Learning
  Chapter 22 A Universal Phrase Tagset for Multilingual Treebanks
  Chapter 23 Co-occurrence Degree Based Word Alignment: A Case Study on Uyghur-Chinese
  Chapter 24 Calculation Analysis on Consonant and Character for Corpus Study of Gesar Epic "HorLing"
  Chapter 25 Sentence Level Paraphrase Recognition Based on Different Characteristics Combination
  Chapter 26 Learning Tag Relevance by Context Analysis for Social Image Retrieval
  Chapter 27 ASR-Based Input Method for Postal Address Recognition in Chinese Mandarin
Attention for Chapter 22: A Universal Phrase Tagset for Multilingual Treebanks

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (56th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • Wikipedia: 1 page

Readers on

  • Mendeley: 5 readers
Chapter title: A Universal Phrase Tagset for Multilingual Treebanks
Chapter number: 22
Book title: Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data
Published in: Lecture Notes in Computer Science, January 2016
DOI: 10.1007/978-3-319-12277-9_22
Book ISBNs: 978-3-31-912276-2, 978-3-31-912277-9
Authors: Aaron Li-Feng Han, Derek F. Wong, Lidia S. Chao, Yi Lu, Liangye He, Liang Tian

Mendeley readers

The data shown below were compiled from readership statistics for the 5 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown        5    100%

Demographic breakdown

Readers by professional status     Count    As %
Student > Ph. D. Student               1     20%
Professor > Associate Professor        1     20%
Researcher                             1     20%
Lecturer                               1     20%
Unknown                                1     20%

Readers by discipline              Count    As %
Computer Science                       3     60%
Linguistics                            1     20%
Unknown                                1     20%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 January 2019.
  • All research outputs: #7,453,350 of 22,786,087 outputs
  • Outputs from Lecture Notes in Computer Science: #2,486 of 8,125 outputs
  • Outputs of similar age: #124,745 of 395,009 outputs
  • Outputs of similar age from Lecture Notes in Computer Science: #236 of 515 outputs
Altmetric has tracked 22,786,087 research outputs across all sources so far. This one is in the 44th percentile – i.e., 44% of other outputs scored the same or lower than it.
So far Altmetric has tracked 8,125 research outputs from this source. They receive a mean Attention Score of 5.0. This one has gotten more attention than average, scoring higher than 55% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 395,009 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 56% of its contemporaries.
We're also able to compare this research output to 515 others from the same source and published within six weeks on either side of this one. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
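For readers curious how the percentile figures above relate to raw scores, here is a minimal sketch of a percentile rank computed as "the share of outputs scoring the same or lower", which is how the wording above describes it. The function name and the toy score list are hypothetical; Altmetric's exact ranking procedure is not documented on this page and may differ.

    # Hypothetical sketch, not Altmetric's actual implementation.
    # Percentile rank = percentage of outputs whose score is <= the given score.
    def percentile_rank(score, all_scores):
        """Return the percentage of scores that are at or below `score`."""
        if not all_scores:
            raise ValueError("need at least one score to rank against")
        at_or_below = sum(1 for s in all_scores if s <= score)
        return 100.0 * at_or_below / len(all_scores)

    # Toy example with assumed contemporary scores: a score of 3 ranks at 62.5.
    print(percentile_rank(3, [0, 1, 1, 2, 3, 5, 8, 13]))  # 62.5

Applied to the full set of roughly 395,009 age-matched outputs, the same calculation would yield the "56th percentile" figure reported above.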