
Computer Vision – ECCV 2022

Overview of attention for this book

Table of Contents

  Book Overview
  Chapter 1: A Simple Approach and Benchmark for 21,000-Category Object Detection
  Chapter 2: Knowledge Condensation Distillation
  Chapter 3: Reducing Information Loss for Spiking Neural Networks
  Chapter 4: Masked Generative Distillation
  Chapter 5: Fine-grained Data Distribution Alignment for Post-Training Quantization
  Chapter 6: Learning with Recoverable Forgetting
  Chapter 7: Efficient One Pass Self-distillation with Zipf’s Label Smoothing
  Chapter 8: Prune Your Model Before Distill It
  Chapter 9: Deep Partial Updating: Towards Communication Efficient Updating for On-Device Inference
  Chapter 10: Patch Similarity Aware Data-Free Quantization for Vision Transformers
  Chapter 11: L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training
  Chapter 12: Streaming Multiscale Deep Equilibrium Models
  Chapter 13: Symmetry Regularization and Saturating Nonlinearity for Robust Quantization
  Chapter 14: SP-Net: Slowly Progressing Dynamic Inference Networks
  Chapter 15: Equivariance and Invariance Inductive Bias for Learning from Insufficient Data
  Chapter 16: Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance
  Chapter 17: Event Neural Networks
  Chapter 18: EdgeViTs: Competing Light-Weight CNNs on Mobile Devices with Vision Transformers
  Chapter 19: PalQuant: Accelerating High-Precision Networks on Low-Precision Accelerators
  Chapter 20: Disentangled Differentiable Network Pruning
  Chapter 21: IDa-Det: An Information Discrepancy-Aware Distillation for 1-Bit Detectors
  Chapter 22: Learning to Weight Samples for Dynamic Early-Exiting Networks
  Chapter 23: AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
  Chapter 24: Adaptive Token Sampling for Efficient Vision Transformers
  Chapter 25: Weight Fixing Networks
  Chapter 26: Self-slimmed Vision Transformer
  Chapter 27: Switchable Online Knowledge Distillation
  Chapter 28: ℓ∞-Robustness and Beyond: Unleashing Efficient Adversarial Training
  Chapter 29: Multi-granularity Pruning for Model Acceleration on Mobile Devices
  Chapter 30: Deep Ensemble Learning by Diverse Knowledge Distillation for Fine-Grained Object Classification
  Chapter 31: Helpful or Harmful: Inter-task Association in Continual Learning
  Chapter 32: Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies
  Chapter 33: SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks
  Chapter 34: Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning
  Chapter 35: Network Binarization via Contrastive Learning
  Chapter 36: Lipschitz Continuity Retained Binary Neural Network
  Chapter 37: SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning
  Chapter 38: Soft Masking for Cost-Constrained Channel Pruning
  Chapter 39: Non-uniform Step Size Quantization for Accurate Post-training Quantization
  Chapter 40: SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning
  Chapter 41: Meta-GF: Training Dynamic-Depth Neural Networks Harmoniously
  Chapter 42: Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning
  Chapter 43: Towards Accurate Network Quantization with Equivalent Smooth Regularizer
Attention for Chapter 10: Patch Similarity Aware Data-Free Quantization for Vision Transformers

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (54th percentile)
  • Good Attention Score compared to outputs of the same age and source (72nd percentile)

Mentioned by

  • X (Twitter): 4 users

Citations

  • Dimensions: 4

Readers on

  • Mendeley: 28
Chapter title: Patch Similarity Aware Data-Free Quantization for Vision Transformers
Chapter number: 10
Book title: Computer Vision – ECCV 2022
Published in: arXiv, November 2022
DOI: 10.1007/978-3-031-20083-0_10
Book ISBNs: 978-3-031-20082-3, 978-3-031-20083-0
Authors: Zhikai Li, Liping Ma, Mengjuan Chen, Junrui Xiao, Qingyi Gu

X Demographics

The data shown below were collected from the profiles of 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 28 Mendeley readers of this research output.

Geographical breakdown

  Country    Count   As %
  Unknown    28      100%

Demographic breakdown

  Readers by professional status    Count   As %
  Student > Ph. D. Student          5       18%
  Researcher                        4       14%
  Student > Master                  4       14%
  Student > Bachelor                1       4%
  Other                             1       4%
  Other                             1       4%
  Unknown                           12      43%

  Readers by discipline             Count   As %
  Computer Science                  12      43%
  Engineering                       2       7%
  Unknown                           14      50%
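
The "As %" columns above follow directly from the reader counts over the 28-reader total. A minimal Python sketch of that arithmetic; the counts are transcribed from the tables above, and rounding to the nearest whole percent is an assumption inferred from the displayed figures:

    # Reader counts transcribed from the tables above; this checks that
    # each "As %" value is count / 28 rounded to the nearest whole percent
    # (the rounding convention is an assumption).
    readers_by_status = [
        ("Student > Ph. D. Student", 5),
        ("Researcher", 4),
        ("Student > Master", 4),
        ("Student > Bachelor", 1),
        ("Other", 1),
        ("Other", 1),
        ("Unknown", 12),
    ]

    total = sum(count for _, count in readers_by_status)  # 28
    for status, count in readers_by_status:
        print(f"{status}: {count} readers ({round(100 * count / total)}%)")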
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 January 2023.
  • All research outputs: #14,338,684 of 24,093,053 outputs
  • Outputs from arXiv: #237,404 of 1,020,419 outputs
  • Outputs of similar age: #184,941 of 429,820 outputs
  • Outputs of similar age from arXiv: #10,195 of 41,498 outputs
Altmetric has tracked 24,093,053 research outputs across all sources so far. This one is in the 39th percentile: 39% of other outputs scored the same as or lower than it.
So far Altmetric has tracked 1,020,419 research outputs from this source, with a mean Attention Score of 4.0. Although this output's score of 2 is below that mean, attention scores are heavily skewed toward a few highly shared outputs, so it still scored higher than 74% of its peers.
Older research outputs tend to score higher simply because they have had more time to accumulate mentions. To account for age, this Attention Score can be compared to the 429,820 tracked outputs published within six weeks on either side of this one in any source; it scored higher than 54% of those contemporaries.
Compared to the 41,498 outputs from the same source published within six weeks on either side of this one, it scored higher than 72% of its contemporaries.
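
The percentile claims above can be sanity-checked against the rank figures in the list. A minimal Python sketch, assuming a naive rank-based percentile (the share of outputs ranked strictly below this one); Altmetric's published percentiles handle ties at the same score, which is presumably why the naive estimates run a few points above the reported values:

    # Rough check of the percentiles from the "#rank of N outputs" figures
    # above. Higher rank number = lower score, so the naive percentile is
    # the share of outputs ranked below this one. Ties at the same score
    # push the real figure below this estimate.
    def naive_percentile(rank: int, total: int) -> float:
        return 100.0 * (total - rank) / total

    # (context, rank, total, percentile reported on this page)
    rankings = [
        ("All research outputs",              14_338_684, 24_093_053, 39),
        ("Outputs from arXiv",                   237_404,  1_020_419, 74),
        ("Outputs of similar age",               184_941,    429_820, 54),
        ("Outputs of similar age from arXiv",     10_195,     41_498, 72),
    ]

    for context, rank, total, reported in rankings:
        est = naive_percentile(rank, total)
        print(f"{context}: naive {est:.0f}th vs reported {reported}th percentile")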