Database and Expert Systems Applications

Overview of attention for book

Table of Contents

  Book Overview
  Chapter 1 Looking into the Peak Memory Consumption of Epoch-Based Reclamation in Scalable in-Memory Database Systems
  Chapter 2 Energy Efficient Data Placement and Buffer Management for Multiple Replication
  Chapter 3 Querying Knowledge Graphs with Natural Languages
  Chapter 4 Explaining Query Answer Completeness and Correctness with Partition Patterns
  Chapter 5 Research Paper Search Using a Topic-Based Boolean Query Search and a General Query-Based Ranking Model
  Chapter 6 Extractive Document Summarization using Non-negative Matrix Factorization
  Chapter 7 Succinct BWT-Based Sequence Prediction
  Chapter 8 TRR: Reducing Crowdsourcing Task Redundancy
  Chapter 9 Software Resource Recommendation for Process Execution Based on the Organization’s Profile
  Chapter 10 An Experiment to Analyze the Use of Process Modeling Guidelines to Create High-Quality Process Models
  Chapter 11 Novel Node Importance Measures to Improve Keyword Search over RDF Graphs
  Chapter 12 Querying in a Workload-Aware Triplestore Based on NoSQL Databases
  Chapter 13 Reverse Partitioning for SPARQL Queries: Principles and Performance Analysis
  Chapter 14 PFed: Recommending Plausible Federated SPARQL Queries
  Chapter 15 Representing and Reasoning About Precise and Imprecise Time Points and Intervals in Semantic Web: Dealing with Dates and Time Clocks
  Chapter 16 Context-Aware Multi-criteria Recommendation Based on Spectral Graph Partitioning
  Chapter 17 SilverChunk: An Efficient In-Memory Parallel Graph Processing System
  Chapter 18 A Modular Approach for Efficient Simple Question Answering Over Knowledge Base
  Chapter 19 Scalable Machine Learning in the R Language Using a Summarization Matrix
  Chapter 20 ML-PipeDebugger: A Debugging Tool for Data Processing Pipelines
  Chapter 21 Correlation Set Discovery on Time-Series Data
  Chapter 22 Anomaly Subsequence Detection with Dynamic Local Density for Time Series
  Chapter 23 Trajectory Similarity Join for Spatial Temporal Database
  Chapter 24 Multiviewpoint-Based Agglomerative Hierarchical Clustering
  Chapter 25 Triplet-CSSVM: Integrating Triplet-Sampling CNN and Cost-Sensitive Classification for Imbalanced Image Detection
  Chapter 26 Discovering Partial Periodic High Utility Itemsets in Temporal Databases
  Chapter 27 Using Mandatory Concepts for Knowledge Discovery and Data Structuring
  Chapter 28 Topological Data Analysis with ε-net Induced Lazy Witness Complex
  Chapter 29 Analyzing Sequence Pattern Variants in Sequential Pattern Mining and Its Application to Electronic Medical Record Systems
  Chapter 30 Composing Distributed Data-Intensive Web Services Using Distance-Guided Memetic Algorithm
  Chapter 31 Keyword Search Based Mashup Construction with Guaranteed Diversity
  Chapter 32 Using EDA-Based Local Search to Improve the Performance of NSGA-II for Multiobjective Semantic Web Service Composition
  Chapter 33 Adaptive Caching for Data-Intensive Scientific Workflows in the Cloud

Attention for Chapter 7: Succinct BWT-Based Sequence Prediction

About this Attention Score

  • Good Attention Score compared to outputs of the same age (70th percentile)
  • Good Attention Score compared to outputs of the same age and source (77th percentile)

Mentioned by

  • X (Twitter): 3 X users
  • Wikipedia: 1 Wikipedia page

Readers on

  • Mendeley: 8 readers
Chapter title: Succinct BWT-Based Sequence Prediction
Chapter number: 7
Book title: Database and Expert Systems Applications
Published in: figshare, August 2019
DOI: 10.1007/978-3-030-27618-8_7
Book ISBNs: 978-3-030-27617-1, 978-3-030-27618-8
Authors: Rafael Ktistakis, Philippe Fournier-Viger, Simon J. Puglisi, Rajeev Raman

Abstract

Sequences of symbols can be used to represent data in many domains such as text documents, activity logs, customer transactions and website click-streams. Sequence prediction is a popular task, which consists of predicting the next symbol of a sequence, given a set of training sequences. Although numerous prediction models have been proposed, many have a low accuracy because they are lossy models (they discard information from training sequences to build the model), while lossless models are often more accurate but typically consume a large amount of memory. This paper addresses these issues by proposing a novel sequence prediction model named SuBSeq that is lossless and utilizes the succinct Wavelet Tree data structure and the Burrows-Wheeler Transform to compactly store and efficiently access training sequences for prediction. An experimental evaluation shows that SuBSeq has a very low memory consumption and excellent accuracy when compared to eight state-of-the-art predictors on seven real datasets.
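
The minimal Python sketch below illustrates the general idea described in the abstract: locate occurrences of the longest suffix of the input sequence in the training data and vote on the symbol that follows each match. It is a toy approximation only; SuBSeq performs this lookup through a Burrows-Wheeler Transform indexed by a succinct Wavelet Tree, whereas this sketch uses plain string search, and the function name predict_next is a hypothetical placeholder rather than part of the published model.

  # Toy sketch of suffix-matching sequence prediction (illustrative only, not the SuBSeq implementation).
  # SuBSeq performs this matching over a BWT indexed by a Wavelet Tree; plain string search stands in here.
  from collections import Counter

  def predict_next(training, query, max_suffix=8):
      """Vote on the symbol that most often follows the longest suffix of `query` seen in training."""
      for k in range(min(max_suffix, len(query)), 0, -1):
          suffix = query[-k:]
          votes = Counter()
          for seq in training:
              start = seq.find(suffix)
              while start != -1:
                  follow = start + len(suffix)
                  if follow < len(seq):          # a symbol follows this occurrence
                      votes[seq[follow]] += 1
                  start = seq.find(suffix, start + 1)
          if votes:                              # longest matching suffix wins
              return votes.most_common(1)[0][0]
      return None                                # no suffix of the query occurs in training

  # Example: sequences as strings of symbols (e.g. click-stream events)
  train = ["abcabd", "abcabc", "zabcab"]
  print(predict_next(train, "abca"))             # prints 'b'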

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 8 Mendeley readers of this research output.

Geographical breakdown

Country                                         Count    As %
Unknown                                             8    100%

Demographic breakdown

Readers by professional status                  Count    As %
Professor                                           2     25%
Student > Ph. D. Student                            1     13%
Student > Doctoral Student                          1     13%
Student > Master                                    1     13%
Unknown                                             3     38%

Readers by discipline                           Count    As %
Computer Science                                    2     25%
Biochemistry, Genetics and Molecular Biology        1     13%
Environmental Science                               1     13%
Chemistry                                           1     13%
Materials Science                                   1     13%
Other                                               0      0%
Unknown                                             2     25%
Attention Score in Context

This research output has an Altmetric Attention Score of 6. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 July 2023.
All research outputs:                   #6,029,899 of 24,071,812 outputs
Outputs from figshare:                  #5,142 of 24,546 outputs
Outputs of similar age:                 #101,942 of 348,235 outputs
Outputs of similar age from figshare:   #123 of 562 outputs
Altmetric has tracked 24,071,812 research outputs across all sources so far. This one has received more attention than most of these and is in the 74th percentile.
So far Altmetric has tracked 24,546 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.0. This one has done well, scoring higher than 78% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we compare this Altmetric Attention Score to the 348,235 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 70% of its contemporaries.
We're also able to compare this research output to 562 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 77% of its contemporaries.
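
As a rough cross-check of the figures above, each percentile can be approximated from the rank and pool size in the table; this is an assumption about the calculation, and Altmetric's exact rounding and tie handling may shift each value by a point or so.

  # Approximate the context percentiles from rank and pool size (assumed formula).
  pools = {
      "all research outputs": (6_029_899, 24_071_812),
      "outputs from figshare": (5_142, 24_546),
      "outputs of similar age": (101_942, 348_235),
      "similar age, from figshare": (123, 562),
  }
  for label, (rank, total) in pools.items():
      percentile = 100 * (1 - rank / total)    # share of outputs ranked below this one
      print(f"{label}: #{rank:,} of {total:,} -> ~{percentile:.0f}th percentile")
  # Prints roughly 75, 79, 71 and 78, close to the 74th/78%/70%/77% figures quoted above.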