Title: SPARTAN: using constrained models for guaranteed-error semantic compression
Type: Peer-Reviewed Journal Publication
Record date: 2015-10-29
Publication year: 2002
Language: en

Abstract: While a variety of lossy compression schemes have been developed for certain forms of digital data (e.g., images, audio, video), the area of lossy compression techniques for arbitrary data tables has been left relatively unexplored. Nevertheless, such techniques are clearly motivated by the ever-increasing data collection rates of modern enterprises and the need for effective, guaranteed-quality approximate answers to queries over massive relational data sets. In this paper, we propose SPARTAN, a system that takes advantage of attribute semantics and data-mining models to perform lossy compression of massive data tables. SPARTAN is based on the novel idea of exploiting predictive data correlations and prescribed error-tolerance constraints for individual attributes to construct concise and accurate Classification and Regression Tree (CaRT) models for entire columns of a table. More precisely, SPARTAN selects a certain subset of attributes (referred to as predicted attributes) for which no values are explicitly stored in the compressed table; instead, concise error-constrained CaRTs that predict these values (within the prescribed error tolerances) are maintained. To restrict the huge search space of possible CaRT predictors, SPARTAN uses a Bayesian network structure to guide the selection of CaRT models that minimize the overall storage requirement, based on the prediction and materialization costs for each attribute. SPARTAN's CaRT-building algorithms employ novel integrated pruning strategies that take advantage of the given error constraints on individual attributes to minimize the computational effort involved. Our experimentation with several real-life data sets offers convincing evidence of the effectiveness of SPARTAN's model-based approach: SPARTAN consistently yields substantially better compression ratios than existing semantic or syntactic compression tools (e.g., gzip) while using only small samples of the data for model inference.

License: http://creativecommons.org/licenses/by/4.0/
Journal: SIGKDD Explorations: Newsletter of the Special Interest Group (SIG) on Knowledge Discovery & Data Mining, Volume 4, Issue 1, pp. 11-20
Authors: Shivnath Babu; Minos Garofalakis (Γαροφαλακης Μινως); Rajeev Rastogi
Publisher: Association for Computing Machinery
Keywords: Semantics; Data mining
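The core mechanism the abstract describes (fit a compact CaRT for a column and store the model in place of the column only if the prescribed per-value error tolerance is met) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's DecisionTreeRegressor as a stand-in for SPARTAN's CaRT builder, and the `try_predict_column` helper, column names, and tolerance values are hypothetical.

```python
# Illustrative sketch only -- not SPARTAN's actual implementation.
# Core idea: fit a small regression tree that predicts one column from
# others, and keep the tree in place of the column only if the worst-case
# prediction error stays within the prescribed tolerance.
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # stand-in for a CaRT builder

def try_predict_column(X_predictors, y_target, error_tolerance, max_leaf_nodes=32):
    """Return a fitted tree if every prediction is within the error bound, else None."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
    tree.fit(X_predictors, y_target)
    residuals = np.abs(tree.predict(X_predictors) - y_target)
    if residuals.max() <= error_tolerance:   # guaranteed per-tuple error bound
        return tree                          # store the model, drop the column
    return None                              # tolerance violated: materialize the column

# Hypothetical usage: try to predict a 'salary' column from 'age' and 'years_of_service'.
rng = np.random.default_rng(0)
age = rng.integers(22, 65, size=1000)
service = np.clip(age - 22 - rng.integers(0, 5, size=1000), 0, None)
salary = 30000 + 1500 * service + rng.normal(0, 200, size=1000)

X = np.column_stack([age, service])
model = try_predict_column(X, salary, error_tolerance=2000.0)
print("salary column can be predicted within tolerance:", model is not None)
```

In SPARTAN itself, the choice between predicting and materializing each attribute is made jointly across columns, guided by a Bayesian network over the attributes and by the relative prediction and materialization costs; the sketch above shows only the per-column error check.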