Institutional Repository
Technical University of Crete

Accelerating binarized convolutional neural networks with dynamic partial reconfiguration on disaggregated FPGAs

Skrimponis Panagiotis, Pissadakis Emmanouil, Alachiotis Nikolaos, Pnevmatikatos Dionysios



URI: http://purl.tuc.gr/dl/dias/12E9AEC6-B0B9-4E11-8E32-9BC8C8E704DB
Year: 2020
Type: Book Chapter
Bibliographic Citation: P. Skrimponis, E. Pissadakis, N. Alachiotis, and D. Pnevmatikatos, "Accelerating binarized convolutional neural networks with dynamic partial reconfiguration on disaggregated FPGAs," in Parallel Computing: Technology Trends, vol. 36, Advances in Parallel Computing, I. Foster, G. R. Joubert, L. Kučera, W. E. Nagel, and F. Peters, Eds. Amsterdam, The Netherlands: IOS Press, 2020, pp. 691-700, doi: 10.3233/APC200099.

Abstract

Convolutional Neural Networks (CNNs) currently dominate the fields of artificial intelligence and machine learning due to their high accuracy. However, their computational and memory needs intensify with the complexity of the problems they are deployed to address, frequently requiring highly parallel and/or accelerated solutions. Recent advances in machine learning have showcased the potential of CNNs with reduced precision that rely on binarized weights and activations, leading to Binarized Neural Networks (BNNs). Due to the embarrassingly parallel nature and discrete arithmetic of the required operations, BNNs map well to FPGA technology, allowing problem complexity to be scaled up considerably. However, the fixed amount of resources per chip introduces an upper bound on the dimensions of the problems that FPGA-accelerated BNNs can solve. To this end, we explore the potential of remote FPGAs operating in tandem within a disaggregated computing environment to accelerate BNN computations, and exploit dynamic partial reconfiguration (DPR) to boost aggregate system performance. We find that DPR alone boosts the throughput of a fixed set of BNN accelerators deployed on a remote FPGA by up to 3x in comparison with a static design that deploys the same accelerator cores on a software-programmable FPGA locally. In addition, performance increases linearly with the number of remote devices when inter-FPGA communication is reduced. To exploit DPR on remote FPGAs and reduce communication, we adopt a versatile remote-accelerator deployment framework for disaggregated datacenters, thereby boosting BNN performance with negligible development effort.
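To illustrate the discrete arithmetic the abstract refers to, the C sketch below shows the XNOR-popcount form of a binarized dot product, the core BNN operation that makes these networks attractive for FPGA logic. This is an assumed, minimal illustration, not the authors' accelerator code; the function name binarized_dot and the bit-packing convention (+1 stored as 1, -1 stored as 0) are hypothetical choices for the example.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch (assumption, not the paper's implementation):
 * with weights and activations packed one bit per value (+1 -> 1, -1 -> 0),
 * a multiply-accumulate collapses into XNOR followed by a population count. */
static int binarized_dot(const uint64_t *w, const uint64_t *x,
                         int n_words, int n_bits)
{
    int matches = 0;
    for (int i = 0; i < n_words; i++) {
        uint64_t agree = ~(w[i] ^ x[i]);            /* XNOR: 1 where signs agree */
        matches += __builtin_popcountll(agree);     /* count agreeing positions  */
    }
    matches -= n_words * 64 - n_bits;               /* padding bits always "agree"; discard them */
    return 2 * matches - n_bits;                    /* matches minus mismatches  */
}

int main(void)
{
    /* 64 binarized weights/activations packed into one 64-bit word each. */
    uint64_t w[1] = { 0xF0F0F0F0F0F0F0F0ULL };
    uint64_t x[1] = { 0xFF00FF00FF00FF00ULL };
    printf("dot = %d\n", binarized_dot(w, x, 1, 64));
    return 0;
}

Because the inner loop is just bitwise logic and a popcount over independent words, the same computation unrolls naturally into wide, fully parallel FPGA datapaths, which is the property the paper exploits when scaling across remote, partially reconfigurable devices.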
