Institutional Repository
Technical University of Crete

Explainable natural language processing with matrix product states

Tangpanitanon Jirawat, Mangkang Chanatip, Bhadola Pradeep, Minato Yuichiro, Angelakis Dimitrios, Chotibut Thiparat

URI: http://purl.tuc.gr/dl/dias/F1751C5C-0B04-4E57-A7EC-CDD589CF9873
Identifier: https://doi.org/10.1088/1367-2630/ac6232
Identifier: https://iopscience.iop.org/article/10.1088/1367-2630/ac6232
Language: en
Extent: 16 pages
Title: Explainable natural language processing with matrix product states
Creator: Tangpanitanon Jirawat
Creator: Mangkang Chanatip
Creator: Bhadola Pradeep
Creator: Minato Yuichiro
Creator: Angelakis Dimitrios
Creator: Αγγελακης Δημητριος
Publisher: IOP Publishing
Content Summary: Despite the empirical successes of recurrent neural networks (RNNs) in natural language processing (NLP), theoretical understanding of RNNs remains limited due to their intrinsically complex, non-linear computations. We systematically analyze RNNs' behavior in a ubiquitous NLP task, the sentiment analysis of movie reviews, via the mapping between a class of RNNs called recurrent arithmetic circuits (RACs) and a matrix product state (MPS). Using the von Neumann entanglement entropy (EE) as a proxy for information propagation, we show that single-layer RACs possess a maximum information propagation capacity, reflected by the saturation of the EE. Enlarging the bond dimension beyond the EE saturation threshold does not increase model prediction accuracies, so a minimal model that best estimates the data statistics can be inferred. Although the saturated EE is smaller than the maximum EE allowed by the area law, our minimal model still achieves ~99% training accuracies on realistic sentiment analysis data sets. Thus, low EE is not a warrant against the adoption of single-layer RACs for NLP. Contrary to the common belief that long-range information propagation is the main source of RNNs' successes, we show that single-layer RACs harness high expressiveness from the subtle interplay between the information propagation and the word vector embeddings. Our work sheds light on the phenomenology of learning in RACs and, more generally, on the explainability of RNNs for NLP, using tools from many-body quantum physics.
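
As context for the abstract's use of the entanglement entropy (EE): across any bipartition of a matrix product state with bond dimension χ, the state has at most χ nonzero Schmidt coefficients, so the EE of that cut is bounded by ln χ. The sketch below is illustrative only (not the authors' code; the function name and example state are assumptions), showing how the von Neumann EE of a bipartition of a generic pure state is computed from its Schmidt coefficients via an SVD.

    import numpy as np

    def von_neumann_entropy(psi, dim_left, dim_right):
        """Von Neumann EE of a pure state across a left/right bipartition:
        S = -sum_i p_i ln p_i, where p_i = lambda_i^2 are the squared
        Schmidt coefficients (singular values of the reshaped state)."""
        theta = psi.reshape(dim_left, dim_right)      # matrix over the cut
        lam = np.linalg.svd(theta, compute_uv=False)  # Schmidt coefficients
        p = lam**2
        p = p[p > 1e-12]                              # drop numerically zero weights
        return float(-np.sum(p * np.log(p)))

    # Example: a random 4-qubit state, cut into 2 + 2 qubits (dims 4 and 4).
    rng = np.random.default_rng(0)
    psi = rng.normal(size=16) + 1j * rng.normal(size=16)
    psi /= np.linalg.norm(psi)
    print(von_neumann_entropy(psi, dim_left=4, dim_right=4))  # bounded by ln(4)

For an MPS cut with bond dimension χ, lam has at most χ nonzero entries, giving S ≤ ln χ; the saturation reported in the abstract occurs below this bound, which is why enlarging χ further does not improve prediction accuracy.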
Type of Item: Peer-Reviewed Journal Publication
License: http://creativecommons.org/licenses/by/4.0/
Date of Item: 2024-03-04
Date of Publication: 2022
Subject: Matrix product state
Subject: Entanglement entropy
Subject: Entanglement spectrum
Subject: Quantum machine learning
Subject: Natural language processing
Subject: Recurrent neural networks
Bibliographic Citation: J. Tangpanitanon, C. Mangkang, P. Bhadola, Y. Minato, D. G. Angelakis and T. Chotibut, "Explainable natural language processing with matrix product states," New J. Phys., vol. 24, no. 5, May 2022, doi: 10.1088/1367-2630/ac6232.
