"Iterative Compression-Decimation Scheme for Tensor Network Optimization"
This event is part of the Condensed Matter Theory Seminar Series.
Motivated by statistical physics models connected to computational problems, we devise a tensor network technique suited to problems with or without translation invariance and with arbitrary boundary conditions. We introduce a compression-decimation algorithm as an efficient iterative scheme to optimize tensor networks that encode generalized vertex models on regular lattices. The algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network while keeping the tensor dimensions under control, so that ultimately the full tensor trace can be taken for relatively large systems. As a benchmark, we demonstrate the efficiency of the algorithm by computing the ground-state entropy density of the planar ice model and the eight-vertex model. We then apply it to reversible classical computational problems, based on a recently proposed vertex-model representation of classical computations [Nat. Commun. 8, 15303 (2017)]. Our protocol allows us to obtain the exact number of solutions for computations where naive enumeration would take an astronomically long time, suggesting that the algorithm is a promising practical tool for a wide range of problems in physics and computer science.
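To make the two steps concrete, below is a minimal, one-dimensional caricature in Python/NumPy: `compress_bond` performs a single contraction-decomposition step on one shared bond (contract, SVD, discard numerically zero singular values, re-split), and `decimate` merges two neighbors into one coarser tensor. The function names, the rank-2 (matrix) tensors, and the tolerance `tol` are illustrative assumptions, not the speakers' implementation; the actual algorithm sweeps rank-4 tensors on a two-dimensional lattice.

```python
import numpy as np

def compress_bond(A, B, tol=1e-12):
    """Contraction-decomposition step on one shared bond (sketch).

    A has shape (left, bond); B has shape (bond, right). Contracting
    the bond and re-splitting via an SVD, keeping only singular values
    above `tol`, propagates local constraints through the bond and can
    lower its dimension.
    """
    theta = A @ B                                   # contract shared bond
    U, s, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = s > tol                                  # discard (near-)zero modes
    sq = np.sqrt(s[keep])
    A_new = U[:, keep] * sq                         # split weights symmetrically
    B_new = sq[:, None] * Vh[keep, :]
    return A_new, B_new

def decimate(A, B):
    """Coarse-graining step: contract two neighbors into a single tensor,
    shrinking the lattice at the cost of a larger local tensor."""
    return A @ B

# Toy usage: a rank-deficient bond of dimension 6 compresses to 3.
rng = np.random.default_rng(0)
A = rng.random((4, 6)) * np.array([1, 1, 1, 0, 0, 0])  # zero out 3 bond channels
B = rng.random((6, 5))
A2, B2 = compress_bond(A, B)
print(A2.shape, B2.shape)  # (4, 3) (3, 5)
```

In this caricature, alternating sweeps of `compress_bond` over all bonds with occasional `decimate` contractions mirrors the iteration described above: compression keeps bond dimensions in check so that decimation, and eventually the full tensor trace, stays tractable.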