Open Conference Systems, StatPhys 27 Main Conference

Discriminative and generative machine learning for spin systems based on physically interpretable features
Corneel Casert, Kyle Mills, Jannes Nys, Jan Ryckebusch, Isaac Tamblyn, Tom Vieijra

Building: Edificio San Jose
Room: Aula Juan Pablo II
Date: 2019-07-11 11:30 AM – 11:45 AM
Last modified: 2019-06-09

Abstract
Recently, much effort has been devoted to studying whether machine learning methods can recognize phase boundaries in spin systems. This is typically done with deep neural networks that are trained on system configurations at fixed values of the control parameters, but without any a priori knowledge of physical features. Opening these 'black-box' algorithms and uncovering which features the neural network captures is a crucial and often overlooked step. Without it, one cannot guarantee that the algorithm's decision on the phase boundaries is based on physically relevant features rather than on less relevant characteristics, which would limit its applicability. We use the exploration of the two-dimensional phase diagram of a non-equilibrium spin system (the active Ising model) to highlight the importance of scrutinizing the internal representation of a neural network. By training networks on only a small slice of the phase diagram, we show that some networks capture the relevant physics needed to complete the remainder of the phase diagram, while others fail to do so even though they are perfectly capable of reaching their training objective. We demonstrate that by highlighting the input regions on which networks base their decision, we can select the relevant networks and show that they capture physical features such as emergent magnetization patterns [Casert, C., Vieijra, T., Nys, J., & Ryckebusch, J. (2019). Physical Review E, 99(2), 023304].

An additional test of whether deep neural networks recognize physical characteristics lies in their potential to generate new system configurations. We show that a generative adversarial network (GAN) can create spin configurations that are indistinguishable from the training data [Mills, K., & Tamblyn, I. (2017). arXiv preprint arXiv:1710.08053].
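The idea of "highlighting on which input regions networks base their decision" can be sketched with a simple perturbation-sensitivity map: flip each spin in turn and record how much the classifier's output changes. The classifier below is a toy stand-in (a sigmoid of the mean magnetization), not the networks used in the cited work; it is only meant to illustrate the mechanics of the map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained phase classifier: its output
# grows with the mean magnetization of a spin configuration.
def classifier(config):
    return 1.0 / (1.0 + np.exp(-10.0 * config.mean()))

def sensitivity_map(config, predict):
    """Flip each spin in turn and record the change in the prediction.

    Large values mark input regions the classifier relies on.
    """
    base = predict(config)
    sens = np.zeros(config.shape, dtype=float)
    for idx in np.ndindex(config.shape):
        flipped = config.copy()
        flipped[idx] *= -1          # flip one spin
        sens[idx] = abs(predict(flipped) - base)
    return sens

config = rng.choice([-1, 1], size=(8, 8))   # random Ising-like configuration
smap = sensitivity_map(config, classifier)
```

Gradient-based saliency maps play the same role for differentiable networks; the flip-one-spin version above needs only forward evaluations, at the cost of one pass per lattice site.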
By conditioning the GAN on physical quantities, we show that it accurately learns how to use this information in its generation procedure, creating configurations at a requested value of an observable, e.g. at a fixed energy. Furthermore, we show how GANs can be used to create configurations at system sizes much larger than those of the training data, allowing for highly efficient sampling of arbitrarily large configurations.
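Mechanically, conditioning a generator on an observable usually means concatenating the requested value to the latent noise vector before the forward pass. The sketch below shows only that interface with randomly initialised weights standing in for a trained generator; the layer sizes and the single energy condition are illustrative assumptions, not the architecture of the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT, COND, N_SPINS = 16, 1, 64   # hypothetical sizes

# Random weights stand in for a trained conditional generator.
W1 = 0.1 * rng.normal(size=(LATENT + COND, 128))
W2 = 0.1 * rng.normal(size=(128, N_SPINS))

def generate(z, target_energy):
    """Conditional generation: the requested observable is concatenated
    to the noise vector, so the network can use it while generating."""
    x = np.concatenate([z, [target_energy]])
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)          # continuous values in (-1, 1)

z = rng.normal(size=LATENT)
spins = np.sign(generate(z, target_energy=-1.0))   # threshold to +/-1 spins
```

In a trained model the same call with a different `target_energy` would yield configurations whose measured energy tracks the request; sampling larger systems amounts to applying a (convolutional) generator on a larger output lattice.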