Building: Cero Infinito
Room: Posters hall
Date: 2024-12-12 02:00 PM – 04:00 PM
Last modified: 2024-11-19
Abstract
The more a memory is revisited, the easier it becomes to recall. Consistent with this observation, experimental evidence shows that familiar items are encoded by larger neural representations than less familiar ones. To capture the dynamic nature of memory representations, we designed a computational model based on an attractor network with dynamic synapses, and we showed how neural assemblies can evolve differently depending on how often a stimulus is presented (i.e. on its frequency). Specifically, we built our model starting from a standard rate attractor network, to which we added: i) an online Hebbian learning rule, ii) background firing activity, iii) neural adaptation, and iv) heterosynaptic plasticity. We investigated the behaviour of our model in different experimental paradigms involving memory formation, reinforcement and forgetting. We found that the dynamic interplay between online learning and background activity can explain the relationship between the size of neural assemblies and their frequency of stimulation. Notably, memory assemblies representing uncorrelated memories changed their sizes without interfering with each other (i.e. their neural representations remained orthogonal), in line with results from human single-cell recordings suggesting that partial overlaps between neural assemblies represent meaningful associations between the corresponding memories. We also observed that neural assemblies that were not further stimulated were eventually forgotten, and their neurons became available to create or reinforce other representations. Overall, these findings align with experimental results from single-cell recordings, showing that our model is suitable for investigating several memory coding mechanisms.
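For concreteness, the sketch below illustrates one way the ingredients listed in the abstract (a rate attractor network with an online Hebbian rule, background firing activity, neural adaptation, and heterosynaptic normalisation) could be combined. All equations, parameter values, and names (e.g. `step`, `w_budget`) are illustrative assumptions, not the authors' actual implementation, which the abstract does not specify.

```python
# Minimal sketch of a rate attractor network with online Hebbian learning,
# background activity, adaptation, and heterosynaptic plasticity.
# Everything here is an illustrative assumption, not the poster's model.
import numpy as np

rng = np.random.default_rng(0)

N = 200           # number of rate units
dt = 1.0          # integration step (ms)
tau_r = 20.0      # rate time constant (ms)
tau_a = 500.0     # adaptation time constant (ms)
eta = 1e-3        # Hebbian learning rate
w_budget = 2.0    # per-neuron total incoming weight allowed before scaling

W = np.zeros((N, N))   # recurrent weights, learned online
r = np.zeros(N)        # firing rates
a = np.zeros(N)        # adaptation variables

def step(stimulus):
    """One update of rates, adaptation, and plastic weights."""
    global r, a, W
    background = 0.05 * rng.random(N)           # spontaneous background drive
    drive = W @ r + stimulus + background - a   # recurrent + external input, minus adaptation
    r += dt / tau_r * (-r + np.clip(drive, 0.0, 1.0))
    a += dt / tau_a * (-a + 0.5 * r)            # firing-rate adaptation

    # Online Hebbian learning: potentiate synapses between co-active units.
    W = W + eta * np.outer(r, r)
    # Heterosynaptic plasticity (illustrative): if a neuron's total incoming
    # weight exceeds its budget, scale all of its synapses down, so that
    # strengthening one assembly weakens that neuron's other synapses.
    row_sum = W.sum(axis=1, keepdims=True)
    W = np.where(row_sum > w_budget, W * w_budget / np.maximum(row_sum, 1e-12), W)
    np.fill_diagonal(W, 0.0)

# Usage: repeatedly present a stimulus to a subset of units, interleaved with
# rest periods in which only background activity drives the network; the
# stimulated assembly is reinforced while unstimulated assemblies slowly decay.
pattern = np.zeros(N)
pattern[:20] = 1.0
for t in range(2000):
    stim = pattern if (t // 200) % 2 == 0 else np.zeros(N)
    step(stim)
```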