From decision making to perception to language, predicting what is coming next is crucial. It is also challenging in stochastic, changing, and structured environments; yet the brain makes accurate predictions in many situations. What computational architecture could enable this feat? Bayesian inference makes optimal predictions but is prohibitively difficult to compute. Here, we show that a specific recurrent neural network architecture enables simple and accurate solutions in several environments. This architecture relies on three mechanisms: gating, lateral connections, and recurrent weight training. Like the optimal solution and the human brain, such networks develop internal representations of their changing environment (including estimates of the environment's latent variables and the precision of these estimates), leverage multiple levels of latent structure, and adapt their effective learning rate to changes without changing their connection weights. Being ubiquitous in the brain, gated recurrence could therefore serve as a generic building block to predict in real-life environments.

Being able to correctly predict what is coming next is advantageous: it enables better decisions (Dolan & Dayan, 2013; R. Sutton & Barto, 1998), a more accurate perception of our world, and faster reactions (De Lange et al., 2018; Dehaene et al., 2015; Saffran et al., 1996; Sherman et al., 2020; Summerfield & de Lange, 2014). In many situations, predictions are informed by a sequence of past observations. In that case, the prediction process formally corresponds to a statistical inference that uses past observations to estimate latent variables of the environment (e.g. the probability of a stimulus) that then serve to predict what is likely to be observed next. Specific features of real-life environments make this inference a challenge: they are often partly random, changing, and structured in different ways. Yet, in many situations, the brain is able to overcome these challenges and shows several aspects of the optimal solution (Dehaene et al., 2015; Dolan & Dayan, 2013; Gallistel et al., 2014; Summerfield & de Lange, 2014). Here we aim to identify the computational mechanisms that could enable the brain to exhibit these aspects of optimality in these environments. We start by unpacking two specific challenges which arise in real-life environments. First, the joint presence of randomness and changes (i.e., the non-stationarity of the stochastic process generating the observations) poses a well-known tension between stability and flexibility (Behrens et al., 2007; Soltani & Izquierdo, 2019; R. …).
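To make the inference framing concrete, here is a minimal sketch of the kind of Bayesian filtering that such prediction formally corresponds to. It assumes a simple changepoint model (a Bernoulli probability that is occasionally redrawn at random) rather than the paper's exact task; the hazard rate and grid size are illustrative choices. Past observations are used to maintain a posterior over the latent stimulus probability, whose mean predicts the next observation.

```python
import numpy as np

def bayesian_bernoulli_changepoint(observations, hazard=0.05, n_grid=100):
    """Grid-discretized Bayesian filtering of a Bernoulli probability p
    that, at each step, jumps to a fresh uniform draw with prob. `hazard`."""
    grid = np.linspace(0.005, 0.995, n_grid)   # candidate values of p
    posterior = np.full(n_grid, 1.0 / n_grid)  # uniform prior over p
    estimates = []
    for x in observations:
        # Transition: with prob. `hazard`, p is redrawn uniformly on the grid.
        posterior = (1 - hazard) * posterior + hazard / n_grid
        # Bayes update with the Bernoulli likelihood of the new observation.
        likelihood = grid if x == 1 else (1 - grid)
        posterior = posterior * likelihood
        posterior /= posterior.sum()
        estimates.append((grid * posterior).sum())  # posterior mean of p
    return np.array(estimates)

# Toy changing environment: p jumps from 0.8 to 0.2 halfway through.
rng = np.random.default_rng(0)
true_p = np.r_[np.full(200, 0.8), np.full(200, 0.2)]
obs = (rng.random(400) < true_p).astype(int)
print(bayesian_bernoulli_changepoint(obs)[[150, 250, 399]])
```

The hazard term keeps some posterior mass on fresh values of p at every step, which is one way the stability/flexibility tension gets resolved: the estimate updates slowly while the world is stable and quickly after a change, i.e., the effective learning rate adapts.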
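On the network side, the following hypothetical sketch trains a standard gated recurrent unit (an off-the-shelf PyTorch GRU, not the authors' specific architecture with lateral connections and trained recurrent weights) to predict the next binary observation in such changing environments. All hyperparameters are illustrative assumptions; the point is only that gated recurrence can learn this prediction task end to end.

```python
import torch
import torch.nn as nn

class GatedPredictor(nn.Module):
    """GRU that outputs P(next observation = 1) at every time step."""
    def __init__(self, hidden_size=8):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, x):                       # x: (batch, time, 1)
        h, _ = self.gru(x)                      # gated recurrent dynamics
        return torch.sigmoid(self.readout(h))   # per-step prediction

def make_batch(batch=32, length=200, hazard=0.05):
    """Binary sequences whose Bernoulli parameter jumps at random times."""
    p = torch.rand(batch, 1)
    seq = torch.empty(batch, length, 1)
    for t in range(length):
        jump = torch.rand(batch, 1) < hazard
        p = torch.where(jump, torch.rand(batch, 1), p)
        seq[:, t] = (torch.rand(batch, 1) < p).float()
    return seq

model = GatedPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                          # short demo run
    seq = make_batch()
    pred = model(seq[:, :-1])                    # predict x[t+1] from x[..t]
    loss = nn.functional.binary_cross_entropy(pred, seq[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```

Note that after training, the network's connection weights are fixed: any adaptation of its effective learning rate to changepoints must come from its gated recurrent dynamics, which is the property the paper highlights.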