Reinforcement learning is the process by which an agent learns to predict long-term future reward. We now understand a great deal about the brain's reinforcement learning algorithms, but we know considerably less about the representations of states and actions over which these algorithms operate. A useful starting point is asking what kinds of representations we would want the brain to have, given the constraints on its computational architecture. Following this logic leads to the idea of the successor representation, which encodes states of the environment in terms of their predictive relationships with other states. Recent behavioral and neural studies have provided evidence for the successor representation, and computational studies have explored ways to extend the original idea. This paper reviews progress on these fronts, organizing them within a broader framework for understanding how the brain negotiates tradeoffs between efficiency and flexibility for reinforcement learning.
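To make the central idea concrete, here is a minimal sketch (not taken from the paper) of a tabular successor representation: a matrix M whose entry M(s, s') estimates the expected discounted number of future visits to state s' when starting from state s, learned with a TD-style update. All names and parameter values (n_states, gamma, alpha) are illustrative assumptions, not quantities from the reviewed work.

```python
import numpy as np

# Sketch of a tabular successor representation (SR), assuming a small discrete
# state space. M[s, s'] approximates E[ sum_t gamma^t * 1(s_t = s') | s_0 = s ],
# i.e. how often (discounted) s' is expected to follow s under the current policy.

n_states = 5      # assumed size of a toy state space
gamma = 0.95      # discount factor
alpha = 0.1       # learning rate

M = np.zeros((n_states, n_states))

def sr_td_update(M, s, s_next):
    """Apply one TD update to row s of the SR after observing the transition s -> s_next."""
    one_hot = np.zeros(n_states)
    one_hot[s] = 1.0
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

# Given a learned M and a reward vector R over states, state values factor as
# V = M @ R, which is what lets an SR agent revalue states quickly when rewards change.
```

The factorization at the end illustrates the efficiency–flexibility tradeoff the abstract refers to: the predictive structure in M is cached like a model-free value table, yet changing R alone is enough to recompute values, much as a model-based agent would.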