Istituto di scienze e tecnologie della cognizione


Journal contribution

Type: Journal article

Titolo: Integrating Reinforcement Learning, Equilibrium Points, and Minimum Variance to Understand the Development of Reaching: A Computational Model

Year of publication: 2014

Autori: Caligiore, Daniele; Parisi, Domenico; Baldassarre, Gianluca

Author affiliations: Consiglio Nazionale delle Ricerche (CNR)

Language: English

Abstract: Despite the huge literature on reaching behavior, a clear idea of the motor-control processes underlying its development in infants is still lacking. This article contributes to overcoming this gap by proposing a computational model based on three key hypotheses: (a) trial-and-error learning processes drive the progressive development of reaching; (b) control of movement based on equilibrium points allows the model to quickly find an initial approximate solution to the problem of making contact with target objects; (c) the requirement that the final movement be precise in the presence of muscular noise drives the progressive refinement of reaching behavior. Tests of the model, based on a simulated dynamical arm with two degrees of freedom, show that it reproduces a large number of empirical findings, mostly drawn from longitudinal studies with children: the developmental trajectory of several dynamical and kinematic variables of reaching movements, the time evolution of the submovements composing a reach, the progressive development of a bell-shaped speed profile, and the evolution of the management of redundant degrees of freedom. The model also produces testable predictions on several of these phenomena. Most of these empirical data have never been investigated by previous computational models and, more importantly, have never been accounted for by a single model. In this respect, analysis of the model's functioning reveals that all these results are ultimately explained, sometimes in unexpected ways, by the same developmental trajectory emerging from the interplay of the three hypotheses: the model first quickly learns to perform coarse movements that ensure contact of the hand with the target (an achievement of great adaptive value) and then slowly refines the detailed control of the dynamical aspects of movement to increase accuracy.
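The interplay of the three hypotheses can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual model (which uses a two-degrees-of-freedom dynamical arm and continuous state-action reinforcement learning); here a one-dimensional effector settles at a commanded equilibrium point, motor noise grows with command size (the signal-dependent noise at the heart of minimum-variance accounts), and a simple error-corrective update stands in for trial-and-error learning. All names and parameters are illustrative assumptions.

```python
import random


def reach(eq_point, noise_gain=0.2, rng=random):
    """Equilibrium-point control: the effector settles at the commanded
    equilibrium, perturbed by signal-dependent motor noise (noise scales
    with command magnitude, as in minimum-variance accounts)."""
    return eq_point + rng.gauss(0.0, noise_gain * abs(eq_point))


def train(target=1.0, episodes=500, lr=0.1, seed=0):
    """Trial-and-error refinement: after each noisy reach, nudge the
    equilibrium-point command toward the target by the observed error."""
    rng = random.Random(seed)
    eq = 0.0  # learned equilibrium-point command, initially far from target
    for _ in range(episodes):
        outcome = reach(eq, rng=rng)
        eq += lr * (target - outcome)  # error-corrective update
    return eq
```

After training, the learned command hovers near the target despite the noise; lowering `noise_gain` tightens the end-point spread, loosely mirroring the developmental shift the abstract describes from coarse contact-assuring movements to refined, accurate ones.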

Pages from: 389

Pages to: 421

Total pages: 33


Journal: Psychological Review (American Psychological Association)
Country of publication: United States of America
Language: English
ISSN: 0033-295X

Volume: 121

Issue: 3

DOI: 10.1037/a0037016

Indexed by: ISI Web of Science (WOS) [000340470300005]

Keywords:

  • Bernstein's problem
  • continuous state-action reinforcement learning
  • motor control
  • movement units
  • neural networks

Attachments: Integrating Reinforcement Learning, Equilibrium Points, and Minimum Variance to Understand the Development of Reaching: A Computational Model (application/pdf)
