
Fronto-parietal mirror neuron system modeling: Visuospatial transformations support imitation learning independently of imitator perspective.

Human Movement Science 2018 September 13
Although the human mirror neuron system (MNS) is critical for action observation and imitation, most MNS investigations overlook the visuospatial transformation processes that allow individuals to interpret and imitate actions observed from differing perspectives. This problem is not trivial, since accurately reaching for and grasping an object requires a visuospatial transformation mechanism capable of precisely remapping fine motor skills when the observer's and imitator's arms and hands may have quite different orientations and sizes. Accordingly, here we describe a novel neural model to investigate the dynamics between the fronto-parietal MNS and visuospatial processes during observation and imitation of a reaching and grasping action. Our model encompasses i) the inferior frontal gyrus (IFG) and inferior parietal lobule (IPL), regions that are postulated to produce neural drive and sensory predictions, respectively; ii) the middle temporal (MT) and medial superior temporal (MST) regions, which are postulated to process the visual motion of a particular action; and iii) the superior parietal lobule (SPL) and intra-parietal sulcus (IPS), which are hypothesized to encode the visuospatial transformations enabling action observation/imitation from different visuospatial viewpoints. The results reveal that when a demonstrator executes an action, an imitator can reproduce it with similar kinematics, independently of differences in anthropometry, distance, and viewpoint. Consistent with prior empirical findings, similar model synaptic activity was observed during both action observation and execution, along with the existence of both view-independent and view-dependent neural populations in the frontal MNS. Importantly, this work generates testable behavioral and neurophysiological predictions.
Namely, the model predicts that i) during observation/imitation the response time increases linearly as the rotation angle of the observed action increases, but remains similar for clockwise and counterclockwise rotations, and ii) the IPL contains essentially view-independent neurons, while the SPL/IPS includes both view-independent and view-dependent neurons. Overall, this work suggests that MT/MST visual motion processes combined with the SPL/IPS allow the MNS to observe and imitate actions independently of demonstrator-imitator spatial relationships.
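The visuospatial transformation attributed to the SPL/IPS above amounts to remapping an observed movement into the imitator's own reference frame. As a minimal illustrative sketch (not the authors' model), the snippet below applies a planar rotation for the viewpoint offset and a scale factor for anthropometric differences to a hypothetical 2D reach trajectory; the function name, angle, and scale are assumptions for illustration only.

```python
import numpy as np

def remap_trajectory(traj, angle_deg, scale):
    """Rotate observed 2D points by -angle_deg (undoing the viewpoint
    offset) and rescale them for the imitator's limb proportions."""
    theta = np.deg2rad(-angle_deg)
    # Standard 2D rotation matrix for the frame change
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * traj @ R.T

# Hypothetical demonstrator reach viewed at a 90-degree offset;
# the imitator's arm is assumed 10% shorter (scale = 0.9).
observed = np.array([[0.0, 0.0],
                     [0.1, 0.2],
                     [0.2, 0.4]])
remapped = remap_trajectory(observed, 90.0, 0.9)
print(remapped.round(3))  # → [[ 0.    0.  ] [ 0.18 -0.09] [ 0.36 -0.18]]
```

A rotation-plus-scaling of this kind is viewpoint-dependent by construction, which is one way to read the model's prediction that response time grows with the rotation angle to be undone.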
