The visual encoding of purely proprioceptive intermanual tasks is due to the need of transforming joint signals, not to their interhemispheric transfer

To perform goal-oriented hand movements, humans combine multiple sensory signals (e.g., vision and proprioception) that can be encoded in various reference frames (body-centered and/or exo-centered). In a previous study (Tagliabue M, McIntyre J. PLoS One 8: e68438, 2013), we showed that, when aligning a hand to a remembered target orientation, the brain encodes both target and response in visual space when the target is sensed by one hand and the response is performed by the other, even though both are sensed only through proprioception. Here we ask whether such visual encoding is due 1) to the necessity of transferring sensory information across the brain hemispheres, or 2) to the necessity, arising from the arms’ anatomical mirror symmetry, of transforming the joint signals of one limb into the reference frame of the other. To answer this question, we asked subjects to perform purely proprioceptive tasks in different conditions: Intra, the same arm sensing the target and performing the movement; Inter/Parallel, one arm sensing the target and the other reproducing its orientation; and Inter/Mirror, one arm sensing the target and the other mirroring its orientation. Performance was very similar between Intra and Inter/Mirror (conditions not requiring joint-signal transformations), while both differed from Inter/Parallel. Manipulation of the visual scene in a virtual reality paradigm showed visual encoding of proprioceptive information only in the latter condition. These results...
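The logic behind the three conditions can be illustrated with a deliberately simplified, hypothetical 1-degree-of-freedom model (not the authors' model): because the two arms are mirror-symmetric about the body midline, copying the sensed joint angle to the other arm reproduces the mirror image of the target orientation, whereas reproducing the same (parallel) orientation requires transforming the joint signals. A minimal sketch under these assumptions:

```python
import numpy as np

# Hypothetical 1-DOF illustration (not taken from the paper): hand/rod
# orientation in the frontal plane, measured from vertical, as a function
# of a single joint angle q. Mirror symmetry about the body midline means
# the same joint angle yields opposite orientations for the two hands.

def rod_orientation_left(q):
    return q          # left hand: orientation equals the joint angle

def rod_orientation_right(q):
    return -q         # right hand: mirror image about the midline

q_target = np.deg2rad(30.0)                    # joint angle sensed by the left arm
theta_target = rod_orientation_left(q_target)  # remembered target orientation

# Inter/Mirror: reproducing the mirror image of the target with the right
# arm only requires copying the joint angle -- no transformation needed.
theta_mirror = rod_orientation_right(q_target)
assert np.isclose(theta_mirror, -theta_target)

# Inter/Parallel: reproducing the *same* orientation with the right arm
# requires a different joint angle, i.e. a transformation of the joint
# signals (here simply q -> -q; for a real multi-joint arm the mapping is
# more complex, which is where a visual re-encoding could help).
q_parallel = -q_target
assert np.isclose(rod_orientation_right(q_parallel), theta_target)
```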
Source: Journal of Neurophysiology | Category: Neurology | Tags: Research Articles | Source Type: research