Abstract
To correctly position the hand with respect to the spatial location and orientation of an object to be reached or grasped, visual information about the target must be compared with proprioceptive information about the hand. Since the visual and proprioceptive modalities are inherently encoded in retinal and musculo-skeletal reference frames, respectively, this comparison requires cross-modal sensory transformations. Previous studies have shown that lateral tilts of the head interfere with these visuo-proprioceptive transformations. It is unclear, however, whether this phenomenon is related to the neck flexion itself or to the head-gravity misalignment. To answer this question, we performed three virtual reality experiments in which we compared a grasping-like movement with lateral neck flexions executed in an upright seated position and while lying supine. In the main experiment, the task required cross-modal transformations, because the target information was acquired visually while the hand was sensed through proprioception only. In the two control experiments, the task was unimodal, because both target and hand were sensed through one and the same sensory channel (vision and proprioception, respectively), so that cross-modal processing was unnecessary. The results show that lateral neck flexions had considerably different effects in the seated and supine postures, but only for the cross-modal task: the subjects’ response variability and the weight given to the visual encoding of the information increased significantly when supine. We show that these findings are consistent with the idea that head-gravity misalignment interferes with visuo-proprioceptive cross-modal processing. Indeed, the principle of statistical optimality in multisensory integration predicts the observed results if the noise associated with the visuo-proprioceptive transformations is assumed to be affected by gravitational signals, and not by neck proprioceptive signals per se. This finding is also consistent with the observation of otolithic projections to the posterior parietal cortex, which is involved in visuo-proprioceptive processing. Altogether, these findings provide clear evidence for the theorized central role of gravity in spatial perception: otolithic signals would contribute to reciprocally aligning the reference frames in which the available sensory information can be encoded.
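For readers unfamiliar with the statistical-optimality argument invoked above, the following is a minimal sketch of standard maximum-likelihood cue combination under the assumption that cross-modal transformation noise adds to the transformed cue; the symbols below are illustrative and are not taken from the article.

```latex
% Hypothetical notation (not from the article):
% x_V  : target/hand estimate encoded visually, with variance sigma_V^2
% x_P  : proprioceptive estimate expressed in visual coordinates,
%        with variance sigma_P^2 + sigma_T^2, where sigma_T^2 is the noise
%        introduced by the cross-modal (visuo-proprioceptive) transformation.
\hat{x} = w_V x_V + w_P x_P, \qquad
w_V = \frac{\sigma_P^2 + \sigma_T^2}{\sigma_V^2 + \sigma_P^2 + \sigma_T^2}, \qquad
w_P = 1 - w_V,
\qquad
\operatorname{Var}(\hat{x}) =
\frac{\sigma_V^2\,(\sigma_P^2 + \sigma_T^2)}{\sigma_V^2 + \sigma_P^2 + \sigma_T^2}.
```

Under this scheme, any increase in the transformation noise $\sigma_T^2$ (here attributed to head-gravity misalignment rather than to neck flexion per se) simultaneously raises the visual weight $w_V$ and the variance of the combined estimate, which is qualitatively the pattern reported for the supine cross-modal condition.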
| Original language | English |
| --- | --- |
| Article number | 788905 |
| Journal | Frontiers in Integrative Neuroscience |
| Volume | 16 |
| DOI | |
| Publication status | Published - 10 Mar 2022 |
Keywords
- Multisensory Integration
- Cross-modal transformation
- Gravity
- Reaching/grasping movement
- Eye-hand coordination
- Vision
- Proprioception
- Otolith
Project and Funding Information
- Funding Info
- This work was supported by the Centre National d’Etudes Spatiales (DAR 2017/4800000906, DAR 2018/4800000948, 2019/4800001041). JB-E was supported by a Ph.D. fellowship of the École Doctorale Cerveau-Cognition-Comportement (ED3C, n°158, Sorbonne Université and Université de Paris). The research team is supported by the Centre National de la Recherche Scientifique and the Université de Paris. This study contributes to the IdEx Université de Paris ANR-18-IDEX-0001.