Human timing behavior is consistent with Bayesian inference, in which both prior knowledge (the prior) and current sensory information jointly determine the final response. However, it is unclear whether the brain represents temporal priors separately for individual modalities or in a supramodal manner when temporal information arrives from different modalities at different times. Here we asked participants to reproduce time intervals in either a unisensory or a multisensory context. In the unisensory tasks, sample intervals drawn from a uniform distribution were presented in a single modality, visual or auditory. In the multisensory tasks, sample intervals from the two modalities were randomly interleaved; visual and auditory intervals were drawn from two adjacent uniform distributions whose union equaled the distribution used in the unisensory tasks. In the unisensory tasks, participants' reproduced times exhibited the classic central-tendency bias: shorter intervals were overestimated and longer intervals were underestimated. In the multisensory tasks, reproduced times were biased toward the mean of the whole distribution rather than the means of the intervals in each modality. A Bayesian model with a supramodal prior (the distribution of time intervals from both modalities) outperformed a model with modality-specific priors in describing participants' performance. With a generalized model that assumes the prior is a weighted combination of the unimodal priors, we further estimated, for each participant, the relative contributions of visual and auditory intervals to the prior. These findings suggest a supramodal mechanism for encoding priors in temporal processing, although the extent to which one modality influences the other differs across individuals.
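The central-tendency bias described in the abstract is the signature of a Bayes least-squares observer: a noisy measurement is combined with the prior over intervals, pulling estimates toward the prior mean. A minimal sketch of that computation (the interval values, prior bounds, and Weber fraction `wf` here are illustrative assumptions, not figures from the study):

```python
import numpy as np

def bls_estimate(measurement, prior_lo, prior_hi, wf=0.15):
    """Posterior-mean (Bayes least-squares) estimate of a time interval,
    given one noisy measurement, a uniform prior on [prior_lo, prior_hi],
    and Gaussian scalar noise (sd = wf * t, i.e. Weber-like variability)."""
    t = np.linspace(prior_lo, prior_hi, 2001)            # grid over prior support
    sd = wf * t                                          # scalar variability
    lik = np.exp(-0.5 * ((measurement - t) / sd) ** 2) / sd
    post = lik / lik.sum()                               # uniform prior: normalized likelihood
    return float((t * post).sum())                       # posterior mean = BLS estimate

# Central tendency with an assumed prior of 0.45-1.05 s:
# a short measurement is pulled up, a long one is pulled down.
short_est = bls_estimate(0.50, 0.45, 1.05)   # greater than 0.50
long_est = bls_estimate(1.00, 0.45, 1.05)    # less than 1.00
```

A supramodal prior, in this framing, simply means `prior_lo` and `prior_hi` span the union of the visual and auditory interval distributions; the modality-specific alternative would restrict the prior support to the sub-range used for each modality.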
from Physiology via xlomafota13 on Inoreader http://ift.tt/2toIlFv