During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses propose that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, in the beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. While changes in low-frequency neural oscillations were compatible with the encoding of pre-lexical segmentation cues, high-frequency activity specifically reflected an individual's conscious speech percept.
from Physiology via xlomafota13 on Inoreader http://ift.tt/2c6EHsJ