Exp Brain Res. 2007 October; 182(4): 559–565.
Published online 2007 June 28. doi: 10.1007/s00221-007-1012-2.
PMCID: PMC2190788
No effect of auditory–visual spatial disparity on temporal recalibration
Mirjam Keetels and Jean Vroomen
Department of Psychology, Tilburg University, Warandelaan 2, Tilburg, The Netherlands
Jean Vroomen, Phone: +31-13-4662394, Fax: +31-13-4662370, Email: j.vroomen@uvt.nl.
Corresponding author.
Received April 4, 2007; Accepted May 29, 2007.
Abstract
It is known that the brain adaptively recalibrates itself to small (~100 ms) auditory–visual (AV) temporal asynchronies so as to maintain intersensory temporal coherence. Here we explored whether spatial disparity between a sound and light affects AV temporal recalibration. Participants were exposed to a train of asynchronous AV stimulus pairs (sound-first or light-first) with sounds and lights emanating from either the same or a different location. Following a short exposure phase, participants were tested on an AV temporal order judgement (TOJ) task. Temporal recalibration manifested itself as a shift of subjective simultaneity in the direction of the adapted audiovisual lag. The shift was equally big when exposure and test stimuli were presented from the same or different locations. These results provide strong evidence for the idea that spatial co-localisation is not a necessary constraint for intersensory pairing to occur.
Keywords: Intersensory perception, Spatial disparity, Auditory–visual, Temporal order judgment, Temporal recalibration
Introduction

In many circumstances people experience external events by a number of different sensory modalities. For example, when someone is talking, there is auditory and visual information that is initially processed by specialized neural pathways. Ultimately, though, the different sensory signals are integrated into a coherent multimodal percept of the speaker. Many behavioural and neurophysiological studies have emphasized the importance of spatial co-localisation and temporal synchrony for intersensory pairing to occur (e.g., Welch and Warren 1980; Bedford 1989; Stein and Meredith 1993; Radeau 1994; Bertelson 1999; Welch 1999). However, there is accumulating evidence that some intersensory phenomena may not require spatial alignment (Welch et al. 1986; Scheier et al. 1999; Morein-Zamir et al. 2003; Murray et al. 2004; Teder-Salejarvi et al. 2005; Vroomen and Keetels 2006; Keetels et al. 2007). In the present study, we explored the importance of spatial alignment for audio–visual (AV) temporal recalibration.

Temporal recalibration refers to the phenomenon that the brain adapts itself to small temporal asynchronies. In a multimodal percept, information from the different senses usually appears to arrive at the same time, despite the natural asynchronies between the senses caused by differences in signal transduction time through air and differences in neural transmission time. At least two mechanisms are available to handle these asynchronies: one concerns immediate corrections, the other adaptation on a longer time scale. As for the immediate effect, several studies have shown that the brain corrects for small AV temporal asynchronies by shifting one or both modalities in time so that the temporal discordance is reduced. For example, when a sound and a light are presented at slightly different onset times (usually on the order of ~100 ms), the temporal asynchrony is reduced by a capturing effect of the light by the sound, a phenomenon called temporal ventriloquism (Scheier et al. 1999; Fendrich and Corballis 2001; Morein-Zamir et al. 2003; Vroomen and de Gelder 2004; Stekelenburg and Vroomen 2005; Vroomen and Keetels 2006). Temporal ventriloquism can, for example, be demonstrated with a visual temporal order judgment (TOJ) task in which participants are presented with two lights at various stimulus onset asynchronies (SOAs) and judge which light came first. When one sound is presented before the first light and another after the second light, the just noticeable difference (JND) improves (i.e. participants become more sensitive), presumably because the two sounds attract the temporal occurrence of the two lights and thus effectively pull the lights further apart in time (Scheier et al. 1999; Morein-Zamir et al. 2003; Vroomen and Keetels 2006).
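Schematically, if each capture sound attracts the perceived onset of its neighbouring light toward itself by some amount δ (an illustrative parameter on our part, not one estimated in the cited studies), the effective visual asynchrony grows from its physical value,

$$\Delta t_{\text{perceived}} \approx \Delta t_{\text{physical}} + 2\delta,$$

so the lights become easier to order in time and the JND, measured on physical SOAs, decreases.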

There are also long-term effects reflecting an adaptive change to AV asynchrony, a phenomenon called temporal recalibration (Fujisaki et al. 2004; Vroomen et al. 2004). For example, Vroomen et al. (2004) studied temporal recalibration by exposing participants to 3 min of sound and light flashes with a constant time lag, after which an AV TOJ or AV simultaneity task was performed. Following exposure, observers were given AV test stimuli and judged whether the sound or the light came first, or whether the sound and light were simultaneous or successive. The results showed that the point of subjective simultaneity (PSS), the point of perceived temporal alignment between the sound and the light, was shifted in the direction of the exposure lag. So, following exposure to a train of sound-first stimulus pairs, participants perceived sound-first trials as more simultaneous than after light-first exposure. Fujisaki et al. (2004) reported similar findings and also provided somewhat mixed evidence that temporal recalibration may generalize to test stimuli other than the ones presented during exposure. The authors adapted participants to asynchronous tone–flash stimulus pairs and later tested them on the “bounce” illusion (Sekuler et al. 1997). In this illusion, two visual targets that move across each other can be perceived either to bounce off or to stream through each other. A brief sound presented at the moment the visual targets coincide generally biases visual perception in favour of a bouncing motion, whereas without the sound observers tend to report a streaming percept. Following exposure to asynchronous sound–light pairs, the optimal delay for obtaining the bounce illusion was shifted in the direction of the exposure lag, although the magnitude of the after-effect was smaller in some of the cross-adaptation conditions.

Temporal recalibration may also occur between other modalities than AV. For example, Navarra et al. (2006) demonstrated audio–tactile temporal recalibration by exposing participants to streams of brief auditory and tactile stimuli presented in synchrony, or else with the auditory stimulus leading by 75 ms. Rather than a shift in the PSS, they observed that the JND to resolve audio–tactile temporal order was larger after exposure to the desynchronized streams than after exposure to the synchronous streams. The authors argued that the temporal window for integration was widened due to audio–tactile asynchrony.

The goal of the present study was to explore whether spatial disparity between a sound and light affects temporal recalibration. According to the “common notion” of intersensory pairing, intersensory effects should be bigger when the individual components of a multisensory stimulus come from the same location (e.g. Welch and Warren 1980; Bedford 1989; Stein and Meredith 1993; Radeau 1994; Bertelson 1999; Welch 1999). However, Vroomen and Keetels (2006) demonstrated that, at least for temporal ventriloquism, spatial correspondence between sound and light is not important. In their study, a visual TOJ task was used with a sound presented before the first and after the second light. Temporal ventriloquism manifested itself as an improvement in the JNDs but, crucially, the improvement was unaffected by whether the sounds came from the same or a different position as the lights, whether the sounds were static or moving, or whether the sounds and lights came from the same or opposite sides of fixation. Keetels et al. (2007) further examined how principles of auditory grouping (Bregman 1990) relate to intersensory pairing. They embedded two sounds that normally enhance sensitivity in the visual TOJ task in a sequence of flanker sounds that had either the same or a different frequency, rhythm, or location. In all experiments, temporal ventriloquism occurred only when the two capture sounds differed from the flankers, demonstrating that intramodal grouping of the sounds in the auditory stream took priority over intersensory pairing. By combining principles of auditory grouping with intersensory pairing, they also showed that the capture sounds could, counter-intuitively, be more effective when their location differed from that of the lights than when they came from the same position: sound location mattered for auditory grouping, but not for intersensory pairing.

Here we examined whether, as in temporal ventriloquism, spatial disparity is ignored when temporal recalibration is at stake. Participants were exposed for 3 min to a train of asynchronous sounds and lights that came either from the same or a different location. Following exposure, participants performed an AV TOJ task with sounds and lights from either the same or a different location. This design allowed us to address two questions. First, we could test whether temporal recalibration is affected by spatial disparity between the sounds and lights. Recalibration is usually considered to be a low-level perceptual learning phenomenon necessary for re-alignment of the senses (Bertelson and de Gelder 2004). Observing an after-effect following exposure to spatially disparate sound–light pairs would provide strong evidence that spatial co-occurrence is, even at this early stage, not necessary for intersensory pairing to occur. Second, the exposure–test design allowed us to introduce a change between the exposure and test stimuli so that we could test whether after-effects generalize to different test stimuli. Here we tested whether spatial similarity between the exposure and test sounds affects after-effects. If spatial co-location plays no role in intersensory pairing, one would expect stimulus generalization across space to be complete.

Method

Participants
Thirty students from Tilburg University received course credits for their participation. All reported normal hearing and normal or corrected-to-normal vision. They were tested individually and were unaware of the purpose of the experiment. The study was carried out in accordance with the principles laid down in the Helsinki Declaration, and informed consent was obtained from the participants.

Stimuli
Participants sat at a table in a dimly lit, sound-proof booth. Head movements were precluded by a chin-rest. Visual stimuli were presented by a green LED (diameter 0.5 cm, luminance 40 cd/m2) positioned at a central location, 70 cm from the subject’s eyes. Auditory stimuli were 88 dB sound bursts presented by one of two loudspeakers: one directly behind the green LED, the other placed laterally at 70 cm, on either the far left or the far right of the subject (i.e., 90 degrees of spatial separation between the sound and light). See Fig. 1 for a schematic view of the experimental set-up. The sounds and lights each had a duration of 10 ms. A small red LED, placed 2 cm below the green LED, was lit continuously during the experiment and served as the fixation point.
Fig. 1 Schematic illustration of the experimental conditions. In the exposure phase, the subject was exposed to a sound–light pair with a 100 ms temporal offset (either sound-first or light-first). During exposure, sounds were either presented (more ...)

Design
Three within-subjects factors were used: exposure lag during the exposure phase (−100 and +100 ms, with negative values indicating that the sound was presented first), location of the sound during exposure (exposure sound central or lateral), and SOA between the sound and light of the test stimuli (−240, −120, −90, −60, −30, 0, +30, +60, +90, +120, and +240 ms, with negative values indicating that the sound came first). The location of the test sound (central or lateral) was a between-subjects variable: half of the participants were tested with central test sounds, the other half with lateral test sounds. These factors yielded 44 equiprobable conditions for each location of the test sound (2 × 2 × 11), each presented 12 times, for a total of 528 trials. Trials were presented in eight blocks of 66 trials each. The exposure lag and the location of the exposure sound were constant within a block, while the SOA between sound and light varied randomly. The order of the blocks was counterbalanced across participants. In half of the blocks with a lateral exposure sound, the sound came from the left; in the other half, from the right. Lateral test sounds were presented from the same side as during exposure.
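As a check on these numbers, the following minimal sketch (ours, not the authors' code) enumerates the factorial design:

```python
from itertools import product

# Enumerate the within-subjects design to verify the reported trial counts.
exposure_lags = (-100, +100)                       # ms; negative = sound first
exposure_sound_locations = ("central", "lateral")  # location during exposure
test_soas = (-240, -120, -90, -60, -30, 0,
             +30, +60, +90, +120, +240)            # ms; negative = sound first

conditions = list(product(exposure_lags, exposure_sound_locations, test_soas))
assert len(conditions) == 2 * 2 * 11 == 44   # conditions per test-sound location
assert len(conditions) * 12 == 528           # 12 repetitions -> 528 trials

# Lag and exposure-sound location were constant within a block: 8 blocks of
# 66 trials, i.e. 2 blocks per lag-by-location combination, with each of the
# 11 SOAs occurring 6 times per block (2 blocks x 6 = 12 repetitions).
assert 8 * 66 == 528 and 11 * 6 == 66
```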

Procedure
Each block started with an exposure phase consisting of 240 repetitions (~3 min) of a sound–light stimulus pair (ISI = 750 ms) with a constant lag (−100 or +100 ms) between the sound and the light. After a 2,500 ms delay, the first test trial started. To ensure that participants fixated the light during exposure, they had to detect occasional offsets (150 ms) of the fixation light (i.e., catch trials) by pressing a designated button.

The test phase consisted of two parts: a short AV re-exposure phase followed by three AV test trials in which the temporal order of the sound and light had to be judged. The re-exposure phase consisted of a train of ten sound–light pairs with the same lag, ISI, and sound location as in the immediately preceding exposure phase. After 1 s, the three AV test trials were presented with a variable SOA between the sounds and lights. The participant’s task was to judge whether the sound or the light of the test stimulus was presented first. An unspeeded response was made by pressing one of two designated keys on a response box. The next test stimulus was presented 500 ms after a response, and the re-exposure phase of the next trial started 1,000 ms after the response to the third test stimulus.
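For illustration, the block structure just described can be sketched as follows (a reconstruction from the text, not the authors' presentation software; stimulus delivery is stubbed out, and the grouping of the 66 test trials into 22 re-exposure cycles of three is our inference from the reported counts):

```python
import random

TEST_SOAS = [-240, -120, -90, -60, -30, 0, 30, 60, 90, 120, 240]  # ms

def present_av_pair(lag_ms, sound_location):
    """Placeholder: present the 10-ms light and 10-ms sound, offset by lag_ms."""
    pass

def run_block(lag_ms, sound_location):
    # Exposure phase: 240 AV pairs at a constant lag, ISI = 750 ms (~3 min),
    # followed by a 2,500-ms delay before the first test trial.
    for _ in range(240):
        present_av_pair(lag_ms, sound_location)

    soas = TEST_SOAS * 6            # 6 repetitions of each SOA per block
    random.shuffle(soas)
    responses = []
    for cycle in range(22):         # 22 cycles x 3 tests = 66 test trials
        for _ in range(10):         # top-up re-exposure: same lag and location
            present_av_pair(lag_ms, sound_location)
        for soa in soas[cycle * 3:(cycle + 1) * 3]:
            present_av_pair(soa, sound_location)   # test pair, variable SOA
            responses.append((soa, "sound-first/light-first judgment here"))
    return responses
```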

To acquaint participants with the TOJ task, the experimental blocks were preceded by four practice blocks in which no exposure preceded the test trials. The first two practice blocks served to familiarize participants with the response buttons and consisted of 16 trials in which only the largest SOAs were presented (±240 and ±120 ms); during these blocks, participants received verbal feedback (“correct” or “wrong”). The next two practice blocks consisted of 66 trials in which all SOAs were presented six times in random order, without feedback. Total testing lasted approximately 2.5 h.

Results

Trials of the practice session were excluded from the analyses. The proportion of “light-first” responses was calculated for each participant for each combination of exposure lag (−100, +100 ms), location of the exposure sound (central or lateral), location of the test sound (central or lateral), and SOA (ranging from −240 to +240 ms). Performance on catch trials was flawless, indicating that participants were indeed looking at the fixation light during exposure. For each combination of exposure lag, location of the exposure sound, and location of the test sound, a psychometric function was determined individually by fitting a cumulative normal distribution over the SOAs using maximum likelihood estimation. The mean of the resulting distribution (the interpolated 50% crossover point) is the point of subjective simultaneity (henceforth the PSS); the slope is a measure of the sharpness with which the stimuli are distinguished and is inversely related to the just noticeable difference (JND), the interval (in absolute SOA) between the points at which 25 and 75% visual-first responses were given.
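A minimal sketch of such a fit (ours, not the authors' analysis code; the response counts below are hypothetical), fitting a cumulative normal by maximum likelihood and reading off the PSS and JND:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

soas = np.array([-240, -120, -90, -60, -30, 0, 30, 60, 90, 120, 240])  # ms
n_light_first = np.array([0, 1, 2, 3, 5, 7, 9, 11, 11, 12, 12])  # hypothetical
n_trials = 12                                                    # per SOA

def neg_log_likelihood(params):
    mu, sigma = params
    sigma = abs(sigma)                      # keep the scale parameter positive
    p = np.clip(norm.cdf(soas, loc=mu, scale=sigma), 1e-6, 1 - 1e-6)
    # Binomial log-likelihood of the observed light-first counts
    return -np.sum(n_light_first * np.log(p)
                   + (n_trials - n_light_first) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 50.0], method="Nelder-Mead")
mu, sigma = fit.x
sigma = abs(sigma)
pss = mu                                         # 50% crossover point
jnd = sigma * (norm.ppf(0.75) - norm.ppf(0.25))  # 25-75% interval, as defined above
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```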

The PSS and JND data are shown in Fig. 2 and Table 1. Temporal recalibration was expected to manifest itself as a shift of the PSS in the direction of the exposure lag. The temporal recalibration effect (TRE) was therefore computed by subtracting the PSS obtained after sound-first exposure from the PSS obtained after light-first exposure.
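In symbols, writing PSS for the point of subjective simultaneity measured after sound-first (SF) and light-first (LF) exposure, respectively:

$$\mathrm{TRE} = \mathrm{PSS}_{\mathrm{LF}} - \mathrm{PSS}_{\mathrm{SF}},$$

which is positive when the PSS shifts in the direction of the adapted lag.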

Fig. 2 The proportions of visual-first responses (V-first) for each exposure lag (−100 ms sound-first, +100 ms light-first) for each combination of location of the exposure sound (central, lateral) and location of the test sound (central, (more ...)
Table 1 Mean points of subjective simultaneity (PSS) in ms, with mean just noticeable differences (JND) in parentheses

An overall 2 × 2 × 2 ANOVA was run on the JNDs, with exposure lag and location of the exposure sound as within-subjects factors and location of the test sound as a between-subjects factor. None of the effects was significant (all P > 0.08), except for a second-order interaction between exposure lag, exposure location, and test location, F(1,28) = 4.6, P = 0.041. Inspection of Table 1 shows that the JNDs (38.7 ms on average) differed only slightly and unsystematically across conditions.

The same ANOVA on the PSSs showed only a significant effect of exposure lag, F(1,28) = 23.0, P < 0.001, demonstrating, as predicted, that the exposure phase shifted the PSS such that there were more visual-first responses after sound-first exposure than after light-first exposure (i.e. the TRE). The average TRE was 12.9 ms, or 6.5% of the exposure lag. The overall size of this effect corresponds well with previous reports (Fujisaki et al. obtained an average TRE of 12.5%; Vroomen et al. an average TRE of 6.7%). There were, furthermore, no main effects of the location of the exposure and test sounds, and the crucial interaction between the location of the exposure and test sounds was non-significant (all F < 1). Temporal recalibration thus manifested itself regardless of whether the exposure sounds came from a central or lateral location, and regardless of whether the location of the exposure and test sounds was changed.
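The reported percentage follows if the shift is expressed relative to the 200 ms difference between the two exposure lags (−100 vs. +100 ms), an assumption on our part that is consistent with the numbers:

$$\frac{12.9\ \text{ms}}{200\ \text{ms}} \approx 6.5\%.$$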

Discussion

The goal of the present study was to address whether spatially co-located asynchronous AV stimulus pairs induce temporal recalibration as much as spatially dislocated pairs do, and whether spatial correspondence between the exposure and test sounds affects the size of this effect. In all cases there were clear temporal recalibration effects: subjective simultaneity was shifted in the direction of the adapted audiovisual lag. The shift was equally big for spatially separated and spatially co-located exposure stimuli. Apparently, spatial separation between the sound and light did not hinder temporal realignment. Stimulus generalization across space was also complete, as the shift in temporal alignment was equally big when the exposure and test sounds came from the same or different positions. The results therefore support the notion that spatial alignment between the senses is unimportant for AV pairing in the temporal domain. They are also in line with previous reports on temporal ventriloquism (Keetels et al. 2007; Vroomen and Keetels 2006) showing that spatial separation does not affect the capturing effect of a light by a sound. Taken together, these findings provide strong evidence that spatial co-occurrence is, even at early perceptual stages, not a necessary constraint for intersensory pairing.

One might object, though, that spatial ventriloquism diminished the potential effects of spatial discordance. It is well known that the apparent location of a sound can be shifted towards a visual stimulus presented at approximately the same time (Howard and Templeton 1966; Radeau and Bertelson 1978; Welch 1978; Bertelson and Radeau 1981; Bertelson 1994, 1999; Radeau 1994). Could it be, then, that the AV spatial discordance in our set-up was diminished, or even rendered unnoticeable, by spatial ventriloquism? If so, one would not expect an effect of spatial separation on temporal recalibration. This possibility seems highly unlikely, though, because spatial ventriloquism is known to decline dramatically once spatial separation exceeds approximately 15 degrees (Slutsky and Recanzone 2001; Godfroy et al. 2003). Given that we maximized the spatial separation between the sound and light (i.e., 90 degrees of azimuth), and that informal testing confirmed that the spatial separation was clearly noticeable, it seems safe to assume that spatial ventriloquism did not diminish the effect of spatial discordance.

One might also ask whether the visual task used during the exposure phase (i.e., detecting the offset of the visual fixation light) resulted in an attentional shift towards the visual modality. According to the “law of prior entry” (Titchener 1908), attending to one sensory modality speeds up the perception of stimuli in that modality, resulting in a change in the PSS (see also Shore et al. 2001, 2005; Spence et al. 2001; Schneider and Bavelier 2003; Zampini et al. 2005b). Our visual task might thus have shifted the PSS towards more “visual-first” responses. However, any such shift should be uniform across conditions, and because temporal recalibration is expressed as a difference in the PSS between exposure lags, the possible contribution of attention is subtracted out.
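To spell out the subtraction: if attention to vision adds some constant shift a (an illustrative parameter) to the PSS in every condition, then

$$\mathrm{TRE} = (\mathrm{PSS}_{\mathrm{LF}} + a) - (\mathrm{PSS}_{\mathrm{SF}} + a) = \mathrm{PSS}_{\mathrm{LF}} - \mathrm{PSS}_{\mathrm{SF}},$$

so the attentional shift cancels from the measure.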

A remarkable aspect of the data concerns the JNDs: previous studies have demonstrated that AV temporal order judgments become more sensitive (i.e. smaller JNDs) when the sound and light of the test stimuli are spatially separated (see also Bertelson and Aschersleben 2003; Spence et al. 2003; Zampini et al. 2003a, b, 2005a; Keetels and Vroomen 2005). Here, there was a small trend in this direction (average JND of 39.1 vs. 38.2 ms for spatially co-located vs. separated test stimuli, respectively), but the effect was non-significant. We might have picked up this difference had the location of the test sound been measured as a within-subjects factor; for the current purpose, though, this was considered impractical because it would have doubled individual testing time. Although we did not observe an effect of AV spatial separation on the JNDs, the data do bear on the interpretation of this effect. At least two explanations have been proposed for the improved temporal sensitivity when the locations of the test sound and light differ. One is that co-located stimuli are integrated more strongly, with the consequence that their temporal discordance is fused; the other is that spatial separation provides extra spatial cues that help TOJ performance (Spence et al. 2003). Given that our results show that intersensory pairing occurs independently of a spatial mismatch (see also Vroomen and Keetels 2006; Keetels et al. 2007), it seems more likely that the previously observed effects of spatial separation on temporal sensitivity were induced by the availability of redundant spatial cues rather than by fusion per se.

To conclude, our results provide strong evidence for the claim that commonality in space between a sound and light is not relevant for AV pairing in the temporal domain. This may, at first sight, seem surprising because, after all, most natural multisensory events are spatially and temporally aligned. However, a critical assumption underlying the idea of spatial correspondence as a constraint on cross-modal pairing is that space serves the same function in vision and audition. This notion, though, is arguable: it has been proposed that the role of space in hearing is to steer vision (Heffner and Heffner 1992), whereas in vision space is an indispensable attribute (Kubovy and Van Valkenburg 2001). If one accepts that auditory spatial perception evolved to steer vision, but not to decide whether sound and light belong together, there is no reason why cross-modal interactions would require spatial co-localization. Our results therefore also have important implications for designing multimodal devices and virtual reality environments, as they show that the brain can ignore cross-modal discordance in space.

References
  • Bedford FL (1989) Constraints on learning new mappings between perceptual dimensions. J Exp Psychol Hum Percept Perform 15:232–248.
  • Bertelson P (1994) The cognitive architecture behind auditory–visual interaction in scene analysis and speech identification. Curr Psychol Cogn 13:69–75.
  • Bertelson P (1999) Ventriloquism: a case of crossmodal perceptual grouping. In: Aschersleben G, Bachmann T, Müsseler J (eds) Cognitive contributions to the perception of spatial and temporal events. Elsevier, North-Holland, pp 347–363.
  • Bertelson P, Radeau M (1981) Cross-modal bias and perceptual fusion with auditory–visual spatial discordance. Percept Psychophys 29:578–584.
  • Bertelson P, Aschersleben G (2003) Temporal ventriloquism: crossmodal interaction on the time dimension. 1. Evidence from auditory–visual temporal order judgment. Int J Psychophysiol 50:147–155.
  • Bertelson P, de Gelder B (2004) The psychology of multisensory perception. In: Spence C, Driver J (eds) Crossmodal space and crossmodal attention. Oxford University Press, Oxford, pp 141–177.
  • Bregman AS (1990) Auditory scene analysis. MIT Press, Cambridge, MA.
  • Fendrich R, Corballis PM (2001) The temporal cross-capture of audition and vision. Percept Psychophys 63:719–725.
  • Fujisaki W, Shimojo S, Kashino M, Nishida S (2004) Recalibration of audiovisual simultaneity. Nat Neurosci 7:773–778.
  • Godfroy M, Roumes C, Dauchy P (2003) Spatial variations of visual–auditory fusion areas. Perception 32:1233–1245.
  • Heffner RS, Heffner HE (1992) Visual factors in sound localization in mammals. J Comp Neurol 317:219–232.
  • Howard IP, Templeton WB (1966) Human spatial orientation. Wiley, Oxford.
  • Keetels M, Vroomen J (2005) The role of spatial disparity and hemifields in audio–visual temporal order judgements. Exp Brain Res 167:635–640.
  • Keetels M, Stekelenburg JJ, Vroomen J (2007) Auditory grouping occurs prior to intersensory pairing: evidence from temporal ventriloquism. Exp Brain Res (in press).
  • Kubovy M, Van Valkenburg D (2001) Auditory and visual objects. Cognition 80:97–126.
  • Morein-Zamir S, Soto-Faraco S, Kingstone A (2003) Auditory capture of vision: examining temporal ventriloquism. Cogn Brain Res 17:154–163.
  • Murray MM, Michel CM, Grave de Peralta R, Ortigue S, Brunet D, Gonzalez Andino S, Schnider A (2004) Rapid discrimination of visual and multisensory memories revealed by electrical neuroimaging. Neuroimage 21:125–135.
  • Navarra J, Soto-Faraco S, Spence C (2006) Adaptation to audiotactile asynchrony. Neurosci Lett 413:72–76.
  • Radeau M (1994) Auditory–visual spatial interaction and modularity. Curr Psychol Cogn 13:3–51.
  • Radeau M, Bertelson P (1978) Cognitive factors and adaptation to auditory–visual discordance. Percept Psychophys 23:341–343.
  • Scheier CR, Nijhawan R, Shimojo S (1999) Sound alters visual temporal resolution. Invest Ophthalmol Vis Sci 40:4169.
  • Schneider KA, Bavelier D (2003) Components of visual prior entry. Cogn Psychol 47:333–366.
  • Sekuler R, Sekuler AB, Lau R (1997) Sound alters visual motion perception. Nature 385:308.
  • Shore DI, Spence C, Klein RM (2001) Visual prior entry. Psychol Sci 12:205–212.
  • Shore DI, Spence C, Klein RM (2005) Prior entry. In: Itti L, Rees G, Tsotsos J (eds) Neurobiology of attention. Elsevier, North Holland, pp 89–95.
  • Slutsky DA, Recanzone GH (2001) Temporal and spatial dependency of the ventriloquism effect. Neuroreport 12:7–10.
  • Spence C, Shore DI, Klein RM (2001) Multisensory prior entry. J Exp Psychol Gen 130:799–832.
  • Spence C, Baddeley R, Zampini M, James R, Shore DI (2003) Multisensory temporal order judgments: when two locations are better than one. Percept Psychophys 65:318–328.
  • Stein BE, Meredith MA (1993) The merging of the senses. MIT Press, Cambridge, MA.
  • Stekelenburg JJ, Vroomen J (2005) An event-related potential investigation of the time-course of temporal ventriloquism. Neuroreport 16:641–644.
  • Teder-Salejarvi WA, Di Russo F, McDonald JJ, Hillyard SA (2005) Effects of spatial congruity on audio–visual multimodal integration. J Cogn Neurosci 17:1396–1409.
  • Titchener EB (1908) Lectures on the elementary psychology of feeling and attention. Macmillan, New York.
  • Vroomen J, de Gelder B (2004) Temporal ventriloquism: sound modulates the flash-lag effect. J Exp Psychol Hum Percept Perform 30:513–518.
  • Vroomen J, Keetels M (2006) The spatial constraint in intersensory pairing: no role in temporal ventriloquism. J Exp Psychol Hum Percept Perform 32:1063–1071.
  • Vroomen J, Keetels M, de Gelder B, Bertelson P (2004) Recalibration of temporal order perception by exposure to audio–visual asynchrony. Cogn Brain Res 22:32–35.
  • Welch RB (1978) Perceptual modification: adapting to altered sensory environments. Academic Press, New York.
  • Welch RB (1999) Meaning, attention, and the “unity assumption” in the intersensory bias of spatial and temporal perceptions. In: Aschersleben G, Bachmann T, Müsseler J (eds) Cognitive contributions to the perception of spatial and temporal events. Elsevier, Amsterdam, pp 371–387.
  • Welch RB, Warren DH (1980) Immediate perceptual response to intersensory discrepancy. Psychol Bull 88:638–667.
  • Welch RB, DuttonHurt LD, Warren DH (1986) Contributions of audition and vision to temporal rate perception. Percept Psychophys 39:294–300.
  • Zampini M, Shore DI, Spence C (2003a) Audiovisual temporal order judgments. Exp Brain Res 152:198–210.
  • Zampini M, Shore DI, Spence C (2003b) Multisensory temporal order judgments: the role of hemispheric redundancy. Int J Psychophysiol 50:165–180.
  • Zampini M, Guest S, Shore DI, Spence C (2005a) Audio–visual simultaneity judgments. Percept Psychophys 67:531–544.
  • Zampini M, Shore DI, Spence C (2005b) Audiovisual prior entry. Neurosci Lett 381:217–222.