David W. Bressler, Francesca C. Fortenbaugh, Lynn C. Robertson, and Michael A. Silver. 2013. “Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner.” Vision Research, 85, Pp. 104-112.

Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas.

Francesca C. Fortenbaugh, Shradha Sanghvi, Michael A. Silver, and Lynn C. Robertson. 2012. “Exploring the edges of visual space: The influence of visual boundaries on peripheral localization.” Journal of Vision, 12, 2, Pp. 19, 1-18.

Previous studies of localization of stationary targets in the peripheral visual field have found either underestimations (foveal biases) or overestimations (peripheral biases) of target eccentricity. In the present study, we help resolve this inconsistency by demonstrating the influence of visual boundaries on the type of localization bias. Using a Goldmann perimeter (an illuminated half-dome), we presented targets at different eccentricities across the visual field and asked participants to judge the target locations. In Experiments 1 and 2, participants reported target locations relative to their perceived visual field extent using either a manual or verbal response, with both response types producing a peripheral bias. This peripheral localization bias was a non-linear scaling of perceived location when the visual field was not bounded by external borders induced by facial features (i.e., the nose and brow), but location scaling was linear when visual boundaries were present. Experiment 3 added an external border (an aperture edge placed in the Goldmann perimeter) that resulted in a foveal bias and linear scaling. Our results show that boundaries that define a spatial region within the visual field determine both the direction of bias in localization errors for stationary objects and the scaling function of perceived location across visual space.

Francesca C. Fortenbaugh, William Prinzmetal, and Lynn C. Robertson. 2011. “Rapid changes in visual-spatial attention distort object shape.” Psychonomic Bulletin & Review, 18, Pp. 287-294.

Shifts of attention due to rapid cue onsets have been shown to distort the perceived location of objects, but are there also systematic distortions in the perceived shapes of the objects themselves from such shifts? The present study demonstrates that there are. In three experiments, oval contours were presented that varied in width and height. Two brief, bright white dots were presented as cues and were positioned horizontally or vertically either inside or outside the oval contour. Observers had to judge whether the oval was taller than wide. The results show that the perceived shape of an oval was changed by visual cues such that the oval contours were repelled by the cues (Exp. 1). This effect only occurred when the cues preceded the ovals, providing sufficient time between the presentations to attract involuntary attention (Exp. 2). Moreover, an explanation based on figural aftereffects was ruled out (Exp. 3).

Alexandra List, Ayelet N. Landau, Joseph L. Brooks, Anastasia V. Flevaris, Francesca C. Fortenbaugh, Michael Esterman, Thomas M. Van Vleet, Alice R. Albrecht, Bryan D. Alvarez, Lynn C. Robertson, and Krista Schendel. 2011. “Shifting attention in viewer- and object-based reference frames after unilateral brain injury.” Neuropsychologia, 49, Pp. 2090-2096.

The aims of the present study were to investigate the respective roles that object- and viewer-based reference frames play in reorienting visual attention, and to assess their influence after unilateral brain injury. To do so, we studied 16 right hemisphere injured (RHI) and 13 left hemisphere injured (LHI) patients. We used a cueing design that manipulates the location of cues and targets relative to a display comprised of two rectangles (i.e., objects). Unlike previous studies with patients, we presented all cues at midline rather than in the left or right visual fields. Thus, in the critical conditions in which targets were presented laterally, reorienting of attention was always from a midline cue. Performance was measured for lateralized target detection as a function of viewer-based (contra- and ipsilesional sides) and object-based (requiring reorienting within or between objects) reference frames. As expected, contralesional detection was slower than ipsilesional detection for the patients. More importantly, objects influenced target detection differently in the contralesional and ipsilesional fields. Contralesionally, reorienting to a target within the cued object took longer than reorienting to a target in the same location but in the uncued object. This finding is consistent with object-based neglect. Ipsilesionally, the means were in the opposite direction. Furthermore, no significant difference was found in object-based influences between the patient groups (RHI vs. LHI). These findings are discussed in the context of reference frames used in reorienting attention for target detection.

Francesca C. Fortenbaugh and Lynn C. Robertson. 2011. “When here becomes there: Attentional distribution modulates foveal bias in peripheral localization.” Attention, Perception & Psychophysics, 73, Pp. 809-929.

Much research concerning attention has focused on changes in the perceptual qualities of objects while attentional states were varied. Here, we address a complementary question—namely, how perceived location can be altered by the distribution of sustained attention over the visual field. We also present a new way to assess the effects of distributing spatial attention across the visual field. We measured magnitude judgments relative to an aperture edge to test perceived location across a large range of eccentricities (30°), and manipulated spatial uncertainty in target locations to examine perceived location under three different distributions of spatial attention. Across three experiments, the results showed that changing the distribution of sustained attention significantly alters known foveal biases in peripheral localization.

Anna A. Kosovicheva, Francesca C. Fortenbaugh, and Lynn C. Robertson. 2010. “Where does attention go when it moves? Spatial properties and locus of attentional repulsion effect.” Journal of Vision, 10, 12, Pp. 33, 1-13.

Reliable effects of spatial attention on perceptual measures have been well documented, yet little is known about how attention affects perception of space per se. The present study examined the effects of involuntary shifts of spatial attention on perceived location using a paradigm developed by S. Suzuki and P. Cavanagh (1997) that produces an attentional repulsion effect (ARE). The ARE refers to the illusory displacement of two vernier lines away from briefly presented cues. In Experiment 1, we show that the magnitude of the ARE depends on cue–target distance, indicating that the effects of attention on perceived location are not uniform across the visual field. Experiments 2 and 3 tested whether repulsion occurs away from cue center of mass or from cue contour. Perceived repulsion always occurred away from the cues’ center of mass, regardless of the arrangement of the cue contours relative to the vernier lines. Moreover, the magnitude of the ARE varied with shifts in the position of the cues’ center of mass. These experiments suggest that the onset of the cue produces a shift of attention to its center of mass rather than to the salient luminance contours that define it, and that this mechanism underlies the ARE.

Francesca C. Fortenbaugh, John C. Hicks, and Kathleen A. Turano. 2008. “The effect of peripheral visual field loss on representations of space: Evidence for distortion and adaptation.” Investigative Ophthalmology & Visual Science, 49, 6, Pp. 2765-2772.

PURPOSE. To determine whether peripheral field loss (PFL) systematically distorts spatial representations and to determine whether persons with actual PFL show adaptation effects.

METHODS. Nine participants with PFL from retinitis pigmentosa (RP) learned the locations of statues in a virtual environment by walking a predetermined route. After this, the statues were removed and the participants were to walk to where they thought each statue had been located. Placement errors, defined as the differences between the actual and estimated locations, were calculated and decomposed into distance errors and angular offsets.

RESULTS. Participants showed distortions in remembered statue locations, with mean placement errors increasing with decreasing field of view (FOV) size. A correlation was found between FOV size and mean distance error but not mean angular offsets. Compared with eye movements of normal-vision participants with simulated PFL from a previous study, the eye movements of the RP participants were shorter in duration, and smaller saccadic amplitudes were observed only for the RP participants with the smallest FOV sizes. The RP participants also made more fixations to the statues than the simulated PFL participants. Results from a real-world replication of the task showed no behavioral differences between simulated and naturally occurring PFL.

CONCLUSIONS. PFL is associated with distortions in spatial representations that increase with decreasing FOV. The differences in eye movement and gaze patterns suggest possible adaptive changes on the part of the RP participants. However, the use of different sampling strategies did not aid the performance of the RP participants as FOV size decreased.

Francesca C. Fortenbaugh, Sidhartha Chaudhury, John C. Hicks, Lei Hao, and Kathleen A. Turano. 2007. “Gender Differences in Cue Preference During Path Integration in Virtual Environments.” ACM Transactions on Applied Perception, 4, 1, Pp. 6:1-18.

Three studies were conducted to examine whether men and women differ in how they recalibrate their path-integration systems when walking without vision in virtual environments. Distance cues provided by a scene and a tone, which ended each trial, were placed in conflict. Participants briefly viewed a room with a target, which was offset from their midlines and hung inside a doorframe on the far wall. After viewing, participants walked to the target’s position until a tone sounded, ending the trial. In two experiments the doorframe was placed at 6 m and the tone sounded at 4 or 8 m. The rooms had minimal or photorealistic texturing applied. The third experiment used photorealistic texturing, but here the tone sounded at 6 m and the doorframe was presented at 4 or 8 m. Path angles were recorded to estimate perceived distance to the target. In all conditions tested, the women failed to scale their path angles. The men, however, scaled their path-angles with the auditory cue in the minimal-texture condition, but with the visual cue in the photorealistic-texture conditions. These results suggest that gender differences exist in the way that humans recalibrate their path-integration systems when walking without vision in virtual environments.

Francesca C. Fortenbaugh, John C. Hicks, Lei Hao, and Kathleen A. Turano. 2007. “Losing sight of the bigger picture: Peripheral field loss compresses representations of space.” Vision Research, 47, Pp. 2506-2520.

Three experiments examine how the peripheral visual field (PVF) mediates the development of spatial representations. In Experiment 1, participants learned and were tested on statue locations in a virtual environment while their field of view (FOV) was restricted to 40°, 20°, 10°, or 0° in diameter. As FOV decreased, overall placement errors, estimated distances, and angular offsets increased. Experiment 2 showed large compressions but no effect of FOV for perceptual estimates of statue locations. Experiment 3 showed an association between FOV size and the influence of proprioception. These results suggest that the PVF provides important global spatial information used in the development of spatial representations.

Francesca C. Fortenbaugh, John C. Hicks, Lei Hao, and Kathleen A. Turano. 2007. “A technique for simulating visual field loss in virtual environments to study human navigation.” Behavior Research Methods, 39, 3, Pp. 552-560.

The following paper describes a new technique for simulating peripheral field losses in virtual environments to study the roles of the central and peripheral visual fields during navigation. Based on Geisler and Perry’s (2002) gaze-contingent multiresolution display concept, the technique extends their methodology to work with three-dimensional images that are both transformed and rendered in real time by a computer graphics system. In order to assess the usefulness of this method for studying visual field losses, an experiment was run in which seven participants were required to walk to a target tree in a virtual forest as quickly and efficiently as possible while artificial head and eye-based delays were systematically introduced. Bilinear fits were applied to the mean trial times in order to assess at what delay lengths breaks in performance could be observed. Results suggest that breaks occur beyond the current delays inherent in the system. Increases in trial times across all delays tested were also observed when simulated peripheral field losses were applied compared to full FOV conditions. Possible applications and limitations of the system are discussed. The source code needed to program visual field losses can be found at

Francesca C. Fortenbaugh, John C. Hicks, Lei Hao, and Kathleen A. Turano. 2006. “High-speed navigators: Using more than what meets the eye.” Journal of Vision, 6, Pp. 565-579.

This study employed a novel method to dissociate the use of external visual information and internal spatial representations in human navigation. Using a goal-directed walking task and gaze-contingent displays, 14 participants with normal vision navigated within an immersive virtual forest during which each participant’s field of view (FOV) was restricted to 10, 20, or 40 deg in diameter. Participants were classified into two groups, good and poor navigators, based on a cluster analysis of their individual mean latencies, walk times, and path efficiencies in the 10 deg condition. Changes in performance measures across the three FOVs were calculated for the two groups. Significant interactions were found, with the overall performance of the poor navigators decreasing at a faster rate than the performance of the good navigators. Perceptual spans were also calculated for the two groups, and it was determined that the good navigators were able to complete the same task as effectively as the poor navigators with a smaller FOV. Collectively, these results support recent theories stating that good navigators rely on internal spatial representations to a greater extent than poor navigators do.