The receptive fields of early visual neurons are anchored in retinotopic coordinates (Hubel and Wiesel, 1962). Eye movements shift these receptive fields and therefore require that different populations of neurons encode an object’s constituent features across saccades. Whether feature groupings are preserved across successive fixations or processing starts anew with each fixation has long been hotly debated (Melcher and Morrone, 2003; Melcher, 2005; Knapen et al., 2009; Cavanagh et al., 2010a; 2010b; Melcher, 2010; Morris et al., 2010). Here we show that feature integration initially occurs within retinotopic coordinates, but is then conserved within a spatiotopic coordinate frame independent of where the features fall on the retinas. With human observers, we first found that the relative timing of visual features plays a critical role in determining the spatial area over which features are grouped. We exploited this temporal dependence of feature integration to show that features co-occurring within 45 ms remain grouped across eye movements. Our results thus challenge purely feed-forward models of feature integration (Pelli, 2008; Freeman and Simoncelli, 2011), which begin de novo after every eye movement, and implicate the involvement of brain areas beyond early visual cortex. The strong temporal dependence we quantify, and its link with trans-saccadic object perception, instead suggest that feature integration depends, at least in part, on feedback from higher brain areas (Mumford, 1992; Rao and Ballard, 1999; Di Lollo et al., 2000; Moore and Armstrong, 2003; Stanford et al., 2010).