We analyze the data-sharing practices of astronomers over the past fifteen years. An analysis of URL links embedded in papers published by the American Astronomical Society reveals that the total number of links included in the literature rose dramatically from 1997 until 2005, when it leveled off at around 1500 per year. The analysis also shows that the availability of linked material decays with time: in 2011, 44% of links published a decade earlier, in 2001, were broken. A rough analysis of link types reveals that links to data hosted on astronomers' personal websites become unreachable much faster than links to datasets on curated institutional sites. To further gauge astronomers' current data-sharing practices and preferences, we performed in-depth interviews with 12 scientists and online surveys with 173 scientists, all at a large astrophysical research institute in the United States: the Harvard-Smithsonian Center for Astrophysics in Cambridge, MA. Both the in-depth interviews and the online survey indicate that, in principle, astronomers at this institution have no philosophical objection to data sharing. Key reasons that more data are not presently shared, or not shared more efficiently, in astronomy include: the difficulty of sharing large datasets; over-reliance on non-robust, non-reproducible mechanisms for sharing data (e.g., email); unfamiliarity with options that make data sharing easier (faster) and/or more robust; and, lastly, a sense that other researchers would not want the data to be shared. We conclude with a short discussion of a new effort to implement an easy-to-use, robust system for data sharing in astronomy, at theastrodata.org, and we analyze the uptake of that system to date.
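The survival statistic quoted above (44% of 2001 links broken by 2011) can be illustrated with a short sketch. This is a hypothetical illustration, not the paper's actual pipeline: it assumes link-liveness checks have already been run and recorded as `(publication_year, is_alive)` pairs.

```python
from collections import defaultdict

def link_survival_by_year(link_records):
    """Fraction of still-reachable links, grouped by publication year.

    link_records: iterable of (publication_year, is_alive) pairs, where
    is_alive records the outcome of an earlier HTTP availability check.
    """
    alive = defaultdict(int)
    total = defaultdict(int)
    for year, is_alive in link_records:
        total[year] += 1
        alive[year] += bool(is_alive)
    return {year: alive[year] / total[year] for year in total}

# Toy data mirroring the headline figure: of 100 links published in
# 2001, 44 were found broken when re-checked a decade later.
records = [(2001, True)] * 56 + [(2001, False)] * 44
print(link_survival_by_year(records))  # {2001: 0.56}
```

Grouping by publication year rather than by check date is what lets survival fractions from different cohorts be compared on a common age axis.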
We present Brut, an algorithm to identify bubbles in infrared images of the Galactic midplane. Brut is based on the Random Forest algorithm, and uses bubbles identified by more than 35,000 citizen scientists from the Milky Way Project to discover the identifying characteristics of bubbles in images from the Spitzer Space Telescope. We demonstrate that Brut's ability to identify bubbles is comparable to that of expert astronomers. We use Brut to re-assess the bubbles in the Milky Way Project catalog, and find that 10%-30% of the objects in this catalog are non-bubble interlopers. Relative to these interlopers, high-reliability bubbles are more confined to the midplane, and display a stronger excess of young stellar objects along and within bubble rims. Furthermore, Brut is able to discover bubbles missed by previous searches, particularly bubbles near bright sources, which have low contrast relative to their surroundings. Brut demonstrates the synergies that exist among citizen scientists, professional scientists, and machine learning techniques. In cases where "untrained" citizens can identify patterns that machines cannot detect without training, machine learning algorithms like Brut can use the output of citizen science projects as input training sets, offering tremendous opportunities to speed the pace of scientific discovery. A hybrid model of machine learning combined with crowdsourced training data from citizen scientists can not only classify large quantities of data, but also address the weaknesses of each approach deployed alone.
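The core idea, bagging an ensemble of decision trees over crowd-sourced labels, can be sketched with a toy pure-Python "random forest" of depth-1 trees (stumps). This is a minimal illustration of the technique only: the feature names, data, and ensemble size below are invented for the example and do not reflect Brut's actual features or implementation.

```python
import random
from collections import Counter

def train_stump(X, y):
    """Fit the best single-feature threshold split (a depth-1 tree)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue  # degenerate split
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            errors = sum(l != lmaj for l in left) + sum(r != rmaj for r in right)
            if best is None or errors < best[0]:
                best = (errors, f, t, lmaj, rmaj)
    if best is None:  # all rows identical: fall back to majority label
        maj = Counter(y).most_common(1)[0][0]
        return lambda row: maj
    _, f, t, lmaj, rmaj = best
    return lambda row: lmaj if row[f] <= t else rmaj

def random_forest(X, y, n_trees=25, seed=0):
    """Ensemble of stumps, each fit on a bootstrap resample (bagging)."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        trees.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(row):
        votes = Counter(tree(row) for tree in trees)
        return votes.most_common(1)[0][0]  # majority vote
    return predict

# Toy stand-in for image features labeled by citizen scientists;
# feature = (ring_contrast, elongation), label 1 = "bubble".
X = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.15), (0.2, 0.8), (0.1, 0.9), (0.3, 0.7)]
y = [1, 1, 1, 0, 0, 0]
clf = random_forest(X, y)
print(clf((0.85, 0.1)), clf((0.15, 0.85)))
```

The bootstrap resampling plus majority vote is what makes the ensemble robust to a noisy minority of labels, which is the property that lets imperfect crowd-sourced classifications serve as a training set.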
In Cosmos in the Classroom, 125th Annual Meeting. San Jose, CA: Astronomical Society of the Pacific; 2014.
We report preliminary results from an NSF-funded project to build, test, and research the impact of a WorldWide Telescope Visualization Lab (WWT VizLab), meant to offer learners a deeper physical understanding of the causes of the Moon’s phases. The Moon Phases VizLab is designed to promote accurate visualization of the complex, 3-dimensional Earth-Sun-Moon relationships required to understand the Moon’s phases, while also providing opportunities for middle school students to practice critical science skills, like using models, making predictions and observations, and linking them in evidence-based explanations. In the VizLab, students use both computer-based models and lamp + ball physical models.
We present findings from the first two phases of the study: one in which we compared learning gains from the WWT VizLab with a traditional 2-dimensional Moon phases simulator, and another in which we experimented with different ways of blending physical and virtual models in the classroom.
Presented July 20-24, 2013.
In American Astronomical Society, AAS Meeting #223. Washington, DC: American Astronomical Society; 2014.
We report results from an NSF-funded project to build, test, and research the impact of a WorldWide Telescope Visualization Lab (WWT VizLab), meant to offer learners a deeper physical understanding of the causes of the Moon’s phases and eclipses. The Moon Phases VizLab is designed to promote accurate visualization of the complex, 3-dimensional Earth-Sun-Moon relationships required to understand the Moon’s phases, while also providing opportunities for middle school students to practice critical science skills, like using models, making predictions and observations, and linking them in evidence-based explanations. In the Moon Phases VizLab, students use both computer-based models and lamp + ball physical models. The VizLab emphasizes the use of different scales in models, why some models are to scale and some are not, and how choices we make in a model can sometimes inadvertently lead to misconceptions. For example, textbook images almost always depict the Earth and Moon as being vastly too close together, and this contributes to the common misconception that the Moon’s phases are caused by the Earth’s shadow. We tested the Moon Phases VizLab in two separate phases. In Phase 1 (fall 2012), we compared learning gains from the WorldWide Telescope (WWT) VizLab with a traditional 2-dimensional Moon phases simulator. Students in this study who used WWT had overall higher learning gains than students who used the traditional 2D simulator, and demonstrated greater enthusiasm for using the virtual model than students who used the 2D simulator. In Phase 2 (spring 2013), all students in the study used WWT for the virtual model, but we experimented with different sequencing of physical and virtual models in the classroom.
We found that students who began the unit with higher prior knowledge of Moon phases (based on the pre-unit assessment) had overall higher learning gains when they used the virtual model first, followed by the physical model, while students who had lower prior knowledge benefited from using the physical model first, then the virtual model.