On September 14, 2015, gravitational waves were directly observed for the first time by both detectors of the Laser Interferometer Gravitational-wave Observatory (LIGO), confirming a prediction Einstein made in his general theory of relativity. Scientists are now searching for the astrophysical sources of such events.
While LIGO can pick out the general direction of the source of gravitational waves, it can’t identify the exact location. So, LIGO scientists coordinated their measurements with observations made by observatories like the Dark Energy Camera on the Blanco Telescope in Chile to find out if they could identify the waves’ source. The camera can see light from up to eight billion light years away, and captures more than 100,000 galaxies in each digital image.
Scientists at Fermilab and other institutions in the Dark Energy Survey use the camera to understand dark energy—the mysterious force scientists believe is accelerating the expansion of the universe. A subset of members known as the Dark Energy Survey-Gravitational Wave (DES-GW) group is using the camera and the Open Science Grid (OSG) to build on LIGO’s groundbreaking findings.
Image courtesy Dark Energy Survey. Image taken by the DES Collaboration with the DECam camera mounted on the Blanco Telescope at the Cerro Tololo Inter-American Observatory in Chile
“Our focus primarily is the search for dark energy,” said Marcelle Soares-Santos, associate scientist at the U.S. Department of Energy’s Fermilab. “We were motivated to help LIGO because it’s the first time gravitational waves have been detected. Since we have experience detecting things through electromagnetic emissions, we coordinated with LIGO to find a source that we would find useful in our own research. Unfortunately this time we did not see anything, but we are now much better prepared for when LIGO becomes active again later this year.”
Photo courtesy Marcelle Soares-Santos
The area of sky the DES-GW group observes is very large, which means processing a lot of images very quickly. That’s where the OSG comes in. Without it, Soares-Santos says, they couldn’t keep up. “For this event, we had something like 4-5,000 jobs. We must break every image down into smaller parts and process them in parallel on the OSG. The overall number of jobs is not exceptionally high, but it is critical to get our results fast—within 24 hours.”
Candidate sources are then followed up with a spectrograph—which is expensive—so it’s important to narrow the list down to only a few candidates. “At first, our turnaround time was not very fast, but thanks to our close partnership with the computing side here at Fermilab, now it is. We have great confidence that when LIGO observations start again in early August, we will be ready and hopefully see something. A big strength is we have experts on the computing side and also on the astrophysics side all in the same group.”
Kenneth Herner, an application developer and systems analyst at Fermilab, is one of those key experts on the computing side. He makes sure the DES group has as many resources as they need and devotes part of his time to OSG.
“Opportunistic OSG resources really help with the computing needs and the time crunch,” said Herner. “When we submit jobs, we get the first resources that meet the requirements no matter where they may be. We use the CernVM File System to pull in a code repository over HTTP to a local cache on a worker node. It only pulls down what it needs as it needs it. We don’t have to configure each OSG site—it just works. All OSG sites then look the same and all the site has to do is mount a repository.”
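The CernVM File System (CVMFS) setup Herner describes amounts, on the client side, to a small amount of configuration plus a mounted repository. A rough sketch, in which the repository name and proxy address are assumptions for illustration, not necessarily the group's actual settings:

```shell
# /etc/cvmfs/default.local -- illustrative client configuration.
# Repository name and squid proxy are assumed for this sketch.
CVMFS_REPOSITORIES=des.opensciencegrid.org
CVMFS_HTTP_PROXY="http://squid.example.org:3128"

# Verify the mount. Files are fetched over HTTP into a local cache
# only as they are read, which is why every site "looks the same"
# to the jobs: the site only has to mount the repository.
cvmfs_config probe des.opensciencegrid.org
ls /cvmfs/des.opensciencegrid.org
```

The key property Herner points to is the on-demand cache: a worker node never downloads the whole code repository, only the files a job actually opens.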
Photo courtesy Katherine Lato
In preparation for the LIGO partnership, Herner’s group built a code pipeline and made sure everything would work. Then the LIGO alert came on September 14. “We had to wait on the telescope—and on top of that an earthquake in Chile,” said Herner. “We worked our plan, checked our code, transferred images from Chile up to the US, and submitted our jobs.”
Almost all the jobs ran at Fermilab, but Herner says it just worked out that way this time; they could have gone anywhere on the OSG. “This was our shakedown cruise,” said Herner. “Future events will happen faster because now we are prepared. We also want to do a dry run entirely on OSG. Imagine if Fermilab computing was down and we got a trigger that day. We want to be sure we could still do it. The first event used about 15,000 CPU hours for a full pass over all nights, but with multiple passes and preprocessing it was over 25,000 hours. Future events will hopefully be more like 15,000 hours, but we don’t yet know for sure. It could be up to 25,000—it really depends on the amount of observation time. The whole point is we need the data right away. Without OSG resources, we would have taken Fermilab computing resources away from other experiments.”
Observing the sources of these gravitational waves will tell Soares-Santos how these astrophysical systems work and give her and her colleagues deeper insight into the underlying physics. “It is quite challenging to observe these events,” said Soares-Santos. “We have to be quick to respond to see them. We have to be on the spot sooner and it is the computing that makes that possible. We couldn’t do it without the OSG because of the volume of data. We must have massive parallel computing and quick turnaround and hopefully next time we will see something exciting.”
– Greg Moore