The NOvA particle physics experiment, along with Fermilab’s Scientific Computing Division personnel and systems administrators from participating Open Science Grid institutions, recently deployed its large-scale C++ analysis code to run on demand on participating OSG sites. The resulting production campaign was the culmination of several months of work.
The NuMI Off-Axis electron-neutrino Appearance (NOvA) experiment aims to observe the appearance of electron neutrinos, a result of neutrino oscillations, in the Neutrinos at the Main Injector (NuMI) beam. The NOvA Far Detector in Ash River, MN, normally runs with two main independent output event streams: a cosmic trigger and a beam trigger, the latter containing the oscillated neutrino signal. The data is transferred back to Fermilab as soon as it becomes available, where it is catalogued and archived for permanent storage. To extract the oscillated neutrino signal from this data, it is critical to understand the cosmic ray background in great detail. That understanding comes from computing simulations, which are the focus of the experiment's computational activity on OSG.
The deployment team faced two major obstacles: deploying a consistent version of rapidly changing software to many different OSG sites, and efficiently transferring large amounts of data to those sites. To overcome the first challenge, they used the CERN Virtual Machine File System (CVMFS). CVMFS stores an experiment's entire software suite, including all of its external dependencies, on a set of distribution servers. As individual worker nodes require access to software libraries, they download the needed libraries and store them in a local cache; the system downloads only the software needed for an individual job rather than the entire suite.
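The cache-on-demand model described above can be illustrated with a minimal sketch. This is not CVMFS's actual API: the real system mounts a read-only FUSE filesystem under /cvmfs and fetches content-addressed, catalog-verified files over HTTP, and the server URL, paths, and function below are all hypothetical.

```python
import os
import shutil
import urllib.request

# Hypothetical distribution-server URL and worker-node cache directory.
DEFAULT_SERVER = "http://cvmfs-stratum.example.org/nova/sw"
DEFAULT_CACHE = "/var/cache/cvmfs-sketch"

def load_library(relative_path, server=DEFAULT_SERVER, cache=DEFAULT_CACHE):
    """Return a local path to the requested file, downloading it into
    the local cache only on first access (cache-on-demand)."""
    local = os.path.join(cache, relative_path)
    if not os.path.exists(local):                      # cache miss
        os.makedirs(os.path.dirname(local), exist_ok=True)
        with urllib.request.urlopen(f"{server}/{relative_path}") as src, \
                open(local, "wb") as dst:
            shutil.copyfileobj(src, dst)               # fetch only this file
    return local                                       # served from cache afterwards
```

The key property, as in CVMFS, is that a job pulling one library never pays for the full software suite; subsequent accesses on the same node are local.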
The second challenge, efficiently transferring data to sites, was tackled using Fermilab's dCache file-cataloguing front end to its ENSTORE mass storage system. The sequential access via metadata (SAM) data management system retrieves input files from dCache and transfers them in series to the worker nodes, which process them. Once processing completes, output files are automatically transferred back to Fermilab for cataloguing and archiving. The collaboration targeted the generation of 1,000,000 events, about three times as many as had ever been produced before. This was achieved by running 10,000 jobs, requiring almost 90,000 CPU hours and producing 2 TB of data during two weeks of operations.
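A quick back-of-the-envelope check of the campaign figures quoted above (1,000,000 events, 10,000 jobs, roughly 90,000 CPU hours, 2 TB of output) gives the per-job and per-event scale:

```python
# Breakdown of the production campaign figures quoted in the text.
total_events = 1_000_000
total_jobs = 10_000
cpu_hours = 90_000          # "almost 90,000 CPU hours"
output_tb = 2

events_per_job = total_events / total_jobs     # 100 events per job
hours_per_job = cpu_hours / total_jobs         # about 9 CPU hours per job
mb_per_event = output_tb * 1e6 / total_events  # about 2 MB of output per event

print(events_per_job, hours_per_job, mb_per_event)
```

So each of the 10,000 jobs simulated on the order of 100 events over several CPU hours, a granularity well suited to opportunistic OSG slots.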
NOvA’s Andrew Norman and Fermilab’s Gabriele Garzoglio spearheaded this effort along with collaborators at Southern Methodist University, the University of Nebraska-Lincoln, the University of Chicago, the University of California, San Diego, the University of Wisconsin-Madison, and the FermiCloud project. The lessons learned and the success of this campaign serve as a good precursor to the use of the OSG by other similar experiments, such as the Long-Baseline Neutrino Experiment (LBNE).
~ Nathan Mayer and Gavin Davies