Snowmass is a US long-term planning study by the American Physical Society’s Division of Particles and Fields. Its main focus is to develop the high-energy community’s long-term physics aspirations and communicate discovery opportunities to the broader scientific community and the US government. The study addresses the opportunities and challenges in the intensity, energy, and cosmic frontiers of the field. OSG has been instrumental in providing computing resources for the energy frontier studies, and GlideinWMS was key to harnessing opportunistic resources for simulating physics backgrounds that involve CPU-intensive exact matrix element calculations.
Snowmass perspective: Sanjay Padhi
In my role as a Snowmass technical advisor for the energy frontier studies, I co-led the design of the Combined LHC detector with members from ATLAS, CMS, and the FNAL LPC. The Combined LHC detector uses components from ATLAS and CMS sub-detectors with the best expected performance in a high-radiation environment. For the first time, we simulated large numbers of additional interactions (pile-up values of 0, 50, and 140) using parameterized (Delphes) simulations for 14, 33, and 100 TeV proton-proton collisions at the LHC. In order to evaluate the physics discovery potential from the recently observed Higgs boson and new physics searches, we needed a large Standard Model background – we expected luminosities of 300 1/fb after the long shutdown-1 (LS1) and 3000 1/fb for the high-luminosity LHC. Traditionally, this would require simulating tens or hundreds of billions of events using matrix element calculations with a parton shower.
In collaboration with Jay Wacker, Timothy Cohen, and Kiel Howe from the SLAC theory group, we developed innovative weighted-event techniques – including pre-assigned particle decay branching ratios – and incorporated the differences into the event weights. We also computed and stored (as part of the event weights) the higher-order corrections, in collaboration with John Campbell from the FNAL theory group. This led to a significant reduction of the data volume needed to perform studies at such large luminosities. Simulation outputs are being used not only by the members of the ATLAS and CMS Collaborations, but also by the theoretical and phenomenology communities involved with hadron collider studies. These outputs contain collections of objects similar to what we expect from real proton-proton collisions. We harness resources using an OSG scheduler at Indiana University, and we submit jobs to many sites in the US, as shown in the attached plot.
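The idea behind such weighted events can be sketched with a minimal example. This is an illustrative reconstruction, not the actual Snowmass code: the cross section, branching ratio, and K-factor values are hypothetical placeholders. The point is that folding the cross section, a pre-assigned branching ratio, and a higher-order correction into a single per-event weight lets a modest number of stored events represent an arbitrary integrated luminosity.

```python
def event_weight(sigma_lo_pb, branching_ratio, k_factor,
                 luminosity_ifb, n_generated):
    """Weight applied to each stored event.

    sigma_lo_pb     -- leading-order cross section in pb (hypothetical)
    branching_ratio -- pre-assigned decay branching ratio
    k_factor        -- higher-order correction (e.g. an NLO/LO ratio)
    luminosity_ifb  -- target integrated luminosity in 1/fb
    n_generated     -- number of events actually generated and stored
    """
    sigma_fb = sigma_lo_pb * 1000.0  # convert pb -> fb
    # Expected event yield at the target luminosity, shared evenly
    # across the generated sample.
    expected = sigma_fb * branching_ratio * k_factor * luminosity_ifb
    return expected / n_generated

# Example with made-up numbers: a 1 pb process with a 20% branching
# ratio and K = 1.3, studied at 3000 1/fb using one million stored events.
w = event_weight(1.0, 0.2, 1.3, 3000.0, 1_000_000)
```

Each histogram entry is then filled with weight `w` instead of 1, so the stored sample stands in for the hundreds of billions of unweighted events that a brute-force simulation would require.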
The results are stored at BNL, FNAL, and the University of Nebraska (UNL). Thanks to an added web server and Xrootd-based access from UNL, the entire high-energy physics community can use the samples (without being restricted to a given LHC virtual organization).
Providing guidance on the future direction of the energy frontier, we will present the results of these studies – involving Higgs, Top, and New Physics searches – at the Snowmass workshop in Minneapolis, July 29 – August 6, 2013. The help from OSG and FNAL LPC was crucial to the success of this program.
Sanjay Padhi is a scientist at the University of California, San Diego and a FNAL LPC Fellow. He is co-convener of the CMS Generator Physics group and is also involved with several supersymmetry and new physics searches at the LHC. He currently co-leads the technical advisory group of the Snowmass energy frontier activities.
Snowmass perspective: John Stupak
The Snowmass LPC group has been running two types of jobs. The first set of jobs runs a package named MadGraph to produce simulated proton-proton collisions (or “events”). The output from these jobs then serves as input to the second class of jobs. In this second set of jobs, packages named Bridge, Pythia, and Delphes serially process the simulated events, with the resulting output transferred to the following storage locations: the USCMS T1 at FNAL, the Holland Computing Center at UNL, and the ATLAS T1 at BNL. (We also had storage and data transfer help from the CILogon, AAA, OSG Public Storage, and GlideinWMS projects.) These outputs contain data similar to the data we get from actual proton-proton collisions: collections of electrons and other particles, along with their characteristics (energy, momentum, etc).
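The serial chaining in the second set of jobs can be sketched as follows. This is a hypothetical illustration of the workflow described above – the command and file names are placeholders, not the actual Snowmass job scripts – but it captures the key property that each package consumes the previous package's output.

```python
import subprocess

# Each stage: (command, input file, output file). The output of one
# stage is the input of the next, so the packages run serially.
STAGES = [
    ("bridge",  "events.lhe",     "decayed.lhe"),
    ("pythia",  "decayed.lhe",    "showered.hepmc"),
    ("delphes", "showered.hepmc", "output.root"),
]

def run_chain(stages, dry_run=False):
    """Run the stages serially; with dry_run=True, just build and
    return the command lines without executing anything."""
    cmds = [[cmd, infile, outfile] for cmd, infile, outfile in stages]
    if not dry_run:
        for c in cmds:
            # check=True stops the chain if any stage fails, so a
            # broken intermediate file is never fed downstream.
            subprocess.run(c, check=True)
    return cmds

cmds = run_chain(STAGES, dry_run=True)
```

The final `output.root` from a chain like this is what gets transferred to the storage sites at FNAL, UNL, and BNL.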
The first set of jobs needs instructions regarding what type of events to simulate. These instructions come in the form of a “gridpack.” Given that they are only O(10MB), Condor transfers them to the jobs. The output is also O(10MB), so Condor transfers this back to OSG-XSEDE.
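Because the gridpack and the output are both small, plain Condor file transfer suffices – no storage element is involved for the first set of jobs. A submit description along these lines would express that (this is a hypothetical sketch; the executable and file names are placeholders, not the group's actual submit files):

```
# Hypothetical HTCondor submit description for a stage-1 job.
executable              = run_madgraph.sh
transfer_input_files    = gridpack.tar.gz   # O(10MB), shipped with the job
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT           # O(10MB) output returned to the submit host
output                  = job.out
error                   = job.err
log                     = job.log
queue
```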
The second set of jobs needs access to a large “minimum bias” file, which is O(1GB). (In an earlier iteration of our recipe, the file was significantly larger – almost 50GB.) This is too large to transfer with each job, so we worked out a better solution. Using a system known as iRODS, the minimum bias file is pre-staged to storage elements at around 10 different grid sites. When a job starts, it checks for the presence of the minimum bias file in a temporary directory associated with the Glidein, and uses that file if it is present. If it is not present, the job uses iRODS to get the file from one of the grid sites, then puts it in the Glidein temp directory. Thus, the data transfer is spread across several different sites, and the file is reused when possible. Some sites do not allow iRODS, so in those cases we manually get the file using srm from FNAL, UNL, or BNL.
John Stupak has been a postdoctoral research associate with Purdue University – Calumet for almost a year. He received his PhD from Stony Brook University.