DUNE Computing Mission Statement

Latest revision as of 23:57, 17 November 2022

The DUNE long-baseline neutrino oscillation collaboration consists of 178 institutions from 32 countries, including 15 European nations and CERN. The experiment is in preparation now, with commissioning of the first module expected over the period 2024-2026 and a long data-taking run with four modules expected from 2026-2036 and beyond. An active prototyping program is already in place, with a short test-beam run of a 700 t, 15,360-channel single-phase prototype at the CERN Neutrino Platform in late 2018 and tests of horizontal-drift and vertical-drift prototypes in 2023. The DUNE experiment has already benefited greatly from these initial tests. The collaboration has recently formed a formal '''Computing Consortium'''[https://wiki.dunescience.org/wiki/Computing_Consortium_Main_Page], with significant contributions by European institutions and interest from groups in Asia, to work on common software and computing development and to formalize resource contributions.

The consortium resource model benefits from the existing grid infrastructure (OSG and WLCG) developed for the LHC. DUNE, through the ProtoDUNE-SP effort, is already using global resources for simulation and for the analysis of ProtoDUNE-SP data. Multiple European sites are part of this resource pool and are making significant contributions (>50%) to the ProtoDUNE single-phase and dual-phase programs. We expect this global computing consortium to grow and evolve as we move toward data from the full DUNE detectors in the middle of the next decade.

The DUNE science program is expected to produce raw data volumes similar in scale to the data volumes that current LHC Run-2 experiments have already recorded. Baseline predictions for the DUNE data, dependent on actual detector performance and noise levels, are 30-60 PB of raw data per year. These data, with simulations and derived analysis samples, need to be available to all collaborating institutions. We anticipate that institutions worldwide will play an important role both as contributors and end-users of storage and CPU resources for DUNE.
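As a back-of-envelope illustration of what the raw-data projection above implies for archival storage, consider the following sketch. The replication factor and the number of years of running used here are assumptions for the example, not official DUNE planning inputs:

```python
# Back-of-envelope archival storage estimate from the 30-60 PB/year
# raw-data projection above. The replication factor and run length
# are illustrative assumptions, not DUNE planning numbers.

def archival_need_pb(raw_pb_per_year, years, replicas):
    """Total archive footprint in PB for `years` of raw data,
    each copy kept at `replicas` geographically separate sites."""
    return raw_pb_per_year * years * replicas

low = archival_need_pb(30, years=10, replicas=2)   # 600 PB
high = archival_need_pb(60, years=10, replicas=2)  # 1200 PB
print(f"10-year raw-data archive: {low:.0f}-{high:.0f} PB")
```

Even with the conservative assumptions above, the archive reaches the exabyte scale once simulation and derived samples are included, which is why contributions from many sites matter.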

To enable these resource contributions in cooperation with the LHC and other communities, we plan to utilize common computing layers for infrastructure access and common tools to ease integration of facilities with both the DUNE and LHC computing ecosystems. For example, we plan to utilize common data storage methodologies to establish large, highly available data lakes worldwide and to collaborate with the broader HEP community in developing other common tools.

The 2018 test-beam run of ProtoDUNE Single-Phase (SP) was a valuable live test of this model. The ProtoDUNE-SP detector at CERN produced raw data at rates of up to ~2 GB/s. These data were stored on tape at CERN and Fermilab and replicated at sites in the UK and the Czech Republic. In total, 1.8 PB of raw data were produced during the test-beam run. This prototype run has been extremely beneficial in exercising the existing computing infrastructure and in building a team of interested institutions.
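The quoted peak rate and total volume can be sanity-checked with a quick calculation. Note that this assumes continuous peak-rate running, which understates the real test-beam duration, since beam delivery is bursty:

```python
# Consistency check of the ProtoDUNE-SP numbers above: how long
# would it take to accumulate 1.8 PB at the ~2 GB/s peak rate?
# Real running is bursty, so this is only a lower bound on the
# actual test-beam period.

total_bytes = 1.8e15   # 1.8 PB, decimal (SI) petabytes
peak_rate = 2.0e9      # ~2 GB/s peak raw-data rate

seconds = total_bytes / peak_rate
days = seconds / 86400
print(f"{days:.1f} days of continuous peak-rate running")  # ~10.4 days
```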

HEP has considerable infrastructure in place for international computing collaboration thanks to the LHC program. Other large experiments (LSST, SKA, DUNE and Hyper-Kamiokande) will be coming on board over the next decade. DUNE's strategy is to work with the global community to maximize the use of common tools for data movement and storage, job control and monitoring, and accounting and authentication. All large-scale experiments will encounter similar issues, and worldwide cooperation on common tools is the most cost-effective way to proceed. For example, in collaboration with Fermilab, CERN and UK groups, we are investigating the use of Rucio as our primary data management tool.
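The core idea behind Rucio-style data management is declarative: a rule states how many replicas of a dataset should exist across which sites, and the system works out what to transfer. The toy sketch below illustrates that idea only; nothing in it is the real Rucio API, and the site names are made up for the example:

```python
# Toy sketch of a declarative replication rule, in the spirit of
# Rucio's rule-based data management. This is NOT the Rucio API;
# the function and site names are illustrative only.

def plan_replicas(current_sites, eligible_sites, copies):
    """Return the additional sites needed so the dataset reaches
    `copies` replicas, preferring sites that do not yet hold one."""
    missing = copies - len(current_sites)
    candidates = [s for s in eligible_sites if s not in current_sites]
    return candidates[:max(missing, 0)]

# Dataset currently held at CERN and Fermilab; the rule asks for
# 4 copies spread over the collaborating sites.
new_sites = plan_replicas(
    current_sites={"CERN", "FNAL"},
    eligible_sites=["CERN", "FNAL", "RAL-UK", "FZU-CZ", "IN2P3"],
    copies=4,
)
print(new_sites)  # ['RAL-UK', 'FZU-CZ']
```

The appeal of this model for DUNE is that sites can join or leave the resource pool without per-site bookkeeping in the experiment's own code: only the rules change.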

In addition to traditional HEP computational strategies, DUNE's data consist of simple but very large 2D data objects which share many characteristics with astrophysical images. This presents opportunities to apply current advances in machine learning and pattern recognition, and makes DUNE a frontier user of High Performance Computing facilities capable of massively parallel processing. We share this problem, and propose to share solutions, with the other liquid argon experiments: ArgoNeuT, LArIAT, MicroBooNE, SBND and ICARUS. We have already benefited greatly from prior work and plan to contribute cooperatively.
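To make the "2D data objects" concrete: a liquid argon TPC readout plane can be treated as a (channels x time ticks) array of ADC values, which image-style pattern recognition consumes directly. The sketch below is illustrative; the channel and tick counts, and the fake track, are invented for the example and are not actual DUNE readout parameters:

```python
import numpy as np

# Illustrative sketch of a LArTPC readout plane as an image-like
# array. The channel and tick counts here are made up for the
# example; they are not actual DUNE geometry.
n_channels, n_ticks = 480, 6000
adc = np.zeros((n_channels, n_ticks), dtype=np.int16)

# Fake "track": a straight line of deposited charge across the
# plane, standing in for an ionizing particle crossing the TPC.
for ch in range(100, 300):
    tick = 10 * (ch - 100) + 500
    adc[ch, tick:tick + 5] += 120

# The resulting 2D array is directly usable by image-processing
# and machine-learning tools, e.g. as input to a convolutional
# network, just like an astronomical image.
print(adc.shape, adc.max())  # (480, 6000) 120
```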

In summary, DUNE's computing strategy is to be '''global''', working with partners worldwide, and '''collaborative''', as almost all of the computational challenges we face are faced by similar experiments.