

A new look at one of the most abundant particles in the universe



(Image: PNNL)

Neutrinos are constantly bombarding the surface of the earth. They are one of the most abundant particles in the universe; an estimated 400 trillion zip through your body every second. Particle accelerators also fire beams of neutrinos hundreds of kilometers through the earth toward distant detector targets. Yet despite this abundance, neutrinos are extremely difficult to detect.

Researchers at PNNL, part of a worldwide network of scientists trying to understand one of the universe's most elusive particles, are applying deep learning techniques to learn more about neutrinos.

Their expertise is aimed at the glut of data generated in experiments by massive liquid argon time projection chambers, known as LArTPCs. Scientists use these detectors to study neutrinos and their role in the cosmos, but with tens of thousands of readout channels and very high data rates, these high-fidelity detectors produce far more data than scientists can easily sift through.

PNNL researchers recently used the national laboratory complex's leadership-class supercomputer, known as Summit, to bring the power of deep learning to bear on the data generated by these neutrino physics experiments.

Alex Hagen discussed his team’s research at the recent International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2019) in Switzerland. It’s one of the first applications – perhaps the first – of a form of deep learning called deep convolutional neural networks in particle physics research using this powerful hardware and software.

The core of deep learning is training a computer network to learn patterns from data so that it can make decisions about data it has not yet seen. But in neutrino physics, the outpouring of data makes training times very long; it's extremely difficult to train a network quickly enough to make sense of it all. One workaround is to crop the data, limiting the amount flowing into the network for analysis, but that approach can discard critical information.
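
To make that trade-off concrete, here is a purely illustrative sketch, not code from the PNNL team, of the cropping workaround: a LArTPC readout can be viewed as a large wire-versus-time image, and cropping keeps only a fixed window of it. The image dimensions, the crop_event helper, and the window size are all assumptions chosen for illustration.

```python
# Illustrative only: cropping a 2D LArTPC "image" (wire number vs. time tick)
# to a fixed window so a network has less data to digest. Everything outside
# the window (possibly parts of long particle tracks) is simply discarded.
import numpy as np

def crop_event(image, center, size=512):
    """Return a size-by-size window around `center`, clipped to the image bounds."""
    row, col = center
    half = size // 2
    r0, r1 = max(0, row - half), min(image.shape[0], row + half)
    c0, c1 = max(0, col - half), min(image.shape[1], col + half)
    return image[r0:r1, c0:c1]

# A mostly empty event image at an arbitrary, MicroBooNE-like scale (illustrative numbers).
event = np.zeros((3456, 6048), dtype=np.float32)
patch = crop_event(event, center=(1700, 3000))   # the network only ever sees this patch
print(patch.shape, "kept", f"{patch.size / event.size:.1%}", "of the pixels")
```

Whatever falls outside that window never reaches the network, which is exactly the kind of information loss the team wanted to avoid.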

The PNNL team – which also includes Eric Church, Jan Strube, Kolahal Bhattacharya, and Vinay Amatya – tackled a mountain of simulated data from the MicroBooNE experiment at Fermilab. The scientists addressed the problem of data overload by training convolutional neural networks on PNNL's research computing cluster, known as Marianas, and scaling the problem up to multiple nodes. Using tools such as PyTorch, Horovod, and SparseConvNet (developed mainly by Uber and Facebook), the scientists slashed data loss by more than 80 percent when they scaled the system from one to 14 NVIDIA P100 GPUs.
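
The article does not reproduce the team's code, but the multi-GPU setup it describes follows the standard PyTorch-plus-Horovod pattern sketched below. The build_model function, training dataset, batch size, and learning rate are placeholders; only the Horovod and PyTorch calls are real library APIs.

```python
# Minimal sketch of data-parallel training across GPUs with PyTorch and Horovod.
import torch
import horovod.torch as hvd

hvd.init()                                   # one training process per GPU
torch.cuda.set_device(hvd.local_rank())      # pin this process to its local GPU

model = build_model().cuda()                 # placeholder for any torch.nn.Module
# Scale the learning rate with the number of workers, since the effective batch
# size grows in proportion to hvd.size().
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size(), momentum=0.9)

# Average gradients across all GPUs on every optimizer step.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Start every worker from identical weights and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# Give each worker a disjoint shard of the training set (train_dataset is a placeholder).
sampler = torch.utils.data.distributed.DistributedSampler(
    train_dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, sampler=sampler)
```

A run like the 14-GPU one described above would typically be launched with a command along the lines of horovodrun -np 14 python train.py, with one process per GPU.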

The team then tested its approach on 128+ NVIDIA P100 GPUs on the SummitDev computer at Oak Ridge, where it achieved further significant reductions in training time and data loss, applying SparseConvNet to speed up training even more. Training time decreases almost linearly with GPU count, and the lowest losses reached with a large number of GPUs are sometimes lower than those achievable with only a few, because the larger effective batch size allows more optimal learning.
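
SparseConvNet exploits the fact that most pixels in a LArTPC image are empty: instead of a dense image, the network receives only the coordinates and values of the hit pixels. The sketch below follows the library's public examples; the layer sizes, spatial size, and example hits are arbitrary and are not the team's actual architecture.

```python
# Sketch of a tiny 2D submanifold sparse CNN built with Facebook's SparseConvNet.
import torch
import sparseconvnet as scn

spatial_size = torch.LongTensor([512, 512])           # arbitrary wire x time window
net = scn.Sequential().add(
    scn.SubmanifoldConvolution(2, 1, 16, 3, False)    # dim=2, 1 -> 16 features, 3x3 filter
).add(
    scn.BatchNormReLU(16)
).add(
    scn.SubmanifoldConvolution(2, 16, 32, 3, False)
).add(
    scn.BatchNormReLU(32)
).add(
    scn.SparseToDense(2, 32)                          # back to a dense tensor for later layers
)
input_layer = scn.InputLayer(2, spatial_size)

# Only the non-zero pixels are passed in: (row, column, batch index) plus a charge value.
coordinates = torch.LongTensor([[100, 200, 0], [101, 201, 0], [102, 202, 0]])
features = torch.FloatTensor([[1.0], [0.8], [0.5]])
output = net(input_layer([coordinates, features]))    # shape: [batch, 32, 512, 512]
```

Because the convolutions only visit occupied sites, the cost of a training pass scales with the number of hits rather than the full image size, which is what makes the full, uncropped events tractable.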

While this research has its roots in fields like image classification and handwriting recognition, the team expects that programs such as the Deep Underground Neutrino Experiment (DUNE)—the flagship U.S. high energy physics program for the next twenty years—will particularly benefit from this work.

Pacific Northwest National Laboratory (PNNL)
