What is the universe made of? AI could help us find out.

Written by
Alaina O'Regan, Office of the Dean for Research
Oct. 20, 2023

Artificial intelligence could be the secret to uncovering theories of the universe that no one has thought of yet, and Princeton physicists at the world’s largest particle collider are gearing up to use advances in AI to their advantage.

Scientists have confirmed many theories about how the universe works by accelerating particles to near light-speed and watching what happens when they crash together. But in the search for what’s missing in our picture of the universe, it can be difficult to come up with possible answers, or to know what questions to ask in the first place.

Isobel Ojalvo with her group members Pallabi Das, Luis Alberto Perez Moreno and Adrian Pol in front of the CMS detector. Photo courtesy of researchers

A research team at the Large Hadron Collider (LHC) led by Princeton’s Isobel Ojalvo, assistant professor of physics, is developing an “anomaly detection algorithm” that uses machine learning – a branch of artificial intelligence where computers mimic the way humans learn – to identify collision events that are particularly rare or unusual. The team plans to fully deploy the algorithm next year to help search for new physics, developments still needed to complete our understanding of the universe.

“The difficulty with searching for new physics is that we don’t know where to look,” said Ojalvo, who has researched particle physics at the Large Hadron Collider for over a decade. “At the LHC, we haven’t actually discovered any new physics that wasn’t already theorized with the methods we’ve been using so far, so I thought we should try a model-independent approach.”

Support from the Eric and Wendy Schmidt Transformative Technology Fund awarded to Princeton’s Peter Elmer, senior research physicist, Mariangela Lisanti, professor of physics, and Ojalvo in 2021 enabled the beginning of research and development on this new method.

You'll know it when you see it

Particles in the LHC race in opposite directions around a 17-mile-long underground tube and crash together at set interaction points, producing a variety of events for physicists to analyze. When two particles collide and break apart, the smaller particles they’re composed of can interact with each other to produce an array of different particles.

The 17-mile-long Large Hadron Collider (LHC) is contained in a circular tunnel thirty stories underground near Geneva, Switzerland. Its four main detectors (CMS, ATLAS, LHCb and ALICE) are situated at opposite quadrants, where particles collide at near the speed of light. Image by CERN
Physicists analyze the outcomes of particle collisions to discover new particles and unknown laws of nature. This image displays lead-lead ion collisions in the CMS detector on September 26, 2023, with particles spraying out in many directions from a central interaction point. Image by CERN

Many of the same outcomes happen repeatedly, and researchers have statistical models that tell them how often they should see each type of event. Some collision events are especially rare, and some may be so rare that they’ve gone completely unnoticed. These events are what the anomaly detection algorithm will help bring to light.

“Two things are required to make a particle physics discovery,” said Andrew Loeliger, postdoctoral researcher at the University of Wisconsin-Madison and member of Ojalvo’s team. “You need enough energy to make the interaction you’re looking for happen, and you need enough statistical information to be able to pick out what you’re looking for.”

Disrupting the paradigm

The LHC generates about 40 million particle collisions per second, producing far too much data for the detectors to store. To solve this problem, each detector is connected to a series of systems called "triggers" that optimize decisions about what data to store and what to throw away.

The anomaly detection algorithm will become part of a series of trigger systems connected to one of the LHC’s main detectors, the Compact Muon Solenoid (CMS) detector.
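To make the idea concrete, a trigger can be pictured as a chain of increasingly selective filters. The sketch below is purely illustrative and not the CMS trigger logic; the event fields, thresholds and pass criteria are all invented:

```python
# Toy two-stage trigger: a fast, coarse filter followed by a slower,
# more detailed one. Only events that pass both stages are stored.
# The fields and thresholds here are invented for illustration.
import random

def passes_level_one(event):
    # Fast, coarse test applied to every event (hypothetical criterion).
    return event["energy"] > 150.0

def passes_high_level(event):
    # Slower, more detailed test applied only to survivors (hypothetical).
    return event["energy"] > 200.0 and event["n_tracks"] >= 4

events = ({"energy": random.expovariate(1 / 50), "n_tracks": random.randint(0, 12)}
          for _ in range(1_000_000))

stored = [e for e in events if passes_level_one(e) and passes_high_level(e)]
print(f"stored {len(stored):,} of 1,000,000 events")
```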

The level one trigger is a series of hardware systems connected to the CMS detector that decide what collision data to save and what to throw away. Photo by Alaina O'Regan

“To understand trigger systems, imagine you’re searching the internet for the best pasta recipe,” said Stephanie Kwan, Princeton University graduate student in Ojalvo's group. “You might scroll through pages of Google results and just read the titles, and decide which ones look interesting to bookmark for later. Then, you read through the recipes, maybe look at the pictures, and narrow it down further.”

Princeton graduate student Stephanie Kwan in front of the CMS detector at CERN in France. Photo by Alaina O'Regan

The existing trigger systems are trained to recognize the data that scientists have pre-selected to be of interest. It’s like telling your computer to only save pasta recipes containing ingredients you think will taste good, like cheese or garlic.

But what if the best pasta recipe doesn’t have cheese or garlic, and instead contains something unexpected, like mustard or corn flakes? Your computer would automatically filter that recipe out every time, and you would never find it.

The new algorithm takes a different approach by using artificial intelligence to recognize when something highly unusual happens, and telling the detector to save that event.

“We don’t exactly know what we’re looking for with this algorithm, or what we’re going to find,” said Adrian Alan Pol, a postdoctoral researcher who designed much of the machine learning code and implemented it in the hardware. “If this method works, it will disrupt the paradigm that’s been followed in the LHC for the past fifteen years.”

The paradigm – coming up with a theory about a particle or a law of nature, then searching for evidence to prove it – has led to success in the past, as when scientists discovered the Higgs boson a decade ago: the particle associated with the Higgs field, which gives other fundamental particles their mass.

Jennifer Ngadiuba, associate scientist at the Fermi National Accelerator Laboratory, is collaborating with Ojalvo to design a different version of the anomaly detection algorithm for broader use. Photo courtesy of researcher

“We have a lot of theories, we just don’t know which one is the right one,” said Jennifer Ngadiuba, associate scientist and Wilson Fellow at the Fermi National Accelerator Laboratory, whose group is working on a slightly different version of an anomaly detection algorithm for wider implementation. “We’re not going to abandon the classical approach because it’s worked when we’ve had good theories. But we need this complementary approach that is able to be sensitive to something that we haven’t theorized.”

A bias-free approach

An overarching goal of the project is to overcome biases that are typically woven into the hunt for new physics. “Normally, you start out with a lot of assumptions about what you're looking for, and what you're going to do to find it,” Loeliger said. “The anomaly detection algorithm is different in that it’s assumption-free by design.”

The algorithm is trained on what is called a zero-bias data set, meaning the researchers train the program on real data produced by the LHC rather than hand-selecting which data to use.

Instead of scanning for particular features or outcomes, the algorithm uses a clever process to determine whether or not a given event is rare.

When the program receives data from the detector, it compresses the data into a short string of numbers, then decompresses that string. The final output should be roughly the same as the original input, provided the program has seen similar data and performed the computation many times over during its training.

The more often the system sees a given event, the more accurately it learns to compress and decompress the data produced. When a rare event comes along and produces data that the program hasn’t seen before, or hasn’t seen often, then it will not accurately perform this process and the output will look significantly different from the input.

In the final step, the program takes the difference between the output and the input to calculate what is called the “loss score.” If the loss score is high, that means the event that produced the data is rare, and the detector will store it for later analysis.
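The article doesn’t name the technique, but the compress-decompress-and-compare scheme described above is what machine learning calls an autoencoder, scored by its reconstruction error. Below is a minimal toy version in Python with NumPy; the event size (8 numbers per event), the network shape and the data are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "common" events: 8 measurements per event that really only vary
# along 2 hidden directions, so they compress well. (Invented data.)
basis = rng.normal(size=(2, 8))
common = rng.normal(size=(5000, 2)) @ basis

# Linear autoencoder: compress 8 numbers down to 2, then decompress to 8.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr = 0.01
for _ in range(3000):
    batch = common[rng.integers(0, len(common), size=64)]
    code = batch @ W_enc                  # compress
    recon = code @ W_dec                  # decompress
    err = recon - batch                   # output minus input
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * code.T @ err / len(batch)
    W_enc -= lr * batch.T @ (err @ W_dec.T) / len(batch)

def loss_score(event):
    """Difference between output and input: high means 'unfamiliar' event."""
    recon = event @ W_enc @ W_dec
    return float(np.mean((recon - event) ** 2))

print("typical event loss:  ", loss_score(common[0]))
print("anomalous event loss:", loss_score(rng.normal(size=8) * 3))
```

A typical event, similar to what the model saw in training, reconstructs almost perfectly and gets a low loss score; the anomalous event, drawn from a different pattern, reconstructs poorly and gets a high one.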

Ojalvo said that the most challenging task was to get the algorithm to complete this process in just hundreds of nanoseconds. “The fact that we have so little time to do this means that we need to think about how to design the algorithm in a way that allows it to process the data as quickly as possible,” she said.

The researchers found a way to do this by developing what they call a “student-teacher model” to calculate the loss score directly from the input data. The “teacher” model performs the process as described, and the “student” model learns to calculate the loss score directly by training on data from the teacher model.

Ojalvo said the student-teacher model greatly reduces the constraint of resources, and allows the algorithm to perform effectively in real-time.
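Here is a sketch of that distillation idea, with a toy linear “teacher” standing in for the full compress-decompress model. Everything in it (the shapes, the quadratic student features, the least-squares fit) is invented for illustration; the real student is a machine-learning model trained on the teacher’s outputs and compiled to the trigger hardware:

```python
import numpy as np

rng = np.random.default_rng(1)

def teacher_loss_score(events, W_enc, W_dec):
    # The full process: compress, decompress, compare output to input.
    recon = events @ W_enc @ W_dec
    return np.mean((recon - events) ** 2, axis=1)

def student_features(events):
    # All pairwise products x_i * x_j. For this toy teacher the loss score
    # is exactly a quadratic function of the input, so a linear fit on
    # these features can reproduce it in a single, fast step.
    return np.einsum("ni,nj->nij", events, events).reshape(len(events), -1)

# Hypothetical pre-trained teacher weights, and events to distill from.
W_enc, W_dec = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
train = rng.normal(size=(10_000, 8))
targets = teacher_loss_score(train, W_enc, W_dec)

# Train the student to predict the teacher's loss score directly.
w, *_ = np.linalg.lstsq(student_features(train), targets, rcond=None)

event = rng.normal(size=(1, 8))
print("teacher score:", teacher_loss_score(event, W_enc, W_dec)[0])
print("student score:", float(student_features(event) @ w))
```

The point of the design is latency: the student skips the compress-and-decompress round trip entirely and maps the input to a loss score in one cheap step, which is what makes a nanosecond-scale budget plausible.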

Physics that nobody’s tried before

When dealing with such a large volume of data and complex processes, researchers check their work by simulating what is happening with the trigger system’s hardware directly on a regular computer, using what is called an emulator.

“It’s just like how when I was in high school, people would download these Game Boy emulators onto their computer to play video games,” Ojalvo said. “It’s software that mimics the functionality of hardware, and you could play Mario on your laptop.”

“Every system that we put into the level one trigger has a corresponding emulator, a piece of software that’s designed to replicate it as best as we possibly can on our computer,” said Loeliger. “So that way, we can make sure that it's acting the way we think it's acting, and simulate what we think the response should be on data we take.”

Andrew Loeliger in front of the CMS detector. Photo courtesy of researcher

Loeliger, who designed the emulator software, said it was a challenge because there are dedicated pieces of silicon in the trigger systems that are specifically designed for ultra-fast computations. “You have to be able to bridge this gap between what somebody puts onto this very dedicated hardware, and something that runs on a computer. So that’s kind of tricky. A lot of my job has been jumping that gap,” he said.

Pallabi Das, postdoctoral researcher at Princeton who works on the project, said the most exciting moment to her was the first time the data from the hardware and the software emulator matched. “When the program running on the hardware and the emulator running on the computer produced the same output with the same inputs, that was our ‘a-ha moment’ that this works,” she said.
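Schematically, the check Das describes might look like the sketch below. The fixed-point format and the scoring function are invented; the principle is simply that identical inputs fed to the firmware and to the emulator must yield bit-for-bit identical outputs:

```python
# Sketch of an emulator-validation check. Hardware works in fixed-point
# integer arithmetic, so the emulator must round and truncate the same way.
# The 8-bit fraction format and the sum-of-squares score are invented here.

FRACTION_BITS = 8

def to_fixed(x):
    # Convert a real number to the hardware's integer representation.
    return int(round(x * (1 << FRACTION_BITS)))

def emulated_score(inputs):
    # Pure-software stand-in for the firmware computation: integer math only.
    acc = 0
    for x in inputs:
        acc += to_fixed(x) * to_fixed(x)
    return acc >> FRACTION_BITS

def check_against_hardware(inputs, hardware_output):
    # The "a-ha moment": emulator and hardware agree exactly, bit for bit.
    assert emulated_score(inputs) == hardware_output, "emulator mismatch"

# In practice hardware_output would be captured from the trigger boards;
# here we feed the emulator's own result back in as a stand-in.
inputs = [0.5, -1.25, 2.0]
check_against_hardware(inputs, emulated_score(inputs))
print("emulator matches captured output for", inputs)
```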

The researchers recently integrated the algorithm into the CMS detector’s level one trigger system to test it and make sure it picks out interesting events on its own. They plan to begin collecting data with it in 2024 and 2025.

“What I’m excited about is that this is a kind of particle physics nobody's really tried to do before,” Loeliger said. “I'm very excited to be part of a physics result that is sort of bias-free and will help us strategize on where we spend our targeted time.”

The group is collaborating with Ngadiuba and other researchers who are developing a version of the algorithm that can be generalized for a variety of other uses, along with the generic tools needed to apply machine learning models to broader applications in science.

Ngadiuba also uses these tools to design machine learning models for a large-scale project called the Deep Underground Neutrino Experiment (DUNE), which is currently under development and aims to detect particles that may help us understand mysteries about the origin of matter and the formation of black holes.

“You can imagine that if anybody could just take the algorithm and plug it in to look for something that’s out of the ordinary, and collect data, that would be useful,” Ojalvo said.

“The goal of every particle physics experiment is more or less to find some new particle that nobody's expecting, this piece that suddenly makes it all work together,” Loeliger said. “If our algorithm can show us particles or events that we wouldn’t expect to see, that didn’t go over in a prediction, it’s a sign of a discovery.”

The High Energy Physics research program at Princeton University is funded by the Department of Energy (DOE) grant number DE-SC0007968. Development of the anomaly detection algorithm was also supported by the Eric and Wendy Schmidt Transformative Technology Fund.