Every month, here at Lawrence Berkeley National Laboratory, we have a lunchtime discussion between the particle physicists who work in theory and those who work on experiments. That may not sound very often, but even having an organized meeting so frequently is a recent development; we may all be working to understand the same natural phenomena, but our day-to-day concerns are very different.
Experimental particle physicists like me mostly work on building our equipment, keeping it working, and simulating how it responds to (relatively) ordinary particles so we can be sure that our measurements work at a basic level. Once we’re confident in what we’ve measured, turning it into the answer to a question about fundamental physics — have we seen a new particle, or are we sure it isn’t there? — is the conceptual “last step.” Meanwhile, theorists spend a lot of their time working out the details of things that can’t be measured directly; their “last step” is to figure out how their ideas can be observed. We all have calculations to do, but theorists often work on beautiful math, while experimentalists usually do ugly statistics.
But even if we think and work differently, at some point we have to interact: we experimentalists look for the things the theorists invent, and they change their theories based on what we find. How does that usually work? Through journal articles and online preprints. Theorists publish their multitude of ideas. Experiments search for a set of physics signatures that correspond to those ideas, and publish what we find. (By “physics signature” I mean some particular combination of particles; for example, one of several signatures of the top quark would be a bottom quark jet, an electron or muon, and missing energy from an escaping neutrino.) Assuming we don’t find anything in an experimental search, there are two categories of things that experimentalists can publish: one is a precise specification of the physics signature we looked for, and the other is a limit on some particular theory that we tried to find.
If we put a limit on a theory we searched for, that is very helpful information for whoever worked on that particular theory, but what about someone with a different theory? What about somebody who comes up with a new theory later? How can he or she figure out whether that theory has been excluded by previous searches? What we discussed at the Theory-Experiment lunch meeting yesterday was some of the ways to answer that question.
A note to my colleagues in theory and experiment: if there are any ideas I have misrepresented or omitted, it is probably due to my own ignorance. Please leave comments and I will update my post. Oh, and while I’m giving caveats: any opinions on any of these options are my own.
And now, on to the ways we might answer our hypothetical theorist’s question: I’ve invented a new theory, what do I do now?
Option 1. Publish your theory, then see if anyone on an experiment is interested in looking for it. This is the current default system. The problem is that doing a new analysis from scratch is hard, and there are way too many theories to look for!
Option 2. Realize that your theory “looks kinda like” another theory that has been searched for. Suggest to experimentalists that they rerun their analysis, but look for your theory instead. So they have to rerun the parts of the analysis that simulate what they’re looking for, but they don’t have to rerun any of the backgrounds or change any of the details of the signature they’re searching for. A background is something ordinary from the Standard Model of Particle Physics that manages to mimic the signal you’re hoping to get from your new theory — and figuring them out is actually the bulk of the work, because they often depend on very rare mismeasurements in the detector that are hard to simulate. If experimenters can avoid redoing that by reusing most of the analysis, then they might have time to look, even if it probably won’t be as effective a search as if it had been custom-designed for your theory.
A framework called RECAST was released this past week to provide a means of streamlining requests to “recast” analyses in this manner. I don’t think the LHC experiments are likely to use it in the short term, for a few reasons. First, we’re really rushing around to find new stuff, rather than applying existing measurements to old theories. Second, we’re still figuring out our procedures for completing in-house review and approval of ordinary analyses; figuring out how to organize and approve recast analyses would be more work on top of a process that’s already tough. Older experiments like the ones at the Tevatron accelerator at Fermilab, where things have settled down and there’s an effort to take the years of accumulated data as far as possible, are more likely to have time to take a close look at such a system.
Option 3. You could run a version of the analysis yourself, using something like Pretty Good Simulation in place of having an experiment redo the analysis. The problem is that rough simulation, while much less computationally intensive, doesn’t have all the details of the detector and all the experimental knowledge that went into understanding them. That makes getting the backgrounds right hard, so although this could be useful for getting a general idea of whether your new theory should have been detected by now, it’s not going to give an exact answer.
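To get a feel for why rough simulation struggles with backgrounds, here’s a toy sketch in Python. Everything in it is invented for illustration (the numbers, the 10% resolution, the 1% “dead region” probability — none of it comes from PGS or any real detector): a fast simulator might just smear each true jet energy with a Gaussian resolution, while the real detector occasionally produces much larger mismeasurements that only a full simulation captures.

```python
import random

random.seed(42)  # fixed seed so the toy study is reproducible

def fast_sim_jet(true_energy_gev, resolution=0.10):
    """Toy fast simulation: smear the true jet energy with a 10% Gaussian resolution."""
    return true_energy_gev * random.gauss(1.0, resolution)

def full_sim_jet(true_energy_gev, resolution=0.10, tail_prob=0.01):
    """Toy 'full' simulation: the same Gaussian core, plus a rare (1%) chance of a
    severe mismeasurement, standing in for effects like a jet hitting a poorly
    instrumented detector region."""
    if random.random() < tail_prob:
        return true_energy_gev * random.uniform(0.2, 0.5)  # badly undermeasured
    return true_energy_gev * random.gauss(1.0, resolution)

# A badly undermeasured jet fakes missing energy. Count how often each toy
# simulation reports less than 70% of a 100 GeV jet's true energy.
n = 100_000
fast_tail = sum(fast_sim_jet(100.0) < 70.0 for _ in range(n))
full_tail = sum(full_sim_jet(100.0) < 70.0 for _ in range(n))
print(fast_tail, full_tail)
```

The Gaussian-only version almost never produces a 30% energy loss, while the rare-mismeasurement version does so routinely. Backgrounds that live in those tails are exactly the ones a rough simulation gets wrong.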
Option 4. If an experimental measurement did a good job of specifying exactly the signature they saw, in a way that’s fully corrected for detector effects, then maybe nobody has to redo the analysis at all! Just simulate collisions in your theory, count how many fit the signature, and compare to the measurement. For example, in a search for Supersymmetry, ATLAS might publish the number of events we see with four jets above some momentum and a certain amount of missing energy. Your theory doesn’t have to be a kind of Supersymmetry for you to count jets and compare to that number. If your theory were right, would there have been a lot more events than that? The problem with this method is that experimentalists don’t always give such straightforward conditions for theorists to compare with — sometimes we can’t, because in some cases we have to assume things about what we’re looking for in order to correct our data. Still, this is definitely something to strive for.
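As a rough sketch of what that comparison looks like in practice, here’s a toy calculation. Every number in it — the cut values, the cross section, the luminosity, the published upper limit, the event list — is made up for illustration and does not come from any real ATLAS result.

```python
# Toy comparison of a hypothetical new theory's prediction to a published
# counting measurement. All numbers below are invented for illustration.

def passes_signature(event):
    """Hypothetical signature: at least 4 jets with pT > 50 GeV and missing ET > 100 GeV."""
    hard_jets = [pt for pt in event["jet_pts"] if pt > 50.0]
    return len(hard_jets) >= 4 and event["met"] > 100.0

def expected_signal_events(cross_section_pb, luminosity_ifb, acceptance):
    """Expected events = sigma [pb] x luminosity [fb^-1, converted to pb^-1] x acceptance."""
    return cross_section_pb * (luminosity_ifb * 1000.0) * acceptance

# Pretend these came out of simulating the new theory's collisions.
events = [
    {"jet_pts": [120.0, 90.0, 70.0, 55.0], "met": 150.0},   # passes
    {"jet_pts": [200.0, 60.0, 55.0],        "met": 250.0},   # only 3 hard jets
    {"jet_pts": [80.0, 75.0, 60.0, 52.0],   "met": 80.0},    # missing ET too low
    {"jet_pts": [150.0, 110.0, 95.0, 66.0], "met": 210.0},   # passes
]

acceptance = sum(passes_signature(e) for e in events) / len(events)
predicted = expected_signal_events(cross_section_pb=0.02,
                                   luminosity_ifb=1.0,
                                   acceptance=acceptance)

# Suppose the experiment published a 95% CL upper limit of 8 signal events
# in this signature region (an invented number).
upper_limit = 8.0
excluded = predicted > upper_limit
print(predicted, excluded)  # 10.0 True: this toy theory would be excluded
```

The point is that no detector simulation or background estimate is needed on the theorist’s side — all of that work is already folded into the published, corrected event count and its limit.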
Option 5. Encourage experimentalists to put limits on more general models. When experimentalists compare theory to data, one of the problems that arises is that we pick some specific theory and set limits on what the masses of its particles could be, but those limits apply only to that one theory. If there were simple models that could be translated into many different theories, then potentially even new theories could be tied back to limits that had already been set. The LHC New Physics Working Group is working to define a list of simple models. The problem with this is that theorists are good at coming up with new ideas that don’t fit neatly into existing lists of models: in fact, some folks specialize in inventing things that are hard to look for!
In conclusion, I think a lot of us are hoping that all these details are premature for the LHC, and that new physics will jump right out at us! If that happens, it will give us a lot more clarity on what to look for, how to look for it, and what kind of theories to compare with what kind of data — and, of course, a lot of excitement! Until then, I don’t think there’s a perfect answer to our question.