Seth Zenz | Imperial College London | UK

I’ve Invented a New Theory, What Do I Do Now?

Every month, here at Lawrence Berkeley National Laboratory, we have a lunchtime discussion between the particle physicists who work in theory and those who work on experiments.  That may not sound very frequent, but even having an organized meeting once a month is a recent development; we may all be working to understand the same natural phenomena, but our day-to-day concerns are very different.

Experimental particle physicists like me mostly work on building our equipment, keeping it working, and simulating how it responds to (relatively) ordinary particles so we can be sure that our measurements work at a basic level. Once we’re confident what we’ve measured, turning it into the answer to a question about fundamental physics — have we seen a new particle, or are we sure it isn’t there? — is the conceptual “last step.”  Meanwhile, theorists spend a lot of their time working out the details of things that can’t be measured directly; their “last step” is to figure out how their ideas can be observed.  We all have calculations to do, but theorists often work on beautiful math, while experimentalists usually do ugly statistics.

But even if we think and work differently, at some point we have to interact: we experimentalists look for the things the theorists invent, and they change their theories based on what we find.  How does that usually work?  Through journal articles and online preprints.  Theorists publish their multitude of ideas.  Experiments search for a set of physics signatures that correspond to those ideas, and publish what we find.  (By “physics signature” I mean some particular combination of particles; for example, one of several signatures of the top quark would be a bottom quark jet, an electron or muon, and missing energy from an escaping neutrino.)  Assuming we don’t find anything in an experimental search, there are two categories of things that experimentalists can publish: one is a precise specification of the physics signature we looked for, and the other is a limit on some particular theory that we tried to find.

If we put a limit on a theory we searched for, that is very helpful information for whoever worked on that particular theory, but what about someone with a different theory?  What about somebody who comes up with a new theory later?  How can he or she figure out whether that theory has been excluded by previous searches?   What we discussed at the Theory-Experiment lunch meeting yesterday was some of the ways to answer that question.

A note to my colleagues in theory and experiment: if there are any ideas I have misrepresented or omitted, it is probably due to my own ignorance.  Please leave comments and I will update my post.  Oh, and while I’m giving caveats: any opinions on any of these options are my own.

And now, on to the ways we might answer our hypothetical theorist’s question: I’ve invented a new theory, what do I do now?

Option 1. Publish your theory, then see if anyone on an experiment is interested in looking for it.  This is the current default system.  The problem is that doing a new analysis from scratch is hard, and there are way too many theories to look for!

Option 2. Realize that your theory “looks kinda like” another theory that has been searched for.  Suggest to experimentalists that they rerun their analysis, but look for your theory instead.  So they have to rerun the parts of the analysis that simulate what they’re looking for, but they don’t have to rerun any of the backgrounds or change any of the details of the signature they’re searching for.  A background is something ordinary from the Standard Model of Particle Physics that manages to mimic the signal you’re hoping to get from your new theory — and figuring them out is actually the bulk of the work, because they often depend on very rare mismeasurements in the detector that are hard to simulate.  If experimenters can avoid redoing that by reusing most of the analysis, then they might have time to look, even if it probably won’t be as effective a search as if it had been custom-designed for your theory.

A framework called RECAST was released this past week, to provide a means of streamlining requests to “recast” analyses in this manner.  I don’t think the LHC experiments are likely to use it in the short term, for a few reasons.  First, we’re really rushing around to find new stuff, rather than applying existing analyses to other theories.  Second, we’re still figuring out our procedures for completing in-house review and approval of ordinary analyses; figuring out how to organize and approve recast analyses would be one more burden on top of a process that’s already tough.  Older experiments like the ones at the Tevatron accelerator at Fermilab, where things have settled down and there’s an effort to take the years of accumulated data as far as possible, are more likely to have time to take a close look at such a system.

Option 3. You could run a version of the analysis yourself, using something like Pretty Good Simulation in place of having an experiment redo the analysis.  The problem is that rough simulation, while much less computationally intensive, doesn’t have all the details of the detector and all the experimental knowledge that went into understanding them.  That makes getting the backgrounds right hard, so although this could be useful for getting a general idea of whether your new theory should have been detected by now, it’s not going to give an exact answer.

Option 4.  If an experimental measurement did a good job of explaining exactly the signature they saw, in a way that’s fully corrected for detector effects, then maybe nobody has to redo the analysis at all! Just simulate collisions in your theory, count how many fit the signature, and compare to the measurement. For example, in a search for Supersymmetry, ATLAS might publish the number of events we see with four jets above some momentum and a certain amount of missing energy. Your theory doesn’t have to be a kind of Supersymmetry for you to count jets and compare to that number. If your theory were right, would there have been a lot more events than that? The problem with this method is that experimentalists don’t always give such straightforward conditions for theorists to compare with — sometimes we can’t, because in some cases we have to assume things about what we’re looking for in order to correct our data. Still, this is definitely something to strive for.
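
The cut-and-count comparison in Option 4 can be sketched in a few lines of code. Everything below is invented for illustration: the cuts, luminosity, cross section, and published limit are hypothetical numbers, and the “simulation” is just a toy event generator, not a real one.

```python
import random

random.seed(1)

# Hypothetical published result: an upper limit of 12 events of any
# new-physics contribution in the signature "at least 4 jets above
# 50 GeV plus missing energy above 100 GeV" (all numbers invented).
UPPER_LIMIT_EVENTS = 12.0
LUMINOSITY = 1000.0  # integrated luminosity in pb^-1, hypothetical

def toy_event():
    """Generate one toy signal event: jet momenta (GeV) and missing energy."""
    jets = [random.expovariate(1 / 80.0) for _ in range(random.randint(2, 6))]
    met = random.expovariate(1 / 120.0)
    return jets, met

def passes_selection(jets, met):
    """Apply the hypothetical published signature cuts."""
    return sum(pt > 50.0 for pt in jets) >= 4 and met > 100.0

# Fraction of this theory's events that land in the published signature:
n_gen = 100_000
efficiency = sum(passes_selection(*toy_event()) for _ in range(n_gen)) / n_gen

# Expected yield = cross section * luminosity * selection efficiency.
sigma_theory = 0.5  # predicted cross section in pb, hypothetical
expected = sigma_theory * LUMINOSITY * efficiency

print(f"efficiency {efficiency:.3f}, expected {expected:.1f} events, "
      f"excluded: {expected > UPPER_LIMIT_EVENTS}")
```

If the predicted yield comfortably exceeds the published limit, the theory point is disfavored; in a real comparison the efficiency would come from a proper event generator and the statistical treatment would be much more careful.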

Option 5.  Encourage experimentalists to put limits on more general models. When experimentalists compare theory to data, one of the problems is that we pick some theory and set limits on what the masses of its particles could be — but that theory is too specific. If instead there were simple models that could be translated into many different theories, then potentially even new theories could be tied back to limits that had already been set. The LHC New Physics Working Group is working to define a list of such simple models. The problem with this is that theorists are good at coming up with new ideas that don’t fit neatly into existing lists of models: in fact, some folks specialize in inventing things that are hard to look for!
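
As a sketch of how a limit on a simple model could be reused: suppose an experiment publishes an upper limit on a cross section as a function of one mass parameter; any theory that maps onto the model can then be checked against the curve. The limit function and theory points below are entirely made up.

```python
def sigma_limit(mass_gev):
    """Hypothetical published 95% CL upper limit (in pb) on the simplified
    model's cross section, as a function of the new particle's mass.
    Limits typically weaken at high mass, where fewer events are produced."""
    return 0.05 + (mass_gev / 500.0) ** 2

def is_excluded(mass_gev, sigma_pred_pb):
    """A theory point mapping onto the simplified model is excluded if it
    predicts a larger cross section than the published limit allows."""
    return sigma_pred_pb > sigma_limit(mass_gev)

# A hypothetical new theory translated into (mass, cross section) points:
for mass, sigma in [(200.0, 0.5), (400.0, 0.5), (800.0, 0.5)]:
    verdict = "excluded" if is_excluded(mass, sigma) else "allowed"
    print(f"m = {mass:.0f} GeV: {verdict}")
```

Here the low-mass point is excluded while the high-mass points survive, which is the typical pattern: the same predicted cross section runs into stronger limits where the experiment is more sensitive.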

In conclusion, I think a lot of us are hoping that all these details are premature for the LHC, and that new physics will jump right out at us! If that happens, it will give us a lot more clarity on what to look for, how to look for it, and what kind of theories to compare with what kind of data — and, of course, a lot of excitement! Until then, I don’t think there’s a perfect answer to our question.

  • I have had a sort of love affair with CERN since the 1985 PBS project “Creation of the Universe” with Timothy Ferris. So, I “crunch” for [email protected]

    I have been building up a list of RSS feeds, Facebook pages, Twitter pages for all of the US labs that I can find participating in the work at CERN.

    By hook and crook, I turn up new finds in posts such as yours. But this is catch-as-catch-can.

    Is there a site where I can get a list of all of the labs participating in the work at CERN?

    (Oh, do I wish we had built the Superconducting Super Collider in Texas that our idiot Congress killed in 1993 because of “no obvious practical value”. If the LHC cost $10-12 billion now, what would that have cost then? A handful of our billionaires could have ponied up the dollars for building and endowing that venture and, if there were enough of them, they would not even have felt it, and the US would be competitive.

    But, I am gathering that even though the work is based in Switzerland, a huge amount of it is actually taking place here.)

  • josh222


    You can look up the institutions on a page from the Users Office.
    At the following link it is sorted by member states;
    just click on the map:

    You can look up institutions from other countries too, sorted by experiments, in the “Grey Book” (or maybe, “Gray Book”? 🙂)

    All Institutions from the US (a very long list) can be found here:

    Regarding the SSC: Well, the US can’t always have the biggest. It would be simply boring. Future accelerators will probably be international projects anyway, as single countries will not be able to finance them.
    Finally, in my opinion, science on a national scale makes no sense at all.

  • Josh-

    Thanks for your reply. The more I find my way around, the greater the contribution I see from US-based scientists to the LHC effort. Brian Cox, Mr. TV, is getting all of the press; but clearly the US investment of time and intellect is huge. I think Professor Cox is really good for science in general, I am not knocking him.

    The video “The Atom Smashers”, about Fermi, was very depressing about the US near the end of the video. But, when I go to Fermi’s site, Brookhaven’s site, your site, I see what looks like a different truth.

    If you think I am nuts, please tell me.

  • Richard,

    You can also find more information about U.S. Institutions working on the LHC on this very website:


    The U.S. contribution to the LHC is substantial, and we are having a good time doing it. In fact, the U.S. has made big contributions to CERN experiments for a long time, just like European physicists have made big contributions to the experiments at Fermilab.

  • Great to hear of this initiative to collaborate with folks who speak a different “language”; I hope that over time the interaction can increase to generate an increase in shared eureka moments, no matter where the other people are from. Best wishes.

  • As a Fermilab scientist who is also working at CMS, I can attest that the intellectual atmosphere at Fermilab is anything but depressing. While the energy frontier has moved to Europe, the Tevatron still has a lot more data in the can. There may be an extension of the Tevatron run; this will be decided in the next couple of months. However, as Fermilab’s Tevatron operations ramp down, there is a corresponding ramp-up in efforts to make the most intense beams ever achieved.

    While we all would like to see an energy-frontier machine based at Fermilab, this will have to wait for some potential future accelerator. In the meantime, realize that the US supplies about 30% of the LHC manpower and that Fermilab remains the single biggest contributor of scientific manpower and resources to the LHC outside CERN itself. As the first high-statistics measurements of the LHC start to shake out, the future will become clearer.

  • Don- Thanks for the statement about Fermi.

    Josh- Thanks for checking out my music blog.

  • josh222

    as Seth, Mike and Don pointed out, there is no reason to be frustrated. I think the US is one of the most attractive countries for scientists from all over the world. Sometimes I even hear complaints about the US “buying out” the brightest heads from other countries. From another POV you can say: if the politicians of other countries are not able to create such a working and living environment, it is their own fault.
    I think that, together with the large number of educated scientists and the budget, leads to the big US contribution to the LHC (and other experiments at CERN and elsewhere). The fact that the SSC was not built may have some impact too.
    But since the LHC is mostly about basic science, I don’t care that much where it takes place, as long as the results belong to all mankind.

    What’s wrong with Brian Cox getting all the attention?
    If he is good at explaining physics to a broader audience, I think it’s fine. Far better than all the “God Particle” and doomsday nonsense that was in the press. Is it because he is British? I would be happy
    if there were anything comparable in my language on our national TV stations.
    Well, I have heard Jon Stewart has done something about the LHC, and Obama will be on MythBusters on US TV soon 🙂
    But the perfect host for the funniest science show in the world would probably be C. -“American scientific companies are cross-breeding humans and animals and coming up with mice with fully functioning human brains”- O’Donnell.
    Sorry, just kidding, hope this is not too offensive.
    But are there any good science shows or reporting in the US?
    I have an eye on your political landscape because it is really entertaining (and a bit frightening) compared to most of Europe, but I’m not aware of what’s going on in your TV regarding science.

  • josh222

    thanks for the hint, just checked it.
    A lot about Jazz, which I like. Ever heard about the
    Moers Festival?

  • So, you all have been a big help and an inspiration. I have begun a blog, Science Springs, http://sciencesprings.wordpress.com, which will be aimed at raising the visibility of scientific research in the United States. Believe me, if one is not in some science community, you guys are all invisible.

    My first post deals with what is closest to my heart, Public Distributed Computing, using BOINC software developed at the Space Sciences Lab at UC Berkeley.

    I have zero scientific background beyond my basic college courses. But “crunching data” for great scientific research projects all around the globe, such as [email protected], gives me a great sense of trying to help by doing what I can do. I am running five Windows machines, two of which are hyperthreaded quads [that equals 16 threads = 16 WUs at a time] plus three lesser machines, running 24/7/365, except when I am on a trip.

    I got every RSS feed, every Twitter feed, and every Facebook feed from the list of US Labs and the list for the US in the Graybook. This will be what I use to try to collect news and give it a face.

    So, hey, on to Higgs…

  • Chen Xiao-Fan

    Does superstring theory predict the correct particle spectra known to us?

  • Hi Seth,

    Nice article.

    I would like to encourage anyone that is supportive of the RECAST proposal to leave a comment on the RECAST website. It will require effort by the experimentalists to incorporate their analyses into a system like this, so we need encouragement that it would be appreciated.

    I also agree strongly with the suggestion that the Tevatron is a very natural place for RECAST to take off. There is a lot of well understood data, and we could broaden the impact of the existing analyses and extend the legacy of the Tevatron.

    A few comments on Options 4 & 5:

    Your Option 4 starts “If an experimental measurement did a good job of explaining exactly the signature they saw, in a way that’s fully corrected for detector effects, then …”

    At that point I agree with what you might say next… but there is a subtlety. Correcting for the detector effects depends on the kinematics of the signal you are interested in. If you want to plot a pT spectrum where detector effects have been ‘corrected for’ (e.g. removed, unfolded, etc.) then you need to assume something about the eta distribution of those jets. The only way you could avoid this is if you corrected the full event in a differential way (e.g. unfold an N-d joint distribution, which usually isn’t feasible).

    I’m familiar with the unfolded/corrected approach for things like Z+jets, where you know what you are looking for and you aren’t expecting large differences in the other distributions. It’s less clear to me how well it would work with something like a non-SUSY signal through a SUSY search, where the kinematics might be very different.

    I also agree with your Option 5 “Encourage experimentalists to put limits on more general models.”, but there we also face an issue of how we will publish the results. The more general theories tend to have more than two parameters… and then you can’t publish your limits on a 2-d piece of paper without resorting to huge tables or some form of digital publication. I’ve been trying to promote the use of the RooFit/RooStats workspace as a digital form of publication of those results for models with more than two parameters.

    Thanks for the post!
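
The eta dependence described in the comment above can be illustrated with a toy unfolding: a correction factor derived assuming a central signal mis-corrects a more forward one. The efficiency curve and eta widths below are invented for the sake of the demonstration.

```python
import random

random.seed(2)

def efficiency(eta):
    """Toy per-jet detection efficiency: best for central jets (eta near 0),
    falling off in the forward region."""
    return max(0.0, 0.95 - 0.2 * abs(eta))

def avg_efficiency(eta_width, n=100_000):
    """Average efficiency for jets with a Gaussian eta distribution."""
    return sum(efficiency(random.gauss(0.0, eta_width)) for _ in range(n)) / n

# Unfolding factor derived under the assumption of a central signal:
eff_central = avg_efficiency(0.8)
correction = 1.0 / eff_central

# A more forward signal actually has a lower average efficiency:
eff_forward = avg_efficiency(1.8)

# Applying the central-signal correction to the forward signal recovers
# less than the true yield, so the unfolded comparison is biased:
recovered = eff_forward * correction
print(f"central eff {eff_central:.2f}, forward eff {eff_forward:.2f}, "
      f"recovered fraction {recovered:.2f} (unbiased would be 1.00)")
```

The bias disappears only if the assumed and true kinematics match, which is exactly why a correction tuned for one signal model cannot be blindly reused for another.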

  • http://op-webtools.web.cern.ch/op-webtools/vistar/vistars.php?usr=LHC1

    had a visit today from the ScienceSprings blog.


  • Somebody visited the MusicSprings blog today. I hope that you found something of interest.