More is less!

Friday, July 15th, 2011

Life is full of uncertainty. And so is particle physics. No matter how sophisticated our models are or how good our understanding is there are still things we don’t know. This is research after all. Uncertainty is a huge part of everything we do.

Every time we make a measurement of anything, we have to qualify our result by saying “It’s this much, give or take that much”, and we refer to the “that much” as the uncertainty. (This has nothing to do with the famous Heisenberg uncertainty!) There are four main sources of uncertainty in our measurements:

[Image: From the control room to our dataset... It's a long and arduous journey full of risks, dangers, and systematic uncertainties.]

  • Statistical uncertainty
  • Model dependent uncertainty
  • Uncertainty from other measurements
  • Systematic uncertainty
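As a back-of-the-envelope sketch of how these combine (standard practice, though not spelled out here): if the four sources are uncorrelated, they add in quadrature to give the total quoted uncertainty,

\[
\delta_{\text{total}} = \sqrt{\delta_{\text{stat}}^2 + \delta_{\text{model}}^2 + \delta_{\text{ext}}^2 + \delta_{\text{syst}}^2},
\]

where \(\delta_{\text{ext}}\) is the piece inherited from other measurements.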

The statistical uncertainty simply comes from having low numbers of events to work with, and we can reduce this uncertainty by recording more data. This is why we love luminosity so much, and why we spend thousands of hours babysitting the detector.
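To make the scaling concrete, here’s a minimal Python sketch (my illustration, not from the post) of the standard Poisson counting estimate, where the relative statistical uncertainty falls as \(1/\sqrt{N}\):

```python
import math

# For a Poisson counting experiment with N observed events, the
# statistical uncertainty on N is about sqrt(N), so the *relative*
# uncertainty is sqrt(N)/N = 1/sqrt(N): quadrupling the dataset
# halves the statistical error bar.
for n_events in (100, 400, 10_000):
    rel_unc = 1.0 / math.sqrt(n_events)
    print(f"N = {n_events:6d}  ->  relative stat. uncertainty ~ {rel_unc:.1%}")
```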

The model dependent uncertainty comes from our choice of physical model and is usually limited by how well we can simulate different models of physics. For a lowly experimental physicist like me, the best thing to do is ask the theorists for these uncertainties. They’re often larger than we’d like, but that’s the price we pay for having access to cutting-edge models.

The uncertainty from other measurements is usually included when we expect another measurement to be more precise in the future. (We can hope!) A good example is the uncertainty on luminosity. As our understanding of the detector improves, this uncertainty can decrease.
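To see why that helps, recall the schematic form of a cross-section measurement (my notation, not the post’s), with \(N_{\text{obs}}\) the observed events, \(N_{\text{bkg}}\) the estimated background, \(\epsilon\) the selection efficiency, and \(\mathcal{L}\) the integrated luminosity:

\[
\sigma = \frac{N_{\text{obs}} - N_{\text{bkg}}}{\epsilon\,\mathcal{L}}.
\]

Since \(\mathcal{L}\) divides the whole expression, its relative uncertainty \(\delta\mathcal{L}/\mathcal{L}\) feeds straight into the relative uncertainty on \(\sigma\), so a better luminosity determination later directly tightens the result.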

The final uncertainty, the systematic uncertainty, is the one that keeps physicists awake at night. It’s what we all fear. These kinds of uncertainties enter into our work at every step; they can take months to evaluate, and they can spur long and sometimes fierce debate. Every time we manipulate the dataset in any way, we bias our measurement somehow. Even worse, there’s no easy way of knowing how many systematic uncertainties there are, or whether we have taken them all into account.

[Image: After hours of discussion... we're still not entirely sure what we're looking at, but we've got a good idea.]

This bias can be very simple to evaluate, or it can be seemingly intractable. For example, let’s say that, for reasons beyond comprehension, we choose to ignore every physics event that is recorded on a Tuesday. Unless there is some reason to think that the data we record depends on the day of the week, we can just exclude these events from our analysis and everything is fine. (Apart from the fact that we’d need to be insane to throw away perfectly good data like this!)

But let’s imagine that one of the experts likes to be around the ATLAS Control Room on weekdays, and that they are very good, almost obsessive, at making sure the muon systems are in fine working order. That would mean that, on average, we would expect marginally better muon performance on a Tuesday than on a Saturday or Sunday. All of a sudden the performance of the detector depends on the day of the week. With so many factors like this, it can be very hard to work out when to stop taking systematic uncertainties into account.

So how do we get around the problem of performing a completely intractable analysis, with underlying processes which are chaotic and poorly understood, in time for a conference? By performing another completely intractable analysis with underlying processes which are chaotic and poorly understood, of course! The easiest way to remove all these uncertainties is to take the ratio of one measurement to another: then nearly every single systematic uncertainty magically cancels out, and we don’t have to care whether Alice or Bob was on shift when we took the data, or about the quality of the coffee in Restaurant 1 when the Good Runs List was being compiled.
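Here’s a minimal Python sketch of that cancellation (hypothetical numbers and a made-up shared scale factor, purely for illustration): two yields that share the same multiplicative nuisance, say a luminosity or efficiency scale \(s\), each shift as \(s\) varies, but their ratio doesn’t budge:

```python
def measured_yield(true_rate: float, scale: float) -> float:
    """Observed yield = true rate distorted by a shared scale factor."""
    return true_rate * scale

# Vary the shared systematic by +/-5%: each yield moves,
# but the ratio of the two is completely insensitive to it.
for s in (0.95, 1.00, 1.05):
    signal = measured_yield(120.0, s)   # e.g. the channel under study
    control = measured_yield(480.0, s)  # e.g. a well-understood control sample
    print(f"s = {s:.2f}: signal = {signal:6.1f}, "
          f"control = {control:6.1f}, ratio = {signal / control:.4f}")
```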

For my analysis I’m looking at the production and decay of the charged Higgs boson. (This is different to the Standard Model Higgs boson, and if it exists, it indicates new physics.) The Feynman diagram for the process looks like this:

[Image: How to make a charged Higgs boson]

That’s a lot of particles! The initial particles are gluons and quarks, which means we need to deal with some QCD and the rates of production of quarks, gluons, and all that nasty stuff. So, to safely cancel out all these uncertainties, we can take a look at the closely related Standard Model process:

[Image: Our biggest irreducible background (and best control sample)]

This kind of process happens very often, as far as very heavy particle processes go, and it’s quite well understood. So if we take the ratio of the two measurements, we can safely ignore everything that happens before the \(t\bar{t}\) pair is produced, and the whole analysis becomes much simpler!
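Schematically (my notation, not the post’s), the quantity being measured is something like

\[
R = \frac{\sigma(pp \to \text{charged Higgs final state})}{\sigma(pp \to t\bar{t} \to \text{Standard Model final state})},
\]

and since both processes start from the same gluons and quarks, the shared production factors (parton densities, the QCD production rate, the luminosity) appear in both numerator and denominator and drop out of \(R\).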

In fact, taking ratios like this is a very common trick, and it’s not just experimentalists who use it to get their results quickly and efficiently. Theoreticians often deal with huge uncertainties too, and while there are many values they can’t estimate directly, they can estimate lots of ratios very precisely, and it’s these estimates that make life easier for the experimentalists.

So in the world of particle physics, “More is less”!
