## Posts Tagged ‘Cross-Sections’

### A Day in the Life: Cross-Sections

Sunday, May 1st, 2011

Hello again!

I thought I might take some time to describe what an experimental particle physicist actually does on a day-to-day basis.

I remember when I was an undergraduate studying physics, I found particle physics so fascinating.  It was this high tech world that seemed so glamorous.  But, at the time, I had no idea what a particle physicist did!  Big shiny detectors, and billion dollar machines were all that I knew about!

But, now that I’ve spent two years in the field, perhaps I can give you an idea of what happens “behind the scenes.”  I’m going to talk about cross-sections, and how we go about finding them.

(If you are unfamiliar with what a cross-section is, then take a look at these nice posts by Aidan Randle-Conde and Seth Zenz found here, and here, respectively.)

# The Bane of My Existence: Coding

So one of the things I’ve gotten far better at over the years is computer programming.  Sadly, I purposefully avoided almost all computer-programming classes during my undergraduate studies.  In retrospect, this was a horrifically stupid idea.  If anyone reading this is interested in pursuing a career in a math-, science-, or engineering-related discipline, my suggestion to you is: learn to code before you’re expected to.  It will do wonders for your career.

Moving on though, long gone are the days when particle physics experiments relied on photographic plates and cloud chambers.  Nowadays our detectors record everything electronically.

The detectors spit out electric signals.  We then perform what is called “reconstruction” on these signals (using computer algorithms) to make physics objects (observable particles, like photons, electrons, muons, jets, etc.).

Now, if you are a computer programmer, you might know where I’m going with this discussion.  If not, a bit of background info is required.  There is something called object-oriented programming (OOP).  In OOP you make what is called a class.  A class is like a template, which you use to make objects.

Imagine I own a factory that makes cars.  Somewhere in my factory are the blueprints for the cars I produce.  Well, a blueprint is what a class is in OOP.  Each blueprint is a template for a car, just as each class is a template for an object.  So we see that in this analogy, a car represents an object.

Now classes have what are called methods and data members.  On the blueprint for the 2012 Ford Mustang there is a data member for the car’s color, and there is a method for what type of transmission the car will be manufactured with.  So data members store information (car’s color), and methods perform actions on objects (manufacture with transmission type X).
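In Python (one of the two languages mentioned below), the car analogy might be sketched like this.  The class and all of its names are purely illustrative:

```python
# The blueprint (class) for a car; each instance built from it is an object.
class Car:
    def __init__(self, color):
        self.color = color          # data member: stores information
        self.transmission = None    # data member: not yet installed

    def install_transmission(self, kind):
        # method: performs an action on the object
        self.transmission = kind

# Build one car (object) from the blueprint (class)
mustang = Car("red")
mustang.install_transmission("manual")
```

Every car stamped out of this blueprint carries the same data members and responds to the same methods, which is exactly the property we exploit with physics objects below.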

But what do classes and methods have to do with High Energy Physics?  Well, physicists use the classes of an OOP language to store and analyze our data.  In CMS we use two OOP languages to accomplish this, Python and C++, and we make our own custom classes to store our data.

So what types of classes do we have?  Well, there are classes for all physics objects (an electron, a muon, a jet, etc.), detector pieces, and various other things.  In fact, we’ve created an entire software framework to perform our research.

But let’s take the electron class as an example.  Because of these classes, all electrons in our data have the same structure.  The way they are accessed is the same regardless of the electron, and all the information about a particular electron is stored and retrieved in the same way (via the methods & data members of the electron class).

This is a very good thing, because a physicist may have to look at hundreds of thousands of electrons in the course of their research; so having a standardized way to access information is beneficial.
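A toy sketch of why this standardization helps — note that this `Electron` class is invented for illustration, not CMS’s real class:

```python
# A toy electron class: every electron exposes the same interface.
class Electron:
    def __init__(self, pt, eta):
        self._pt = pt     # transverse momentum in GeV/c
        self._eta = eta   # pseudorapidity

    def pt(self):
        return self._pt

    def eta(self):
        return self._eta

# Whether there are three electrons or three hundred thousand,
# the analysis loop reads them all in exactly the same way:
electrons = [Electron(25.0, 0.4), Electron(41.5, -1.2), Electron(12.3, 2.1)]
high_pt = [e for e in electrons if e.pt() > 20.0]
```

One loop, one interface, any number of electrons.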

So in summary, to do research and analyze data we write code, and we run our analysis code on super-computing clusters around the world.

# Event Selection

Okay, now we know we need to write code to get anywhere, but what do we do from there?

Well, we need to decide what type of physics we want to study, and how to find that physics in the data.

In 2010, the CMS detector recorded 43 inverse picobarns of data.  Now, there are approximately 7 × 10^10 (or 70 billion) proton-proton collisions in one inverse picobarn.  This makes for a total of roughly 3 trillion recorded proton-proton collision events for 2010.
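As a quick sanity check of that arithmetic:

```python
# 43 inverse picobarns of data, ~7e10 collisions per inverse picobarn
inverse_picobarns = 43
collisions_per_inverse_pb = 7e10

total_collisions = inverse_picobarns * collisions_per_inverse_pb
# 3.01e12 collisions, i.e. about 3 trillion
```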

That’s a lot of data…and not all of it is going to be useful to a physicist.  But as they say, one person’s trash is another’s treasure.

For example, in my own analysis I look for low-energy muons inside jets, because this helps me find b-jets in an event.  But an electroweak physicist looking for W or Z bosons decaying to muons is going to think the events that I use are garbage.  My muons are low energy, whereas an electroweak physicist needs high-energy muons.  My muons are within jets, whereas an electroweak physicist needs muons that are isolated (nothing else around them).  So while my data is perfect for the physics I’m trying to do, it is worthless to an electroweak physicist.

With this in mind we as physicists make checklists of what an event needs for it to be considered useful.  This type of checklist is called a pre-selection, and it will include the type of data-acquisition trigger that was used, and a list of physics objects that must be present in the event (along with restrictions on them).

After an event has been tagged as being possibly useful to us, we investigate it further using another checklist, called a full event-selection.

For example, I might be interested in studying B-Physics, and I want to look at the correlations between two B-Hadrons produced in an event.

My pre-selection check-list for this might be:

• Jets detected by the High Level Trigger
• Presence of a Secondary Vertex in the event

My Event Selection Checklist might then be:

• The most energetic jet in the event must have an energy above threshold X
• The invariant mass of the secondary vertex must be above some value Y.

In case you are wondering, a secondary vertex is a point at which a heavy particle decayed within the detector; it is displaced from the primary vertex (the point at which the protons collided).  The invariant mass of the secondary vertex is found by summing the four-momenta of all of the products the heavy particle decayed into, and taking the invariant mass of that sum.
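A minimal sketch of such an invariant-mass calculation, using toy four-momenta and natural units (c = 1); the numbers here are invented for illustration:

```python
import math

# Each decay product is a four-momentum tuple (E, px, py, pz) in GeV.
# Sum the four-momenta, then take m^2 = E^2 - |p|^2 of the sum.
def invariant_mass(products):
    E  = sum(p[0] for p in products)
    px = sum(p[1] for p in products)
    py = sum(p[2] for p in products)
    pz = sum(p[3] for p in products)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Two back-to-back massless products of 2.5 GeV each:
# the pair has an invariant mass of 5 GeV.
products = [(2.5, 0.0, 0.0, 2.5), (2.5, 0.0, 0.0, -2.5)]
mass = invariant_mass(products)
```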

So in summary, we make checklists of what we are looking for; and then implement this into our computer code.
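A hedged sketch of what implementing those checklists might look like — the event fields, and the thresholds standing in for X and Y, are all invented for illustration:

```python
# Made-up thresholds standing in for X and Y in the checklists above
JET_ENERGY_X  = 60.0   # GeV, leading-jet energy threshold
VERTEX_MASS_Y = 1.5    # GeV/c^2, secondary-vertex invariant-mass threshold

def pre_selection(event):
    # Checklist 1: triggered jets and a secondary vertex present
    return event["passed_jet_trigger"] and event["has_secondary_vertex"]

def full_selection(event):
    # Checklist 2: leading jet above X, vertex mass above Y
    return (max(event["jet_energies"]) > JET_ENERGY_X
            and event["sv_mass"] > VERTEX_MASS_Y)

def select(events):
    return [e for e in events if pre_selection(e) and full_selection(e)]

# Two toy events: only the first passes both checklists
events = [
    {"passed_jet_trigger": True, "has_secondary_vertex": True,
     "jet_energies": [80.0, 35.0], "sv_mass": 2.1},
    {"passed_jet_trigger": True, "has_secondary_vertex": False,
     "jet_energies": [95.0], "sv_mass": 0.0},
]
selected = select(events)
```

The cheap pre-selection runs first, so the more detailed full selection is only evaluated on events that are possibly useful.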

# Efficiencies

Finally, we need to measure the efficiency of our selection process: what percentage of the events that are created do we actually select?  We use a combination of real collision data and simulated data to make this estimate.  Our efficiency then accounts for everything from the detector’s ability to record the collision, through our reconstruction process, up to our specific selection techniques listed above.

The reason we need to measure this efficiency is that we are, more often than not, interested in performing inclusive measurements in physics.  Meaning, I want to study every single proton-proton collision that could give insight into my physics process of interest (i.e. all events in which two B-Hadrons were produced).

The problem is, I could never possibly study all such collisions.  For one, we are currently colliding protons every 50 nanoseconds at the LHC.  We design our trigger system to capture only the most interesting events, and this sometimes causes us to purposefully drop a few here and there.  But this is a story for another time, and Aidan has done a good job describing it already in this post.

Anyway, so we convert our measurements back to this “inclusive” case.  This conversion allows us to say, “well if we were able to record all possible events, this is what our results would look like.”
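As a sketch with made-up numbers (the event counts and efficiency here are illustrative, not CMS values), this conversion back to the inclusive case might look like:

```python
# Efficiency estimated from simulation:
# fraction of generated signal events that survive the full chain
n_generated = 100_000   # simulated signal events produced
n_selected  = 23_500    # events surviving trigger + reconstruction + cuts
efficiency  = n_selected / n_generated   # 0.235

# Correcting an observed count back to the inclusive case:
# divide by the efficiency
n_observed  = 470
n_inclusive = n_observed / efficiency    # what we would have seen
                                         # with a perfect detector
```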

But how is this accomplished?  Well, one way to do this is to restrict ourselves to the region in which our data-acquisition triggers have an efficiency greater than 99%.

Courtesy of the CMS Collaboration

Here is a plot that shows the efficiency to record an event via several single-jet triggers available in CMS.  Three triggers are plotted here; each has a minimum energy/momentum threshold required to detect a jet.

As an example, if in a proton-proton collision, a jet is produced with a momentum of 50 GeV/c; then this event will be recorded:

• 99% of the time by the trigger represented by the green line
• 50% of the time by the trigger represented by the blue line
• 0% of the time by the trigger represented by the red line (the jet’s momentum isn’t high enough for that trigger!).

So by playing with the jet energy thresholds in our Event Selection above, I can ensure that my detector will inclusively record all events in  this region of phase space (99% or higher chance to record an event).
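As a toy illustration of finding where that plateau starts — the error-function turn-on curve, its 50 GeV/c threshold, and its width are invented for this sketch, not taken from CMS:

```python
import math

def turn_on(pt, threshold=50.0, width=5.0):
    # Hypothetical smooth trigger turn-on curve (error-function shape):
    # near 0 well below threshold, near 1 well above it
    return 0.5 * (1.0 + math.erf((pt - threshold) / width))

# Scan jet momenta (GeV/c) for the first point where the trigger
# records at least 99% of events -- the start of the plateau
plateau_start = next(pt for pt in range(0, 200) if turn_on(pt) >= 0.99)
```

Any event-selection threshold placed at or above `plateau_start` then lives entirely in the region where the trigger is more than 99% efficient.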

But as I said earlier this is just one way we can transform our measurements into inclusive measurements.  There are usually other steps that must also be done to get back to the inclusive case.

# Experimental Cross-Section

Now that I’ve selected my events and physics objects within those events; and determined the efficiency of this process, I’m ready to make my measurement.

This part of the process takes much less time than our previous two steps.  In fact, it may take a physicist months to write the analysis code and become confident in the selection techniques (rigorous investigation is required for those parts).

Then, to determine an inclusive cross-section with respect to some quantity (say the angle between two B-Hadrons), I make a histogram.

The angle between two B-Hadrons can be between 0 and 180 degrees.  So the x-axis of this histogram is in degrees, and is binned into different regions.  The y-axis is then counts, or the number of times I observed a B-Hadron pair with angle φ between them.

Next, I need to divide the number of counts in each bin of my histogram by three things:

1. The integrated luminosity of my data sample (see Aidan’s post “What the L!?”); this takes the y-axis from counts to units of barn (or, more appropriately, picobarn).
2. My selection efficiency; this takes my measurement to the inclusive case.
3. The width of each bin; this turns my measurement into a differential cross-section, in units of picobarn per degree.
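Those three divisions can be sketched with made-up numbers (the bin contents, efficiency, and binning here are all illustrative):

```python
# Normalizing histogram counts into a differential cross-section
luminosity = 43.0   # integrated luminosity in inverse picobarn
efficiency = 0.5    # selection efficiency (illustrative)
bin_width  = 15.0   # degrees per histogram bin

counts = [3225, 1290, 645]   # toy bin contents (number of B-Hadron pairs)

# counts -> counts / luminosity / efficiency / bin width
dsigma = [n / luminosity / efficiency / bin_width for n in counts]
# each entry is now a differential cross-section in picobarn per degree
```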

And finally, I’m left with a cross-section:

Image Courtesy of the CMS Collaboration.  Here the data points are shown in black, and the theoretical prediction is shown in green.

I’m now left with the differential scattering cross-section, for the production of 2 B-Hadrons, with respect to the angle between the two B-Hadrons.

Three cross-sections are actually plotted here.  Each of them corresponds to one of the triggers in our efficiency graph above.  The researchers who made this plot also multiplied two of the distributions by a factor of 2 and a factor of 4 (as shown in the legend).  This was done so the three curves wouldn’t fall on top of each other, making the data easier for other scientists to interpret.

This plot tells us that, at LHC energies, B-Hadron pairs are more likely to be produced with small angles between them (the data points near the zero region on the x-axis are higher than the other points).  This is because a process called gluon splitting (a gluon splits into a quark and an anti-quark) occurs more often than other processes.  Due to conservation of momentum, the angle between the quark/anti-quark pair that the gluon split into is very small.  But this is also a lengthy discussion for another time!

But that’s how we experimentally measure cross-sections, from start to finish.  We need to: write computer code, make checklists of what we are looking for, determine the efficiency of our selection technique, and then make our measurement.

So hopefully this gives you an idea of what an experimental particle physicist actually does on a day-to-day basis.  This is by no means all we do; measuring cross-sections is only one part of the research being done at the LHC.  I could not hope to cover all of our research activities in a single post.

Until next time,

-Brian