
Archive for the ‘Uncategorized’ Category

The April Meeting

Tuesday, May 3rd, 2011

2011 American Physical Society April Meeting, Anaheim, CA

Hello from Anaheim, California!

Yes, it is that time of year: the April APS (American Physical Society) meeting.  It has become tradition that each year in April, the membership of the APS Division of Particles and Fields meets together with the membership of somewhat related divisions: Astrophysics, Nuclear Physics, Computational Physics, Physics of Beams, and Plasma Physics.  I find these meetings particularly broadening, as I can sometimes hear about topics that I do not necessarily get exposure to in my day-to-day work in hadron collider physics.  In fact, some of the more entertaining session titles I have seen here include “Black Holes: Nature’s Ultimate Spinmeisters,” “Much Ado about Nothing: The Quantum Vacuum,” and “So Many Dynamos: Flow-Generated Magnetic Fields in Nature, in the Computer, and in the Lab.”  (I believe the latter also wins for longest session title, barely beating out the more straightforward and understandable, for me at least, “Precision Measurements, Fundamental Symmetries, and Tests of the Standard Model.”)

Other interesting topics at this meeting, such as “Nuclear Weapons at 65,” “The Status of Arms Control,” and “Best Practices in K12 Physics Teacher Education Programs,” are a result of the inclusion of the Forum on Society, the Forum on Education, and other such broad-interest groups in this meeting.  Yet in my opinion one of the most important roles that these APS (and the Divisional) meetings play is to provide a forum for students to give 10-15 minute parallel session talks on their own analyses.  At other conferences it is rare to have single-result talks rather than summaries, and summaries are generally given to more senior people.  This is often the first (and sometimes only) chance a graduate student has to prepare a talk for a non-expert (non-working group) audience. With these talks they learn to prepare a summary of their work with an appropriate level of detail, omitting jargon, timing it properly, and most importantly, stating the big picture (the context) of their work, as well as the bottom line.  When I was a graduate student I found the APS meetings to be valuable training in public presentations.  For this reason I sent my student, David Cox, from Fermilab to Anaheim to present his own recent work on our searches for a massive top-like, perhaps 4th generation, quark (“tprime“) at the Tevatron.  He has already had practice giving talks at other meetings, but this is still good experience for him, and he is also attending useful career sessions for graduate students.

My own main purpose for attending this meeting has been to present results in an invited plenary talk on Top Quark Physics, which I delivered on Saturday morning during one of several plenary sessions. My talk focused on results from the Tevatron‘s CDF and D0 experiments, not from the LHC.  This was in fact a tall order for a 30-minute talk, since the large datasets from Run 2 of the Tevatron, together with the years of experience with these detectors and analysis tools, have meant a plethora of interesting and innovative results from CDF and D0 constantly being released to the public.  Measurements of the top quark mass, for example, the all-important electroweak parameter, have reached a relative precision of better than one percent, much better than the Run 2 goal of 3 GeV.  Yet some relatively new measurements, such as the studies of the difference between the mass of the top quark and the mass of the anti-top quark (expected to be zero if CPT is conserved), still have very little statistical sensitivity due to the difficulty of the measurement.

The measurements of the forward-backward asymmetry AFB in top pair production have received attention earlier this year not only because both CDF and D0 continue to see a 2-sigma (or more) discrepancy with the theoretical predictions, but also because there appears to be a dependence on the invariant mass of the top pair system, which could imply the existence of new high-mass particles decaying to top quarks.  (The original AFB measurement at the Tevatron was actually performed by my postdoc, Tom Schwarz, now the CDF Top Group convener, when he was a thesis student at U. Michigan, and we’ve continued to study this anomaly with our collaborators from Michigan since then.)  This measurement has generated quite a bit of theoretical interest, so I was happy to devote some time to it, along with many other interesting topics, such as whether the top quark really has an exotic -4/3 charge instead of the +2/3 charge of the Standard Model.
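For the curious, the forward-backward asymmetry mentioned above is usually quoted by the Tevatron experiments in terms of the rapidity difference \(\Delta y\) between the top and anti-top quark, roughly

\[
A_{FB} = \frac{N(\Delta y > 0) - N(\Delta y < 0)}{N(\Delta y > 0) + N(\Delta y < 0)},
\]

so a positive value means that the top quark prefers to follow the direction of the incoming proton.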

While the Tevatron is producing spectacular results in the area of top quark physics (and many other areas), the reality is that even at half of the design energy, the LHC will soon outshine the Tevatron for most measurements.  The production cross section (production rate) for top pairs at the 7 TeV LHC is much greater than at the ~2 TeV Tevatron due to the higher energies available.  Measurements of things like the top-antitop mass difference, or the top quark charge, will soon have better sensitivity at the LHC.  It may take a little longer for the LHC experiments to catch up in the area of the precision top mass measurement, mainly due to the complicated systematic uncertainties that need to be taken into account, but eventually the Tevatron will be bested there as well.  The AFB measurement will be difficult to challenge or improve upon at the LHC, however, since the asymmetry is thought to result from quark-antiquark annihilation, which is much more dominant at the Tevatron’s proton-antiproton collider than at the LHC’s proton-proton collider.  For that we will still have more to say from the Tevatron’s final datasets.

Giving this talk has been a nice way for me to pay tribute to the amazing results from dedicated analysts at the Tevatron over the ~16 years since the top quark was discovered there. Although the Tevatron is scheduled to close down later this year, I cannot help but be excited about the new projects I and many others are working on at the LHC.  Some are topics that we could barely touch at the Tevatron, such as boosted top quarks, which I am currently working on at CMS.  (See Flip Tanedo’s recent post on this subject from ATLAS.)  Some, like our tprime searches, have shown hints of excess events on the tails of the distribution, so we are excited to see whether this excess grows at the LHC.  Regardless of the particular topic, we are all approaching the LHC with the knowledge we have gained from the Tevatron, and are excited to continue to explore the particle frontier with the greater rates and energies of the LHC.  And we are definitely on the look-out for discoveries!

 


Any large collaboration like ATLAS needs a process for allowing members to communicate their work to each other and to the public. There have been some recent questions about how this process works, so I’m going to address the topic in this post.

We particle physicists are a bit unusual, though not unique, among scientific disciplines in that our authors sign official papers in alphabetical order as opposed to being ranked by how much they contributed to the work. We are also famous for our long author lists, which for the large LHC experiments include up to a few thousand people since all members of the collaboration sign each paper unless individuals request that their names be removed.

There has been some debate in the field about whether our author lists should be more exclusive and include only those people who worked directly on the physics analysis being published. I have always appreciated the lack of squabbling over author lists and the way our inclusive list gives a nod to the fact that our detector is incredibly complex and could only be built, maintained and interpreted for physics results with a large team. There are also many people who have contributed to the upstream work of an analysis, which makes the final result possible. The counter-argument is that it is nearly impossible for people outside the field to know who did the actual analysis work for any particular result. I think that people inside the field can usually find out who did what, even at other experiments, pretty easily by seeing who gives the related talks at the conferences and from reference letters within the collaboration, and even just by asking around.

Regardless of where you come down on the author list debate, the fact that our author list is currently the entire collaboration puts a burden on our result approval process in that every author needs to be given the opportunity to comment on every result he/she will sign.

Before we worry about communicating our results to the world, we need to have a mechanism to communicate our work in real time to each other within the collaboration. This allows us to scrutinize the steps as they are taken so we know that we are building a solid analysis. We achieve that by giving presentations at meetings and writing emails, but probably our most efficient channel is writing notes to each other to document snapshots of the early stages of an analysis. This documentation can have a much smaller list of authors who are responsible for the specific set of ideas presented. Documents like this are simply labeled “COM” for “communication,” and they are not intended for public consumption. Any ATLAS member can write a COM note at any time, and people do not necessarily put on the author list the names of all of the people on whose work they rely.

If you want your work to move toward official internal ATLAS approval, you can request that it be given the status “INT” for “internal”. At this point leaders of the relevant physics group appoint reviewers, and the authors have a chance to get feedback in a formal way from other collaboration members. A note that has gained INT status has undergone at least some peer review, though it stays internal to the collaboration.  The content of the INT note is often too technical for general public interest, but can be invaluable for other ATLAS collaborators who want to either reproduce a result or take the analysis to the next step with a good understanding of everything that has come before.

Some COM-notes can also become public (i.e. available to everyone on the planet). Together with published papers, these public notes report the scientific output of the experiment.  In order for the result to take the final step to become public, an editorial board is appointed, and often a new note is written (starting as a COM note) with an attempt to remove ATLAS-specific jargon and details that people outside the collaboration would not necessarily find useful. With the help of the editorial board, the note is brought to a stage where it is ready to receive feedback from the entire collaboration. If the note is approved by the collaboration it will be posted to an archive that is available to the public, submitted for publication and/or the results will be shown at conferences.

There are, of course, many details that I haven’t described, but the end result is that an analysis that has been publicly approved by ATLAS will have come under scrutiny at many stages of the process. People work very hard to make sure that the results presented to the public are worthy of being signed by the collaboration. Our goal is to work as a team as quickly as we can to get these results out to the rest of the world while at the same time ensuring that we have not made mistakes.  Our scientific reputation is on the line.


Hello again!

I thought I might take some time to describe what an experimental particle physicist actually does on a day-to-day basis.

I remember that when I was an undergraduate studying physics, I found particle physics so fascinating.  It was this high-tech world that seemed so glamorous.  But, at the time, I had no idea what a particle physicist did!  Big shiny detectors and billion-dollar machines were all that I knew about!

But, now that I’ve spent two years in the field, perhaps I can give you an idea of what happens “behind the scenes.”  I’m going to talk about cross-sections, and how we go about finding them.

(If you are unfamiliar with what a cross-section is, then take a look at these nice posts by Aidan Randle-Conde and Seth Zenz found here, and here, respectively.)

 

The Bane of My Existence: Coding

So one of the things I’ve gotten far better at over the years has been computer programming.  Sadly, I purposefully avoided almost all computer-programming classes during my undergraduate studies.  This was a horrifically stupid idea in retrospect.  And if anyone reading this is interested in pursuing a career in a math, science, or engineering-related discipline, my suggestion to you is to learn to code before you’re expected to.  It will do wonders for your career.

Moving on though, long gone are the days when particle physics experiments relied on photographic plates and cloud chambers.  Nowadays our detectors record everything electronically.

The detectors spit out electrical signals.  Then we perform what is called “reconstruction” on these signals (using computer algorithms) to build physics objects (observable particles, like photons, electrons, muons, jets, etc.).

Now, if you are a computer programmer, you might know where I’m going with this discussion.  If not, a bit of background info is required.  There is something called object-oriented programming (OOP).  In OOP you make what is called a class.  A class is like a template, which you use to make objects.

Imagine I own a factory that makes cars.  Somewhere in my factory are the blueprints for the cars I produce.  Well, a blueprint is what a class is in OOP.  Each blueprint is a template for a car, just as each class is a template for an object.  So we see that in this analogy, a car represents an object.

Now classes have what are called methods and data members.  On the blueprint for the 2012 Ford Mustang there is a data member for the car’s color, and there is a method for what type of transmission the car will be manufactured with.  So data members store information (car’s color), and methods perform actions on objects (manufacture with transmission type X).
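To make the blueprint analogy concrete, here is a minimal sketch in Python (one of the languages used in CMS, as mentioned below); the class, its data member, and its method are purely illustrative:

    class Car:
        """A 'blueprint' (class) from which individual car objects are built."""

        def __init__(self, color, transmission):
            # Data members: they store information about this particular car
            self.color = color
            self.transmission = transmission

        def describe(self):
            # A method: it performs an action using the object's data members
            return "A {} car with a {} transmission".format(self.color, self.transmission)

    mustang = Car("red", "manual")   # one object built from the blueprint
    print(mustang.describe())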

But what do classes and methods have to do with High Energy Physics?  Well, physicists use the classes of an OOP language to store and analyze our data.  In CMS we use two OOP languages to accomplish this, Python and C++, and we make our own custom classes to store our data.

So what types of classes do we have?  Well, there are classes for all physics objects (electrons, muons, jets, etc.), detector pieces, and various other things.  In fact we’ve created an entire software framework to perform our research.

But let’s take the electron class as an example.  Because of these classes, all electrons in our data have the same structure.  The way they are accessed is the same regardless of the electron, and all the information about a particular electron is stored and retrieved in the same way (via the methods and data members of the electron class).

This is a very good thing, because a physicist may have to look at hundreds of thousands of electrons in the course of their research; so having a standardized way to access information is beneficial.
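That standardization is easiest to appreciate in a loop.  The snippet below is only a cartoon (the class, attributes, and cuts are invented for illustration, not the real CMS interface), but it shows every electron being accessed through exactly the same data members:

    # A cartoon electron class and analysis loop; the structure and numbers
    # are invented for illustration and are not the actual CMS software.
    class Electron:
        def __init__(self, pt, eta):
            self.pt = pt      # transverse momentum (GeV)
            self.eta = eta    # pseudorapidity

    # Pretend each "event" is just a list of reconstructed electrons
    events = [[Electron(35.2, 0.4), Electron(8.1, 2.9)],
              [Electron(22.7, -1.2)]]

    # Because every electron has the same structure, the same selection
    # code works for all of them, in every event.
    selected = [el for event in events for el in event
                if el.pt > 20.0 and abs(el.eta) < 2.5]
    print(len(selected), "electrons passed the selection")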

So in summary, to do research and analyze data we write code, and we run our analysis code on super-computing clusters around the world.

 

Event Selection

Okay, now we know we need to write code to get anywhere, but what do we do from there?

Well we need to decide on what type of physics we want to study.  And how to find that physics in the data.

In 2010, the CMS detector recorded 43 inverse picobarns of data.  Now, there are approximately 7 × 10^10 (or 70 billion) proton-proton collisions in one inverse picobarn.  This makes for a total of 3 trillion recorded proton-proton collision events for 2010.

That’s a lot of data…and not all of it is going to be useful to a physicist.  But as they say, one person’s trash is another’s treasure.

For example, in my own analysis I look for low energy muons inside jets because this helps me find b-Jets in an event.  But an electro-weak physicist looking for W or Z’s decaying to muons is going to think the events that I use are garbage.  My muons are low energy whereas an electro-weak physicist needs high energy muons.  My muons are within jets whereas an electroweak physicist needs muons that are isolated (nothing else around them).  So while my data is perfect for the physics I’m trying  to do, it is worthless to an electroweak physicist.

With this in mind, we as physicists make checklists of what an event needs in order to be considered useful.  This type of checklist is called a pre-selection, and it will include the type of data-acquisition trigger that was used and a list of physics objects that must be present in the event (along with restrictions on them).

After an event has been tagged as being possibly useful to us, we investigate it further using another checklist, called a full event-selection.

For example, I might be interested in studying B-Physics, and I want to look at the correlations between two B-Hadrons produced in an event.

 

My pre-selection check-list for this might be:

  • Jets detected by the High Level Trigger
  • Presence of a Secondary Vertex in the event

My Event Selection Checklist might then be:

  • The most energetic jet in the event must have an energy above threshold X
  • The invariant mass of the secondary vertex must be above some value Y.

 

In case you are wondering, a secondary vertex is a point at which a heavy particle decayed within the detector; it occurs away from the primary vertex (the point at which the protons collided).  The invariant mass of the secondary vertex is found by combining the four-momenta of all of the products that the heavy particle decayed into and computing the mass of that combination.

So in summary, we make checklists of what we are looking for and then implement them in our computer code.
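To give a flavor of what “implementing the checklists in code” can look like, here is a toy Python sketch.  The event structure, trigger flag, and thresholds are all invented for illustration; a real selection lives inside the experiment’s software framework.

    # Toy versions of the two checklists above; every name, structure, and
    # number here is invented for illustration only.
    JET_ENERGY_THRESHOLD = 56.0   # "threshold X" (GeV)
    SV_MASS_THRESHOLD = 2.0       # "value Y" (GeV)

    def passes_preselection(event):
        # Checklist 1: the jet trigger fired and a secondary vertex was found
        return event["fired_jet_trigger"] and len(event["sv_masses"]) > 0

    def passes_event_selection(event):
        # Checklist 2: leading jet above threshold X, and the most massive
        # secondary vertex above value Y
        return (max(event["jet_energies"]) > JET_ENERGY_THRESHOLD
                and max(event["sv_masses"]) > SV_MASS_THRESHOLD)

    event = {"fired_jet_trigger": True,
             "jet_energies": [82.0, 34.5],
             "sv_masses": [2.7, 1.1]}

    if passes_preselection(event) and passes_event_selection(event):
        print("Event kept for analysis")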

 

Efficiencies

Finally, we need to measure the efficiency of our selection process, i.e. what fraction of the events that are created we actually select.  We use a combination of real collision data and simulated data to make this estimate.  Our efficiency is then a measure of everything from the detector’s ability to record the collision, through our reconstruction process, to the specific selection techniques listed above.

The reason we need to measure this efficiency is that we are, more often than not, interested in performing inclusive measurements in physics.  Meaning, I want to study every single proton-proton collision that could give insight into my physics process of interest (i.e. all events in which two B-Hadrons were produced).

The problem is, I could never possibly study all such collisions.  For one, the LHC is currently colliding protons every 50 nanoseconds.  We design our trigger system to capture only the most interesting events, and this sometimes causes us to purposefully drop a few here and there.  But this is a story for another time, and Aidan has done a good job describing this already in this post.

Anyway, so we convert our measurements back to this “inclusive” case.  This conversion allows us to say, “well if we were able to record all possible events, this is what our results would look like.”

But how is this accomplished?  Well, one way to do this is to restrict ourselves to the region in which our data-acquisition triggers have an efficiency of greater than 99%.

 

Courtesy of the CMS Collaboration

 

Here is a plot that shows the efficiency to record an event via several single-jet triggers available in CMS.  Three triggers are plotted here; each has a minimum energy/momentum threshold for detecting a jet.

As an example, if a jet with a momentum of 50 GeV/c is produced in a proton-proton collision, then this event will be recorded:

  • 99% of the time by the trigger represented by the green line
  • 50% of the time by the trigger represented by the blue line
  • 0% of the time by the trigger represented by the red line (the jet’s momentum isn’t high enough for that trigger!).

So by playing with the jet energy thresholds in our Event Selection above, I can ensure that my detector will inclusively record all events in  this region of phase space (99% or higher chance to record an event).
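In code, estimating a trigger’s efficiency as a function of jet momentum often boils down to simple counting: in each momentum bin, divide the number of events where the trigger fired by the total number of events.  The sketch below uses invented numbers purely to show the bookkeeping; a real measurement uses dedicated reference samples.

    # Schematic trigger-efficiency estimate; the jet momenta and trigger
    # decisions are invented numbers, just to show the bookkeeping.
    import numpy as np

    jet_pt = np.array([32., 41., 48., 55., 63., 72., 90., 120.])  # GeV/c
    fired  = np.array([0,   0,   1,   1,   1,   1,   1,   1], dtype=bool)

    bins = np.array([30., 50., 70., 100., 150.])
    counts_all, _   = np.histogram(jet_pt, bins=bins)
    counts_fired, _ = np.histogram(jet_pt[fired], bins=bins)

    efficiency = counts_fired / counts_all
    print(efficiency)   # efficiency in each pT bin

    # For the measurement we then keep only events whose leading jet falls
    # in the "plateau" region, where this efficiency exceeds ~99%.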

But as I said earlier this is just one way we can transform our measurements into inclusive measurements.  There are usually other steps that must also be done to get back to the inclusive case.

 

Experimental Cross-Section

Now that I’ve selected my events and physics objects within those events; and determined the efficiency of this process, I’m ready to make my measurement.

This part of the process takes much less time than our previous two steps.  In fact, it may take months for us to write our analysis code and become confident in our selection techniques (rigorous investigation is required for this part).

Then, to determine an inclusive cross-section with respect to some quantity (say the angle between two B-Hadrons), I make a histogram.

The angle between two B-Hadrons can be between 0 and 180 degrees.  So the x-axis of this histogram is in degrees, and is binned into different regions.  The y-axis is then counts, or number of times I observed a B-Hadron pair with angle φ between them.

Next, I need to divide the number of counts in each bin of my histogram by three things (a small numerical sketch follows the list below):

 

  1. The integrated luminosity of my data sample (see Aidan’s post “What the L!?”); this takes the y-axis from counts to units of barns (or, more appropriately here, picobarns).
  2. My selection efficiency; this takes my measurement to the inclusive case.
  3. The width of each bin; this turns the result into a differential cross-section, in picobarns per degree rather than just picobarns.
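Here is a rough numerical sketch of that arithmetic; every number below is an invented placeholder rather than real CMS data.

    # Schematic differential cross-section calculation; all numbers are
    # invented placeholders, not a real measurement.
    import numpy as np

    counts     = np.array([5200., 3100., 2050., 1600., 1450.])  # events per angular bin
    bin_width  = 36.0     # degrees (0-180 split into 5 bins)
    luminosity = 3.1      # integrated luminosity in inverse picobarns (pb^-1)
    efficiency = 0.62     # overall selection efficiency

    # counts / pb^-1 gives picobarns; dividing by the efficiency corrects
    # back to the inclusive case; dividing by the bin width gives pb/degree.
    dsigma_dphi = counts / (luminosity * efficiency * bin_width)
    print(dsigma_dphi)    # differential cross-section, pb per degree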

 

And finally, I’m left with a cross-section:

Image Courtesy of the CMS Collaboration.  Here the data points are shown in black, and the theoretical prediction is shown in green.

 

I’m now left with the differential scattering cross-section for the production of two B-Hadrons, with respect to the angle between them.

Three cross-sections are actually plotted here.  Each of them corresponds to one of the triggers in our efficiency graph above.  The researchers who made this plot also multiplied two of the distributions by a factor of 2 and a factor of 4 (as shown in the legend).  This was done so that the three curves wouldn’t fall on top of each other and other scientists could interpret the data more easily.

This plot tells us that, at LHC energies, B-Hadron pairs are more likely to be produced with small angles between them (the data points near the zero region on the x-axis are higher than the other points).  This is because a process called gluon splitting (a gluon splits into a quark and an anti-quark) occurs more often than other processes.  Due to conservation of momentum, the angle between the quark and anti-quark that the gluon split into is very small.  But this is also a lengthy discussion for another time!

But that’s how we experimentally measure cross-sections, from start to finish.  We need to: write computer code, make checklists of what we are looking for, determine the efficiency of our selection technique, and then make our measurement.

So hopefully this gives you an idea as to what an experimental particle physicist actually does on a day-to-day basis.  This is by no means all we do; measuring cross-sections is only one part of the research being done at the LHC.  I could not hope to cover all of our research activities in a single post.

 

Until next time,

-Brian

 

 

 


Jet spotting

Saturday, April 30th, 2011

Every week I try to take a few hours to study something different. The idea is that this will give me a broader sense of what’s going on within the ATLAS collaboration and the world of particle physics at large. Last week I was mostly watching Gavin Salam’s superb lectures on jets. They’re available as videos from here.

So what is a jet? It’s certainly nothing to do with aeroplanes. Jets are what we observe at ATLAS when a highly energetic quark or a gluon (collectively referred to here as partons) is produced in a collision.

I won’t take the time to explain the physics behind jets and how they come into being. Those interested can see Flip’s excellent post. In essence, instead of the individual partons, what we see in the detector is a spray of collimated particles. This is what we refer to as a “jet”.

At hadron colliders, such as the LHC, jets are everywhere. In fact the vast majority of interactions at a hadron collider will result in the creation of multiple jets. They are our window on partons and on to the strong force itself.

Jets are so ubiquitous that it’s important we’re able to reliably identify them within our detector. Unfortunately this isn’t always such an easy task. The event display below illustrates a typical jet event. How many jets do you see?

 


 

Here is ATLAS’s answer.

 


 

In this case the jets have helpfully been colour coded. In real life, this doesn’t happen.

As you can tell the definition of a jet can be somewhat ambiguous. At ATLAS the trigger system has to quickly identify thousands of jets a second in order to pick out the interesting events to record. Identifying such a large number of jets is no easy feat.

To solve this problem we use jet algorithms. These are pieces of software which define jets based on what we see in the detector. They come in all sorts of shapes and sizes, from “simple” versions where a jet is defined as all the particles inside a cone, to more advanced versions which sequentially combine together individual particles based on their separation and energy.

Different algorithms have different strengths and weaknesses. Cone based jets are relatively simple and provide nice, round jets. Unfortunately though, the jets they identify can easily be altered by changes within the jet itself, or by small amounts of energy coming from unrelated collisions. This makes them very hard to compare to the predictions from theory. More complicated algorithms such as the “kT” algorithm remove these ambiguities, but often result in “ugly” irregularly shaped jets.

The current vogue algorithm both at CMS and ATLAS is the so called “anti-kT” algorithm. This starts from the most energetic single particles and sequentially combines them with everything nearby, stopping at some pre-defined distance. This algorithm results in the identification of nice, round jets, and does this consistently regardless of the small amounts of additional energy or the structure of the jets themselves.
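For the curious, these sequential algorithms can be summarized by a “distance” that is recomputed after every merge. Schematically, the generalized kT family uses

\[
d_{ij} = \min\!\left(p_{T,i}^{\,2p},\, p_{T,j}^{\,2p}\right)\frac{\Delta R_{ij}^{2}}{R^{2}},
\qquad
d_{iB} = p_{T,i}^{\,2p},
\]

where \(\Delta R_{ij}\) is the angular separation between objects i and j and R is the jet radius parameter. The pair with the smallest \(d_{ij}\) is merged, and an object is declared a jet when its “beam distance” \(d_{iB}\) is the smallest quantity left. Choosing p = 1 gives the kT algorithm, while p = −1 gives anti-kT, which is why anti-kT builds jets outward from the hardest particles and tends to produce those nice, round shapes.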


Science friendly browsing

Wednesday, April 27th, 2011

(Note: This post requires JavaScript to be enabled on your browser.)

For years the internet has been a wonderful tool for people all over the world, bringing distant communities together, changing the way we think about communication and information, and having a huge impact for the better on nearly every aspect of our lives. Unfortunately there are still major problems when it comes to sharing scientific knowledge. This is changing very quickly though, making this an exciting time for internet-savvy scientists.

LaTeX sets the high standards we have come to expect for mathematical markup.

How did this kind of situation arise? Science journals have had their own markup language, LaTeX, for decades, predating the internet by many years. LaTeX is available to anyone and makes it very easy to generate simple, attractive documents with excellent support for a wide variety of mathematical symbols. (Making complicated documents isn’t quite so easy, but still possible!) Producing documents like this can be very computationally intensive, as every margin and the space between every character is analyzed, with restrictions imposed by paper sizes.

On the other hand, the hypertext markup language (HTML) and cascading style sheets (CSS) are the standards which are widely used on the internet, and they are focused mainly on the aesthetics of more popular kinds of journalism. The HTML standards are intended to work on any operating system, and they should give a semantic description of the content of a webpage, without consideration for style. The CSS then takes over and decides how the information is displayed on the screen. (Check out the CSS Zen Garden to see the power of CSS.) In principle, writing a webpage that follows the HTML and CSS standards is quite easy, but in reality it can be a very problematic and tedious task. The internet is a dynamic medium, with different developers trying different tricks, different browsers supporting different features and no real control concerning the best practices. Groups such as the W3C have tried to standardize HTML and CSS, with quite a lot of success, but it’s a slow process and it has taken years to get to where we are today.

CSS makes the internet an aesthetically compelling medium. (CSS Zen Garden)

Trying to get mathematical markup with these kinds of constraints is quite tricky! Math is inherently two-dimensional, making good use of subscripts, superscripts, indices, fractions, square roots… HTML is much better at handling long passages of text which flow from one line to the next, without much scope for anything as exciting as a nested superscript. And so for a long time it was very awkward to include math on a webpage.

Over the years there have been many approaches to this problem, including LaTeX2HTML, MathML, using images, or expecting the poor user to interpret LaTeX markup! Eventually, the CSS standards settled down, browsers started to conform to the same behavior, and it became possible to display math without the use of any images, plugins or other suboptimal solutions.

With the exciting developments of Web 2.0, we have access to MathJAX. We can take LaTeX markup and put it directly into a webpage and MathJAX can turn this:

\[
  \nabla \times \vec{H} = \vec{J} + \frac{\partial \vec{D}}{\partial t}
\]

into this:

\[
\nabla \times \vec{H} = \vec{J} + \frac{\partial \vec{D}}{\partial t}
\]

Beautiful! It also works inline like this: \(E=mc^2\) becomes \(E=mc^2\). (None of this will work if JavaScript is disabled on your browser, which is a shame for you, because it looks very pretty on the page!) Using MathJAX is as simple as writing normal LaTeX between \[ and \] symbols for block-level text, \( and \) symbols for inline text.

We finally have a way to show equations on any browser, with any operating system, that complies with all the standards laid out by the W3C. So much for math markup. What about technical drawings and graphs? Scientists have been using vector graphics in their work for decades, so it would also be nice to have a way to show these kinds of images.

This is the kind of image we can make with the canvas! Making graphs can be easy, and the output can be beautiful and interactive.

Some browsers have supported vector graphics for a few years, but once again, different browsers behave differently, and vector graphics support arrived rather late, so there are large performance issues. However, with the development of the next generation of HTML, browsers should support a brand new kind of image, the HTML5 canvas. It allows designers of websites to draw detailed images on the fly, even allowing the user to interact with the images! It will take some time before most of the users on the internet have access to the HTML5 canvas, so until then we can’t rely on these new features to share information.

On the other hand it means that we are living in a very exciting time, where anyone can develop their own work using the canvas and help shape our experiences with the internet in the future! The standards used online have always lagged behind how the latest developers are using the tools at their disposal, and when the standards get updated the ingenuity of the developers is taken into account. Soon the canvas will support 3D graphics, making our online experiences even richer! Want to help shape how this is developed? Then get involved! Try out the canvas today and see what you can create! There are dozens of fascinating examples at Canvas Demos. Here are some of my favorites:

  • MolGrabber 3D – a great way to visualize molecules in three dimensions.
  • Flot – how to show graphs on a webpage.
  • Pacman – a clone of the classic arcade game!

The internet is going to get very cool in the near future, giving us the ability to share information like never before! When anyone can create animations and simulations, blogs like this will become even more interactive, even more compelling and even more useful. I can’t wait to see what MathJAX and the HTML5 canvas will deliver!


The CERN Accelerator Complex

Sunday, April 24th, 2011

With all the buzz this past week regarding the breaking of the world instantaneous luminosity record, I thought it might be interesting for our readers to get an idea of how we as physicists achieved this goal.

Namely, how do we accelerate particles?

(This may be a review for some of our veteran readers due to this older post by Regina)

 

The Physics of Acceleration

Firstly, physicists rely on a principle many of us learn in our introductory physics courses: the Lorentz Force Law.  This result from classical electromagnetism states that a charged particle in the presence of external electric and/or magnetic fields will experience a force.  The direction and magnitude (how strong) of the force depend on the sign of the particle’s electric charge and on its velocity (the direction it’s moving in, and with what speed).
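Written out, the force on a particle of charge q moving with velocity \(\vec{v}\) through electric and magnetic fields \(\vec{E}\) and \(\vec{B}\) is

\[
\vec{F} = q\left(\vec{E} + \vec{v} \times \vec{B}\right).
\]

The electric term can change the particle’s energy (that’s the acceleration), while the magnetic term is always perpendicular to the velocity and so only bends the particle’s path; that division of labor is exactly how the two fields are used in an accelerator, as described below.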

So how does this relate to accelerators?  Accelerators use radio-frequency cavities to accelerate particles.  A cavity has several conductors that are hooked up to an alternating-current source.  Between the conductors there is empty space, but this space is spanned by a uniform electric field.  This field will accelerate a particle in a specific direction (again, depending on the sign of the particle’s electric charge).  The trick is to flip this current source such that as a charged particle goes through a succession of cavities it continues to accelerate, rather than being slowed down at various points.

A cool Java Applet that will help you visualize this acceleration process via radio frequency cavities can be found here, courtesy of CERN.

Now that’s the electric-field portion of the Lorentz Force Law; what about the magnetic part?  Well, magnetic field lines form closed loops: as you get farther and farther away from their source, the radii of these loops continually increase.  Electric field lines, by contrast, are straight lines that extend out to infinity (and never intersect) in all directions from their source.  This makes the physics of magnetic fields very different from that of electric fields.  We can use magnetic fields to bend the track (or path) of charged particles.  A nice demonstration of this can be found here (or in any of the other thousands of hits I got for Googling “Cathode Ray Tube + YouTube”).

Imagine, if you will, a beam of light; you can focus the beam (make it smaller) by using a glass lens, and you can also change the direction of the beam using a simple mirror.  Now, the LHC ring uses what are called dipole and quadrupole magnets to steer and focus the beam.  If you combine the effects of these magnets you can make what is called a magnetic lens, part of what is more broadly termed “magnetic optics.”  In fact, the LHC’s magnetic optics currently focus the beam to a diameter of ~90 micrometers (the diameter of a human hair is ~100 micrometers, although it varies from person to person, and where on the body the hair is taken from).  However, the magnetic optics system was designed to focus the beam to a diameter of ~33 micrometers.

In fact, the LHC uses 1232 dipole magnets and 506 quadrupole magnets.  These magnets have a peak magnetic field of 8.3 Tesla, or roughly 100,000 times stronger than Earth’s magnetic field.  An example of the typical magnetic field produced by the dipole magnets of the LHC ring is shown here [1]:

Image courtesy of CERN

 

The colored portions of the diagram indicate the magnetic flux, or the amount of magnetic field passing through a given area, whereas the arrows indicate the direction of the magnetic field.  The two circles (in blue) in the center of the diagram indicate the beam pipes for beams one and two.  Notice how the arrows (the direction of the magnetic field) point in opposite directions!  This allows CERN accelerator physicists to control two counter-rotating beams of protons within the same magnet (excellent question, John Wells)!

Thus, accelerator physicists at CERN use electric fields to accelerate the LHC proton/lead-ion beams and the magnetic fields to steer and squeeze these beams (Also, these “magnetic optics” systems are responsible for “Lumi Leveling” discussed by Anna Phan earlier this week).

However, this isn’t the complete story: things like length contraction and synchrotron radiation affect the acceleration process and the design of our accelerators.  But these are stories best left for another time.

 

The Accelerator Complex

But where does this process start?  Well, to answer this let’s start off with the schematic of this system:

Image courtesy of CERN

One of our readers (thanks GP!) has given us this helpful link that visualizes the acceleration process at the LHC (however, when this video was made, the LHC was going to be operating at design specifications…but more on that later).

A proton’s journey starts in a tank of research grade hydrogen gas (impurities are measured in parts per million, or parts per billion).  We first take molecular hydrogen (a diatomic molecule for those of you keeping track) and break it down into atomic hydrogen (individual atoms).  Next, we strip hydrogen’s lone electron from the atom (0:00 in the video linked above).  We are now left with a sample of pure protons.  These protons are then passed into the LINear ACcelerator 2 (LINAC2, 0:50 in the video linked above), which is the tiny purple line in the bottom middle of the above figure.

The LINAC 2 then accelerates these protons to an energy of 50 MeV, or to 31.4% of the speed of light [2].  The “M” stands for mega-, or times one million.  The “eV” stands for electron-volts, which is the conventional unit of high energy physics.  But what is an electron-volt, and how does it relate to everyday life?  Well, for that answer, Christine Nattrass has done such a good job comparing the electron-volt to a chocolate bar that any description I could give pales in comparison to hers.

Moving right along, now thanks to special relativity, we know that as objects approach the speed of light, they “gain mass.”  This is because energy and mass are equivalent currencies in physics.  An object at rest has a specific mass, and a specific energy.  But when the object is in motion, it has a kinetic energy associated with it.  The faster the object is moving, the more kinetic energy, and thus the more mass it has.  At 31.4% the speed of light, a proton’s mass is ~1.05 times its rest mass (or the proton’s mass when it is not moving).

So this is a cruel fact of nature.  As objects increase in speed, it becomes increasingly more difficult to accelerate them further!  This is a direct result of Newton’s Second Law.  If a force is applied to a light object (one with little mass) it will accelerate very rapidly; however, the same force applied to a massive object will cause a very small acceleration.

Now at an energy of 50 MeV, travelling at 31.4% the speed of light, and with a mass of 1.05 times its rest mass, the protons are injected into the Proton Synchrotron (PS) Booster (1:07 in the video).  This is the ellipse, labeled BOOSTER, in the diagram above.  The PS Booster then accelerates the protons to an energy of 1.4 GeV (where  the “G” stands for giga- or a billion times!), and a velocity that is 91.6% the speed of light [2].  The proton’s mass is now ~2.49 times its rest mass.

The PS Booster then feeds into the Proton Synchrotron (labeled as PS above, see 2:03 in video), which was CERN’s first synchrotron (and was brought online in November of 1959).  The PS then further accelerates the protons to an energy of 25 GeV, and a velocity that is 99.93% the speed of light [2].  The proton’s mass is now ~26.73 times its rest mass!  Wait, WHAT!?

At 31.4% of the speed of light, the proton’s mass has barely changed from its rest mass.  Then at 91.6% of the speed of light (roughly three times the previous speed), the proton’s mass was only two and a half times its rest mass.  Now, we increased the speed by barely 8%, and the proton’s mass increased by a factor of 10!?

This comes back to the statement earlier: objects become increasingly more difficult to accelerate the faster they are moving.  But this is clearly a non-linear effect.  To get an idea of what this looks like mathematically, take a look at this link here [3].  In this plot, the y-axis is in multiples of rest mass (or energy), and the x-axis is velocity, in multiples of the speed of light, c.  The red line is this relativistic effect that we are seeing: as we go from ~91% to 99% of the speed of light, the mass increases gigantically!
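The quantity behind all of these numbers is the Lorentz factor, which relates a particle’s total energy (and hence its “relativistic mass”) to its speed:

\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad E = \gamma\, m c^{2}.
\]

Plugging in the speeds quoted in this post gives γ ≈ 1.05 at 31.4% of the speed of light, γ ≈ 2.5 at 91.6%, and γ ≈ 27 at 99.93%: exactly the mass increases described above, because the denominator collapses toward zero as v approaches c.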

But back to the proton’s journey, the PS injects the protons into the Super Proton Synchrotron (names in high energy physics are either very generic, and bland, or very outlandish, e.g. matter can be charming).  The Super Proton Synchrotron (SPS, also labeled as such in above diagram, 3:10 in video above) came online in 1976, and it was in 1983 that the W and Z bosons (mediators of the weak nuclear force) were discovered when the SPS was colliding protons with anti-protons.  In today’s world however, the SPS accelerates protons to an energy of 450 GeV, with a velocity of 99.9998% the speed of light [2].  The mass of the proton is now ~500 times its rest mass.

The SPS then injects the proton beams directly into the Large Hadron Collider.  This occurs at 3:35 in the video linked above; however, when this video was recorded the LHC was operating at design energy, with each proton having an energy of 7 TeV (“T” for tera-, a million million times).  Presently the LHC accelerates the protons to half of the design energy, and to a velocity of 99.9999964% of the speed of light.  The protons are then made to collide in the heart of the detectors.  At this point the protons have a mass that is ~3730 times their rest mass!

 

 

So, the breaking of the world instantaneous luminosity record was not the result of one single instrument but of the combined might of CERN’s full accelerator complex, aided in no small part by the magnetic optics systems in these accelerators.  (I realize I haven’t gone into much detail regarding these; my goal was simply to introduce you to the acceleration process that our beams undergo before collisions.)

 

Until next time,

-Brian

 

 

 

References:

[1] CERN, “LHC Design Report,” https://ab-div.web.cern.ch/ab-div/Publications/LHC-DesignReport.html

[2] CERN, “CERN faq: The LHC Guide,” http://cdsweb.cern.ch/record/1165534/files/CERN-Brochure-2009-003-Eng.pdf

[3]  School of Physics, University of New South Wales, Sydney, Australia, http://www.phys.unsw.edu.au/einsteinlight/jw/module5_equations.htm


We’ve mentioned jets a few times here on the US LHC blog, so I’d like to go into a bit more detail about these funny, unavoidable objects in hadron colliders. Fortunately, Cornell recently had a visit from David Krohn, a Simons Fellow at Harvard University who is an expert on jet substructure. With his blessing, I’d like to recap parts of his talk to highlight a few jet basics and mention some of the cutting-edge work being done in the field.

Before jumping in, a public service announcement for physicists in this field: David is one of the co-organizers of the Boost 2011 workshop next month. It looks like it’ll be a great event for both theorists and experimentalists.

Hadronic Junk

Let’s review what we know about quantum chromodynamics (QCD). Protons and neutrons are composite objects built out of quarks which are bound together by gluons. Like electrons and photons in quantum electrodynamics (QED), quarks and gluons are assumed to be “fundamental” particles. Unlike electrons and photons, however, we do not observe individual quarks or gluons in isolation. You can pull an electron off of a Hydrogen atom without much ado, but you cannot pull a quark out of a proton without shattering the proton into a bunch of other very different looking things (things like pions).

The reason is that QCD is very nonperturbative at low energies. QCD hates to have color-charged particles floating around; it wants them to immediately bind into color-neutral composite objects, even if that means producing new particles out of the quantum vacuum to make everything neutral. These color-neutral composite objects are called hadrons. Unfortunately, the process of hadronizing a quark usually involves radiating off other quarks or gluons which themselves hadronize. This process continues until you end up with a messy spray of particles in place of the original colored object. This spray is called a jet. (Every time I write about jets I feel like I have to reference West Side Story.)

 

Jets

Simulated event from ATLAS Experiment © 2011 CERN

As one can see in the image above, the problem is that the nice Feynman diagrams that we know how to calculate do not directly correspond to the actual mess of particles that form the jets which the LHC experiments measure. And it really is a mess. One cannot effectively measure every single particle within each jet, and even if one could, it is impractically difficult to calculate Feynman diagrams for very large numbers of particles.

Thus we’re stuck having to work with the jets themselves. High energy jets usually correspond to the production of a single high-energy colored particle, so it makes sense to talk about jets as “single objects” even though they’re really a spray of hadrons.

Update 4/24: David has corrected me and explained that while the process of jet formation is associated with strong coupling, it isn’t really a consequence of non-perturbative physics. At the level of this blog, the distinction is perhaps too subtle to harp over. For experts, however, I should note for complete honesty that it is indeed true that a lot of jet physics is calculable using perturbative techniques while tiptoeing around soft and collinear singularities. David notes that a nice way to think about this is to imagine QED in the limit where the electromagnetic force were stronger, but not incalculably strong (“non-perturbative”). In this case we could still draw Feynman diagrams for the production of electrons, but as we dial up the strength of the electromagnetic force, the actual observation in our detectors won’t be single electrons, but a “jet” formed from an electron and a spray of photons.

Identifying Jets

So we’ve accepted the following fact of life for QCD at a particle collider:

Even though our high energy collisions produce ‘fundamental’ particles like quarks and gluons, the only thing we get to observe are jets: messy sprays of hadrons.

Thus one very important task is trying to make the correspondence between the ‘fundamental’ particles in our Feynman diagrams and the hadronic slop that we actually measure. In fact, it’s already very hard to provide a technical definition of a jet. Our detectors can identify most of the “hadronic slop,” but how do we go from this to a measurement of some number of jets?

This process is called clustering and involves developing algorithms to divide hadrons into groups which are each likely to have come from a single high energy colored particle (quarks or gluons). For example, for the simple picture above, one could develop a set of rules that cluster hadrons together by drawing narrow cones around the most energetic directions and defining everything within the cone to be part of the jet:

Jet Clustering

Simulated event from ATLAS Experiment © 2011 CERN

One can then measure the energy contained within the cone and say that this must equal the energy of the initial particle which produced the jet, and hence we learn something about the fundamental object. I’ll note that these kinds of “cone algorithms” for jet clustering can be a little crude, and there are more sophisticated techniques on the market (“sequential recombination”).
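As a toy illustration of the cone idea only (nothing like the algorithms ATLAS actually runs in production): take the most energetic particle not yet assigned to a jet, sweep up everything within a fixed angular distance of it, call that collection a jet, and repeat.

    # A deliberately crude cone-style clustering, for illustration only.
    # Each "particle" is (energy, eta, phi); all the numbers are invented.
    import math

    def delta_r(p1, p2):
        # Angular separation in the (eta, phi) plane
        deta = p1[1] - p2[1]
        dphi = math.atan2(math.sin(p1[2] - p2[2]), math.cos(p1[2] - p2[2]))
        return math.hypot(deta, dphi)

    def naive_cone_jets(particles, cone_size=0.4):
        jets, remaining = [], sorted(particles, reverse=True)  # hardest first
        while remaining:
            seed = remaining[0]
            cone = [p for p in remaining if delta_r(seed, p) < cone_size]
            jets.append(sum(p[0] for p in cone))                # "jet" energy
            remaining = [p for p in remaining if p not in cone]
        return jets

    particles = [(60.0, 0.1, 0.2), (25.0, 0.2, 0.3), (3.0, 0.15, 0.25),
                 (40.0, -1.0, 2.5), (7.0, -0.9, 2.6), (1.0, 2.0, -1.0)]
    print(naive_cone_jets(particles))   # energies of three clustered "jets"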

Boosted Jets

Even though the above cartoon was very nice, you can imagine how things can become complicated. For example, what if the two cones started to approach each other? How would you know if there was one big jet or two narrow jets right next to each other? In fact, this is precisely what happens when you have a highly boosted object decaying into jets.

By “boosted” I mean that the decaying particle has a lot of kinetic energy. This means that even though the particle decays into two colored objects—i.e. two jets—the jets don’t have much time to separate from one another before hitting the detector. Thus instead of two well-separated jets as we saw in the example above, we end up with two jets that overlap:

Collimation of two jets into a single jet as the decaying particle is boosted. Image from D. Krohn.

Now things become very tricky. Here’s a concrete example. At the LHC we expect to produce a lot of top/anti-top pairs (tt-bar). Each of these tops immediately decays into a b-quark and a W. Thus we have

t, t-bar → b, b-bar, W, W

(As an exercise, you can draw a Feynman diagram for top pair production and the subsequent decay.) These Ws are also fairly massive particles and can each decay into either a charged lepton and a neutrino, or a pair of quarks. Leptons are not colored objects and so they do not form jets; thus the charged lepton (typically a muon) is a very nice signal. One promising channel to look for top pair production, then, is the case where one of the Ws decays into a lepton and neutrino and the other decays into two quarks:

t, t-bar → b, b-bar, W, W → b, b-bar, q, q-bar, lepton, ν

The neutrino is not detected, and all of the quarks (including the bottoms) turn into jets. We thus can search for top pair production by counting the number of four jet events with a high energy lepton. For this discussion we won’t worry about background events, but suffice it to say that one of the reasons why we require a lepton is to help discriminate against background.

Here’s what such an event might look like:

Simulated event from ATLAS Experiment © 2011 CERN

Here “pT” refers to the energy (momentum perpendicular to the beam) of the top quarks. In the above event the tops have a modest kinetic energy. On the other hand, it might be the case that the tops are highly boosted—for example, they might have come from the decay of a very heavy particle which thus gives them a lot of kinetic energy. In the following simulated event display, the tops have a pT that is ten times larger than the previous event:

Simulated event from ATLAS Experiment © 2011 CERN

Now things are tricky! Instead of four clean jets, it looks like two slightly fat jets. Even though this simulated event actually had the “b, b-bar, q, q-bar, lepton, ν” signal we were looking for, we probably wouldn’t have counted this event because the jets are collimated.

There are other ways that jets tend to be miscounted. For example, if a jet (or anything really) is pointed in the direction of the beam, then it is not detected. This is why it’s something of an art to identify the kinds of signals that one should look for at a hadron collider. One will often find searches where the event selection criteria requires “at least” some number of jets (rather than a fixed number) with some restriction on the minimum jet energy.

Jet substructure

One thing you might say is that even though the boosted top pair seemed to only produce two jets, shouldn’t there be some relic that they’re actually two small jets rather than one big jet? There has been a lot of recent progress in this field.

Distinguishing jets from a boosted heavy particle (two collimated jets) from a "normal" QCD jet with no substructure. The plot is a cylindrical cross section of the detector: imagine wrapping it around a toilet paper roll aligned with the beam. Image from D. Krohn.

The main point is that one can hope to use the “internal radiation distribution” to determine whether a “spray of hadrons” contains a single jet or more than one jet. As you can see from the plots above, this is an art that is similar to reading tea leaves. (… and I only say that with the slightest hint of sarcasm!)

[For experts: the reason why the QCD jets look so different is the Altarelli-Parisi splitting functions: quarks and gluons really want to emit soft, collinear stuff.]

There’s now a bit of an industry for developing ways to quantify the likelihood that a jet is really a jet (rather than two jets). This process is called jet substructure. Typically one defines an algorithm that takes detector data and spits out a number called a jet shape variable that tells you something about the internal distribution of hadrons within the jet. The hope is that some of these variables will be reliable and efficient enough to help us squeeze as much useful information as we can out of each of our events. There also seems to be a rule in physics that the longer you let theorists play with an idea, the more likely it is that they’ll give it a silly name. One recent example is the “N-subjettiness” variable.

Jet superstructure

In addition to substructure, there has also been recent progress in the field of jet superstructure, where one looks at correlations between two or more jets. The basic idea boils down to something very intuitive. We know that the Hydrogen atom is composed of a proton and an electron. As a whole, the Hydrogen atom is electrically neutral so it doesn’t emit an electric field. (Of course, this isn’t quite true; there is a dipole field which comes from the fact that the atom is actually composed of smaller things which are charged.) The point, however, is that far away from the atom, it looks like a neutral object so we wouldn’t expect it to emit an electric field.

We can say the same thing about color-charged particles. We already know that quarks and gluons want to recombine into color-neutral objects. Before this happens, however, we have high energy collisions with quarks flying all over the place trying to figure out how to become color neutral. Focusing on this time scale, we can imagine that certain intermediate configurations of quarks might already be color neutral and hence would be less likely to emit gluons (since gluons are the color-field). On the other hand, other intermediate configurations might be color-charged, and so would be more likely to emit gluons. This ends up changing the distribution of jet slop.

Here’s a nice example from one of the first papers in this line of work. Consider the production of a Higgs boson through “quark fusion,” i.e. a quark and an antiquark combining into a Higgs boson. We already started to discuss the Higgs in a recent post, where we made two important points: (1) once we produce a Higgs, it is important to figure out how it decays, and (2) once we identify a decay channel, we also have to account for the background (non-Higgs events that contribute to that signal).

One nice decay channel for the Higgs is b b-bar. The reason is that bottom quark jets have a distinct signature—you can often see that the b quark (or rather, the B hadron it forms) traveled a small distance in the detector before decaying into more particles. Thus the signal we’re looking for is two b-jets. There’s a background for this: instead of qq-bar → Higgs → b-jets, you could also have qq-bar → gluon → b-jets.
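As an aside, here is a cartoon of how that distinct b-jet signature gets used. Real b-tagging algorithms combine many track and vertex variables; this toy (with made-up numbers) just asks how far a reconstructed secondary vertex sits from the collision point, in units of its measurement uncertainty.

```python
import math

def decay_length_significance(vtx_xy_mm, sigma_mm):
    """Toy b-tag discriminant: transverse distance of a reconstructed secondary
    vertex from the collision point, divided by its uncertainty."""
    x, y = vtx_xy_mm
    return math.hypot(x, y) / sigma_mm

# A B hadron typically flies a few millimetres before decaying, while tracks
# from light-quark or gluon jets point straight back to the collision point.
print(decay_length_significance((2.5, 1.0), 0.3))    # large: b-jet-like
print(decay_length_significance((0.05, 0.02), 0.3))  # small: light-jet-like
```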

The gluon-mediated background is typically very large, so we would like to find a clever way to remove these background events from our data. It turns out that jet superstructure may be able to help out. The difference between the Higgs → b-jets decay versus the gluon → b-jets decay is that the gluon is color-charged. Thus when the gluon decays, the two b-quarks are also color-charged. On the other hand, the Higgs is color-neutral, so that the two b-quarks are also color neutral.

One can draw this heuristically as “color lines” which represent which quarks have the same color charge. In the image below, the first diagram represents the case where an intermediate Higgs is produced, while the second diagram represents an intermediate gluon.

Color lines for qq-bar → Higgs → b-jets and qq-bar → gluon → b-jets. Image from 1001.5027

For the intermediate Higgs, the two b-jets must have the same color (one is red, the other is anti-red) so that the combined object is color neutral. For the intermediate gluon, the color lines of the two b-jets are tied up to the remnants of the protons (the thick lines at the top and bottom). The result is that the hadronic spray that makes up the jets tends to be pulled together for the Higgs decays and pushed apart for the gluon decays. This is shown heuristically below, where again we should understand the plot as being a cylindrical cross section of the detector:

Higgs decays into two b-jets (signal) versus gluon decays (background). Image from 1001.5027

One can thus define a jet superstructure variable (called “pull”) to quantify how much two jets are pulled together or pushed apart. The hope is that this variable can be used to discriminate between signal and background and give us better statistics for our searches for new particles.
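For the curious, here is a rough sketch of how a pull-style variable could be computed from jet constituents. It follows the spirit of the definition in 1001.5027 (each constituent contributes a vector from the jet axis to itself, weighted by its momentum fraction and its distance from the axis), but the exact conventions and the inputs below are my own simplified stand-ins.

```python
import math

def pull_vector(constituents, jet_eta, jet_phi, jet_pt):
    """Toy pull vector: sum over constituents of (pt_i/pt_jet) * |r_i| * r_i,
    where r_i = (eta_i - jet_eta, phi_i - jet_phi).
    constituents: list of (pt, eta, phi) tuples (hypothetical inputs)."""
    t_eta, t_phi = 0.0, 0.0
    for pt, eta, phi in constituents:
        deta = eta - jet_eta
        dphi = (phi - jet_phi + math.pi) % (2 * math.pi) - math.pi
        r = math.sqrt(deta**2 + dphi**2)
        w = (pt / jet_pt) * r
        t_eta += w * deta
        t_phi += w * dphi
    return t_eta, t_phi

def pull_angle(pull, direction_to_other_jet):
    """Angle between the pull vector and the line toward the other b-jet.
    Small angles suggest the two jets are color-connected (Higgs-like);
    angles near pi suggest the jets are connected to the beam remnants."""
    px, py = pull
    dx, dy = direction_to_other_jet
    return math.atan2(px * dy - py * dx, px * dx + py * dy)

# Toy usage: constituents of one b-jet, and the direction toward the other b-jet.
constituents = [(40.0, 0.10, 0.05), (30.0, -0.05, 0.10), (10.0, 0.30, 0.25)]
pull = pull_vector(constituents, jet_eta=0.0, jet_phi=0.0, jet_pt=80.0)
print(pull, pull_angle(pull, direction_to_other_jet=(1.0, 0.5)))
```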

Anyway, that’s just a sample of the types of neat things that people have been working on to improve the amount of information we can get out of each event at hadron colliders like the LHC. I’d like to thank David Krohn, once again, for a great talk and very fun discussions. For experts, let me make one more plug for his workshop next month: Boost 2011.


OK, the second part of the title isn’t actually true, but more on that in a moment….

The fill that is currently in the LHC started at an instantaneous luminosity of over 4 × 10^32 cm^-2 s^-1.

Not only is this the highest collision rate ever achieved at the LHC, it’s also the highest ever at a hadron collider, exceeding the largest instantaneous luminosity ever recorded by Fermilab’s venerable Tevatron collider. As has been discussed by many of the US LHC bloggers, luminosity is key at this point — the larger it is, the more collisions we record, and the greater the chance that we can observe something truly new. In the four hours since the fill started, CMS has already recorded about one sixth of the useful data that was recorded in all of 2010!

As for the Pulitzer, this week Mike Keefe of the Denver Post won the 2011 Pulitzer for editorial cartooning for a portfolio of twenty cartoons that included this one about the LHC. (I’d rather not actually run the cartoon here, as I’m not sure we have the rights to it.) Good to see that we are part of journalism history!


Night and Day, Robert Weigand

 

My first note’s byline is UC Davis:

Good evening from California!

Well, it’s evening here (10:30 pm), but it’s already tomorrow on the east coast of the U.S., and in fact already time to wake up at CERN.  Here is where I usually face my conundrum: Do I stay awake and watch for the first email reports on the activities at the LHC?  Should I wait for our postdocs and students who live at CERN to respond to my emails?  If I go to sleep, what will I miss?

Unfortunately, I have an 8:00 am video meeting tomorrow, in which our postdocs on the CDF experiment at Fermilab are presenting an introduction to our new analysis, searching for signs of light dark matter in the CDF data.  (CDF has been in the news recently for an interesting result that has previously been discussed by Flip Tanedo and Michael Schmitt.)

The CDF meeting is followed by a 9:30 am video meeting with fellow CMS colleagues who are working on searches for fourth generation quarks, and it looks like one of our CMS postdocs may have something short to present there.  But this isn’t so bad: on Fridays my first meeting is at 6:30 am, which is 3:30 pm in Geneva, and many meetings are earlier than that, so I usually have to miss them. I am grateful that at least for tomorrow’s CMS meeting at 9:30, those at CERN are willing to have it at 6:30 pm, their time, which for me means I can at least get the kid to daycare and get to the office before it begins.

Of course, even after these early meetings, the California workday still lies ahead for me, and it sometimes turns into a very long day.  But the time is worth it— as you have gathered in these blogs, there are lots of exciting things going on in the world of particle physics these days, and I wouldn’t want to miss it!

With that, I’d better say “good night” (*yawn*) and get some sleep… the sun is fast approaching.

 

Day and Night, Google Earth

 


What the L?!

Tuesday, April 19th, 2011

There are few things that particle physicists like to talk about more than luminosity (known affectionately as “L”). We measure it obsessively, we boast about it shamelessly and we never forget to mention it in our papers, plots and talks. So what’s the big deal? What is luminosity and why is it important?

The concept of instantaneous luminosity is borrowed from the field of astrophysics, where it’s used to describe how much energy a star gives off. To calculate the instantaneous luminosity, simply measure how much energy flows through a surface in an interval of time. To get the instantaneous luminosity in particle physics, simply swap energy for the number of particles and the definition is the same!

The instantaneous luminosity is a measure of how many particles (blue) pass through a surface of unit area (yellow) in unit time (not shown.)

Well, not quite. If you take a quick look at any of the experiments at the LHC you’ll notice that there are two beams, so to get any meaningful measurement of luminosity you’ll have to take into account the flows of particles in both beams, a task which doesn’t seem easy! In order to use the concept of instantaneous luminosity we need to apply some knowledge of special relativity. We imagine that the protons in one of the beams are all at rest, and see how many protons from the other beam pass through per unit area and unit time. (The instantaneous luminosity makes more sense for fixed target experiments, where there is only one beam and the other matter is kept at rest. This is how most early experiments operated, and we’ve been stuck using luminosity ever since!)

In itself, the instantaneous luminosity is useless to us; to make any real use of it we must combine it with a cross section. A cross section is used to describe how often some process happens, and the analogy is very simple! Imagine placing lots of targets in front of the beam of particles, each one representing a different process. The larger targets will be hit by more protons, so we’ll see those processes more often. A larger cross section means a higher rate for that process! To get the number of events where that process happens (per unit time) we just multiply the cross section by the instantaneous luminosity, and that tells us how many “hits” we can expect. Simple!
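Since the “targets” analogy really is just a multiplication, here is the arithmetic spelled out with round, purely illustrative numbers (neither the cross section nor the luminosity below is an official value):

```python
# Purely illustrative numbers: a 1 nb cross section at L = 4e32 cm^-2 s^-1.
cross_section_cm2 = 1e-9 * 1e-24   # 1 nb in cm^2 (1 barn = 1e-24 cm^2)
inst_luminosity = 4e32             # cm^-2 s^-1
rate_hz = cross_section_cm2 * inst_luminosity
print(rate_hz)                     # expected events per second for this process: 0.4
```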

Since having a larger instantaneous luminosity means having more events, we want to do everything we can to increase it. We can do that in quite a few ways, and the most obvious way is to increase the number of protons in the beam. After all, each proton presents its own set of tiny (very, very tiny) targets, and since the cross section of a given process is the same for each proton, you can increase the total size of a given target by increasing the number of protons. Another way to increase the instantaneous luminosity is to cram the same number of protons into a narrower beam, and this is called squeezing. After a while we start to reach physical limits of what we can achieve (this is due to phase space factors, beam shape parameters and all sorts of fascinating properties of the beam that would make for another blog post!) so we need to resort to simpler methods. One of the most effective methods is to increase the number of bunches in the LHC ring: instead of cramming more protons into the same part of the ring, we put more protons in the empty regions of the ring.
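All of these knobs appear in the textbook expression for the luminosity of two head-on Gaussian beams, L = f_rev * n_b * N1 * N2 / (4π σx σy), which ignores crossing-angle and other real-machine corrections. A quick sketch with rough, LHC-like inputs (my own assumptions, not official machine parameters):

```python
import math

def inst_luminosity(f_rev, n_bunches, n1, n2, sigma_x_cm, sigma_y_cm):
    """Head-on luminosity of two Gaussian beams: L = f n_b N1 N2 / (4 pi sx sy).
    Ignores crossing angle and other real-machine corrections."""
    return f_rev * n_bunches * n1 * n2 / (4 * math.pi * sigma_x_cm * sigma_y_cm)

# Rough, LHC-like inputs (illustrative assumptions only):
L = inst_luminosity(
    f_rev=11245.0,         # revolution frequency in Hz
    n_bunches=700,         # colliding bunch pairs
    n1=1.2e11, n2=1.2e11,  # protons per bunch in each beam
    sigma_x_cm=30e-4,      # ~30 micron transverse beam size
    sigma_y_cm=30e-4,
)
print(f"L ~ {L:.1e} cm^-2 s^-1")  # order 1e33 with these inputs
```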

The proton presents many different processes, and each process has its own cross section. This diagram is not at all to scale; the QCD cross section is much larger than the other cross sections shown!

As usual, things aren’t quite as simple as this. There are many different processes, each with its own cross section. Some of them are much, much larger than others, and most of the larger cross sections are boring to us, so if we want to get to the interesting physics we need a way to artificially reduce the sizes of the boring cross sections. (It would be nice if we could increase the sizes of the interesting cross sections instead, but that’s not physically possible at the LHC!) The notoriously large cross section at the LHC is the quantum chromodynamical (QCD) cross section, which dominates everything we see; for most people it’s an annoyance that makes it harder to find the interesting physics. To reduce the recorded rates of these processes we use a prescale, which is very simple. We only record events that fire the trigger, and the trigger looks for different kinds of events. A prescale tells the trigger to ignore a fixed proportion of a specific kind of event, and that way we can record fewer boring events and save our precious resources for the most interesting ones.
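In code, a prescale really is as simple as it sounds: keep only one out of every N events that fires a given trigger. A toy sketch (the trigger names and prescale factors here are invented for illustration):

```python
import itertools

# Invented prescale factors: keep 1 out of every N events that fire each trigger.
prescales = {"min_bias": 100000, "single_jet_low_pt": 1000, "two_muons": 1}
counters = {name: itertools.count() for name in prescales}

def record_event(fired_triggers):
    """Return True if the event should be written out, given which triggers fired."""
    for name in fired_triggers:
        if next(counters[name]) % prescales[name] == 0:
            return True   # at least one (prescaled) trigger accepted the event
    return False
```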

Now if you see a plot from a collaboration you’ll often see the luminosity written on it, but this is not the instantaneous luminosity, it’s the integrated luminosity. To get the integrated luminosity we add up (integrate) the instantaneous luminosity over the time during which it was delivered. This means that it has units of inverse area, and when we multiply it by a cross section we get a number of events. This is why the integrated luminosity is so important to us: if we know the cross section for a process and we know the integrated luminosity, we can work out how many events we expect to see and compare that to how many we actually see. This tells us when to expect a discovery, and whether we have found something truly new and interesting!

A typical mass spectrum plot, proudly declaring the integrated luminosity for all to see. arXiv:1103.6218 [hep-ex]

It seems elegant and simple, but personally I find the whole thing is spoiled by the choice of units, and converting between them is ever so slightly baffling (probably not something I should admit to in public!) Instantaneous luminosity is usually measured in cm^-2 s^-1, which is an odd choice. In these units a typical value is 10^33, which is an unimaginably large number! This is almost inevitable because luminosity varies so widely between experiments and as new technologies become available. If we choose new units now to make the numbers more manageable, they’ll still become ridiculously large in the future. To confuse things further, the integrated luminosity is usually measured in inverse barns (as in “You can’t hit a barn with that!”). A barn is 10^-28 m^2, so this makes the integrated luminosity a little bit easier to express in terms that don’t make my head spin. But even after that, our integrated luminosities need prefixes to make the numbers nice, so you’ll often see integrated luminosities written in inverse picobarns (pb^-1) or inverse femtobarns (fb^-1), and then the smaller the prefix, the larger the amount of integrated luminosity! I find that the easiest way to remember whether I need to multiply or divide by 1,000 to convert the units is to just go with what feels wrong and it’ll be right.  Smaller inverse areas mean larger numbers of events. If that isn’t a crazy choice of units, I don’t know what is!
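If, like me, you find the unit gymnastics baffling, it helps to let the computer do the conversion. Here is a snippet turning a day of running at a steady 10^33 cm^-2 s^-1 into inverse picobarns (a constant luminosity is of course an idealization, since real fills decay over time):

```python
SECONDS_PER_DAY = 86400.0
CM2_PER_BARN = 1e-24      # 1 barn = 1e-24 cm^2

inst_lumi = 1e33                                           # cm^-2 s^-1, assumed constant
integrated_inv_cm2 = inst_lumi * SECONDS_PER_DAY           # integrated luminosity in cm^-2
integrated_inv_barn = integrated_inv_cm2 * CM2_PER_BARN    # ~8.6e13 b^-1
integrated_inv_pb = integrated_inv_barn * 1e-12            # 1 b^-1 = 1e-12 pb^-1
print(f"{integrated_inv_pb:.0f} pb^-1 per (idealized) day")  # about 86 pb^-1
```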

To get an idea of a typical integrated luminosity, let’s think about how much data we’d need to see a standard model Higgs boson of mass 200 GeV. Let’s imagine we see 100 events which are not consistent with known backgrounds. To make our job easier, let’s think about the “gold plated” decay of H→ZZ and Z→ll, where l is a charged lepton. The branching fraction for this decay is about 25% for H→ZZ and about 7% for each Z→ll, and let’s assume we are 50% efficient at reconstructing a Z. The branching fractions alone mean we’d need to produce about 80,000 Higgs bosons to see 100 events of this type, and the reconstruction efficiency pushes that higher still. Dividing by the cross section of Higgs production at 200 GeV (a few picobarns) gives us an integrated luminosity of roughly 16 fb^-1. That’s a lot of Higgs bosons to produce! Luckily, there are many more final states we can explore, and when we add it all up, it turns out we’ll have enough data to be sensitive to a standard model Higgs before too long.
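Here is that back-of-the-envelope arithmetic spelled out. The branching fractions are the ones quoted above; the production cross section is my own rough assumption of about 5 pb for a 200 GeV standard model Higgs at the 7 TeV LHC, so treat the exact numbers as illustrative. Note that if the 50% Z-reconstruction efficiency is applied to both Zs, the required number of produced Higgses, and hence the integrated luminosity, goes up by another factor of four.

```python
n_observed = 100          # events we want to see in H -> ZZ -> 4 leptons
br_h_to_zz = 0.25         # quoted above
br_z_to_ll = 0.07         # per Z, quoted above
eff_per_z = 0.5           # assumed Z reconstruction efficiency, per Z

# Higgses that must be produced so that this many survive the branching fractions...
produced_no_eff = n_observed / (br_h_to_zz * br_z_to_ll**2)
# ...and additionally the reconstruction efficiency for both Zs.
produced_with_eff = produced_no_eff / eff_per_z**2

sigma_pb = 5.0            # rough assumption for sigma(pp -> H) at m_H = 200 GeV, 7 TeV
lumi_fb = produced_no_eff / (sigma_pb * 1000.0)           # 1 pb = 1000 fb
lumi_fb_with_eff = produced_with_eff / (sigma_pb * 1000.0)

print(f"~{produced_no_eff:.0f} Higgses, ~{lumi_fb:.0f} fb^-1 (ignoring efficiency)")
print(f"~{produced_with_eff:.0f} Higgses, ~{lumi_fb_with_eff:.0f} fb^-1 (with 50% per Z)")
```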

That’s all very impressive, but the punchline comes from the world of “low high energy physics”, for example the BaBar experiment. Whenever I want to tease my friends at the LHC, I remind them that my previous experiment had 550 fb^-1 of data, about 5,000 times what we have right now, and a number the LHC will not reach any time soon!

You can usually tell what kind of physicist you’re talking to immediately by asking them what the luminosity is at the LHC. An experimental physicist will tell you in terms of data (i.e. inverse barns) whereas an accelerator physicist will tell you in terms of beams (i.e. cm^-2 s^-1). I find it quite amusing that the accelerator physicists generally find everything up to the point of collision deeply fascinating, and everything after that a frightful bore that makes their work even more complicated, whereas the experimental physicist thinks the other way around!
