# Which detector is best for B physics?

The title of today’s post is obviously a rhetorical question, because the answer is naturally LHCb. *winks* However I thought I would take some time to prove it with a particular $$B$$ meson decay…

One of the most interesting $$B_s^0$$ meson decays is that into a $$J/\psi$$ and a $$\phi$$ meson, shown below. This is because one of the quantities we can derive from this decay has a very small Standard Model prediction, so any measured excess would be a clear indication of new physics.[*]

This decay mode is so interesting that ATLAS and CMS, as well as LHCb, are trying to detect it, giving me the opportunity to directly compare the performance of the detectors. So without further ado, here are the results:

So what are we looking at here? These are the invariant mass distributions of the identified $$B_s^0 \rightarrow J/\psi + \phi$$ decays in each detector. In every event, we look for the products of the particular decay we are interested in. In this case, we need to identify two muons from the decay of the $$J/\psi$$ and two kaons from the decay of the $$\phi$$. We then take these four particles and add their four-momenta together; if they did originate from the decay of a $$B_s^0$$ meson, we should see a peak around the $$B_s^0$$ mass of 5366.3 MeV / c$$^2$$. This is represented by the data points in the three plots from each of the experiments. The lines on each of the graphs are fits to the data, using a normal distribution for the signal and a straight line for the background. [**]
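To make the reconstruction step concrete, here is a toy sketch (my own illustration in Python, not any experiment's actual analysis code) of how the four-momenta of the two muons and two kaons are summed and turned into an invariant mass. All the momentum values are made up for the example.

```python
# Toy sketch of invariant-mass reconstruction: build four-momenta for the
# two muons and two kaons, add them, and compute the mass of the combination.
import numpy as np

M_MU = 105.658   # muon mass in MeV/c^2
M_K = 493.677    # charged kaon mass in MeV/c^2

def four_momentum(p3, mass):
    """Build (E, px, py, pz) from a 3-momentum (MeV/c) and a mass (MeV/c^2)."""
    p3 = np.asarray(p3, dtype=float)
    energy = np.sqrt(p3 @ p3 + mass**2)
    return np.concatenate(([energy], p3))

def invariant_mass(*momenta):
    """Invariant mass of a set of particles: m^2 = E^2 - |p|^2 (with c = 1)."""
    total = np.sum(momenta, axis=0)
    energy, p3 = total[0], total[1:]
    return np.sqrt(max(energy**2 - p3 @ p3, 0.0))

# Hypothetical momenta (MeV/c) for mu+, mu-, K+, K- from one candidate:
m = invariant_mass(
    four_momentum([1200.0, 300.0, 5000.0], M_MU),
    four_momentum([-800.0, -200.0, 4500.0], M_MU),
    four_momentum([400.0, 150.0, 2500.0], M_K),
    four_momentum([-300.0, -100.0, 2200.0], M_K),
)
# For genuine B_s0 decays, these masses cluster near 5366.3 MeV/c^2;
# random combinations like this one just contribute to the background.
print(f"candidate mass: {m:.1f} MeV/c^2")
```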

So what do we look for in these graphs to learn about the performance of each detector? Actually, before we do any comparisons, we need to look at the size of the dataset used in each analysis. Luckily for us, the datasets are fairly similar, with LHCb reporting results using 36 pb$$^{-1}$$ of data, CMS using 39 pb$$^{-1}$$ and ATLAS using 40 pb$$^{-1}$$. This means we can basically do a direct comparison of the graphs, though with the caveat that each of the analyses used different selection criteria to select their $$B_s^0$$ candidates. However, we can assume that they have been optimised to select as much signal as possible while rejecting as much background as possible.

Okay, now that we have established we can compare the graphs, let’s do so. The first thing you might notice is that the graphs look fairly similar. Each experiment has been able to reconstruct a nice $$B_s^0$$ peak from its decay products. Looking closer however, the results have some notable differences, despite each of the experiments looking for the same decay in very similarly sized datasets and using the same signal and background distribution shapes.

I’m emphasising the fact that the datasets are similar sizes because you may notice that the number of signal events is fairly different between the three experiments, with 877 events in the signal peak for LHCb, while ATLAS and CMS only see 358 and 377 events respectively. This may not be immediately obvious looking at the height of the signal peaks, but if you notice that each experiment uses different mass binning, it becomes clearer.
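The binning effect is easy to demonstrate numerically. Here is a little simulation (entirely my own toy, using the LHCb yield and resolution quoted in this post purely for illustration) showing how the same number of signal events produces very different peak heights depending on the bin width.

```python
# Toy illustration of why peak heights can't be compared directly when the
# mass binning differs: the same Gaussian signal spread over wider bins
# piles more events into the tallest bin.
import numpy as np

rng = np.random.default_rng(42)
n_signal = 877  # signal yield (LHCb's quoted number, for illustration)
mass = rng.normal(5366.3, 7.0, size=n_signal)  # MeV/c^2, 7 MeV resolution

peak = {}
for bin_width in (5.0, 15.0, 25.0):  # hypothetical bin widths in MeV
    edges = np.arange(5300.0, 5430.0 + bin_width, bin_width)
    counts, _ = np.histogram(mass, bins=edges)
    peak[bin_width] = counts.max()
    print(f"{bin_width:4.0f} MeV bins -> tallest bin holds {counts.max()} events")
```

The same 877 events give a much taller-looking peak in coarse bins, which is why the yields have to be compared, not the peak heights.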

So LHCb sees more $$B_s^0 \rightarrow J/\psi + \phi$$ decays than ATLAS and CMS. This is actually expected from the geometry of the detectors. As I mentioned in my very first post, $$B$$ meson production peaks in the forward region, shown below, where LHCb has coverage while ATLAS and CMS don’t.

Interestingly, even though LHCb sees more signal events than ATLAS and CMS, it sees far fewer background events. This can be seen in the plots above by looking at how high above zero the linear background fit sits. We can see that LHCb sees less background than ATLAS, which sees less background than CMS. The reason for this is that LHCb is much better at identifying kaons and muons at these energies, thanks to the RICH subdetectors.

What else can we learn? If we look at the width of the signal fits of the $$B_s^0$$ mass peaks from each experiment, we can see that these are also quite different. The LHCb peak is very narrow at 7 MeV, while the CMS peak is a little wider at 16 MeV and the ATLAS peak is wider again at 27 MeV. These numbers tell us how accurately the momenta of the kaons and muons are measured, and how well the $$B_s^0$$ decay vertices are reconstructed. So we see that LHCb is better at measuring the kaon and muon momenta and reconstructing displaced decay vertices.
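For the curious, here is a minimal sketch of the fit model described in [**]: a Gaussian signal on a linear background, from which the peak width is extracted. The data are simulated (using numbers from this post for the toy), not the real experimental points, and the code is my own illustration rather than anything from the analyses.

```python
# Fit a Gaussian signal plus linear background to a toy mass histogram and
# extract the width (the mass resolution) as one of the fit parameters.
import numpy as np
from scipy.optimize import curve_fit

def model(m, n_sig, mean, sigma, a, b):
    """Per-bin expectation: Gaussian signal plus straight-line background."""
    gauss = n_sig * np.exp(-0.5 * ((m - mean) / sigma) ** 2) / (
        sigma * np.sqrt(2.0 * np.pi))
    return gauss + a + b * (m - 5366.3)

rng = np.random.default_rng(7)
edges = np.arange(5300.0, 5435.0, 5.0)  # 5 MeV bins
centers = 0.5 * (edges[:-1] + edges[1:])

# Toy dataset: 877 signal events with 7 MeV resolution plus a flat background.
sig = rng.normal(5366.3, 7.0, 877)
bkg = rng.uniform(5300.0, 5430.0, 500)
counts, _ = np.histogram(np.concatenate([sig, bkg]), bins=edges)

# n_sig absorbs the 5 MeV bin width; sigma is the fitted mass resolution.
popt, pcov = curve_fit(model, centers, counts,
                       p0=[877 * 5.0, 5366.0, 10.0, 20.0, 0.0])
print(f"fitted width: {abs(popt[2]):.1f} MeV")
```

The real analyses are of course far more sophisticated (unbinned likelihood fits, more detailed background models), but the extracted width plays the same role.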

In summary, LHCb sees more signal, sees less background and is better at measuring the particles involved in $$B_s^0 \rightarrow J/\psi + \phi$$ decays compared to CMS and ATLAS. It is therefore clearly the best detector to use for these types of decays. An obvious conclusion, since these decays are what the detector was designed and built to measure, but it is nonetheless reassuring to see that the results confirm our hypothesis.

[*] I know that this really isn’t a satisfactory explanation of why this particular decay is interesting, but I didn’t want to get too sidetracked here. I’ll save the details for a future post. This one is long enough already!

[**] I have obviously simplified the selection and analysis process immensely. If you do want to find out more information about each of the analyses, and where I got the graphs and numbers, details can be found here for LHCb, here for ATLAS and here for CMS.

• nsetzer

Do you know why ATLAS is having such a hard time with the momenta (particularly in the forward region as I thought that the T of ATLAS meant they’d do better than CMS there)?

• Anna Phan

Hi Nick,

I didn’t really want to comment on the differences between the ATLAS and CMS results… However, since you asked, I think the mass resolution differences have more to do with the differences between the ATLAS and CMS tracking subdetectors than the muon spectrometers.

I would hazard a guess that the different magnetic field strengths make a difference to the accuracy of the momentum measurements, as would the different detector technologies, the CMS tracker being fully silicon, while the ATLAS tracker is a combination of silicon and transition radiation straws.
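Anna's point about field strength can be put on a back-of-the-envelope footing. In a solenoidal tracker the sagitta of a track scales as $$B L^2 / p_T$$, so for a fixed hit resolution the relative momentum uncertainty grows with $$p_T$$ and shrinks with field and lever arm. The sketch below uses rough public numbers for the two trackers purely for illustration; the 20-micron hit resolution is an assumption of mine, not a quoted detector spec.

```python
# Back-of-the-envelope momentum resolution from the sagitta measurement:
# sagitta s = 0.3 * B * L^2 / (8 * pT)  (B in tesla, L in metres, pT in GeV/c),
# so sigma(pT)/pT ~ sigma_x / s for a single-point position resolution sigma_x.
def rel_pt_resolution(pt_gev, b_tesla, lever_arm_m, sigma_x_m):
    """Approximate sigma(pT)/pT from the track sagitta (three-point estimate)."""
    sagitta = 0.3 * b_tesla * lever_arm_m**2 / (8.0 * pt_gev)  # metres
    return sigma_x_m / sagitta

# Rough comparison at pT = 10 GeV/c with an assumed 20-micron hit resolution
# and a ~1.1 m lever arm for both trackers:
for name, b_field in [("CMS tracker (3.8 T)", 3.8),
                      ("ATLAS inner detector (2 T)", 2.0)]:
    res = rel_pt_resolution(10.0, b_field, 1.1, 20e-6)
    print(f"{name}: sigma(pT)/pT ~ {res:.3%}")
```

With everything else held fixed, the stronger CMS field alone buys nearly a factor of two in momentum resolution, which is consistent with the direction of the mass-width difference in the plots.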

Cheers,
Anna

• I hope that everyone has seen the article at interactions.org about the new project LHC@home 2.0 running on BOINC software. The project url is http://boinc01.cern.ch/.

The article is at http://www.interactions.org/cms/?pid=1030964.

BOINC software, developed at UC Berkeley, can provide a virtual supercomputer to the projects running on it. The largest project currently is SETI@home, the birthplace of BOINC. SETI@home is at just about half a PetaFLOP, which would put it at about 14th on the TOP500 if it were actually counted.

So, jump on board with LHC@home 2.0 and help us get there and rival SETI@home.

There is nothing more important than the LHC in basic research today. We aim to be a real part of the search for Higgs.

• Brian Dorney

Anna,

Great Post, and great comparison.

But, how about we make a friendly wager on this; say, in the |eta|<1.4 region? Winning collaboration gets a free lunch? ;-P

Jokes aside, LHCb is king of the hill for B physics at the LHC. And I wouldn't have thought to use FWHM for a detector performance comparison, thanks for pointing this out!

-Brian

• anil

anna
tell me what is going on at the LHC experiment. What will be the future of its findings?

• james jeans

Thanks for the clear explanation. I have a question about the graphs. Why are the event/energy scales different on the vertical axes? LHCb shows the number of events per 5 MeV while the ATLAS and CMS graphs show the number of events per wider energy intervals.