Adam Davis | USLHC | USA


CERN Had Dark Energy All Along; Uses It To Fuel Researchers

Tuesday, March 31st, 2015

I don’t usually get to spill the beans on a big discovery like this, but this time, I DO!

CERN Had Dark Energy All Along!!

That’s right. That mysterious energy making up ~68% of the universe was being used all along at CERN! Being based at CERN now, I’ve had a first hand glimpse into the dark underside of Dark Energy. It all starts at the Crafted Refilling of Empty Mugs Area (CREMA), pictured below.

One CREMA station at CERN

 

Researchers and personnel seem to stumble up to these stations at almost all hours of the day, looking very dreary and dazed. They place a single cup below the spouts, and out comes a dark and eerie looking substance, which is then consumed. Some add a bit of milk for flavor, but all seem perkier and refreshed after consumption. Then they disappear from whence they came. These CREMA stations seem to be everywhere, from control rooms to offices, and are often found with groups of people huddled around them. In fact, they seem to exert a force on all who use them, keeping them in stable orbits about the stations.

In order to find out a little bit more about this mysterious substance and its dispersion, I asked a graduating student, who wished to remain unnamed, a little bit about their experiences:

Q. How much of this dark stuff do you consume on a daily basis?

A. At least one cup in the morning to fuel up, I don’t think I could manage to get to lunchtime without that one. Then multiple other cups distributed over the day, depending on the workload. It always feels like they help my thinking.

Q. Do you know where it comes from?

A. We have a machine in our office which takes capsules. I’m not 100% sure where those capsules are coming from, but they seem to restock automatically, so no one ever asked.

Q. Have you been hiding this from the world on purpose?

A. Well our stock is important to our group, if we would just share it with everyone around we could run out. And no one of us can make it through the day without. We tried alternatives, but none are so effective.

Q. Do you remember the first time you tried it?

A. Yes, they hooked me on it in university. From then on nothing worked without!

Q. Where does CERN get so much of it?

A. I never thought about this question. I think I’m just happy that there is enough for everyone here, and physicist need quite a lot of it to work.

In order to gauge just how much of this Dark Energy is being consumed, I studied the flux of people from the cafeteria as a function of time with cups of Dark Energy. I’ve compiled the results into the Dark Energy Consumption As Flux (DECAF) plot below.

Dark Energy Consumption as Flux plot. Taken March 31, 2015. Time is given in 24h time. Errors are statistical.

 

As the DECAF plot shows, there is a large spike in consumption, particularly after lunch. There is a clear peak at times after 12:20 and before 13:10. Whether there is an even larger peak hiding above 13:10 is not known, as the study stopped due to my advisor asking “shouldn’t you be doing actual work?”

There is an irreducible background of Light Energy in the cups used for Dark Energy, particularly of the herbal variety. Fortunately, there is often a dangly tag hanging off of the cup to indicate to others that they are not using the precious Dark Energy supply, providing a clear signal for this study to eliminate the background.

While illuminating, this study still does not uncover the exact nature of Dark Energy, though it is clear that it is fueling research here and beyond.


Looking Forward to 2015: Analysis Techniques

Tuesday, January 27th, 2015

With 2015 a few weeks old, it seems like a fine time to review what happened in 2014 and to look forward to the new year and the restart of data taking. Among many interesting physics results, to name just one, LHCb saw its 200th publication, a test of lepton universality. With protons about to enter the LHC, and the ALICE and LHCb detectors recording muon data from transfer line tests between the SPS and LHC (see also here), the start of data-taking is almost upon us. For some implications, see Ken Bloom’s post here. Will we find supersymmetry? Split Higgs? Nothing at all? I’m not going to speculate on that, but I would like to review two results from LHCb and the analysis techniques which enabled them.

The first result I want to discuss is the \(Z(4430)^{-}\). The first evidence for this state came from the Belle Collaboration in 2007, with subsequent studies in 2009 and in 2013. BaBar also searched for the state, and while they did not see it, they did not rule it out.

The LHCb collaboration searched for this state using the specific decay mode \(B^0\to \psi’ K^{+} \pi^{-} \), with the \(\psi’\) decaying to two muons. For more reading, see the nice writeup from earlier in 2014. As in the Belle analyses, which reconstructed the final-state \(\psi’\) with either muons or electrons, the trick here is to look for bumps in the \(\psi’ \pi^{-}\) mass distribution. If a peak appears which is not described by the conventional two- and three-quark states, the mesons and baryons we know and love, it must come from a state involving a \(c \overline{c}d\overline{u}\) quark combination. The search is performed in two ways: a model-dependent search, which looks at the \(K\pi\) and \(\psi’\pi\) invariant mass and decay angle distributions, and a “model independent” search, which looks for structure in the \(K\pi\) system induced by a resonance in the \(\psi’\pi\) system and does not invoke any exotic resonances.

At the end of the day, it is found in both cases that the data are not described without including a resonance for the \(Z(4430)^-\).

Now, it appears that we have a resonance on our hands, but how can we be sure? In the context of the aforementioned model-dependent analysis, the amplitude for the \(Z(4430)^{-}\) is modeled as a Breit-Wigner amplitude, which is a complex number. If this amplitude is plotted in the complex plane as a function of the invariant mass of the resonance, a circular shape is traced out. This is characteristic of a resonance. Therefore, by fitting the real and imaginary parts of the amplitude in six bins of \(\psi’\pi\) invariant mass, the shape can be directly compared to that of an expected resonance. That’s exactly what’s done in the plot below:


The Argand plane for the \(Z(4430)^-\) search. Units are arbitrary.

What is seen is that the data (black points) roughly follow the outlined circular shape given by the Breit-Wigner resonance (red). The outliers are pulled due to detector effects. The shape quite clearly follows the circular characteristic of a resonance. This diagram is called an Argand Diagram.
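If you want to play with this yourself, here’s a minimal sketch in Python showing why a Breit-Wigner amplitude traces out a circle in the complex plane (the mass, width, and binning are made-up illustrative values, not the LHCb fit results):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative values only -- not the LHCb fit parameters.
m0, gamma = 4.430, 0.170   # resonance mass and width in GeV

def breit_wigner(m, m0, gamma):
    """Simple Breit-Wigner amplitude as a function of invariant mass m."""
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma)

m = np.linspace(m0 - 2 * gamma, m0 + 2 * gamma, 200)
amp = breit_wigner(m, m0, gamma)

# As m sweeps through the resonance, the amplitude traces out a circle
# in the complex (Argand) plane -- the signature of a resonance.
plt.plot(amp.real, amp.imag)
plt.xlabel("Re A")
plt.ylabel("Im A")
plt.title("Argand diagram of a Breit-Wigner amplitude (toy values)")
plt.axis("equal")
plt.show()
```

Binning such a curve in invariant mass, as the analysis does, gives the expected positions of the points that the measured amplitudes are compared against.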

 

Another analysis technique to identify resonances was used to find the two newest particles by LHCb:

Depiction of the two Xi_b resonances found by the LHCb Collaboration. Credit to Italic Pig (http://italicpig.com/blog/)


Or perhaps seen as

 

Xi_b resonances, depicted by Lison Bernet.


Any way that you draw them, the two new particles, the \(\Xi_b’^-\) and \(\Xi_b^{*-}\), were seen by the LHCb collaboration a few months ago. Notably, the paper was released almost 40 years to the day after the discovery of the \(J/\psi\) was announced, sparking the November Revolution and the understanding that mesons and baryons are composed of quarks. The \(\Xi_b’^-\) and \(\Xi_b^{*-}\) baryons are yet another example of the quark model at work. The two particles are shown in \(\delta m \equiv m_{candidate}(\Xi_b^0\pi_s^-)-m_{candidate}(\Xi_b^0)-m(\pi)\) space below.


\(\Xi_b’^-\) and \(\Xi_b^{*-}\) mass peaks shown in \(\delta(m_{candidate})\) space.

Here, the search is performed by reconstructing \(\Xi_b^0 \pi^-_s\) decays, where the \(\Xi_b^0\) decays to \(\Xi_c^+\pi^-\), and \(\Xi_c^+\to p K^- \pi^+\). The terminology \(\pi_s\) is only used to distinguish between that pion and the other pions. The peaks are clearly visible. Now, we know that there are two resonances, but how do we determine whether or not the particles are the \(\Xi_b’^-\) and \(\Xi_b^{*-}\)? The answer is to fit what is called the helicity distributions of the two particles.

To understand the concept, let’s consider a toy example. First, let’s say that particle A decays to B and C, as \(A\to B C\). Now, let’s let particle C also decay, to particles D and F, as \(C\to D F\). In the frame where A decays at rest, the decay looks something like the following picture.


Simple Model of \(A\to BC\), \(C\to DF\)

There should be no preferential direction for B and C to decay if A is at rest, and they will decay back to back from conservation of momentum. Likewise, the same would be true if we jump to the frame where C is at rest; D and F would have no preferential decay direction. Therefore, we can play a trick. Let’s take the picture above and, exactly at the point where C decays, jump to its rest frame. We can then measure the directions of the outgoing particles and define a helicity angle \(\theta_h\) as the angle between the C flight direction in A’s rest frame and the D flight direction in C’s rest frame. I’ve shown this in the picture below.

Helicity Angle Definition for a simple model


If there is no preferential direction of the decay, we would expect a flat distribution of \(\theta_h\). The important caveat here is that I’m not including anything about angular momentum, spin or otherwise, in this argument. We’ll come back to that later. Now, we can identify A as the \(\Xi_b’\) or \(\Xi_b^*\) candidate, C as the \(\Xi_b^0\) and D as the \(\Xi_c^+\) candidate used in the analysis. The actual data are shown below.


Helicity angle distributions for the \(\Xi_b’\) and \(\Xi_b^{*}\) candidates (upper and lower, respectively).

While it appears that the lower-mass distribution may have variations, it is statistically consistent with being a flat line. The extra power of such an analysis is that if we now consider the angular momentum of the particles themselves, there are implied selection rules which alter the distributions above, and which allow spin hypotheses to be excluded or validated simply from the shape of the distribution. This is the rationale for having the extra fit in the plot above. As it turns out, both distributions being flat allows for the identification of the \(\Xi_b’^-\) and the \(\Xi_b^{*-}\), but does not allow other spins to be conclusively ruled out.
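If you’re curious what computing a helicity angle actually looks like, here is a minimal sketch in Python with invented four-vectors, assuming (as in the toy example above) that A decays at rest, so the lab frame is already A’s rest frame. The function names and numbers are mine, purely for illustration; a real analysis does this inside the experiment’s software framework.

```python
import numpy as np

def boost_to_rest_frame(p4, p4_ref):
    """Boost four-vector p4 = (E, px, py, pz) into the rest frame of p4_ref."""
    beta = p4_ref[1:] / p4_ref[0]               # velocity of the reference particle
    b2 = float(np.dot(beta, beta))
    if b2 == 0.0:
        return p4.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    E, p = p4[0], p4[1:]
    bp = float(np.dot(beta, p))
    E_new = gamma * (E - bp)
    p_new = p + ((gamma - 1.0) * bp / b2 - gamma * E) * beta
    return np.concatenate(([E_new], p_new))

def cos_helicity(p4_C, p4_D):
    """A decays at rest, so p4_C is already measured in A's rest frame.
    Return cos(theta_h) between C's flight in A's frame and D's flight in C's frame."""
    d_C = p4_C[1:] / np.linalg.norm(p4_C[1:])
    p4_D_in_C = boost_to_rest_frame(p4_D, p4_C)
    d_D = p4_D_in_C[1:] / np.linalg.norm(p4_D_in_C[1:])
    return float(np.dot(d_C, d_D))

# Toy four-vectors in GeV, invented just to exercise the function.
p4_C = np.array([5.80, 0.0, 0.0, 0.24])
p4_D = np.array([4.70, 0.1, 0.0, 0.30])
print(cos_helicity(p4_C, p4_D))
```

Filling a histogram with this quantity over many candidates gives exactly the kind of distribution shown in the plots above, which can then be fit against flat or spin-dependent shapes.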

With the restart of data taking at the LHC almost upon us (go look on Twitter for #restartLHC), if you see a claim for a new resonance, keep an eye out for Argand Diagrams or Helicity Distributions.


Let there be beam!

Wednesday, October 15th, 2014

It’s been a little while since I’ve posted anything, but I wanted to write a bit about some of the testbeam efforts at CERN right now. In the middle of July this year, the Proton Synchrotron, or PS, the second ring of boosters/colliders used to get protons up to speed to collide in the LHC, saw its first beam since the shutdown at the end of Run I of the LHC. In addition to providing beam to experiments like CLOUD, the beam can also be used to create secondary particles of up to 15 GeV/c momentum, which are then used for studies of future detector technology. Such a beam is called a testbeam, and all I can say is WOOT, BEAM! I must say that being able to take accelerator data is amazing!

The next biggest milestone is the testbeams from the SPS, which started on the 6th of October. This is the last ring before the LHC. If you’re unfamiliar with the process used to get protons up to the energies of the LHC, a great video can be found at the bottom of the page.

Just to be clear, test beams aren’t limited to CERN. Keep your eyes out for a post by my friend Rebecca Carney in the near future.

I was lucky enough to be part of the test beam effort of LHCb, which was testing both new technology for the VELO and for the upgrade of the TT station, called the Upstream Tracker, or UT. I worked mainly with the UT group, testing a sensor technology which will be used in the 2019 upgraded detector. I won’t go too much into the technology of the upgrade right now, but if you are interested in the nitty-gritty of it all, I will instead point you to the Technical Design Report itself.

I just wanted to take a bit to talk about my experience with the test beam in July, starting with walking into the experimental area itself. The first sight you see upon entering the building is a picture reminding you that you are entering a radiation zone.


The Entrance!!

Then, as you enter, you see a large wall of radioactive concrete.


Don’t lick those!

This is where the beam is dumped. Following along here, you get to the control room, which is where all the data taking stuff is set up outside the experimental area itself. Lots of people are always working in the control room, focused and making sure to take as much data as possible. I didn’t take their picture since they were working so hard.

Then there’s the experimental area itself.


The Setup! To find the hardhat, look for the orange and green racks, then follow them towards the top right of the picture.

Ah, beautiful. :)

There are actually 4 setups here, but I think only three were being used at this time (click on the picture for a larger view). We occupied the area where the guy with the hardhat is.

Now the idea behind a tracker testbeam is pretty straightforward. A charged particle flies by, and many very sensitive detector planes record where the charged particle passed. These planes together form what’s called a “telescope.” The setup is completed when you add a detector to be tested, either in the middle of the telescope or at one end.


Cartoon of a test beam setup. The blue indicates the “telescope”, the orange is the detector under test, and the red is the trajectory of a charged particle.

 

From timing information and from signals from these detectors, a trajectory of the particle can be determined. Now, you compare the position which your telescope gives you to the position you record in the detector you want to test, and voila, you have a way to understand the resolution and abilities of your tested detector. After that, the game is statistics. Ideally, you want to be in the middle of the telescope, so you have the information on where the charged particle passed on either side of your detector as this information gives the best resolution, but it can work if you’re on one side or the other, too.
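To make that concrete, here is a minimal sketch of the idea in Python. The plane positions, hit positions, and device-under-test position are all invented numbers; a real telescope reconstruction works in two coordinates, with alignment corrections and proper track models.

```python
import numpy as np

# Hypothetical telescope: z positions of the sensor planes (mm) and the
# measured x position of one charged-particle hit on each plane (mm).
z_planes = np.array([0.0, 50.0, 100.0, 200.0, 250.0, 300.0])
x_hits   = np.array([1.02, 1.25, 1.49, 1.98, 2.26, 2.48])

# Straight-line track fit: x(z) = slope * z + intercept.
slope, intercept = np.polyfit(z_planes, x_hits, 1)

# Interpolate the track to the device under test, here placed at z = 150 mm,
# i.e. in the middle of the telescope as described above.
z_dut = 150.0
x_track = slope * z_dut + intercept

# Compare with what the device under test itself measured; the spread of this
# residual over many tracks tells you about its resolution.
x_dut_measured = 1.71
residual = x_dut_measured - x_track
print(f"track prediction at DUT: {x_track:.3f} mm, residual: {residual:.3f} mm")
```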

This is the setup which we have been using for the testbeam at the PS.  We’ll be using a similar setup for the testbeam at the SPS next week! I’ll try to write a follow up post on that when we finish!

And finally, here is the promised video.

 


Snowmass, P5, HEPAP, HEP and what it all means to you

Tuesday, May 20th, 2014

I know that the majority of the posts I’ve written have focused on physics issues and results, specifically those related to LHCb. I’d like to take this opportunity, however, to focus on the development of the field of High Energy Physics (HEP) and beyond.

As some of you know, in 2013, we witnessed an effectively year-long conversation about the state of our field, called Snowmass. This process is meant to collect scientists in the field, young and old alike, and ask them what the pressing issues for the development of our field are. In essence, it’s a “hey, stop working on your analysis for a second and let’s talk about the big issues” meeting. They came out with a comprehensive list of questions and also a bunch of working papers about the discussions. If you’re interested, go look at the website. The process was separated into “frontiers,” or groups that the US funding agencies put together to divide the field into the groups that they saw fit. I’ll keep my personal views on the “frontiers” language for a different day, and instead share a much more apt interpretation of the frontiers, which emerged from Jonathan Asaadi, of Snowmass Young and Quantum Diaries. He emphasizes that we are coming together to tackle the biggest problems as a team, as opposed to dividing into groups, illustrated as Voltron in his slide below.


Slide from presentation of Jonathan Asaadi at the USLUO (now USLUA) 2013 annual meeting in Madison, Wisconsin. The point here is collaboration between frontiers to solve the biggest problems, rather than division into separate groups.

And that’s just what happened. While I willingly admit that I had zero involvement in this process aside from taking the Snowmass Young survey, I still agree with the conclusions which were reached about what the future of our field should look like. Again, I highly encourage you to go look at the outcome.

Usually, this would be the end of the story, but this year, the recommendations from Snowmass were passed to a group called P5 (Particle Physics Project Prioritization Panel). The point of this panel was to review the findings of Snowmass and come up with a larger plan about how the future of HEP will proceed. The big ideas had effectively been gathered, now the hard questions about which projects can pursue these questions effectively are being asked. This specifically focuses on what the game plan will be for HEP over the next 10-20 years, and identifies the distinct physics reach in a variety of budget situations. Their recommendation will be passed to HEPAP (High Energy Physics Advisory Panel), which reviews the findings, then passes its recommendation to the US government and funding agencies. The P5 findings will be presented to HEPAP  on May 22nd, 2014 at 10 AM, EST. I invite you to listen to the presentation live here. The preliminary executive report and white paper can be found after 10 EST on the 22nd of May on the same site, as I understand.

This is a big deal.

There are two main points here. First, 10-20 years is a long time, and any sort of recommendation about the future of the field over such a long period will be a hard one. P5 has gone through the hard numbers under many different budget scenarios to maximize the science reach that the US is capable of. Looking at the larger political picture, in 2013 the US also entered the Sequester, which cut spending across the board and had wide implications not only for the US but worldwide. This is a testament to the tight budget constraints that we are working under now, and will most certainly face in the future. Even considering such a process as P5 shows that the HEP community recognizes this point, and understands that without well defined goals and tough considerations of how to achieve them, we will endanger the future funding of any project in the US or with US involvement.

Without this process, we will endanger future funding of US HEP.

We can take this one step further with a bit more concrete example. The majority of HEP workings are done through international collaboration, both experiment and theory alike. If any member of such a collaboration does not pull their weight, it puts the entire project into jeopardy. Take, for example, the US ATLAS and CMS programs, which have 23% and 33% involvement from the US, respectively, in both analysis and detector R&D. If these projects were cut drastically over the next years, there would have to be a massive rethinking about the strategies of their upgrades, not to mention possible lack of manpower. Not only would this delay one of the goals outlined by Snowmass, to use the Higgs as a discovery tool, but would also put into question the role of the US in the future of HEP. This is a simple example, but is not outside the realm of possibility.

The second point is how to make sure a situation like this does not happen.

I cannot say that communication of the importance of this process has been stellar. A quick Google search yields no mainstream news articles about the process or its impact. In my opinion, this is a travesty, and that’s the reason why I am writing this post. Symmetry Magazine also, just today, came out with an article about the process. Young members of our community who were not necessarily involved in Snowmass, but seem to know about Snowmass, do not really know about P5 or HEPAP. I may be wrong, but I draw this conclusion from a number of conversations I’ve had at CERN with US postdocs and students. Nonetheless, people are quite adamant about making sure that the US does continue to play a role in the future of HEP. This is true across HEP, the funding agencies and the members of Congress. (I can say this as I went on a trip with the USLUO, FNAL and SLAC representatives to lobby Congress on behalf of HEP in March of this year, and this is the sentiment which I received.) So the first step is informing the public about what we’re doing and why.

The stuff we do is really cool! We’re all organized around how to solve the biggest issues facing physics! Getting the word out about this is key.

Go talk to your neighbor!

Go talk to your local physicist!

Go talk to your congressperson!

Just talk about physics! Talk about why it excites you and talk about why it’s interesting to explore! Maybe leave out the CLs plots, though. If you didn’t know, there’s also a whole mess of things that HEP is good for besides colliding particles! See this site for a few.

The final step is understanding the process. The biggest worry I have is what happens after HEPAP reviews the P5 recommendations. We, as a community, have to be willing to endure the pains of this process. Good science will be excluded. However, there are not infinite funds, nor was a guarantee of funding ever given. Recognition of this, while focusing on the big problems at hand and thinking about how to work within the means allowed is *the point* of the conversation. The better question is, will we emerge from the process unified or split? Will we get behind the Snowmass process and answer the questions posed to us, or fight about how to answer them? I certainly hope the answer is that we will unify, as we unified for Snowmass.

An allegorical example is from a slide from Nima Arkani-Hamed at Pheno2014, shown in the picture.


One slide from Nima Arkani-Hamed’s presentation at Pheno2014

 

The take home point is this: If we went through the exercise of Snowmass, and cannot pull our efforts together to the wishes of the community, are we going to survive? I would prefer to ask a different question: Will we not, as a community, take the opportunity to answer the biggest questions facing physics today?

We’ll see on the 22nd and beyond.

 

*********************************************

Update: May 27, 2014

*********************************************

As posted in the comments, the full report can be found here, the presentation given by Steve Ritz, chair of P5 can be found here, and the full P5 report can be found here.  Additionally, Symmetry Magazine has a very nice piece on the report itself. As they state in the update at the bottom of the page, HEPAP voted to accept the report.


B Decays Get More Interesting

Friday, February 28th, 2014

While flavor physics often offers a multitude of witty jokes (read as bad puns), I think I’ll skip one just this time and let the analysis speak for itself. Just recently, at the Lake Louise Winter Institute, a new result was released for the analysis looking for \( b\to s\gamma\) transitions. Now this is a flavor changing neutral current, which cannot occur at tree level in the standard model. Therefore, the lowest order diagram by which this decay can proceed is the one loop penguin shown below to the right.


One loop penguin diagram representing the transition \(b \to s \gamma \).

From quantum mechanics, photons can have either left handed or right handed circular polarization. In the standard model, the photon in the decay \(b\to s\gamma\) is primarily left handed, due to spin and angular momentum conservation. However, models beyond the standard model, including some minimal supersymmetric models (MSSM), predict a larger than standard model right handed component to the photon polarization. So even though the decay rates observed for \(b\to s\gamma\) agree with those predicted by the standard model, the photon polarization itself is sensitive to new physics scenarios.

As it turns out, the decays \(B^\pm \to K^\pm \pi^\mp \pi^\pm \gamma \) are well suited to explore photon polarization after playing a few tricks. In order to understand why, the easiest way is to consider a picture.


Picture defining the angle \(\theta\) in the analysis of \(B^\pm\to K^\pm \pi^\mp \pi^\pm \gamma\). From the Lake Louise Conference Talk

In the picture above, we consider the rest frame of a possible resonance which decays into \(K^\pm \pi^\mp \pi^\pm\). It is then possible to form the triple product \(p_\gamma\cdot(p_{\pi,slow}\times p_{\pi,fast})\), which effectively defines the angle \(\theta\) shown in the picture.

Now for the trick: Photon polarization is odd under parity transformation, and so is the triple product defined above. Defining the decay rate as a function of this angle, we find:

\(\frac{d\Gamma}{d\cos\theta}\propto \sum_{i=0,2,4}a_i \cos^i\theta + \lambda_\gamma\sum_{j=1,3} a_j \cos^j \theta\)

This is an expansion in Legendre polynomials up to 4th order. The odd moments are those which would contribute to photon polarization effects, and the parameter \(\lambda_\gamma\) is the photon polarization. Therefore, by looking at the decay rate as a function of this angle, we can directly access the photon polarization. Another way to access the same information is by taking the asymmetry between the decay rate for events where the photon is above the plane and those where it is below the plane. This is also proportional to the photon polarization and allows for a direct statistical calculation. We will call this the up-down asymmetry, or \(A_{ud}\). For more information, a useful theory paper is found here.
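To see what \(A_{ud}\) looks like operationally, here is a minimal sketch with a toy sample. The distribution, the 10% odd component standing in for the polarization, and the event count are all invented; the real measurement of course works with fitted signal yields rather than raw counts.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sample of cos(theta) values; a small odd component in the
# distribution plays the role of a non-zero photon polarization.
cos_theta = rng.uniform(-1, 1, 14000)
weights = 1.0 + 0.1 * cos_theta                    # hypothetical decay-rate shape
keep = rng.uniform(0, 1.1, cos_theta.size) < weights
cos_theta = cos_theta[keep]

# Up-down asymmetry: (N_up - N_down) / (N_up + N_down),
# with a simple binomial error estimate.
n_up = np.sum(cos_theta > 0)
n_down = np.sum(cos_theta < 0)
a_ud = (n_up - n_down) / (n_up + n_down)
err = np.sqrt((1 - a_ud**2) / (n_up + n_down))
print(f"A_ud = {a_ud:.3f} +/- {err:.3f}")
```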

Enter LHCb. With the 3 fb\(^{-1}\) collected over 2011 and 2012 containing ~14,000 signal events, the up-down asymmetry was measured.


Up-down asymmetry for the analysis of \(b\to s\gamma\). From the Lake Louise Conference Talk

In bins of invariant mass of the \(K \pi \pi\) system, we see the asymmetry is clearly non-zero, and varies across the mass range given. As seen in the note posted to the arXiv, the shapes of the fits of the Legendre moments are not the same in differing mass bins, either. This corresponds to a 5.2\(\sigma\) observation of photon polarization in this channel. What this means for new physics models, however, is not interpreted, though I’m sure that the arXiv will be full of explanations within about a week.


Oh what a beautiful day

Tuesday, July 23rd, 2013

In case you hadn’t heard, the past few days have been big days for B physics, i.e. particle physics involving a b quark. On the 18th and 19th, there were three results released in particular, two by LHCb and one by CMS. Specifically, on the 18th LHCb released their analysis of \( B_{(s)}\to\mu\mu\) using the full 3 fb\(^{-1}\) dataset, corresponding to 1 fb\(^{-1}\) of 2011 data at 7 TeV and 2 fb\(^{-1}\) of 2012 data at 8 TeV. CMS also released their result, using 5 fb\(^{-1}\) of 7 TeV and 30 fb\(^{-1}\) of 8 TeV data.


The decay \(B_{(s)}\to\mu\mu\) cannot proceed via tree-level processes, and must go through higher-order processes (shown below).

These analyses have huge implications for SUSY. The decay \( B_{(s)}\to\mu\mu\) cannot proceed via tree-level processes, as they would involve flavor changing neutral currents, which do not occur at tree level in the Standard Model (picture to the right). Therefore, the process must proceed at a higher order than tree level. In the language of Feynman diagrams, the decay must proceed by either loop or penguin diagrams, shown in the diagrams below. However, the corresponding branching fractions are then extremely small, about \(3\times10^{-9}\). Any deviation from this extremely small rate, however, could therefore be New Physics, and many SUSY models are strongly constrained by these branching fractions.

The results reported are:

Experiment | \(\mathcal{B}(B_{s}\to\mu\mu)\) | Significance | \(\mathcal{B}(B\to\mu\mu)\)
LHCb | \( 2.9^{+1.1}_{-1.0} \times 10^{-9}\) | \(4.0\sigma\) | \(<7.4\times 10^{-10}\) (95% CL)
CMS | \(3.0^{+1.0}_{-0.9}\times 10^{-9}\) | \(4.3\sigma\) | \(< 1.1\times 10^{-9}\) (95% CL)

Higher order diagrams

Both experiments saw an excess of events for the \(B_{s}\to\mu\mu\) channel, corresponding to \(4.0\sigma\) for LHCb (updated from the \(3.5 \sigma\) of last year) and \(4.3\sigma\) for CMS. The combined results will, no doubt, be out very soon. Regardless, as tends to happen with standard model results, SUSY parameter space has continued to be squeezed. Just to get a feel of what’s happening, I’ve made a cartoon of the new results overlaid onto an older picture from D. Straub to see what the effect of the new result would be. SUSY parameter space is not necessarily looking so huge. The dashed line in the figure represents the old result. Anything shaded in was therefore excluded. By adding the largest error on the branching fraction of \(B_s\to\mu\mu\), I get the purple boundary, which moves in quite a bit. Additionally, I overlay the new boundary for \(B\to\mu\mu\) from CMS in orange and from LHCb in green. An interesting observation is that if you take the lower error for LHCb, the result almost hugs the SM result. I won’t go into speculation, but it is interesting.


Cartoon of updated limits on SUSY from \(B\to\mu\mu\) and \(B_s\to\mu\mu\). Orange represents the CMS results and green represents the LHCb results for \(B_s\to\mu\mu\). Purple is the shared observed upper limit on \(B\to\mu\mu\). The dashed line is the old limit. Everything outside the box on the bottom left is excluded. Updated from D. Straub (http://arxiv.org/pdf/1205.6094v1.pdf)

 

Additionally, for a bit more perspective, see Ken Bloom’s Quantum Diaries post.

As for the third result, stay tuned and I’ll write about that this weekend!


Back From Hibernation, and a Puzzling Asymmetry

Monday, March 4th, 2013

I know in my life at least, there are periods when all I want to do is talk to the public about physics, and then periods when all I would like to do is focus on my work and not talk to anyone. Unfortunately, the last four or so months fall into the latter category. Thank goodness, however, I am now able to take some time and write about some interesting physics which has been presented both this year and last. And while polar bears don’t really hibernate, I share the sentiments of this one.

Okay, I swear I'm up this time! Photo by Andy Rouse, 2011.

A little while ago, I posted on Dalitz Plots, with the intention of listing a result. Well, now is the time.

At the 7th International Workshop on the CKM Unitarity Triangle, LHCb presented preliminary results for CP asymmetry in the channels \(B\to hhh\), where \(h\) is either a \(K\) or a \(\pi\). Specifically, the presentation reported on searches for direct CP violation in the decays \(B^{\pm}\to \pi^{\pm} \pi^{+} \pi^{-}\) and \(B^{\pm}\to\pi^{\pm}K^{+}K^{-}\). If CP were conserved in these decays, we would expect decays from \(B^+\) and \(B^-\) to occur in equal amounts. If, however, CP is violated, then we expect a difference in the number of times the final state comes from a \(B^+\) versus a \(B^-\). Searches of this type are effectively “direct” probes of the matter-antimatter asymmetry in the universe.

Asymmetry of \(B^{\pm}\to\pi^{\pm}\pi^+\pi^-\) as a function of position in the Dalitz plot. Asymmetry is mapped to the z-axis. From LHCb-CONF-2012-028


Asymmetry of \(B^\pm\to\pi^\pm K K\) as a function of position in the Dalitz plot. Asymmetry is mapped onto the z-axis. From LHCb-CONF-2012-028

By performing a sophisticated counting of signal events, CP violation is found with a statistical significance of \(4.2\sigma\) for \(B^\pm\to\pi^\pm\pi^+\pi^-\) and \(3.0\sigma\) for \(B^\pm\to\pi^\pm K^+K^-\). This is indeed evidence for CP violation, which requires a statistical significance >3\(\sigma\). The puzzling part, however, comes when the Dalitz plot of the 3-body state is considered. It is possible to map the CP asymmetry as a function of position in the Dalitz plot, which is shown on the right. It’s important to note that these asymmetries are for both signal and background. Also, the binning looks funny in these plots because all bins have approximately equal populations. In particular, notice the red bins on the top left of the \(\pi\pi\pi\) Dalitz plot and the dark blue and purple section on the left of the \(\pi K K\) Dalitz plot. By zooming in on these regions, specifically \(m^2(\pi\pi_{high})>15\) GeV\(^2\)/c\(^4\) and \(m^2(K K)<3\) GeV\(^2\)/c\(^4\), and separating by \(B^+\) and \(B^-\), a clear and large asymmetry is seen (see plots below).
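As an aside, the raw quantity being mapped here is just a per-bin counting asymmetry, \((N_{B^-} - N_{B^+})/(N_{B^-} + N_{B^+})\). Here is a minimal sketch of that bookkeeping; the rectangular binning and the flat toy distributions are made up, and the real analysis uses adaptive bins of roughly equal population, as mentioned above, plus corrections for backgrounds and production/detection asymmetries.

```python
import numpy as np

def binned_acp(dalitz_bplus, dalitz_bminus, bins):
    """Raw asymmetry (N_minus - N_plus) / (N_minus + N_plus) per Dalitz-plot bin.

    dalitz_bplus / dalitz_bminus: arrays of shape (n_events, 2) holding the two
    invariant-mass-squared coordinates for B+ and B- candidates.
    bins: (x_edges, y_edges) defining the 2D binning.
    """
    n_plus, _, _ = np.histogram2d(dalitz_bplus[:, 0], dalitz_bplus[:, 1], bins=bins)
    n_minus, _, _ = np.histogram2d(dalitz_bminus[:, 0], dalitz_bminus[:, 1], bins=bins)
    total = n_plus + n_minus
    with np.errstate(invalid="ignore", divide="ignore"):
        acp = (n_minus - n_plus) / total
        # Simple binomial uncertainty per bin; empty bins come out as NaN.
        err = np.sqrt((1.0 - acp**2) / total)
    return acp, err

# Hypothetical example with made-up flat distributions, just to show the call.
rng = np.random.default_rng(0)
bp = rng.uniform([0, 0], [25, 25], size=(10000, 2))
bm = rng.uniform([0, 0], [25, 25], size=(10000, 2))
edges = (np.linspace(0, 25, 6), np.linspace(0, 25, 6))
acp, err = binned_acp(bp, bm, edges)
print(acp.round(2))
```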

Now, I’d like to put these asymmetries in a little bit of perspective. Integrated over the Dalitz Plot, the resulting asymmetries are

\(A_{CP}(B^\pm\to\pi^\pm\pi^+\pi^-) = +0.120\pm 0.020(stat)\pm 0.019(syst)\pm 0.007(J/\psi K^\pm)\)

and

\(A_{CP}(B^\pm\to\pi^\pm K^+K^-) = -0.153\pm 0.046(stat)\pm 0.019(syst)\pm 0.007(J/\psi K^\pm)\).

Whereas, in the regions which stick out, we find:

\(A_{CP}(B^\pm\to\pi^\pm\pi^+\pi^-\text{region}) = +0.622\pm 0.075(stat)\pm 0.032(syst)\pm 0.007(J/\psi K^\pm)\)

and

\(A_{CP}(B^\pm\to\pi^\pm K^+K^-\text{region}) = -0.671\pm 0.067(stat)\pm 0.028(syst)\pm 0.007(J/\psi K^\pm)\).

These latter regions correspond to statistical significances of >7\(\sigma\) and >9\(\sigma\), respectively. The interpretation of these results is a bit difficult: the asymmetries are four to five times the integrated asymmetries, and are not necessarily associated with a single resonance. We would expect the \(\rho^0\) and \(f_0\) resonances, which appear in the lowest region of the \(\pi\pi\pi\) Dalitz plot, to show up in the asymmetry. In the \(\pi K K\) Dalitz plot, there are really no scalar particles which we expect to give us an asymmetry of the kind we see. One possible answer to both these problems is that the quantum mechanical amplitudes are only partially interfering and giving the structure that we see. The only way to check this would be to do a more detailed analysis involving a fit to all of the possible resonances in these Dalitz plots. All I can say is that this result is certainly puzzling, and the explanation is not necessarily clear.


Zoom onto \(m^2(\pi\pi)\) lower axis (left) and \(m^2(K K)\) axis (right) . Up triangles are \(B^+\), down are \(B^-\)


Mixing it up

Wednesday, November 14th, 2012

One of the other results presented at the Hadron Collider Physics Symposium this week was the result of a search for \( D^{0}–\bar{D}^{0}\) mixing at LHCb.

Cartoon: If a \(D^0\) is produced, at some time t later, it is possible that the system has "oscillated" into a \(\bar{D}^0\). This is because the mass eigenstates are not the same as the flavor eigenstates.

Neutral meson mixing is predicted for any neutral meson system, and has been verified for the \(K^0–\bar{ K}^0\), \(B^0–\bar{B}^0\) and \(B_s^0–\bar{B_s}^0\) systems. However, for the \(D^0–\bar{D}^0\) system, no one measurement has provided a result with greater than \(5\sigma\) significance that mixing actually occurs, until now.

 

 

The actual measurement is of \(R(t)\), the time-dependent ratio of \( D^0 \rightarrow K^+ \pi^-\) (“Wrong Sign”) to \( D^0\rightarrow K^- \pi^+\) (“Right Sign”) decays, analyzed through its Taylor expansion in decay time. Charge conjugates of these decays are also included. There is a “Wrong Sign” and a “Right Sign” because the Right Sign decays are much more probable, according to the standard model.

The mixing of the \(D^0–\bar{D}^0\) system is described by the parameters \(x = \Delta m /\Gamma\) and \(y = \Delta \Gamma / 2\Gamma\), where \(\Delta m\) is the mass difference between the neutral D mass eigenstates, \(\Delta \Gamma\) is the difference of their widths, and \( \Gamma\) is the average width. What appears in the description of \(R\), however, is \( x’\) and \( y’\), which combine \(x\) and \(y\) with information about the strong phase difference between the Right Sign and Wrong Sign decays. The important part about \(x’\) and \(y’\) is that they appear in the time-dependent terms of the Taylor expansion of \(R\). If there were no mixing at all, then we would expect the ratio to remain constant, and the higher order time dependence to vanish. If mixing does occur, however, then a clear, non-flat trend should be seen, and hence a measurement of \(x’\) and \(y’\). That is why the time dependent analysis is so important.
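To make the time dependence concrete, here is a small sketch of the approximate form of the ratio, the standard expansion used in these analyses; the numerical values below are purely illustrative, not the measured ones.

```python
import numpy as np
import matplotlib.pyplot as plt

def ws_rs_ratio(t_over_tau, R_D, xp, yp):
    """Approximate WS/RS ratio vs decay time (in units of the D0 lifetime):
    R(t) ~ R_D + sqrt(R_D) * y' * t + (x'^2 + y'^2)/4 * t^2.
    If x' = y' = 0 (no mixing), the ratio is flat."""
    return R_D + np.sqrt(R_D) * yp * t_over_tau + (xp**2 + yp**2) / 4.0 * t_over_tau**2

t = np.linspace(0, 10, 100)
# Illustrative parameter values only.
plt.plot(t, 1e3 * ws_rs_ratio(t, 3.5e-3, 0.0, 0.0), label="no mixing")
plt.plot(t, 1e3 * ws_rs_ratio(t, 3.5e-3, 5e-3, 5e-3), label="with mixing")
plt.xlabel(r"$t/\tau$")
plt.ylabel(r"$R \times 10^3$")
plt.legend()
plt.show()
```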

Fit of ratio of WS and RS decays as a function of decay time of the D meson. Flat line would be no mixing, sloped line indicates mixing. From http://arxiv.org/pdf/1211.1230.pdf

Result of the mixing parameter fit of the neutral D meson system. 1,3 and 5 standard deviation contours are shown, and the + represents no mixing. From http://arxiv.org/pdf/1211.1230.pdf

The result is \(9.1\sigma\) evidence for mixing, in agreement with previous results from BaBar, Belle and CDF. On top of confirming that the neutral D meson system does mix, this result is of particular importance because, coupled with the result of CP violation in the charm system, it raises the question of whether there is much more interesting physics beyond the standard model involving charm just waiting to be seen. Stay tuned!


A Dalitz What Now?

Friday, November 2nd, 2012

Perhaps in your wanderings of physics papers, you’ve seen plots which look like this:

\( D^{+}\rightarrow K^{-} \pi^{+} \pi^{+}\) Dalitz Plot. Borrowed from lectures by Brian Meadows, Cincinnati.

While yes, you may think that Easter has come early, this is actually an honest-to-goodness physics analysis technique. Developed by R.H. Dalitz in 1953, this plot visually illustrates the interference of the quantum mechanical amplitudes of the final state particles. Let’s take a step-by-step walk through the plot.

The Setup

Dalitz plots were originally used to investigate a three body final state, for instance \( D^{+}\rightarrow K^{-} \pi^{+} \pi^{+}\). Taking this example, let’s imagine we’re in the \( D^{+}\) rest frame (it’s just sitting there), then the \( D^{+}\) decays. The decay products can go a variety of directions, so long as momentum is conserved.

The directions in which the particles fly and with what momentum will determine where we are in the plot. For reference, we can label the daughters as 1, 2 and 3, then assign them masses \( m_1, m_2\) and \( m_3 \), and momenta \( p_1, p_2\) and \(p_3\), respectively. Finally, let the \( D^{+} \) have mass M. Its momentum is 0, since it’s just sitting there. With a bit of algebraic manipulation, and Einstein’s relation \(E^2=p^2+m^2\) (c=1, for simplicity of calculation), we can define a whole host of new variables, for instance \( m_{12}^2 = (E_1+E_2)^2-(p_1+p_2)^2\).

The Axes

Let’s take \( m_{12}^2 = (E_1+E_2)^2-(p_1+p_2)^2\) as our guinea pig. Physically, we can think of this as combining particles 1 and 2 into a single particle, and then plotting its effective invariant mass squared spectrum. This is quite similar to looking at the invariant dimuon mass squared in the Higgs searches. In this case, however, we then plot either \( m_{13}^2 \) or \(m_{23}^2\) on the remaining axis. Since all of the momenta and energies are related, picking either \( m_{13}^2 \) or \(m_{23}^2\) fully defines the system. This gives us all the ingredients we need for the plot!
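As a quick sketch of the bookkeeping (with invented four-vectors and c = 1, as above), here is how the Dalitz variables come out of the daughters’ energies and momenta, together with the handy cross-check that \(m_{12}^2+m_{13}^2+m_{23}^2 = M^2+m_1^2+m_2^2+m_3^2\):

```python
import numpy as np

def mass_sq(*p4s):
    """Invariant mass squared of the sum of four-vectors (E, px, py, pz), with c = 1."""
    total = np.sum(p4s, axis=0)
    return total[0]**2 - np.dot(total[1:], total[1:])

# Invented daughter four-vectors (GeV) for a three-body decay at rest.
p1 = np.array([0.80,  0.55,  0.00,  0.30])   # "particle 1"
p2 = np.array([0.50, -0.20,  0.35, -0.10])   # "particle 2"
p3 = np.array([0.57, -0.35, -0.35, -0.20])   # "particle 3"

m12_sq = mass_sq(p1, p2)   # one Dalitz axis
m13_sq = mass_sq(p1, p3)   # the other axis (m23^2 would work just as well)
m23_sq = mass_sq(p2, p3)

# Cross-check: m12^2 + m13^2 + m23^2 = M^2 + m1^2 + m2^2 + m3^2
M_sq = mass_sq(p1, p2, p3)
lhs = m12_sq + m13_sq + m23_sq
rhs = M_sq + mass_sq(p1) + mass_sq(p2) + mass_sq(p3)
print(m12_sq, m13_sq, abs(lhs - rhs) < 1e-12)
```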

The Boundary


PDG, Review of Kinematics

After setting up the axes above, we need to plot the actual figure. The boundary is completely described by energy and momentum conservation. For example, you can ask “What is the minimum invariant mass squared that the particle-12 system could have?” After a bit of consideration, you would say “why, the addition of the two masses, then squared!” Likewise, the maximum it could have is the mass of the parent minus the mass of the other daughter, then squared. In this case, all of the momentum is then available to the \( m_{12}\) system. Repeating this process for all values of \( m_{12}^2 \) then gives the complete boundary of the Dalitz plot. Some special spots are shown in the PDG plot above. Forming the complete boundary is not necessarily a simple task, especially if the particles are indistinguishable. For the sake of explanation, we will stick to our simple example here.
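If you want to trace out the boundary numerically, the same minimum/maximum logic can be written as a closed formula (this is the standard result quoted in the PDG kinematics review; the masses below are approximate values for our \( D^{+}\rightarrow K^{-} \pi^{+} \pi^{+}\) example):

```python
import numpy as np

def m23sq_limits(m12sq, M, m1, m2, m3):
    """Kinematic limits of m23^2 for a given m12^2 in the decay M -> 1 2 3
    (the standard formula from the PDG kinematics review)."""
    m12 = np.sqrt(m12sq)
    # Energies of particles 2 and 3 in the (1,2) rest frame:
    e2 = (m12sq - m1**2 + m2**2) / (2.0 * m12)
    e3 = (M**2 - m12sq - m3**2) / (2.0 * m12)
    p2 = np.sqrt(np.maximum(e2**2 - m2**2, 0.0))
    p3 = np.sqrt(np.maximum(e3**2 - m3**2, 0.0))
    lo = (e2 + e3)**2 - (p2 + p3)**2
    hi = (e2 + e3)**2 - (p2 - p3)**2
    return lo, hi

# Approximate masses in GeV for D+ -> K- pi+ pi+.
M, mK, mpi = 1.870, 0.494, 0.140

# Scan m12^2 = m^2(K pi) over its allowed range to trace out the boundary.
m12sq = np.linspace((mK + mpi)**2 + 1e-6, (M - mpi)**2 - 1e-6, 200)
lo, hi = m23sq_limits(m12sq, M, mK, mpi, mpi)
print(lo[:3], hi[:3])
```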

The Innards

Finally, the bulk of the Dalitz plot is defined by interactions of the final state particles. If these particles did not interact, then we would expect a completely flat distribution along the inside of the plot. The fact that these particles do interfere is due to the quantum mechanical probability of the initial state transforming into the final state given the interaction potential of the system. The result is a vast array of structure and symmetries across the plot. For the example of  \(D^{+}\rightarrow K^{-} \pi^{+} \pi^{+}\), the result is shown above.  Each little dot is one event, and we can clearly see that there are places where the density is high (resonances, the so called “isobar model”), and places where there is almost no density at all (destructive interference).

The structures can be quite different depending on the spin of the resonance as well. For instance, the first plot shown below shows the resonance (where the boxes are bigger). This plot is actually Monte Carlo simulation for the process \( \pi^- p\rightarrow f_0 n\rightarrow\pi^0 \pi^0 n\), produced with an \(f_0\) mass of 0.4 GeV/c\(^2\). Since the \(f_0\) is a scalar (spin 0), the resonance extends across the entire plot. In the second plot, the \(\rho(770)\) is produced in the decay \(D^{-}\rightarrow K^{-} \pi^{+} \pi^{-}\). This too is Monte Carlo. The fact that the \(\rho(770)\) is a vector (spin 1) is what produces the distinct shape shown below. This simple example shows how one can identify the spin of a resonance by visually inspecting the Dalitz plot.

 

MC \(f_0\) Dalitz Plot. From the Crystal Ball Collaboration: http://arxiv.org/abs/nucl-ex/0202007

\(\rho(770)\) resonance in \(D^{-}\rightarrow K^{-} \pi^{+} \pi^{-}\). From lectures by Brian Meadows.

Now, there’s a lot more to Dalitz plot analysis than what I’ve presented here. There can be reflections across the plot and different resonances interfering with each other in quite complicated ways. For example, in the decay \(D^{-}\rightarrow K^{-} \pi^{+} \pi^{-}\), if we had a \(K^{*}_{0} (800)\) interfere with the \(f_0\), the Dalitz plot might look something like this:

 

\(K^{*}_{0} (800)\) interfering with \(f_0\) in decay \(D^{-}\rightarrow K^{-} \pi^{+} \pi^{-}\). From Brian Meadows.

The distinct shape, which looks to my eye a bit like a butterfly, is due to the phase difference between the two resonances.

 

So now you at least have a bit of an intro to the Dalitz plot, in this all too brief and quite simplified example.
