
Live Blogging of the LHCb Bs pi result

Adam Davis
Monday, March 21st, 2016

There has been a lot of press about the recent DØ result on the possible \(B_s \pi\) state. This was also covered on Ricky Nathvani’s blog. At Moriond QCD, Jeroen Van Tilburg showed a few plots from LHCb which showed no signal in the same mass regions as explored by DØ. Tomorrow, there will be a special LHC seminar on the LHCb search for the purported tetraquark, where we will get the full story from LHCb. I will be live blogging the seminar here! It kicks off at 11:50 CET, so tune in to this post for live updates.


Mar 22, 2016 – 12:23. Final answer. LHCb does not confirm the tetraquark. Waiting for CMS, ATLAS, CDF.


Mar 22, 2016 – 12:24. How did you get the result out so fast? A lot of work by the collaboration to get MC produced and to expedite the process.


Mar 22, 2016 – 12:21. Is the \(p_T\) cut on the pion too tight? The fact that you haven’t seen anything anywhere else gives you confidence that the cut is safe. Also, cut is not relative to \(B_s\).


Mar 22, 2016 – 12:18. Question: What are the fractions of multiple candidates which enter? Not larger than 1.2. Going back to the cuts: what selection killed the combinatoric background the most? The requirement that the \(\pi\) comes from the PV, and the \(p_T\) cut on the pion, kill the most. How strong is the PV cut? \(\chi^2\) less than 3.5 for the pion at the PV; you force the \(B_s\) and the pion to come from the PV, and constrain the \(B_s\) mass.


Mar 22, 2016 – 12:17: Can you go above the threshold? Yes.


Mar 22, 2016 – 12:16. Slide 9: Did you fit with a floating mass? Plan to do this for the paper.


Mar 22, 2016 – 12:15. Wouldn’t \(F_S\) be underestimated by 8%? Maybe, maybe not.


Mar 22, 2016 – 12:13. Question: Will LHCb publish? Most likely yes, but a bit of politics. Shape of the background in the \(B_s\pi\) is different in LHCb and DØ. At some level, you expect a peak from the turn over. Also CMS is looking.


Mar 22, 2016 – 12:08-12:12. Question: did you try the cone cut to try to generate a peak? Answer: Afraid that the cut can give a biased estimate of the significance. From DØ seminar, seems like this is the case. For DØ to answer. Vincenzo Vagnoni says that DØ estimation of significance is incorrect. We also don’t know if there’s something that’s different between \(pp\) and \(p \bar{p}\).


Mar 22, 2016 – 12:08. No evidence of \(X(5568)\) state, set upper limit. “We look forward to hearing from ATLAS, CMS and CDF about \(X(5568)\)”


Mar 22, 2016 – 12:07. What if the production of the X was the same at LHCb? Should have seen a very large signal. Also, in many other spectroscopy plots, e.g. \(B^*\), look at “wrong sign” plots for the \(B\) meson and the pion. All results LHCb already searched for would have been sensitive to such a state.


Mar 22, 2016 – 12:04. Redo the analysis in bins of rapidity. No significant signal seen in any result. Done for all \(p_T\) ranges of the \(B_s\).


Mar 22, 2016 – 12:03. Look at \(B^0\pi^+\) as a sanity check. If X(5568) is similar to the \(B^{**}\), then we expect order 1000 events.


Mar 22, 2016 – 12:02. Upper limits on production given.


Mar 22, 2016 – 12:02. Check of systematics: varying the mass and width within the DØ range, and the effect of the efficiency dependence on the signal shape, are the dominant sources of systematics. All measurements are dominated by statistics.


Mar 22, 2016 – 12:00. Result of the fits all consistent with zero. The relative production is also consistent with zero.


Mar 22, 2016 – 11:59. 2 fits with and without signal components, no difference in pulls. Do again with tighter cut on the transverse momentum of the \(B_s\). Same story, no significant signal seen.


Mar 22, 2016 – 11:58. Fit model: S-wave Breit-Wigner, mass and width fixed to DØ result. Backgrounds: 2 sources. True \(B_s^0\) with random track, and fake \(B_s\).
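
(For readers following along at home, here is a minimal sketch, my own illustration rather than LHCb code, of what such a fixed-mass, fixed-width S-wave Breit-Wigner line shape looks like; the mass and width values used below are roughly the DØ ones.)

```python
# A relativistic S-wave Breit-Wigner with fixed mass and width, as a line-shape
# illustration only; the mass and width are roughly the DØ X(5568) values.
import numpy as np

def breit_wigner(m, m0=5568.0, gamma=21.9):
    """Breit-Wigner intensity; m, m0 (mass) and gamma (width) in MeV."""
    return (m0 * gamma)**2 / ((m**2 - m0**2)**2 + (m0 * gamma)**2)

m = np.linspace(5500.0, 5700.0, 201)   # scan of the B_s pi invariant mass, MeV
shape = breit_wigner(m)
print(m[np.argmax(shape)])             # peaks at the fixed mass, 5568 MeV
```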


Mar 22, 2016 – 11:56.  No “cone cut” applied because it is highly correlated with reconstructed mass.


Mar 22, 2016 – 11:55. LHCb strategy: Perform 3 independent searches, confirm a qualitative approach, move forward with a single approach on the Run 1 dataset. Cut-based selection to match the DØ strategy. Take-home point: statistics are 20x larger and the sample is much cleaner.


Mar 22, 2016 – 11:52. Review of the DØ result. What could it be? The molecular model is disfavored. Diquark-antidiquark models are popular, but it could not be fit into any model. It could also be feed-down from radiative decays. All predictions have large uncertainties.


Mar 22, 2016 –  11:49. LHCb-CONF-2016-004 posted at cds.cern.ch/record/2140095/


Mar 22, 2016 – 11:47. The speaker is transitioning to Marco Pappagallo.


Mar 22, 2016 – 11:44. People have begun entering the auditorium for the talk, at the end of Basem Khanji’s seminar on \(\Delta m_d\).

 


LHC 2015: what’s different in four years?

Ken Bloom
Monday, December 21st, 2015

After a long-anticipated data run, LHC proton-proton running concludes in early November.  A mere six weeks later, on a mid-December afternoon, the ATLAS and CMS collaborations present their first results from the full dataset to a packed CERN auditorium, with people all over the world watching the live webcast.  Both collaborations see slight excesses in events with photon pairs; the CMS excess is quite modest, but the ATLAS data show something that could be interpreted as a peak.  If it holds up with additional data, it would herald a major discovery.  While the experimenters caution that the results do not have much statistical significance, news outlets around the world run breathless stories about the possible discovery of a new particle.

December 15, 2015? No — December 13, 2011, four years ago.  That seminar presented what we now know were the first hints of the Higgs boson in the LHC data.  At the time, everyone was hedging their bets, and saying that the effects we were seeing could easily go away with more data.  Yet now we look back and know that it was the beginning of the end for the Higgs search.  And even at the time, everyone was feeling pretty optimistic.  Yes, we had seen effects of that size go away before, but at this time four years ago, a lot of people were guessing that this one wouldn’t (while still giving all of the caveats).

But while both experiments are reporting an effect at 750 GeV — and some people are getting very excited about it — it seems to me that more caution is needed here than we exercised with the emerging evidence for the Higgs boson.  What’s different about what we’re seeing now compared to what we saw in 2011?

I found it instructive to look back at the presentations of four years ago.  Then, ATLAS had an effect in diphotons around an invariant mass of 125 GeV that had a 2.8 standard deviation local significance, which was reduced to 1.5 standard deviations when the “look elsewhere effect” (LEE) was taken into account.  (The LEE exists because if there is a random fluctuation in the data, it might appear anywhere, not just the place you happen to be looking, and the statistical significance needs to be de-weighted for that.)  In CMS, the local significance was 2.1 standard deviations.  Let’s compare that to this year, when both experiments see an effect in diphotons around an invariant mass of 750 GeV.  At ATLAS, it’s a 3.6 standard deviation local effect, which is reduced to 2.0 standard deviations after the LEE.  For CMS the respective values are 2.6 and 1.2 standard deviations.  So it sounds like the 2015 signals are even stronger than the 2011 ones, although, on their own, they are still quite weak: five standard deviations is the usual threshold to claim a discovery, because we can be confident that a fluctuation of that size would be very unlikely.
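
To make the look-elsewhere effect a bit more concrete, here is a minimal sketch, my own illustration rather than either experiment’s actual procedure, of the naive “trials factor” correction: the local p-value is multiplied by the number of independent places a bump could have appeared, and the result is converted back into standard deviations. The number of search windows below is a made-up placeholder, not a number from ATLAS or CMS.

```python
# Naive look-elsewhere correction via a trials factor. Rough illustration only;
# the experiments use more careful (e.g. pseudo-experiment based) methods.
from scipy.stats import norm

def global_significance(local_sigma, n_windows):
    """Deweight a local significance by the number of independent search windows."""
    p_local = norm.sf(local_sigma)            # one-sided local p-value
    p_global = min(1.0, n_windows * p_local)  # crude trials-factor correction
    return norm.isf(p_global)                 # convert back to standard deviations

# A 3.6 sigma local excess searched over a hypothetical 30 independent mass windows:
print(round(global_significance(3.6, 30), 1))  # roughly 2.6 sigma globally
```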

But the 2011 signals had some other things going for them.  The first were experimental.  There were simultaneous excesses in other channels that were consistent with what you’d expect from a Higgs decay.  This included in particular the ZZ channel, which had a low expected rate, but also very low backgrounds and excellent mass resolution.  In 2011, both experiments had the beginning of signals in ZZ too (although at a slightly different putative Higgs mass value) and some early hints in other decay channels.  There were multiple results supporting the diphotons, whereas in 2015 there are no apparent excesses in other channels indicating anything at 750 GeV.

And on top of that, there was something else going for the Higgs in December 2011: there was good reason to believe it was on the way.  From a myriad of other experiments we had indirect evidence that a Higgs boson ought to exist, and in a mass range where the LHC effects were showing up.  This indirect evidence came through the interpretation of the “standard model” theory that had done an excellent job of describing all other data in particle physics and thus gave us confidence that it could make predictions about the Higgs too.  And for years, both the Tevatron and the LHC had been slowly but surely directly excluding other possible masses for the Higgs.  If a Higgs were going to show up, it made perfect sense for it to happen right where the early effects were being observed, at just that level of significance with so little data.

Do we have any of that with the 750 GeV effect in 2015?  No.  There are no particular reasons to expect this decay with this rate at this mass (although in the wake of last week’s presentations, there have been many conjectures as to what kind of new physics could make this happen).  Thus, one can’t help but think that this is some kind of fluctuation.  If you look at enough possible new-physics effects, you have a decent chance of seeing some number of fluctuations at this level, and that seems to be the most reasonable hypothesis right now.

But really there is no need to speculate.  In 2016, the LHC should deliver ten times as much data as it did this year.  That’s even better than what happened in 2012, when the LHC exceeded its 2011 performance by a mere factor of five.  We can anticipate another set of presentations in December 2016, and by then we will know for sure if 2015 gave us a fluctuation or the first hint of a new physical theory that will set the research agenda of particle physics for years to come.  And if it is the latter, I will be the first to admit that I got it wrong.


What have we learned from the LHC in 2015?

Ken Bloom
Saturday, December 5th, 2015

The Large Hadron Collider is almost done running for 2015.  Proton collisions ended in early November, and now the machine is busy colliding lead nuclei.  As we head towards the end-of-year holidays, and the annual CERN shutdown, everyone wants to know — what have we learned from the LHC this year, our first year of data-taking at 13 TeV, the highest collision energies we have ever achieved, and the highest we might hope to have for years to come?

We will get our first answers to this question at a CERN seminar scheduled for Tuesday, December 15, where ATLAS and CMS will be presenting physics results from this year’s run.  The current situation is reminiscent of December 2011, when the experiments had recorded their first significant datasets from LHC Run 1, and we saw what turned out to be the first hints of the evidence for the Higgs boson that was discovered in 2012.  The experiments showed a few early results from Run 2 during the summer, and some of those have already resulted in journal papers, but this will be our first chance to look at the broad physics program of the experiments.  We shouldn’t have expectations that are too great, as only a small amount of data has been recorded so far, much less than we had in 2012.  But what science might we hope to hear about next week?  

Here is one thing to keep in mind — the change in collision energy affects particle production rates, but not the properties of the particles that are produced.  Any measurement of a particle production rate at a new collision energy is inherently interesting, as is any measurement that has never been done before.  Thus any measurement of a production rate that is possible with this amount of data would be a good candidate for presentation.  (The production rates of top quarks at 13 TeV have already been measured by both CMS and ATLAS; maybe there will be additional measurements along these lines.)

We probably won’t hear anything new about the Higgs boson.  While the Higgs production rates are larger than in the previous run, the amount of data recorded is still relatively small compared to the 2010-12 period.  This year, the LHC has delivered about 4 fb\(^{-1}\) of data, which could be compared to the 5 fb\(^{-1}\) that was delivered in 2011.  At that time there wasn’t enough data to say anything definitive about the Higgs boson, so it is hard to imagine that there will be much in the way of Higgs results from the new data (not even the production rate at 13 TeV), and certainly nothing that would tell us anything more about its properties than we already know from the full Run 1 dataset of 30 fb\(^{-1}\).  We’ll all probably have to wait until sometime next year before we will know more about the Higgs boson, and if anything about it will disagree with what we expect from the standard model of particle physics.

If there is anything to hope for next week, it is some evidence for new, heavy particles.  Because the collision energy has been increased from 8 TeV to 13 TeV, the ability to create a heavy particle of a given mass has increased too.  A little fooling around with the “Collider Reach” tool (which I had discussed here) suggests that even as little data as we have in hand now can give us improved chances of observing such particles now compared to the chances in the entire Run 1 dataset as long as the particle masses are above about 3 TeV.  Of course there are many theories that predict the existence of such particles, the most famous of which is supersymmetry.  But so far there has been scant evidence of any new phenomena in previous datasets.  If we were to get even a hint of something at a very high mass, it would definitely focus our scientific efforts for 2016, where we might get about ten times as much data as we did this year.

Will we get that hint, like we did with the Higgs boson four years ago?  Tune in on December 15 to find out!


Double time

Ken Bloom
Thursday, August 27th, 2015

In particle physics, we’re often looking for very rare phenomena, which are highly unlikely to happen in any given particle interaction. Thus, at the LHC, we want to have the greatest possible proton collision rates; the more collisions, the greater the chance that something unusual will actually happen. What are the tools that we have to increase collision rates?

Remember that the proton beams are “bunched” — there isn’t a continuous current of protons in a beam, but a series of smaller bunches of protons, each only a few centimeters long, with gaps of many centimeters between each bunch.  The beams are then timed so that bunches from each beam pass through each other (“cross”) inside one of the big detectors.  A given bunch can have \(10^{11}\) protons in it, and when two bunches cross, perhaps tens of the protons in each bunch — a tiny fraction! — will interact.  This bunching is actually quite important for the operation of the detectors — we can know when bunches are crossing, and thus when collisions happen, and then we know when the detectors should really be “on” to record the data.

For a given total number of collisions, you could imagine two ways to produce them: have \(N\) bunches per beam, each with \(M\) protons, or \(2N\) bunches per beam, each with \(M/\sqrt{2}\) protons.  The more bunches in the beam, the more closely spaced they would have to be, but that can be done.  From the perspective of the detectors, the second scenario is much preferred.  That’s because you get fewer proton collisions per bunch crossing, and thus fewer particles streaming through the detectors.  The collisions are much easier to interpret if you have fewer collisions per crossing; among other things, you need less computer processing time to reconstruct each event, and you will have fewer mistakes in the event reconstruction because there aren’t so many particles all on top of each other.
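
(A quick way to see that the two scenarios give the same number of collisions, using my own back-of-the-envelope arithmetic rather than anything from the accelerator team: the number of collisions in a crossing scales like the square of the protons per bunch, so the total per fill scales like the number of bunches times that square, and \(N \times M^2 = 2N \times (M/\sqrt{2})^2\).  The second scenario simply spreads the same collisions over twice as many crossings.)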

In the previous LHC run (2010-12), the accelerator had “50 ns spacing” between proton bunches, i.e. bunch crossings took place every 50 ns.  But over the past few weeks, the LHC has been working on running with “25 ns spacing,” which would allow the beam to be segmented into twice as many bunches, with fewer protons per bunch.  It’s a new operational mode for the machine, and thus some amount of commissioning and tuning and so forth are required.  A particular concern is “electron cloud” effects due to stray particles in the beampipe striking the walls and ejecting more particles, which is a larger effect with smaller bunch spacing.  But from where I sit as one of the experimenters, it looks like good progress has been made so far, and as we go through the rest of this year and into next year, 25 ns spacing should be the default mode of operation.  Stay tuned for what physics we’re going to be learning from all of this!


Finding a five-leafed clover

Adam Davis
Wednesday, July 15th, 2015
Photo Credit: Cathy Händel, Published on http://www.suttonelms.org.uk/olla12.html

Sometimes when you’re looking for something else, you happen across an even more exciting result. That’s what’s happened at LHCb, illustrated in the paper “Observation of \(J/\psi p\) resonances consistent with pentaquark states in \(\Lambda_b^0\to J/\psi K^-p\) decays”, released on the arXiv on the 14th of July.

I say this is lucky because the analysts found these states while they were busy looking at another channel; they were measuring the branching fraction of \(B^0\to J/\psi K^+ K^-\). As one of the analysts, Sheldon Stone, recalled to me, during the review of the \(B^0\) analysis, one reviewer asked if there could be a background from the decay \(\Lambda_b^0\to J/\psi K^- p\), where the proton was misidentified as a kaon. As this was a viable option, they looked at the PDG to see if the mode had been measured, and found that it had not. Without a certain knowledge of how large this contribution would be, the analysts looked. To their surprise, they found a rather large rate of the decay, allowing for a measurement of the lifetime of the \(\Lambda_b^0\). At the same time, they noticed a peak in the \(J/\psi p\) spectrum. After completing the above mentioned analysis of the \(B^0\), they returned to the channel.

It’s nice to put yourself in the analysts’ shoes and see the result for yourself. Let’s start by looking at the decay \(\Lambda_b^0\to J/\psi p K^-\). As this is a three-body decay, we can look at the Dalitz plots.

Dalitz plots from the decay \(\Lambda_b^0\to J/\psi K^- p\). Compiled from http://arxiv.org/abs/1507.03414

The above Dalitz plots show all combinations of possible axes to test. In the one on the left, around \(m^2=2.3\) GeV\(^2\), running vertically, we see the \(\Lambda(1520)\) resonance, which decays into a proton and a kaon. Running horizontally is a band which does not seem to correspond to a known resonance, but which would decay into a \(J/\psi\) and a proton. If this is a strong decay, then the only option is to have a hadron whose minimum quark content is \(uud\bar{c}c\). The same band is seen on the middle plot as a vertical band, and on the far right as the sloping diagonal band. To know for sure, one must perform a complete amplitude analysis of the system.
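
If you would like to see what the axes of those plots actually are, here is a minimal sketch, my own illustration with made-up four-vectors rather than LHCb data, of how a single Dalitz-plot point is computed from a candidate’s daughters:

```python
# Compute the two Dalitz variables for a Lambda_b^0 -> J/psi K- p candidate from
# daughter four-momenta. The four-vectors below are hypothetical placeholders,
# written as (E, px, py, pz) in GeV with each particle roughly on its mass shell.
import numpy as np

def mass2(*fourvecs):
    """Invariant mass squared of the sum of (E, px, py, pz) four-vectors."""
    E, px, py, pz = np.sum(fourvecs, axis=0)
    return E**2 - px**2 - py**2 - pz**2

jpsi   = np.array([3.46, 0.9, -0.4, 1.2])   # hypothetical J/psi candidate
kaon   = np.array([1.09, -0.3, 0.6, 0.7])   # hypothetical K- candidate
proton = np.array([1.62, 0.2, 0.1, 1.3])    # hypothetical proton candidate

# One point on the Dalitz plot: m^2(K p) on one axis, m^2(J/psi p) on another.
print(mass2(kaon, proton), mass2(jpsi, proton))
```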

You might be saying to yourself “Who ordered that?” and think that a hadron with five quarks had never been postulated. This is not the case. Hadrons with quark content beyond the minimum were already contemplated by Gell-Mann and Zweig in 1964, quantitatively modeled for 4 quarks by Jaffe in 1977, and extended to 5 quarks by Strottman in 1979. I urge you to go look at the articles if you haven’t before.

It appears as though a resonance has been found, and in order to be sure, a full amplitude analysis of the decay was performed. The distribution is first modeled without any such state, shown in the figures below.

Projections of the fits of the \(\Lambda_b^0\to J/\psi K^- p\) spectrum without any additional components. Black is the data, and red is the fit. From http://arxiv.org/abs/1507.03414

Try as you might, the models are unable to explain the invariant mass distribution of the \(J/\psi p\). Without going into too much jargon, they wrote down from a theoretical standpoint what type of effect a five quark particle would have on the Dalitz plot, then put this into their model. As it turns out, they were unable to successfully model the distribution without the addition of two such pentaquark states. By adding these states, the fits look much better, as shown below.

Mass projection onto the \(J/\psi p\) axis of the total fit to the Dalitz plot. Again, black is data, red is the fit. The inset image is for the kinematic range \(m(K p)>2\) GeV.
From http://arxiv.org/abs/1507.03414

The states are called the \(P_c\) states. Now, as this is a full amplitude analysis, the fit also covers all angular information. This allows for determination of the total angular momentum and parity of the states. These are defined by the quantity \(J^P\), with \(J\) being the total angular momentum and \(P\) being the parity. All values from 1/2 to 7/2 are tried for both resonances, and the best fit is found with one resonance having \(J=3/2\) and the other \(J=5/2\), each with parity opposite to the other. No concrete distinction can be made as to which state has which value.

Finally, the significance of the signal is evaluated under the assumption \(J^P=3/2^-\) and \(5/2^+\) for the lower and higher mass states; the significances are 9 and 12 standard deviations, respectively.

The masses and widths turn out to be

\(m(P_c^+(4380))=4380\pm 8\pm 29~\mathrm{MeV}\)

\(m(P_c^+(4450))=4449.8\pm 1.7\pm 2.5~\mathrm{MeV}\)

with corresponding widths

Width\((P_c^+(4380))=205\pm 18\pm 86~\mathrm{MeV}\)

Width\((P_c^+(4450))=39\pm 5\pm 19~\mathrm{MeV}\)

Finally, we’ll look at the Argand Diagrams for the two resonances.

Argand diagrams for the two \(P_c\) states.
From http://arxiv.org/abs/1507.03414

 

Now you may be saying “hold your horses, that Argand diagram on the right doesn’t look so great”, and you’re right. I’m not going to defend the plot, but only point out that the phase motion is in the correct direction, indicated by the arrows.

As pointed out on the LHCb public page, one of the next steps will be to try to understand whether the states shown are tightly bound five-quark objects or rather loosely bound meson-baryon molecules. Even before that, though, we’ll see if any of the other experiments have something to say about these states.


Starting up LHC Run 2, step by step

Ken Bloom
Thursday, June 11th, 2015

I know what you are thinking. The LHC is back in action, at the highest energies ever! Where are the results? Where are all the blog posts?

Back in action, yes, but restarting the LHC is a very measured process. For one thing, when running at the highest beam energies ever achieved, we have to be very careful about how we operate the machine, lest we inadvertently damage it with beams that are mis-steered for whatever reason. The intensity of the beams — how many particles are circulating — is being incrementally increased with successive fills of the machine. Remember that the beam is bunched — the proton beams aren’t continuous streams of protons, but collections that are just a few centimeters long, spaced out by at least 750 centimeters. The LHC started last week with only three proton bunches in each beam, only two of which were actually colliding at an interaction point. Since then, the LHC team has gone to 13 bunches per beam, and then 39 bunches per beam. Full-on operations will be more like 1380 bunches per beam. So at the moment, the beams are of very low intensity, meaning that there are not that many collisions happening, and not that much physics to do.

What’s more, the experiments also have much to do to prepare for the higher collision rates. In particular, there is the matter of “timing in” all the detectors. Information coming from each individual component of a large experiment such as CMS takes some time to reach the data acquisition system, and it’s important to understand how long that time is, and to get all of the components synchronized. If you don’t have this right, then you might not be getting the optimal information out of each component, or worse still, you could end up mixing up information from different bunch crossings, which would be disastrous. This, along with other calibration work, is an important focus during this period of low-intensity beams.

But even if all these things were working right out of the box, we’d still have a long way to go until we had some scientific results. As noted already, the beam intensities have been low, so there aren’t that many collisions to examine. There is much work to do yet in understanding the basics in a revised detector operating at a higher beam energy, such as how to identify electrons and muons once again. And even once that’s done, it will take a while to make measurements and fully vet them before they could be made public in any way.

So, be patient, everyone! The accelerator scientists and the experimenters are hard at work to bring you a great LHC run! Next week, the LHC takes a break for maintenance work, and that will be followed by a “scrubbing run”, the goal of which is to improve the vacuum in the LHC beam pipe. That will allow higher-intensity beams, and position us to take data that will get the science moving once again.


CERN Had Dark Energy All Along; Uses It To Fuel Researchers

Adam Davis
Tuesday, March 31st, 2015

I don’t usually get to spill the beans on a big discovery like this, but this time, I DO!

CERN Had Dark Energy All Along!!

That’s right. That mysterious energy making up ~68% of the universe was being used all along at CERN! Being based at CERN now, I’ve had a first hand glimpse into the dark underside of Dark Energy. It all starts at the Crafted Refilling of Empty Mugs Area (CREMA), pictured below.

One CREMA station at CERN

 

Researchers and personnel seem to stumble up to these stations at almost all hours of the day, looking very dreary and dazed. They place a single cup below the spouts, and out comes a dark and eerie looking substance, which is then consumed. Some add a bit of milk for flavor, but all seem perkier and refreshed after consumption. Then they disappear from whence they came. These CREMA stations seem to be everywhere, from control rooms to offices, and are often found with groups of people huddled around them. In fact, they seem to exert a force on all who use them, keeping them in stable orbits about the stations.

In order to find out a little bit more about this mysterious substance and its dispersion, I asked a graduating student, who wished to remain unnamed, a little bit about their experiences:

Q. How much of this dark stuff do you consume on a daily basis?

A. At least one cup in the morning to fuel up, I don’t think I could manage to get to lunchtime without that one. Then multiple other cups distributed over the day, depending on the workload. It always feels like they help my thinking.

Q. Do you know where it comes from?

A. We have a machine in our office which takes capsules. I’m not 100% sure where those capsules are coming from, but they seem to restock automatically, so no one ever asked.

Q. Have you been hiding this from the world on purpose?

A. Well our stock is important to our group, if we would just share it with everyone around we could run out. And no one of us can make it through the day without. We tried alternatives, but none are so effective.

Q. Do you remember the first time you tried it?

A. Yes, they hooked me on it in university. From then on nothing worked without!

Q. Where does CERN get so much of it?

A. I never thought about this question. I think I’m just happy that there is enough for everyone here, and physicist need quite a lot of it to work.

In order to gauge just how much of this Dark Energy is being consumed, I studied the flux of people from the cafeteria as a function of time with cups of Dark Energy. I’ve compiled the results into the Dark Energy Consumption As Flux (DECAF) plot below.

Dark Energy Consumption as Flux plot. Taken March 31, 2015. Time is given in 24h time. Errors are statistical.

 

As the DECAF plot shows, there is a large spike in consumption, particularly after lunch. There is a clear peak at times after 12:20 and before 13:10. Whether there is an even larger peak hiding above 13:10 is not known, as the study stopped due to my advisor asking “shouldn’t you be doing actual work?”

There is an irreducible background of Light Energy in the cups used for Dark Energy, particularly of the herbal variety. Fortunately, there is often a dangly tag hanging off of the cup  to indicate to others that they are not using the precious Dark Energy supply, and provide a clear signal for this study to eliminate the background.

While illuminating, this study still does not uncover the exact nature of Dark Energy, though it is clear that it is fueling research here and beyond.


Ramping up to Run 2

Ken Bloom
Thursday, March 19th, 2015

When I have taught introductory electricity and magnetism for engineers and physics majors at the University of Nebraska-Lincoln, I have used a textbook by Young and Freedman. (Wow, look at the price of that book! But that’s a topic for another day.) The first page of Chapter 28, “Sources of Magnetic Field,” features this photo:

[Chapter-opening photo: the cryostat of the CMS solenoid magnet]

It shows the cryostat that contains the solenoid magnet for the Compact Muon Solenoid experiment. Yes, “solenoid” is part of the experiment’s name, as it is a key element in the design of the detector. There is no other magnet like it in the world. It can produce a 4 Tesla magnetic field, 100,000 times greater than that of the earth. (We actually run at 3.8 Tesla.) Charged particles that move through a magnetic field take curved paths, and the stronger the field, the stronger the curvature. The more the path curves, the more accurately we can measure it, and thus the more accurately we can measure the momentum of the particle.
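
(A back-of-the-envelope number of my own, not from the textbook: a singly charged particle with momentum \(p\) in GeV/c moving perpendicular to a field \(B\) in tesla follows a circle of radius \(r\) in metres with \(p \approx 0.3\,B\,r\).  So a 10 GeV muon in the 3.8 T CMS field curves with a radius of roughly \(10/(0.3 \times 3.8) \approx 9\) metres.)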

The magnet is superconducting; it is kept inside a cryostat that is full of liquid helium. With a diameter of seven meters, it is the largest superconducting magnet ever built. When in its superconducting state, the magnet wire carries more than 18,000 amperes of current, and the energy stored is about 2.3 gigajoules, enough energy to melt 18 tons of gold. Should the temperature inadvertently rise and the magnet become normal conducting, all of that energy needs to go somewhere; there are some impressively large copper conduits that can carry the current to the surface and send it safely to ground. (Thanks to the CMS web pages for some of these fun facts.)
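
(Another back-of-the-envelope check of my own, not from the CMS pages: the stored energy of a solenoid is \(E = \frac{1}{2} L I^2\), so \(E \approx 2.3\) GJ at \(I \approx 18{,}000\) A corresponds to an inductance of roughly \(2 \times 2.3 \times 10^9 / (1.8 \times 10^4)^2 \approx 14\) henries.)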

With the start of the LHC run just weeks away, CMS has turned the magnet back on by slowly ramping up the current. Here’s what that looked like today:

[Plot: the CMS magnet current ramping up over the course of the day]

You can see that they took a break for lunch! It is only the second time since the shutdown started two years ago that the magnet has been ramped back up, and now we’re pretty much going to keep it on for at least the rest of the year. From the experiment’s perspective, the long shutdown is now over, and the run is beginning. CMS is now prepared to start recording cosmic rays in this configuration, as a way of exercising the detector and using the observed muons to improve our knowledge of the alignment of detector components. This is a very important milestone for the experiment as we prepare for operating the LHC at the highest collision energies ever achieved in the laboratory!


Video contest: Rock the LHC!

Ken Bloom
Tuesday, March 17th, 2015

Do you think the Large Hadron Collider rocks? I sure do, and as the collider rocks back to life in the coming weeks (more on that soon), you can celebrate by entering the Rock the LHC video contest. It’s simple: you make a short video about why you are excited about research at the LHC, and submit it to a panel of physicists and communication experts. The producer of the best video will win an all-expenses paid trip for two to Fermilab in Batavia, IL, the premier particle physics laboratory in the United States, for a VIP tour. What a great way to celebrate the restart of the world’s most powerful particle collider!

This contest is funded by the University of Notre Dame, and please note the rules — you must be over 18 and a legal U.S. resident currently residing in the 50 states or the District of Columbia to enter. The deadline for entries is May 31.

As we get ready for the collider run of a lifetime, let’s see how creative you can be about the exciting science that is ahead of us!



Behind the scenes of our “Big Bang Theory” post

Ken Bloom
Tuesday, February 10th, 2015
Well, that was fun!

At 8 PM ET on February 5, 2015, Quantum Diaries ran a post that was tied to “The Troll Manifestation”, an episode of “The Big Bang Theory” (TBBT) that was being aired at exactly that time.  This was generated in partnership with the show’s writers, staff and advisers. What happens when you couple a niche-interest website to one of the most popular TV shows in the United States? The QD bloggers and support staff had a great time getting ready for this synergistic event and tracking what happened next.  Here’s the story behind the story.

I’ve mentioned previously, in my largely unheralded essay about the coffee culture at CERN, that I have known David Saltzberg, UCLA faculty member and science adviser to TBBT, for a very long time, since we were both students in the CDF group at The University of Chicago.  On January 14, David contacted me (and fellow QD blogger Michael DuVernois) to say that Quantum Diaries was going to be mentioned in an episode of the show that was going to be taped in the coming week.  David wanted to know if I could sign a release form allowing them to use the name of the blog.

I couldn’t — the blog is not mine, but is operated by the InterAction Collaboration, which is an effort of the communications organizations of the world’s particle physics laboratories.  (They signed the release form.)  But I did come up with an idea.  David had said that the show would refer to a Quantum Diaries blog post about a paper that Leonard and Sheldon had written.  Why not actually write such a post and put it up on the site?  A real blog post about a fake paper by fake scientists.  David was intrigued; he discussed it with the TBBT producers, and they liked the idea too.  The show was to air on February 5.  Game on!

David shared the shooting script with me, and explained that this was one of the rare TBBT episodes in which he didn’t just add in some science, but also had an impact on the plot.  He had described his own experience of talking about something with a theorist colleague, and getting the response, “That’s an interesting idea — we should write a paper about it together!”  I myself wouldn’t know where to get started in that situation.  This gave me the idea for how to write about the episode.  The script had enough information about Leonard and Sheldon’s paper for me to say something intelligible about it.  The fun for me in writing the post was in figuring out how to point to the show without giving it all away too quickly.  I ran my text by David, who passed it on to the show’s producers, and everyone enjoyed it.  We knew there was some possibility that the show’s social media team would promote the QD site through their channels; their Facebook page has 33 million likes and their Twitter account has 3.1 million followers.

Meanwhile, the Quantum Diaries team sprung into action.  Kelen Tuttle, the QD webmaster, told the other bloggers for the site about our opportunity to gain national recognition for the blog, and encouraged everyone to generate some exciting new content.  Regular QD readers might have noticed all the bloggers becoming very voluble in the past week!  Kevin Munday and his team at Xeno Media prepared the site for the possible onslaught of visitors  — remember, twenty million people watch TBBT each week! — by migrating the site to the CloudFlare content delivery network, with 30 data centers worldwide, and protecting the site against possible security issues.

We all crossed our fingers for Thursday night.  I spent Thursday at Fermilab, and was flying back to Lincoln in the afternoon, scheduled to land at 6:43 PM, a few minutes before the 7 PM air time.  When I got home, I started keeping an eye on the computer.  The blog post was up, but was TBBT going to say anything about it?  Alas, their Twitter feed was quiet during the show.  (No, I didn’t watch — I have to admit that we watch so little television that we couldn’t figure out which channel it might be on!)

All of us involved were a bit disappointed that evening.  But David took up the case again with the CBS interactive team the next day, and was told that they’d put out a tweet as long as we changed our blog post to link to the archive of the show.  We did that, and then at 12:45 Central Time, we got the shout-out that we were hoping for:

So what happens when a TV audience of around 20 million people hear a website (which may or may not be real) mentioned in a show?  Or when 3.1 million people who are fans of a TV show receive a tweet pointing to a blog post?  The Quantum Diaries traffic metrics tell the tale. Here is a plot of the number of visitors to the site during the past four weeks, including the February 5 air date and the February 6 tweet date:
[Plot: daily visitors to Quantum Diaries over the past four weeks]
When the blog was mentioned on the air, there was a definite spike in activity, and an even bigger spike on the day after, when the tweet went out. Traffic on the site was up by a factor of four thanks to TBBT!

However, the plot doesn’t show the absolute scale. On February 6, the site had about 4600 visitors, compared to a typical level of 800-1000 visitors. This means that only 0.1% of people who saw the TBBT tweet actually went and clicked on the link that took them to QD. This is nowhere near the level of activity we saw when the Higgs boson was discovered. TBBT may be a great TV show, but it’s no fundamental scientific discovery.

However, the story did have some pretty strong legs in Nebraska.  My employer, the University of Nebraska-Lincoln, graciously wrote a story about my involvement in the show and promoted it pretty heavily through social media.  This led to a couple of appearances on some news programs that enjoy making local links to national stories (if you could call this a national story).  I found it a bit surreal and was reminded that I need to get a haircut and clean my desk.

Thank you to David Saltzberg for making this possible, and to the TBBT producers and writers who were supportive, and of course to all of my colleagues at Quantum Diaries who did a lot of writing and technical preparation last week.  (A special shout-out to Kelen Tuttle, who left QD for a new position at Invitae this week; at least we sent her off with a Big Bang!) And if you have discovered this blog because of “The Troll Manifestation”, I hope you stay for a while!  These are great times for particle physics — the Large Hadron Collider starts up again this year, we’re planning an exciting international program of neutrino physics that will be hosted in the United States, and we’re scanning the skies for the secrets of cosmology.  We particle physicists are excited about what we do and want to share some of our passion with you.  And besides, now we know that Stephen Hawking reads Quantum Diaries — shouldn’t you read it too?
