
Archive for February, 2012

Greetings, from Thesis Limbo

Tuesday, February 28th, 2012

Hello, hello, hello! After a hiatus of unexpected duration, I’m back — with stories to share, topics to discuss, and photos to tag. 🙂

When last I wrote, I was preparing for a big life transition: from research, analysis approval, and wine at CERN to thesis-writing, job-hunting, and micro-brews in NYC. I felt ready to move on, but it proved… more difficult than anticipated. Of course. I spent my first six weeks as a guest on my very generous friend’s couch, working 14-hour days to fix a stubbornly problematic final result that wasn’t ready for approval or publication. In the process I learned quite a bit about limit-setting and statistical inference (totally thrilling topics that I’ll regale you with next time, I promise), but before long I was at wits’ end.

In all seriousness, folks, I considered dropping out of my PhD program to join the disillusioned ranks of “all but dissertation” grad students seeking work in a tough job market. Scary stuff! From what I’m told, an MS in Physics is about as useful as a BA in English… (Just kidding, English majors!) Fortunately, we were able to work out a solid and defensible result, I took a much-needed break for the holidays, moved off the couch and into my own apartment, and, pursuant to a number of New Year’s resolutions, began to write my thesis. Crisis averted.

An office with a view...

Of course, writing a thesis isn’t easy, either. I solicited a lot of advice, but nobody thought to mention a useful and obvious starting point: an outline. It helps to have a plan! Especially one divided up into convenient and manageable chunks. After that, all that remains is filling in the blanks… which I’ve been doing for the past two months. 🙂

That’s not all I’ve been doing, though. The analysis I’ve worked on is a day away from its final ATLAS approval, but that’s after a seemingly endless series of comments, revisions, and approval meetings. (Anna recently wrote about LHCb’s publication procedure — yes, ATLAS requires a similar number of hoops to jump through. UGH.) My advisor has obliged me to re-do a study I did three years ago and stuff it in an appendix to my thesis, do a couple of future sensitivity studies for the main analysis itself, continue giving talks, reading papers, and attending meetings, present my results at an upcoming APS conference, and even change the title of my thesis. Dammit! What was wrong with “Leptoquarks: the Particles That Go Both Ways”? At the same time, I’m dealing with graduation logistics, updating my CV, thinking about (and training for!) a job and career post-PhD, and generally trying to become a real, non-student adult.

This is Thesis Limbo, less favorably known as Dissertation Hell. It’s a strange point in one’s life. But I’m working through it.

I’m lucky to have many friends around to keep me sane. Here are two of particular relevance: Daisy, who coordinates this ragtag group of physics bloggers, and Flip, whose ongoing saga of Electroweak Symmetry Breaking has enthralled us all.

Daisy at the Audubon Society, with Joe Physicist

Flip in my apartment, borrowing my books

TOLD YOU I WOULD POST THESE!!!

– Burton 😛


Giovanni Schiaparelli (1835 – 1910) is mainly remembered for his discovery of “canali” on Mars. What a fate, to be remembered only for discovering something that does not exist. I suppose it could be worse; he could be remembered only as the uncle of the fashion designer Elsa Schiaparelli (1890 – 1973). But he was not alone in seeing canals on Mars. The first recorded instance of the use of the word “canali” was by Angelo Secchi (1818 – 1878) in 1858. The canals were also seen by William Pickering (1858 – 1938) and most famously by Percival Lowell (1855 – 1916). That’s right, the very Lowell after whom the Lowell Observatory on Mars Hill Road in Arizona is named. Unfortunately, after about 1910, better telescopes failed to find them and they faded away. Either that or Marvin the Martian filled them in while fleeing from Bugs Bunny. However, they did provide part of the backdrop for H. G. Wells’s The War of the Worlds. But it is interesting that the canals were observed by more than one person before being shown to be optical illusions.

Another famous illusion (delusion?) was the n-ray, discovered by Professor Blondlot (1844 – 1930) in 1903. These remarkable rays, named after his birthplace, Nancy, France, could be refracted by aluminum prisms to show spectral lines. One of their more amazing properties was that they were only observed in France, not England or Germany. About 120 scientists in 300 papers claimed to have observed them (so much for the infallibility of peer review). But then Prof. Robert Wood (1868 – 1955), at the instigation of the magazine Nature, visited the laboratory. By judiciously and surreptitiously removing and reinserting the aluminum prism, he was able to show that the effect was physiological, not physical. And that was the end of n-rays, and also of poor Prof. Blondlot’s reputation.

Probably the most infamous example of nonsense masquerading as science is Homo piltdownensis, otherwise known as the Piltdown man. This was the English answer to the Neanderthal man and the Cro-Magnon man discovered on the continent. A sculptured elephant bone, found nearby, was even jokingly referred to as a cricket bat. Seems appropriate. While there was some scepticism of the find, the powers that be declared it a breakthrough, and it was only forty years later that someone had the brilliant idea that it might be a fake. Once the signs of faking were looked for, they were easily found. What we see here is an unholy combination of fraud, delusion, and people latching onto something that confirmed their preconceived ideas.

These examples are not unique. Most exciting new results are wrong[1]: polywater, the 17 keV neutrino, cold fusion, superheavy element 118, pentaquarks, and the science on almost any evening newscast. Cancer has been cured so often it is a surprise that any cancer cells are left. So, why so many exciting wrong results? First, to be exciting means, almost by definition, that the results have a low prior probability of being correct. The discovery of a slug eating my garden plants is not exciting; annoying, but not exciting. It is what we expect to happen. But a triceratops in my garden, that would be exciting and almost certainly specious (pink elephants are another matter). It is the unexpected result that is exciting and gets hyped. One could even say over-hyped. There is pressure to get exciting results out quickly and widely distributed so you get the credit; a pressure not to check as carefully as one should; a pressure to ensure priority by not checking with one’s peers.
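
One way to make the “low prior probability” point quantitative is with a toy Bayesian estimate. The numbers below (a one-in-a-thousand prior, an experiment that catches 80% of real effects and produces a spurious “detection” 5% of the time) are invented for illustration, not taken from any real study:

```python
# Toy Bayesian estimate: how often is an "exciting" positive result actually true?
# All numbers here are illustrative assumptions, not measurements.

prior = 1e-3        # prior probability that a surprising claim is true
sensitivity = 0.80  # chance an experiment detects a real effect
false_pos = 0.05    # chance an experiment "detects" an effect that is not there

# Bayes' theorem: P(claim true | positive result)
posterior = (prior * sensitivity) / (prior * sensitivity + (1 - prior) * false_pos)

print(f"P(claim true | positive result) = {posterior:.3f}")  # about 0.016
```

With those made-up numbers, fewer than 2% of “exciting” positive results would be real, which is the sense in which most exciting new results are wrong.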

Couple the low prior probability and the desire for fame with the ubiquity of error and you have the backdrop to most exciting new results being wrong. Not all exciting new results are wrong, of course. Take, for example, the discovery of high-temperature superconductors (high = liquid-nitrogen temperatures). This had crackpot written all over it. The highest temperature recorded earlier was 30 K. But with high-temperature superconductors, that jumped to 90 K in 1986 and then shortly afterwards to 127 K. Surely something was wrong, but it wasn’t, and a Nobel Prize was awarded in 1987. The speed at which the Nobel Prize was awarded was also the subject of some disbelief. Why was the result accepted so quickly? Reproducibility. The results were made public and quickly and widely reproduced. It was not just the cronies of the discoverers who could reproduce them.

The lesson here is to distrust every exciting new science result: canals on Mars, n-rays, high-temperature superconductors, faster-than-light propagation of neutrinos (on which, coincidentally, some interesting new information has just been released), the Higgs boson, and so on. Wait until they have been independently confirmed and then wait until they have been independently confirmed again. There is a pattern with these results that turn out to be wrong. In almost every example given above, the first attempts at reproducing the wrong results succeeded. People are in a hurry to get on the bandwagon; they want to be first to reproduce the results. But after the initial excitement fades, sober second thought kicks in. People have time to think about how to do the experiment better, time to be more careful. In the end, it is this third generation of experiments that finally tells the tale. Yeah, I know I should not have been sucked in by pentaquarks, but they agreed with my preconceived ideas and the second-generation experiments did see them in almost the right place. Damn. Oh well, I did get a widely cited paper out of it.

Once burnt, twice shy. So scientists become very leery of the next big thing. Here again, science is different from the law. In the law, there is a presumption of innocence until proven guilty. In other words, the prosecution must prove guilt; the suspect does not need to prove innocence. In science, the burden of proof is the other way around. The suspect—in this case, the exciting new result—must prove innocence. It is the duty of the proponents of the exciting new result to demonstrate the validity or usefulness of the new result. It is the duty of their peers to look for holes, because as the above examples indicate, they are frequently there.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] For more information on the examples, Google is your friend.


This week the OPERA experiment released a statement about their famous “faster than light” neutrino measurement. In September scientists announced that they had measured the speed of neutrinos traveling from CERN to Gran Sasso and they found that they arrived slightly sooner than they should do according to special relativity. There was a plethora of scientific papers, all kinds of rumors and speculation, and most physicists simply refused to believe that anything had traveled faster than light. After months of diligent study, OPERA announced that they may have tracked down two sources of experimental error, and they are doing their best to investigate the situation.

But until we get the results of OPERA’s proposed studies we can’t say for sure whether their measurement is right or wrong. Suppose that they reduce the lead time of the neutrinos from 60 ns to 40 ns. That would still be a problem for special relativity! So let’s investigate how we can get faster-than-light neutrinos in special relativity, before we no longer have the luxury of an exciting result to play with.
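
To get a feel for the size of the claimed effect, here is a quick back-of-the-envelope calculation, assuming the roughly 730 km CERN–Gran Sasso baseline and the roughly 60 ns early arrival quoted above; the arithmetic is simply the time gained divided by the light travel time:

```python
# Back-of-the-envelope size of the OPERA anomaly.
c = 299_792_458.0   # speed of light, m/s
baseline = 730e3    # CERN -> Gran Sasso distance, roughly 730 km
early = 60e-9       # reported early arrival, roughly 60 ns

t_light = baseline / c        # light travel time, about 2.4 ms
excess = early / t_light      # fractional speed excess (v - c)/c

print(f"light travel time = {t_light * 1e3:.2f} ms")
print(f"(v - c)/c         = {excess:.1e}")   # a few parts in 100,000
```

So even the headline-grabbing result corresponds to neutrinos beating light by only a few parts in a hundred thousand.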

The OPERA detector (OPERA Collaboration)

Special relativity was developed over a hundred years ago to describe how electromagnetic objects act. The electromagnetic interaction is transferred with electromagnetic waves and these waves were known to travel extremely quickly, and they seemed to travel at the same speed with respect to all objects, no matter how those objects were moving. What Einstein did was to say that the constancy of the speed of light was a fundamental law of nature. Taking this to its logical conclusion meant that the fastest speed possible was the speed of light. We can call the fastest possible speed \(s\) and the speed of light \(c\). Einstein then says \(c=s\). And that’s how things stood for over a century. But since 1905 we’ve discovered a whole range of new particles that could cast doubt on this conclusion.

When we introduce quantum mechanics to our model of the universe we have to take interference of different states into account. This means that if more than one interaction can explain a phenomenon then we need to sum the amplitudes for all these interactions, and this means we can expect some strange effects. A famous example of this is the neutral kaon system. The two lightest neutral kaons are called \(K^0\) and \(\bar{K}^0\) and the quark contents of these mesons are \(d\bar{s}\) and \(s\bar{d}\) respectively. Now from the “outside” these mesons look the same as each other. They’ve got the same mass, they decay to the same particles and they’re made in equal numbers in high energy processes. Since they look identical they interfere with each other, and this gives us clues about why we have more matter than antimatter in the universe.
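
For the record, the states that actually propagate (and interfere) are not the flavour states themselves but superpositions of them; in a common phase convention, and neglecting the small CP violation, they are

\[
|K_{1}\rangle = \tfrac{1}{\sqrt{2}}\left(|K^0\rangle + |\bar{K}^0\rangle\right), \qquad
|K_{2}\rangle = \tfrac{1}{\sqrt{2}}\left(|K^0\rangle - |\bar{K}^0\rangle\right),
\]

which correspond (up to tiny CP-violating admixtures) to the short- and long-lived kaons observed in the detector.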

Since we see interference all over the place in the Standard Model it makes sense to ask if we see interference with the photon. It turns out that we do! The shape of the Z mass peak is slightly asymmetric because of interference between virtual Z bosons and virtual photons. There are plenty of other particles that the photon can interfere with, including the \(J/\psi\) meson and the \(\rho\) meson. In fact, any neutral vector meson with no net flavor will do. Einstein didn’t know about any of these particles, and even if he did he never really accepted the conclusions of quantum mechanics, so it’s no surprise that his theory would require that the speed of light is the fastest speed (that is, \(c=s\)). But if the photon interferes with other particles then it’s possible that the speed of light is slightly lower than the fastest possible speed (\(c<s\)). Admittedly, the difference in speed would have to be very small!

In terms of quantum mechanics we would have something like this:
\[
|\mathrm{light}\rangle_{\mathrm{Einstein}} = |\gamma\rangle
\]
\[
|\mathrm{light}\rangle_{\mathrm{reality}} = a_\gamma |\gamma\rangle + a_{J/\psi} |J/\psi\rangle + a_Z |Z\rangle + \ldots
\]

As you can see there are a lot of terms in this second equation! The contributions would be tiny because of the large difference in mass between the massive particles and the photon. Even so, it could be enough to make sure that the speed of light is ever so slightly slower than the fastest possible speed.
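
One heuristic way to see why those extra coefficients must be tiny: a virtual particle of mass \(m\) entering at essentially zero momentum transfer brings a propagator factor

\[
\frac{1}{q^2 - m^2} \;\longrightarrow\; -\frac{1}{m^2} \quad (q^2 \to 0),
\]

so the heavier the state the photon mixes with, the more suppressed its coefficient (\(a_Z\), \(a_{J/\psi}\), and so on). This is only a hand-waving estimate, not a calculation of the actual mixing.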

At this point we need to make a few remarks about what this small change in speed would mean for experiments. It would not change our measurements of the speed of light, since the speed of light is still extremely fast and no experiment has ever shown a deviation from it. Unless somebody comes up with an ingenious experiment to show that the difference between the speed of light and the fastest possible speed is non-zero, we would probably never notice any variation in the speed of light. It’s a bit unfortunate that since 1983 it’s been technically impossible to measure the speed of light, since it is now used in the definition of our unit of length.

Now that we know photons can interfere with other particles, it makes sense to ask the same question about neutrinos. Do they interfere with anything? Yes, they can interfere, so of course they do! They mix with neutrinos of other flavors, but beyond that there are not many options. They can interfere with a W boson and a lepton, but there is a huge penalty to pay in the mass difference. The wavefunction looks something like this:
\[
|\nu_e(t)\rangle = a_{\nu_e}(t)\,|\nu_e\rangle + a_{\nu_\mu}(t)\,|\nu_\mu\rangle + a_{\nu_\tau}(t)\,|\nu_\tau\rangle + a_{We}(t)\,|W e\rangle
\]
(I’ve had to add a time dependence due to neutrino mixing, but it’s essentially no more complicated than what we had for the photon.)

That means that the photon could get slowed down slightly by its interference with other particles (including particles in the vacuum), and that neutrinos could get slowed down by an even smaller amount by their interference terms with other particles. And that way we could get neutrinos traveling faster than the speed of light while special relativity remains intact. (In this description of the universe we can do what used to seem impossible: we can boost into the rest frame of a photon. What would it mean to do that? Well, I suppose it would mean that in this frame the photon would have to be an off-shell massive particle at rest.)
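
To restate the idea in one line: write the effective photon and neutrino speeds as small shifts below the true limiting speed \(s\),

\[
v_\gamma = s\,(1-\epsilon_\gamma) = c, \qquad v_\nu = s\,(1-\epsilon_\nu),
\]

so if the photon happens to be slowed more than the neutrino, \(\epsilon_\gamma > \epsilon_\nu > 0\), then \(v_\nu > c\) even though both speeds stay below \(s\) and special relativity is untouched.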

The supernova SN 1987A, a rich source of slower-than-light electron neutrinos (Hubble, ESA/NASA)

Now I’ll sit back and see people smarter than I am pick holes in the argument. That’s okay, this isn’t intended to be a serious post, just a bit of fun! There are probably predictions of all kinds of weird effects such as shock waves and time travel that have never been observed. And there are plenty of bits I’ve missed out such as the muon neutrinos traveling faster than electron neutrinos. It’s not often we get an excuse to exercise our analytic muscles on ideas like this though, so I think we should make the most of it and enjoy playing about with relativity.


New Information on “FTL Neutrinos”

Thursday, February 23rd, 2012

We have new information, but my position on the OPERA experiment’s FTL neutrino measurement hasn’t changed.

First, here’s what we know. Members of the OPERA experiment have been working diligently to improve their measurement, better understand their uncertainties, and look for errors. Yesterday, the discovery of some possible problems was leaked anonymously (and vaguely) in Science Insider. This compelled OPERA to release a statement clarifying the status of their work: there are two possible problems, which would have opposite effects on the results. (Nature News has a good summary here.)

The important thing to learn here, I think, is that the work is actually ongoing. The problems need further study, and their overall impact needs to be assessed. New measurements will be performed in May. What we’ve gotten is a status update whose timing was forced by the initial news article, not a definitive repudiation of the measurement.

Of course, we already knew with incredible confidence that the OPERA result is wrong. I wrote about that last October, but I also wrote that we still need a better understanding of the experiment. Good scientific work can’t be dismissed because we think it must have a mistake somewhere. I’m standing by that position: it’s worth waiting for the final analysis.


Physicists did a lot of planning for data analysis before the LHC ever ran, and we’ve put together a huge number of analyses since it started. We’ve already looked for most of the things we’ll ever look for. Of course, many of the things we’ve looked for haven’t shown up yet; in fact, in many cases, including the Higgs, we didn’t expect them to show up yet! We’ll have to repeat the analysis on more data. But that’s got to be easier than it was to collect and analyze the data the first time, right? Well, not necessarily. We always hope it will be easier the second or third time around, but the truth is that updating an analysis is a lot more complicated than just putting more numbers into a spreadsheet.

For starters, every time we add new data, it was collected under different conditions. For example, going from 2011 to 2012, the LHC beam energy will be increasing. The number of collisions per crossing will be larger, and that means the triggers we use to collect our data are changing too. All our calculations of what the pileup on top of each interesting collision looks like will change. Some of our detectors might work better as we fix glitches, or they might work worse as they are damaged in the course of running. All these details affect the calculations for the analysis and the optimal way to put the data together.

But even if we were running on completely stable conditions, there are other reasons an analysis has to be updated as you collect more data. When you have more events to look at, you might be interested in limiting the events you look at to those you understand best. (In other words, if an analysis was previously limited by statistical uncertainties, as those shrink, you want to get rid of your largest systematic uncertainties.) To get all the power out of the new data you’ve got, you might have to study new classes of events, or get a better understanding of questions where your understanding was “good enough.”
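
A sketch of why the balance shifts as the dataset grows (the numbers are invented for illustration, not taken from any real analysis): the relative statistical uncertainty falls roughly like one over the square root of the number of events, while a systematic uncertainty typically stays put until you do the extra work to reduce it.

```python
import math

# Illustrative only: how the stat/syst balance changes with more data.
# The 5% systematic is an invented number, not from any real LHC analysis.

def relative_uncertainty(n_events, syst_frac=0.05):
    """Return (total, statistical, systematic) relative uncertainties."""
    stat = 1.0 / math.sqrt(n_events)
    return math.hypot(stat, syst_frac), stat, syst_frac

for n in (100, 1_000, 10_000, 100_000):
    total, stat, syst = relative_uncertainty(n)
    print(f"N = {n:>7}: stat = {stat:.3f}, syst = {syst:.3f}, total = {total:.3f}")

# Once N is large the fixed systematic dominates, so simply adding data stops
# helping until the analysis itself is improved.
```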

So analyzing LHC data is really an iterative process. Collecting more data is always presenting new challenges and new opportunities that require understanding things better than before. No analysis is ever the same twice.


Chamonix 2012

Sunday, February 19th, 2012

At the start of each calendar year, the CERN management holds a workshop in Chamonix to discuss the LHC run plan for the coming year and beyond. This year’s meeting was held two weeks ago, and this past week CERN announced the outcomes. Now, after last year’s Chamonix, the plan came out differently than many of us had been expecting. But this year’s workshop results were consistent with this year’s rumors.

There is a clear physics goal for this year: CMS and ATLAS should each individually be in a position to discover the standard-model Higgs boson, if it exists. There are two ways that the LHC will try to make this possible. The first is to deliver as many collisions (i.e. as much integrated luminosity) as the LHC can manage. The integrated luminosity target for this year is fifteen inverse femtobarns for each experiment, three times as much as was delivered last year. It will still be a challenge to discover the Higgs with that much data; the experiments will have to run efficiently and be as clever as ever. But it is possible. CERN is also prepared to extend the LHC run if necessary to meet this luminosity target. This is important: the LHC will enter a long shutdown after 2012, so this year is our last shot for a while at making a discovery, of a Higgs or anything else. We should remember that last year’s target was a mere one inverse femtobarn, yet we got five times that. Can we hope that the LHC will outperform expectations again this year?
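
As a very crude rule of thumb (assuming a background-dominated counting experiment and ignoring the extra gain from the higher beam energy), the expected significance of a search grows like the square root of the integrated luminosity, so tripling the dataset buys roughly a factor of 1.7 in sensitivity:

```python
import math

# Crude sensitivity scaling for a background-dominated search:
# significance ~ S / sqrt(B), and both S and B grow linearly with luminosity,
# so significance grows roughly like sqrt(L).  Illustrative numbers only.
lumi_2011 = 5.0    # inverse femtobarns delivered in 2011
lumi_2012 = 15.0   # inverse femtobarns targeted for 2012

gain = math.sqrt(lumi_2012 / lumi_2011)
print(f"sensitivity gain from luminosity alone ~ {gain:.2f}x")   # about 1.7x
```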

The second way to improve our chances of discovery is to raise the energy of the beams. The production rate for the Higgs and many other hypothetical particles increases with beam energy. Thus the LHC will run with 4 TeV per beam rather than the 3.5 TeV of last year. The operational experience of the past two years gives the LHC physicists confidence that this beam energy will be safe for the machine. This means that the LHC will probably never run at 3.5 TeV/beam again; the data we have recorded will now be unique in human history. It means that we’ll have to think about how to juggle resources so that people can look at both the old and the new data, and how to properly archive it for future use. Also, all sorts of measurements that we have made before at the LHC become new again: we can ask how the production rate for phenomenon X changes as the beam energy goes from 3.5 TeV to 4 TeV.

One change that the experiments had hoped for, but which will not come to pass, was a reduction in the time interval between collisions. It was 50 ns during 2011, and it will stay that way. That means that we are now expecting an average of 30 simultaneous proton-proton collisions per beam crossing. Had the bunch spacing been reduced to 25 ns, we could have hoped to record a similar amount of data, but with much simpler events. However, the LHC experts weren’t sure that they could provide as much integrated luminosity at 25 ns spacing as at 50 ns; it is a very different way to operate the machine. Integrating as much data as possible is the priority for the year, so 50 ns it is. The experiments have shown that they can handle the complex events, although it would be a stretch to call it a pleasant experience.
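
To see why 25 ns spacing would have given much simpler events at the same luminosity, note that the number of proton-proton interactions per crossing is roughly Poisson-distributed, with a mean proportional to the time between crossings. A minimal sketch, with the means of 30 and 15 chosen to match the numbers above:

```python
from scipy.stats import poisson

# Pileup sketch: same instantaneous luminosity, different bunch spacing.
# The mean number of interactions per crossing scales with the bunch spacing.
for label, mu in (("50 ns", 30), ("25 ns", 15)):
    p_busy = poisson.sf(40, mu)   # chance of a crossing with more than 40 interactions
    print(f"{label}: mean pileup = {mu}, P(>40 interactions per crossing) = {p_busy:.1e}")
```

At 50 ns a few percent of crossings are extremely busy, while at 25 ns (the same data spread over twice as many crossings) such events essentially never happen.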

Finally, the plan for the longer term was sketched out. The LHC will enter a “long technical stop” (as CERN likes to put it) at the end of the year, which will go on for twenty months. Given that we’ll need some time afterwards to re-commission the accelerator and the detectors, it’s probably two years from “physics to physics.” This will give the machine and the experiments time to implement some needed and useful upgrades and repairs. On the machine side, this includes the preparations to run the LHC at much closer to the design energy. That is 7 TeV per beam, although it is sounding like 6.5 TeV/beam is much more likely to be the safe operating point. At this point, we can only guess what the particle physics landscape will look like, but a higher-energy LHC will allow us to explore it thoroughly.

That’s the plan — let’s get ready to re-start the search for new physics in under two months!


As discussed in a recent blog post in Scientific American, the Tevatron experiments may have a few last interesting things to say about the Higgs boson at the March meetings.

At the American Association for the Advancement of Science meeting, Collider Detector at Fermilab (CDF) spokesperson Rob Roser said that we can expect “something interesting” from the Tevatron in the coming month.

Now normally I don’t get into the excitement of “hints” of the Higgs, because these days it seems you can’t sneeze without causing a “3-sigma” deviation in your data. However, if we are to take the latest results from the LHC seriously, and there is an intriguing deviation around 125 GeV in the Higgs search, then the Tevatron data might be very well suited to seeing evidence for the Higgs.
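
For context on why a “3-sigma” bump no longer causes much excitement: a 3-sigma fluctuation is not that unlikely once you search in many places. A rough sketch, where the 100 “independent search bins” are a made-up number used only to illustrate the look-elsewhere effect:

```python
from scipy.stats import norm

# One-sided p-values for Gaussian significances.
p3 = norm.sf(3)   # about 1.3e-3
p5 = norm.sf(5)   # about 2.9e-7

# If you look in many (assumed independent) mass bins, the chance that
# *somewhere* fluctuates up by 3 sigma is not small.
n_bins = 100      # made-up number of independent search regions
p_somewhere = 1 - (1 - p3) ** n_bins

print(f"p-value for 3 sigma: {p3:.1e}")
print(f"p-value for 5 sigma: {p5:.1e}")
print(f"P(at least one 3-sigma bump in {n_bins} bins) = {p_somewhere:.2f}")  # about 0.13
```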

ATLAS results for the search for the Higgs boson, with an intriguing "peak" around 125 GeV

For me, this goes back to a debate from almost a year ago over whether or not we should extend the run of the Tevatron. One of the more compelling arguments made then describes exactly the scenario that is now playing out, and it goes something like this…

“If the Higgs is low mass, as other experimental results suggest, then the Tevatron is well positioned to be sensitive to the Higgs mass and can provide a completely independent discovery of this elusive particle, aid in measuring many of its properties, and unlock many of the mysteries of the universe and the origins of mass.”

However, this didn’t compel enough people to make it happen, so we are left with an opportunity for the Tevatron to contribute to the Higgs search at the level of, at most, a 3-sigma confirmation, due to its limited data sample.

Now, of course, this whole discussion is predicated on the Higgs living in a low mass range (if it lives anywhere), which has not been proven yet…

So this is just to say that the Tevatron is/was a great experiment and is still actively contributing to the discovery process unfolding every day in High Energy Physics and we should all stay tuned for this possible independent confirmation or refutation of the claims of where the Higgs boson may live.

Some great posts about the Higgs from my fellow bloggers:

Why do we expect a Higgs Boson? Part II Unitarization of Vector Boson Scattering

It Might Look Like a Higgs, But Does it Really Sing Like One?



Error Control in Science

Friday, February 17th, 2012

Scientists are subject to two contradictory criticisms. The first is that they are too confident of their results, to the point of arrogance. The second is that they are too obsessed with error control–all this nonsense about double-blind trials, sterilized test tubes, lab coats and the like. It evidently has not occurred to the critics that the reason scientists are confident of their results is that they have obsessed over error control. Or, conversely, that they obsess over error control so that they can be confident of their results.

Now, most people outside science do not realize that a scientist’s day job is error control. There is this conception of scientists having brilliant ideas, then going into immaculate labs where they effortlessly confirm their results to the chagrin of their competitors. That is, of course, when they are not plotting world domination like Pinky and the Brain. But scientists spend their time neither plotting world domination (be with you in a minute, Brain) nor doing effortless experiments. Rather, they are thinking about what might be wrong: how do I control that error? As for theorists, they must be part of a wicked and adulterous generation[1] because they are always seeking after a sign–a minus sign, that is.

So what do scientists do to control errors? There are very few arrows in their quiver. Really only three: care in doing the experiment or calculation, care in doing the experiment or calculation, and care in doing the experiment or calculation. Well, actually there are two others: peer review and independent repetition. Let’s take the first three first: care, care, and care. As previously noted, scientists are frequently criticized here. Why do double-blind studies when we have Aunt Martha’s word for it that Fyffe’s Patented Moose Juice cured her lumbago? Well, actually, testimonials are notoriously unreliable. A book[2] I have gives examples from the early 1900s of testimonials for cures for consumption, along with the dates on which the people giving them died of consumption. The death was frequently quite close to the date of the testimonial. So no, I will not trust Aunt Martha’s testimonial[3]. To quote Robert L. Park: “The most important discovery of modern medicine is not vaccines or antibiotics, it is the randomized double-blind test, by means of which we know what works and what doesn’t.” This has now carried over into subatomic physics, where blind analyses are common. By blind, I mean that the people doing the analysis cannot tell how close they are to the expected answer (the theoretically predicted answer or the results of a previous experiment) until most of the analysis has been completed. Otherwise, as one of my experimental colleagues said: data points are like sheep, they travel in flocks. Even small biases can influence the results. Blind analysis is just one example of the extremes scientists go to in order to ensure that their results are reliable. All this rigmarole that scientists go through is one of the reasons life expectancy increased by about 30 years between 1900 and 2000, perhaps the major reason. The lack of this care is the reason I distrust alternative medicine.
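
Here is a toy sketch of what blinding can look like in practice: the analyser works with a value that has been shifted by a hidden random offset, tunes the analysis on that shifted value, and only removes the offset once everything is frozen. This is just one common flavour of blinding (a hidden offset); it is not a description of any particular experiment’s procedure.

```python
import random

# Toy hidden-offset blinding: the analyser never sees the true measured value
# until the analysis procedure is frozen and the "box" is opened.

class BlindedMeasurement:
    def __init__(self, true_value, scale=1.0, seed=None):
        rng = random.Random(seed)
        self._offset = rng.uniform(-scale, scale)  # hidden from the analyser
        self._true = true_value

    def blinded_value(self):
        """What the analyser is allowed to look at while tuning the analysis."""
        return self._true + self._offset

    def unblind(self):
        """Called exactly once, after the analysis is frozen."""
        return self._true

# Usage: tune cuts and fits while looking only at blinded_value(),
# then call unblind() once, in front of the whole collaboration.
m = BlindedMeasurement(true_value=125.3, scale=5.0)
print("value seen during the analysis:", round(m.blinded_value(), 2))
print("value after unblinding:        ", m.unblind())
```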

We now move on to the other two aspects of error control: peer review and independent replication of results. Both of these depend on the results being made public. Since these are crucial to error control, results that have not been made available for scrutiny should be treated with suspicion. Peer review has been discussed in the previous post and is just the idea that new results should be run past the people who are most knowledgeable so they can check for errors.

Replication is, in the end, the most important part of error control. Scientists are human: they make mistakes, they are deluded, and they cheat. It is only through attempted replication that errors, delusions, and outright fraud can be caught. And it is very good at catching them. In the next post I will go into examples, but it is good practice not to trust any exciting new result until it has been independently confirmed. However, replication and reproducibility are not simple concepts. I go outdoors and it is nice and sunny; I go out twelve hours later and it is dark and cold. The initial observation is not reproduced. I look up, I see stars. An hour later I go out and the stars are in different places. And the planets, over time, wander hither, thither, and yon. In a very real sense the observations are not reproduced. It is only within the context of a model or paradigm that we can understand what reproducible means. The models, either Ptolemaic or Newtonian, tell us where to look for the planets, and we can reproducibly check that they are where the models say they should be at any given time. Reproducibility is always checking against a model prediction.

Replication is also not just doing the same things over and over again. Then you would make the same mistakes and get the same results over and over again. You do things differently, guided by the model being tested, to see whether the effect observed is an artifact of the experimental procedures or real. Is there really a net energy gain, or have you just measured a hot spot in your flask? The presence of the hot spot can be reproduced, but put in a stirrer to test the idea of energy gain and, damn, the effect went away. Another beautiful model was slain by an ugly observation. Oh well, happens all the time.

So science advances: we keep testing our previous results in new and inventive ways. The wrong results fall by the wayside and are forgotten. The correct ones pile up and we progress. To err is human, to control errors–science.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] Matthew 12:39,16:4

[2] Pseudoscience and the Paranormal, Terence Hines, Prometheus Books, Buffalo (1988).

[3] My grandfather died of consumption.


Releasing LHCb results

Thursday, February 16th, 2012

Winter conference season [*] is upon us, which means everybody is busy preparing new results. Today, instead of talking about the physics itself, I’m going to discuss the process around it; namely the procedure which the results of an LHCb analysis [**] need to go through before being released.

There are two ways in which analysis results are released: either through a conference note, meaning it is a preliminary result, or through a paper. I’m only going to discuss the former, because I’m going through that procedure at the moment, though as a referee of the analysis rather than a proponent.

The preliminary result approval procedure is constantly in flux, but currently it looks something like a simplified (and coloured) version of what can publicly be accessed on the LHCb editorial board webpage.

I think the most important thing to note is the level of scrutiny that each analysis goes through before release. When I say that “everybody is busy preparing new results”, I’m not just referring to the people who are performing the specific analyses being released; I also include all the assigned analysis referees and editorial board members, the physics coordinator, and interested members of the collaboration, who can review the public notes and attend the approval presentations.

Believe me when I tell you that there have been (and will be) a lot of extra emails and meetings this month due to all the paper and conference note reviews and approval presentations… Here’s looking forward to the Moriond conferences, where the new results will be presented!

——————————————————————————–
[*] Winter conference season for experimental particle physics refers to the cluster of conferences held in February and March every year. The most well known of these are Aspen, Lake Louise, La Thuile and Moriond. Yes, these conferences are held annually at ski resorts. The conference organisers are understanding enough to give participants time to take advantage of the location, with sessions in the morning and evening, but none in the afternoon. I personally call these conferences “skiing conferences”. I have never been to any of them, but I would love to some day. They sound like the perfect combination of work and fun.

[**] I should probably mention that all experimental particle physics collaborations have some sort of publication procedure, most of which involve some sort of detailed internal document, followed by the public document.


The EMMA accelerator ring

Working with an international team, three physicists from Brookhaven Lab have helped to demonstrate the feasibility of a new kind of particle accelerator that may be used in future physics research, medical applications, and power-generating reactors. The team reported the first successful acceleration of particles in a small-scale model of the accelerator in a paper published in Nature Physics.

The device, named EMMA and constructed at the Daresbury Laboratory in the UK, is the first non-scaling fixed field alternating gradient accelerator, or non-scaling FFAG, ever built. It combines features of several other accelerator types to achieve rapid acceleration of subatomic particles while keeping the scale — and therefore, the cost — of the accelerator relatively low.
