Archive for June, 2011

Searching for gold…

Wednesday, June 29th, 2011

G’day all! Today I will be continuing the Australian theme and discussing panning for gold. Which, to be completely correct, isn’t really Australian, since it was probably practiced in any gold-rich area. In my defense, however, the gold rush of the late 19th century is an integral part of Australian history, and a visit to one of the old goldfields is an excursion most Australian school children take.

For those who are wondering what gold panning is, it’s a method of searching for gold in stream beds using a pan. It doesn’t tend to yield high quantities of the precious metal, but it doesn’t take much equipment and can be used to locate gold rich areas. It requires lots of patience to sit by a stream and slowly separate the dense precious metal from the less dense, less interesting rocks and sand.

What does this have to do with particle physics and LHCb, I hear you all ask? Well, gold panning is a fairly good analogy for trying to identify collisions in which B mesons are produced, and from those collisions trying to find the particular B meson decays we are interested in.

To give you some numbers, the rate of collisions at the LHCb interaction point is 40 MHz, of which only about 10 MHz will contain particles which are within the acceptance of the LHCb detector. Events where all the decay products of a B meson can be detected by LHCb have a rate of about 15 kHz, while the rate of specific B meson decays that are interesting for physics analysis is around a few Hz. So we are only interested in approximately one out of ten million collisions that the detector sees per second.

The first level of event selection is performed by an online electronic system, called the trigger, that selects which events will be stored on disk for offline analysis. The trigger is a very important system, since it is not possible to record every event on disk due to limited bandwidth; we must make sure that events containing interesting B meson decays are kept.


The LHCb trigger system operates on two levels. The first, called L0, is composed of custom electronics and uses information from the VELO, the calorimeter, and the muon systems. Of the ten million proton collisions that LHCb sees per second, it selects around one million events per second for further processing and discards the remaining nine million. This first level works incredibly fast, making its decision in just four millionths of a second.

After filtering by the first level trigger, an overwhelming number of events still remains. These are fed into a farm of over two thousand computers, located deep underground at the LHCb site, which make up the second trigger level, the High Level Trigger (HLT). These machines select interesting events to save for analysis, further trimming the one million events per second to a more manageable two thousand. This second level uses the full detector information and has more time to make a decision than its first level counterpart.
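Putting the numbers from the last two paragraphs together, the trigger chain reduces the rate in two stages:

\[
10\ \textrm{MHz} \xrightarrow{\ \textrm{L0}\ } 1\ \textrm{MHz} \xrightarrow{\ \textrm{HLT}\ } 2\ \textrm{kHz},
\]

an overall reduction by a factor of 5000 with respect to what the detector sees.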

If you’ve been paying very close attention to all the numbers, you might have noticed that we’re writing events to disk at a rate of 2 kHz, while the interesting physics rate is a few Hz. Given finite computing resources, it is not possible to analyse the full dataset when the signal to background ratio is so low, so there is a further stage of event selection, called stripping[*]. The major difference between the trigger and stripping is that events which the trigger rejects are lost forever; stripping selections, on the other hand, can be rerun if necessary.

Stripping contains a set of preselection algorithms defined by each physics analysis in LHCb, which are run offline after data taking to produce a set of selected events for further individual analysis. The events that pass the defined stripping selection criteria will be fully reconstructed, recreating the full information associated with each event in preparation for detailed analysis.
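To make that concrete, here is a toy sketch of what a stripping-style preselection amounts to (the event variables, cut names and cut values here are invented for illustration; real LHCb stripping lines are defined within the experiment’s software framework):

#include <vector>

// Toy event record; a real stripping line works on the full LHCb event model.
struct Event {
    double bMass;      // reconstructed B candidate mass [MeV]
    double vertexChi2; // quality of the decay-vertex fit
    double pt;         // transverse momentum [MeV]
};

// A stripping-style preselection is just a predicate applied offline to
// events already on disk. Unlike the trigger, nothing is lost: the cuts
// can be changed and the selection rerun later.
bool passesToyStripping(const Event& e) {
    return e.bMass > 5000.0 && e.bMass < 5600.0
        && e.vertexChi2 < 10.0
        && e.pt > 2000.0;
}

std::vector<Event> runStripping(const std::vector<Event>& stored) {
    std::vector<Event> selected;
    for (const Event& e : stored)
        if (passesToyStripping(e)) selected.push_back(e);
    return selected; // these candidates go on to full reconstruction
}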

Returning to the gold panning analogy, we started out with a pan full of generic proton collisions. Triggering removed all the collisions which are obviously not gold, which don’t look like B meson decays at all. Stripping removed what we think isn’t gold, but we put the rejected collisions to the side, just in case they could be gold. With what’s remaining, with a bit more work, hopefully we can find what we are looking for… Gold!

[*] Yes, it is actually called stripping. I’m too new to the experiment to know the history of the term, though I have been privy to enough discussions on the topic that I don’t find it amusing anymore.


Leaving tracks

Tuesday, June 28th, 2011

While a torrent of data floods the LHC experiments, the stream of scientific papers presenting the latest measurements and results shows no sign of drying up. For anyone not versed in particle physics, these discussions full of cross sections, multiplicities and jet quenching are rather obscure. Numbers and plots let the physicist see clearly, but everyone else needs something more evocative, which is no doubt one of the reasons event displays are so popular, especially with journalists. Brightly coloured images of particle tracks at least give you the impression of seeing the invisible. They can also be stunningly beautiful.

In this year marking the centenary of several discoveries linked to the research carried out at CERN – the discovery of superconductivity, the discovery of the atomic nucleus – it is fitting to recall the hundredth anniversary of the first observation of particle tracks, in a device known as a “cloud chamber”. On 9 June 1911, the Royal Society of London published Charles Wilson’s photographs of “little wisps and threads of cloud” – in fact, droplets of condensation formed along the trajectories of alpha particles (helium nuclei) that had ionized the supersaturated vapour in the chamber. Wilson had originally developed his chamber to study the formation of real clouds but, finding the device sensitive to radiation, he went on to modify it to make particle tracks visible, carrying out what we would nowadays call technology transfer.

Wilson’s cloud chamber is the direct ancestor of many of the detectors at the heart of the enormous experiments that today record the tracks of the hundreds of particles produced in head-on collisions in the LHC. The basic principle is the same: make visible the trails of ionization that particles leave as they pass through a gas, even though modern chambers detect the ionization more directly, as electrical pulses, rather than revealing it with trails of droplets. Add a little computing magic to colour the tracks, and the resulting images can be just as seductive to a generation raised on colour television and video games as Wilson’s photographs were a century ago. They may not make the strange world of particles any more comprehensible, but they certainly bring it a little closer to our reality.

So the next time you spot condensation trails in the sky, spare a thought for Wilson, and even raise a glass to the centenary of the first track seen in his cloud chamber.

Christine Sutton


— by T. “Isaac” Meyer, Head, Strategic Planning & Communications

One thing we have to add to this discussion is how media, news, and analysis enter into the political and policy-making process.  One clear objective of science communications and even any corporate communications activity is to influence decision makers.  But are the traditional streams of media still relevant?

Fortunately, our excellent and thoughtful friends at the National Journal have just publicly released a detailed study of U.S. federal senior executives, Capitol Hill staff, and professional lobbyists that documents how information arrives and is used “inside the Beltway” in Washington, D.C. The study is entitled “Washington in the Information Age” and is, lightly put, brilliant.

With grateful flattery, I reproduce some of their conclusions here.

1. As the dust settles, traditional platforms (TV, print media, and radio) remain essential components of the media mix. The report compiles hundreds of interviews and surveys to map out how U.S. political and policy staff receive their news. Perhaps surprisingly, it is NOT all via Twitter and Facebook. Rather, the new technologies serve as alert mechanisms, with trusted, credible analysis still being sought from the traditional sources.

2. Despite the plethora of choices, opinion makers associated with long-established brands carry the most influence online. We all worry that a random citizen in Darkmoor, Pennsylvania, or Blackwater, California, can publish an online blog and start a slanted or even misinforming news source. It looks like the folks in Washington still rely on verifiable, credible, long-established names and resources to gather their views.

3. Yet, Washington insiders value a long tail of unique opinion makers.  More than 400 distinct names were cited as credible sources for opinion from among the survey group.  So the Beltway doesn’t follow one columnist or one voice; rather, each person tends to accumulate a set of trusted brands/thought-leaders and then sticks to them over time.  So less fly-by-night than perhaps expected!

4. Washington insiders favour news sources that share their political point of view. Perhaps obvious, but the results show that Washingtonians cluster around columnists, news sources, and so on that reflect their own ideologies.

5. No longer just for e-mail, mobile devices are a gateway to news and information. Many Washington insiders now read news and analysis on the small screen, and some actually do a good portion of their composition and analysis there as well.

6. Mobile devices and new digital communication tools continue to blur the line between the personal and the professional. As in, with 24-hour news cycles and multiple streams of referrals and content providers, Washington insiders often mix work and play when communicating digitally. As anyone who has visited Washington knows, this is supported by the standard screens at a sports bar. Not only are two or three games showing at the same time, but at least one TV shows CNN and C-SPAN.

7. Online video and audio have yet to infringe on the dominance of TV and radio. Despite the prevalence of online videos and podcasts, few Washington insiders report that they rely on these sources for content. They are viewed primarily as entertainment.

8. The national obsession with Twitter fades inside the Beltway. Results suggest that Twitter is not a preferred communication tool and the common conception is that 50% of tweets are pointless babble, and the next 30% shameless self-promotion. Beyond that, there is some real content.

9. Social networking sites are popular inside the Beltway. As a tool to track contacts, trade views, and keep up with the vast network of potential wanna-know-yous, social networking tools are growing in use. Perhaps not surprisingly, the growth area for these tools is with Capitol Hill staff, whose ranks include more young people than those of senior executives or K Street lobbyists.

10. The more things change, the more they stay the same.  Washington’s reliance on proven relationships extends online.  That is, the influencers of the influencers still have specific, personal, trusted connections. Other results of the study show that Washington insiders filter their e-mail by known e-mail addresses, then subject lines, again caring more about WHO than WHAT.

The study offers powerful insight into how Washington is adapting to the age of information overload.

When I look at my own day, I can see some parallels to the report’s results. I spend quality time with print media most often in the form of magazines (monthly more often than weekly), and I rely on news aggregators and other alerts to cue me to new content, but I hunt down my favourite sources to find out “what is really going on.”

Graphic depicting how Washingtonians "flip" between news sources to follow a story.

Please read, compare, and comment!


The Sound of CERN

Monday, June 27th, 2011

I recently had an encounter with an unusual reporter. He had been roaming about CERN in search of a story, and by chance that brought him far afield to the test beam control room at CERN’s SPS North Area, where I was spending the day on shift with a colleague. Because of technical difficulties (or was it a workers’ strike?), the beam was off and our devices under test weren’t taking any data, so when the reporter came by and asked to speak with us, the answer was an emphatic “Yes, but I need coffee first.”

Now, to be fair, it wasn’t the reporter himself that was unusual, it was the story he sought: “The Sound of CERN.” Um, what? This gentleman had been taking field recordings all over in hopes of piecing together an aural representation of the world’s largest particle physics laboratory, and now he wished to speak with us. After seeing — hearing! — so much of CERN’s infrastructure and equipment, all of it humming whirring beeping pulsing clicking, he had decided to widen the scope of his project to include the scientists inhabiting this soundscape. Sounded reasonable to me.

After my colleague rather eloquently explained the work we were doing at the test beam and why it mattered in the grander scheme of things, our conversation moved on to the reporter’s work. Why sound? Well, it’s awfully difficult for people to relate to fundamental physics on a visceral level, and intellectually it isn’t much easier — there’s a reason we spend so much time in grad school! But understanding is facilitated whenever you can associate a concrete bit of sensory information with an abstract concept. What sound does a Higgs boson make when it decays into two W bosons? What color is an electron? (No, I’m not talking quantum chromodynamics…) If you could reach out and touch a proton, would it be soft like a plushy, hard like a billiard ball, or squishy like a hard-boiled egg?

Or, in this reporter’s case, What does CERN sound like? Well, I will give you some hints — in words, until I acquire the means to record the audio myself:

  • like a room full of computers, fans blowing hard while merciless physicists bang on their keyboards
  • like a 1.6 Tesla Morpurgo magnet thrumming softly while a beam of 180 GeV pions passes through it at close to the speed of light
  • like a flock of sheep grazing in a field 100 m directly above the low-energy end of CERN’s accelerator complex, the bells on their necks chiming pleasantly
  • like the coffee machines in R1 dispensing liquid energy to tired grad students
  • like the electronic tone emanating from television screens once every forty seconds, indicating when the SPS is spilling beam your way
  • like water cascading down the inside of the ATLAS cooling towers at Point 1 on the LHC
  • like two people chatting excitedly as you pass them by, “… saw good agreement in the high-pT tails, but something strange …”
  • like this, maybe

Until next time — keep your eyes open and your ears tuned!

Burton


This article first appeared in symmetry breaking on June 24.

The MINOS far detector is located in a cavern half a mile underground in the Soudan Underground Laboratory in Minnesota. The collaboration records about 1,000 neutrinos per year. A tiny fraction of them seem to be electron neutrinos. Photo: Peter Ginter

Step by step, physicists are moving closer to understanding the evolution of our universe. Neutrinos – among the most abundant particles in the universe – could have played a critical role in the unfolding of the universe right after the big bang. They are strong candidates for explaining why the big bang produced more matter than antimatter, leading to the universe as it exists today.

Scientists of the MINOS experiment at the Department of Energy’s Fermi National Accelerator Laboratory announced today the results from a search for a rare phenomenon, the transformation of muon neutrinos into electron neutrinos. If this type of neutrino transformation did not exist, neutrinos would not break the matter-antimatter symmetry, and a lot of scientists would be scratching their heads, wondering what else could have caused the dominance of matter over antimatter in our universe.

The MINOS result is consistent with and significantly constrains a measurement reported 10 days ago by the Japanese T2K experiment, which announced an indication of this type of transformation.

The observation of electron neutrino-like events allows MINOS scientists to extract information about a quantity called \(\sin^2 2\theta_{13}\). If muon neutrinos don’t transform into electron neutrinos, \(\sin^2 2\theta_{13}\) is zero. The new MINOS result constrains this quantity to a range between 0 and 0.12, improving on results it obtained with smaller data sets in 2009 and 2010. Credit: Fermilab

The Main Injector Neutrino Oscillation Search (MINOS) at Fermilab recorded a total of 62 electron neutrino-like events. If muon neutrinos do not transform into electron neutrinos, then MINOS should have seen only 49 events. The experiment should have seen 71 events if neutrinos transform as often as suggested by recent results from the Tokai-to-Kamioka (T2K) experiment in Japan. The two experiments use different methods and analysis techniques to look for this rare transformation.

To measure the transformation of muon neutrinos into other neutrinos, the MINOS experiment sends a muon neutrino beam 450 miles (735 kilometers) through the earth from the Main Injector accelerator at Fermilab to a 5,400-ton neutrino detector, located half a mile underground in the Soudan Underground Laboratory in northern Minnesota. The experiment uses two almost identical detectors: the detector at Fermilab is used to check the purity of the muon neutrino beam, and the detector at Soudan looks for electron and muon neutrinos. The neutrinos’ trip from Fermilab to Soudan takes about one four-hundredth of a second, giving the neutrinos enough time to change their identities.
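As a quick sanity check of that number (my arithmetic, not the press release’s):

\[
t = \frac{L}{c} = \frac{735\ \textrm{km}}{3.0\times10^{5}\ \textrm{km/s}} \approx 2.5\times10^{-3}\ \textrm{s} \approx \tfrac{1}{400}\ \textrm{s}.
\]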

For more than a decade, scientists have seen evidence that the three known types of neutrinos can morph into each other. Experiments have found that muon neutrinos disappear, with some of the best measurements provided by the MINOS experiment. Scientists think that a large fraction of these muon neutrinos transform into tau neutrinos, which so far have been very hard to detect, and they suspect that a tiny fraction transform into electron neutrinos.


The observation of electron neutrino-like events in the detector in Soudan allows MINOS scientists to extract information about a quantity called \(\sin^2 2\theta_{13}\) (pronounced “sine squared two theta one three”). If muon neutrinos don’t transform into electron neutrinos, this quantity is zero. The range allowed by the latest MINOS measurement overlaps with, but is narrower than, the T2K range. MINOS constrains this quantity to a range between 0 and 0.12, improving on results it obtained with smaller data sets in 2009 and 2010. The T2K range for \(\sin^2 2\theta_{13}\) is between 0.03 and 0.28.
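For context (this is the standard textbook formula, not something quoted in the article): at leading order, the appearance probability that MINOS and T2K measure is

\[
P(\nu_\mu \to \nu_e) \approx \sin^2\theta_{23}\,\sin^2 2\theta_{13}\,
\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}[\textrm{eV}^2]\,L[\textrm{km}]}{E[\textrm{GeV}]}\right),
\]

which is why counting electron neutrino-like events translates directly into an allowed range for \(\sin^2 2\theta_{13}\).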

“MINOS is expected to be more sensitive to the transformation with the amount of data that both experiments have,” said Fermilab physicist Robert Plunkett, co-spokesperson for the MINOS experiment. “It seems that nature has chosen a value for \(\sin^2 2\theta_{13}\) that likely is in the lower part of the T2K allowed range. More work and more data are really needed to confirm both these measurements.”

The MINOS measurement is the latest step in a worldwide effort to learn more about neutrinos. MINOS will continue to collect data until February 2012. The T2K experiment was interrupted in March when the severe earthquake in Japan damaged the muon neutrino source for T2K. Scientists expect to resume operations of the experiment at the end of the year. Three nuclear-reactor-based neutrino experiments, which use different techniques to measure \(\sin^2 2\theta_{13}\), are in the process of starting up.


“Science usually proceeds in small steps rather than sudden, big discoveries, and this certainly has been true for neutrino research,” said Jenny Thomas from University College London, co-spokesperson for the MINOS experiment. “If the transformation from muon neutrinos to electron neutrinos occurs at a large enough rate, future experiments  should find out whether nature has given us two light neutrinos and one heavy neutrino, or vice versa. This is really the next big thing in neutrino physics.”

A large value of \(\sin^2 2\theta_{13}\) is welcome news for the worldwide neutrino physics community and a boon for the NOvA neutrino experiment, under construction at Fermilab. The experiment is designed to determine the neutrino mass hierarchy: it will find out whether there are one light and two heavy neutrinos, or two light neutrinos and a heavy one. Together with several nuclear physics experiments, such as EXO and Majorana, NOvA will help scientists determine which early-universe theories are the most viable.

To measure directly the matter-antimatter asymmetry hidden among the neutrino transformations, scientists have proposed the Long-Baseline Neutrino Experiment. It would send neutrinos on a 1,300-kilometer trip from Fermilab to a detector in South Dakota. This would give muon neutrinos more time to transform into other neutrinos than any other experiment. It would give scientists the best shot at observing whether neutrinos break the matter-antimatter symmetry and by how much. For more information about MINOS, NOvA and LBNE, visit the Fermilab neutrino website:
http://www.fnal.gov/pub/science/experiments/intensity/

The MINOS experiment involves more than 140 scientists, engineers, technical specialists and students from 30 institutions, including universities and national laboratories, in five countries: Brazil, Greece, Poland, the United Kingdom and the United States. Funding comes from the Department of Energy Office of Science and the National Science Foundation in the U.S.; the Science and Technology Facilities Council in the U.K.; the University of Minnesota in the U.S.; the University of Athens in Greece; and Brazil’s Foundation for Research Support of the State of São Paulo (FAPESP) and National Council of Scientific and Technological Development (CNPq).

Kurt Riesselmann


Quantum bugs

Monday, June 27th, 2011

Physicists have got a great sense of humor when it comes to software bugs. A “bug” is simply something which stops the software from running when we want it to continue. When a bug is known, but seemingly unavoidable, it can be called a “feature”, in an attempt to make it sound a little less ominous. There’s usually a little chuckle in a meeting when someone announces that there’s “an interesting feature in this code”. When I discovered the terminology behind the quantum bug I was almost in hysterics. Here’s a selection of them:

Heisenbug

Hey you! Get out of my code!

Named after the Heisenberg uncertainty principle, this kind of bug disappears when you try to study it! The Heisenberg uncertainty principle usually reads \(\Delta p_x \Delta x \ge \hbar /2\), where \(\Delta p_x\) is the uncertainty in the momentum (in the \(x\) direction, say) and \(\Delta x\) is the uncertainty in the \(x\) position. When we know one of these quantities with a given precision, we lose precision in the other quantity. For a Heisenbug we have:

\[
\Delta K_c \Delta K_b \ge I
\]
This time \(\Delta K_c\) is the uncertainty in our knowledge of the conditions that caused the bug, \(\Delta K_b\) is the uncertainty in our knowledge of the behavior of the bug, and \(I\) is some (unknown) amount of information we can get about the bug. So we know what caused the bug; we turn on the debugger, or try to get some other information about the bug to try to “catch it in the act”, and in doing so we lose information about what caused the bug, and the bug disappears! Weird stuff!
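A classic way to breed a Heisenbug in C++ (a hypothetical sketch, not any real analysis code) is to read an uninitialized variable: the garbage it contains depends on the stack layout, so adding a print statement or attaching a debugger can change the layout and make the bug vanish.

#include <cstdio>

// 'sum' is never initialized, so it starts with whatever garbage
// happens to be sitting on the stack.
double average(const double* data, int n) {
    double sum;                      // BUG: should be double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += data[i];
    // Uncommenting the next line can change the stack contents and make
    // the answer look right -- the Heisenbug disappears when observed!
    // std::printf("debugging...\n");
    return sum / n;
}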

Schroedinbug

This kind of bug is usually found in code that never gets executed. When it does get executed the outcome is unpredictable. For example, think about this:

if(a<0 && a>0) {   // impossible: a can't be both negative and positive
  a = a/b ;
}

Since a cannot be both negative and positive, the code inside the curly braces would never get executed. Some clever programmer comes along and changes things:

if(a<0 || a>0) {   // now runs whenever a is nonzero
  a = a/b ;
}

Now the code executes if a is positive or negative (i.e. not zero). But in fixing this line of code the programmer has introduced another bug. What happens when b is zero? The programmer has introduced a Schroedinbug, with a wavefunction:

\[
\psi_{code} = P(b \ne 0)\psi_{succeed} + P(b=0)\psi_{fail}
\]

Good luck to whoever has to debug that!
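For what it’s worth, one way to collapse the wavefunction into the \(\psi_{succeed}\) state is simply to guard the division (a sketch):

if( (a<0 || a>0) && b!=0 ) {   // only divide when b is nonzero
  a = a/b ;
}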

Cosmic bug

Also known as the alpha particle bug, this happens when a small part of the data set used is corrupt in some way. In a lot of cases it is just like having normal data, except that a cosmic ray came in from space, hit a part of the memory or hard disk, and messed up some 1s and 0s, leaving corrupt data. Finding these bugs is usually quite easy, but it can also be frustrating. After all, your code can run perfectly fine for several hours, processing millions of events, only to fail on a single event. Wishing away corrupt data is just like wishing away cosmic rays. It can’t be done. And very often, nobody knows where they come from.

Bonus material: How to use the cosmic bug to your advantage!

Fermi gas bug

Okay, I made this name up and it’s just a fancy word for “memory leak”, but it goes hand in hand with the cosmic bug. Suppose you think you have a cosmic bug and you suspect that event number 24601 is corrupt. How do you test this? Well, you exclude the first 24,600 events and take a closer look at what is happening. (This just skips about 24,000 events, saving you lots of time.) Everything works fine until you get to about event 50,000, then things crash again. So you start from event 50,000, and find that things are okay until you get to event 75,000. It seems that these three events are simultaneously cosmic events and not cosmic events! In fact, it’s a Fermi gas bug. The software uses more and more memory until eventually it runs out of memory and crashes. That’s a bit like how electrons act in a Fermi gas: you can only fit so many electrons into the available states, and the bits of memory act like the electrons. Once you’ve filled the states, you’re out of luck!
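A minimal sketch of how such a leak looks in code (an invented example, not the actual offending software): allocate something for every event and never free it, and the job will die at whichever event number happens to exhaust the memory, regardless of whether the data there are corrupt.

struct EventData { double hits[10000]; };

void processEvents(int nEvents) {
    for (int i = 0; i < nEvents; ++i) {
        EventData* ev = new EventData;  // a fresh "state" filled every event
        // ... process *ev ...
        // BUG: no "delete ev;" -- once all the available states (memory)
        // are filled, the job crashes: a Fermi gas bug.
    }
}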

Mandelbug

The original Mandelbug?

Named after Benoit B Mandelbrot, this kind of bug is very very complicated. (By the way, the B in Benoit B Mandelbrot stands for Benoit B Mandelbrot…) The conditions that cause the bug are so amazingly complicated that they’re almost impossible to reproduce, and may even be chaotic. Even worse, they can be the product of lots of other much smaller bugs. The conditions are still deterministic, but good luck reproducing them! It’s a bit like making a fractal or predicting the weather. The answer is out there, but until you let the system go through the motions you never know what the outcome will be. Failed fits are a great example of this kind of bug. Changing the initial values of the parameters by a tiny amount can be the difference between success and failure.

Bohr bug

A Bohr bug is the simplest bug of all. Just like the semiclassical quantum mechanics of the early 20th century, the Bohr bug is in a definite state and you can measure all of its properties. These come up all the time and are usually easy to spot and fix. But just like in physics, we’d be fooling ourselves if we thought that everything was this simple.

Generic quantum bug

So what was my bug? Well, it didn’t seem to fit into any of these categories. Under the same conditions, sometimes it would work and sometimes it wouldn’t. If I had to write a wavefunction for it, it might look something like this:

\[
\psi_{code} = a\psi_{succeed} + b\psi_{fail}
\]

Unfortunately I don’t know the size of \(a\) or \(b\)! So I’d better work out how to fix this.

I suppose we could add a few more bugs to the list:

Bose bug

Like two bosons occupying one state, two variables can use the same memory (for example, two pointers to the same object). When the state of one changes, so does the state of the other! Hours of hilarious debugging!
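In C++ the Bose bug might look like this (a toy sketch):

#include <iostream>

struct Track { double pt; };

int main() {
    Track* first  = new Track();
    Track* second = first;       // two variables, one object
    first->pt  = 10.0;
    second->pt = 99.0;           // change "second"...
    std::cout << first->pt;      // ...and "first" is now 99 too!
    delete first;                // delete only once -- freeing 'second'
                                 // as well would be a whole new bug
    return 0;
}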

Quantum tunnel bug

Suppose you declare an unsigned int (a positive integer) and assign it a negative value. What happens? Well usually you get a huge positive number instead. The value of the variable tunneled from 0 (a sensible value) to 4294967295, which is unphysically huge!
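In code (a sketch):

#include <iostream>

int main() {
    unsigned int nEvents = 0;
    nEvents = nEvents - 1;       // "assign a negative value"
    std::cout << nEvents;        // prints 4294967295 for 32-bit ints:
                                 // the value tunneled through zero
    return 0;
}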

Nuclear chain reaction bug

Suppose you have a bug which causes two more (identical) bugs, which cause four more identical bugs… You’ve got a chain reaction which isn’t going to end any time soon! We all have these from time to time and they can be nearly impossible to see in advance. For example, let’s say you apply a weight of 0 to a sample. Any weight that gets multiplied by this weight will also be 0. Hours later you find your code worked and you got an output. But everything is 0.
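A sketch of that zero-weight chain reaction (the names here are invented):

#include <vector>

double analyse(const std::vector<double>& events) {
    double sampleWeight = 0.0;       // the first bug: a weight of 0
    double sum = 0.0;
    for (double e : events)
        sum += e * sampleWeight;     // every event inherits the 0...
    return sum;                      // ...so hours later, everything is 0
}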

Any more suggestions? Leave them in the comments!


Last month I had the unique pleasure of making my first trip to CERN (more on that in a later post). I made a point of stopping by the CERN gift shop to pick up a snazzy mug to show off to my colleagues back in the US, and am now the proud owner of a new vessel for my tea:

My brand new "Standard Model Lagrangian" mug from CERN.

The equation above is the Standard Model Lagrangian, which you can think of as the origin of all of the Feynman rules that I keep writing about. Each term on the right-hand side of the above equation actually encodes several Feynman rules. Roughly speaking, terms with an F or a D contain gauge fields (photon, W, Z, gluon), terms with a ψ include fermions, and terms with a ϕ include the Higgs boson. Some representative diagrams coming from each of the terms are depicted below:

Representative Feynman rules coming from each term in the Lagrangian.

But alas, there’s a bit of a problem with the design. It appears that there’s an extra term which isn’t included in the usual parametrization of the Standard Model:

This term really shouldn't be here. It's not necessarily “wrong,” but it is misleading and doesn't match what is written in textbooks. Technically, it is not “canonically normalized.”

I won’t go so far as to call this a mistake because technically it’s not wrong, but I suspect that whoever designed the mug didn’t mean to write this term. Let me put it this way: if I had written the above expression, my adviser would pretend he didn’t know me. The “h.c.” means Hermitian conjugate, which is a generalization of the complex conjugate of a complex number. In terms of Feynman diagrams, this “+h.c.” term means “the same diagram with antiparticles.”

The problem is that the term in question, the fermion kinetic term, already includes its Hermitian conjugate. In physics-speak, we say that the kinetic term is self-conjugate (or Hermitian, or self-adjoint). This just means that there is no additional “+h.c.” necessary. In fact, including the “+h.c.” means that you are writing the same term twice, and the equation is no longer “canonically normalized.” This just means that you ought to rescale some of your variables.
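To see where the factor of two comes from (my own sketch of the argument, in the two-component notation the mug uses): the Hermitian conjugate of the kinetic term is

\[
\left(i\,\psi^\dagger \bar{\sigma}^\mu \partial_\mu \psi\right)^\dagger
= -i\,\left(\partial_\mu \psi^\dagger\right) \bar{\sigma}^\mu \psi
= i\,\psi^\dagger \bar{\sigma}^\mu \partial_\mu \psi
- i\,\partial_\mu\!\left(\psi^\dagger \bar{\sigma}^\mu \psi\right),
\]

which is the kinetic term itself minus a total derivative, and total derivatives don’t affect the physics. Adding “+h.c.” therefore just writes the kinetic term twice.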

I was mulling over this not-quite-correct term on my mug while looking over photos from CERN when I discovered the same ‘error’ in a chalkboard display in the “Universe of Particles” exhibit:

Display at the "Universe of Particles" exhibit in The Globe of Science and Innovation at CERN.

The “+h.c.” on the top right is the same ‘error’ as printed on the CERN mug. I wonder who wrote this?

To be clear: this expression does summarize the basic structure of the Standard Model in the sense that it does give all of the correct Feynman rules. However, the extra “+h.c.” introduces a factor of two that needs to be accounted for by weird conventions elsewhere (that would not match any of the usual literature or textbooks).

Nit-picky remarks for experts. It is worth noting that the above expression does get one thing absolutely right: it writes everything in terms of Weyl (two-component) fermions, as appropriate for a chiral theory like the Standard Model. One can see that these are Weyl fermions because the Yukawa term contains two un-barred fermions (the “+h.c.” gives two barred fermions). Note that even for Weyl fermions, one shouldn’t have a “+h.c.” on the kinetic term. In fact, I would typically write the D-slash with a bar since it contains a barred Pauli matrix, but this is a matter of personal convention. The “+h.c.” is not “personal convention” since it means the kinetic term is not canonically normalized.

Anyone who has done tedious physics calculations is familiar with the frequent agony of being off by a factor of 2. Now when people make remarks about this ‘error’ on my mug, I’m quick to tell them that the factor of 2 mistake just makes it more authentic.


ISSP11 (I) — Forewords

Sunday, June 26th, 2011

——

It has been more than two months since my first blog post. There were several events that could have been good blogging topics, such as the traditional Chinese dragon boat festival, the launch of the AMS detector, and the recent indication of a large \(\theta_{13}\) by the T2K accelerator neutrino experiment. However, I was fully occupied and did not even have a chance to write a single word, which is not so good for a blogger. I need to write something down, so let me start with a recent interesting event: ISSP.

——

Walking the small paths of Erice (a beautiful town in Sicily, Italy) and breathing in the mingled air of the Mediterranean and this old town, I realized I was back here again. This is my second time in Erice to participate in the International School of Subnuclear Physics (ISSP). This year is the first of three devoted to celebrating the 50th anniversary of the school. Why will the celebration stretch over three years? The school was first started in 1961 by Prof. Antonino Zichichi and John Bell at CERN, formally established in 1962 in Geneva (CERN) with Bell, Blackett, Weisskopf and Rabi, and held its first course at Erice in June 1963. So there are three anniversaries to celebrate.

ISSP is a high-level school, run under the “Ettore Majorana” Foundation and Centre for Scientific Culture (EMFCSC). Experimental physicists like to talk about numbers; here are some about the EMFCSC activities since 1963:

123 schools, 1,497 courses, and 103,484 participants (124 of whom are Nobel laureates) coming from 932 universities and laboratories in 140 nations

ISSP is also very interesting. At the end of the school, twenty-three diplomas will be awarded to the best new talents by a committee composed of the lecturers and the invited scientists. Each diploma is named after a late physicist, to inspire the young talents. It really is like a ‘school’, rather than the summer schools I have attended or heard of.

It will be a great pleasure to discuss physics with top physicists such as S. Ting, A. Zichichi, G. ‘t Hooft, H. Fritzsch, P. Minkowski, and M. J. Tannenbaum.


–by T. “Isaac” Meyer, Head of Strategic Planning and Communications

I spent last night at the Vancouver Aquarium with some of my most talented colleagues and a few fish. We were attending the launch of the Vancouver branch office of the Science Media Centre of Canada. The event featured a panel discussion led by Canadian science icon Jay Ingram and a short reception in a darkened exhibit area surrounded by smiling sea animals. It was fantastic—and it prompted some existential conversations over bite-sized appies and the drive home.

The most important feature of the evening was that it was a PERFECT Vancouver evening. Literally: 65 °F, clear sky, amazing sunset. Oh, and then we went inside for the event.

A tough day in Vancouver.

Jay Ingram is a celebrity of Canadian science and communications. Most recently, he hosted and produced Discovery Channel’s Daily Planet, perhaps the most-watched and most-loved science show on Canadian television. For years, Jay would find something new in science, make it simple and inspiring, and work to share it with the public each day of the week. That’s commitment.

The panel included Lisa Johnson (CBC news reporter), Jennifer Gardy (BC CDC scientist and communicator), Candis Callison (UBC professor of journalism), and Marcello Pavan (a graduate of Quantum Diaries and TRIUMF’s outreach coordinator). Jay did something very clever and actually interviewed each of them separately on the stage for 3-4 minutes before starting the panel discussion. This provided an intimate conversation for the audience to get to know each panelist instead of the usual “prepared remarks going down the table.”

Lisa talked about the timeline of a story. She might find out at 10am what she has to research, interview, shoot, edit, and air by 6pm that same day. That means a 30-minute delay in reaching someone credible could be a deal breaker. Jennifer talked about how important it is to give journalists the freedom to choose the angle of the story that works for them; she also said that the highest honour a journalist can pay a scientist is the chance to review the final copy of the story for any errors. Candis spoke about the skyrocketing role of new media and the challenges of communicating science as it evolves and changes at the forefronts. Marcello talked about the challenge of talking to people who have already made up their minds; he said his #1 piece of advice to journalists interviewing scientists is to give up the idea that science is hard and that it’s too technical to make sense of. As a scientist, it’s hard to do an interview with someone who has already decided you speak gibberish and cannot be understood!

The Q&A discussion with the audience covered some tough topics.

When science or science results are unpopular, surprising, or complex, who is responsible for championing the cause and getting them out there? Everyone has heard examples and allegations of governments around the world muzzling scientists for sharing research results that undermine policy positions or policy decisions. Are scientists themselves accountable for fighting the machine and having their truths known? What role should the media play? And what about when scientists don’t know what the truth is, as in the first few days of the Fukushima disaster, when misinformation was 10 times more available than facts and yet everybody wanted a rock-solid assessment?

In the age of internet democracy, everyone and anyone can be a credible expert. It used to be that the newspaper was credible and if you saw it there, there were good odds it was true and verifiable. Nowadays, anyone can write a blog, run an online newspaper, or make a viral YouTube video that claims to be the truth. In some cases, crowd-sourced journalism can allow the public instant and immediate access to ground truth. In other cases, it means that a credible analysis can be excoriated by an anonymous user with only an e-mail address.

How can an organization like SMCC have an impact in this environment? The goal of SMCC is to raise the level of public discourse in Canada by helping journalists access evidence-based research. With this intention, the organization was formed to act as a bridge and a reliable clearinghouse and resource for scientists and the media alike. There was a lot of discussion about how to ensure that the organization could remain independent while also acting like a partner in the crucial moments when science hits the headlines. Likewise, instead of “science” sections in the newspapers, there is now science in almost every front-page story. SMCC will be helping the non-science reporters get the information they need so that the front-page headlines are accurate, timely, and useful to the public.

A fascinating evening and hats off to Jay Ingram and the panelists! Well done, and let’s do it again soon.


A rock < me < A hard place

Wednesday, June 22nd, 2011

Particle physics is hard work. I don’t just mean that the theory is difficult; I mean that the day to day work is difficult even if you know what you’re doing. The only thing more complicated than working with people is working with computers, and at CERN we have to do a lot of both. It’s even more difficult when you don’t know what you’re doing, and since we work in research there are plenty of things that nobody knows! In that kind of scenario, having more than a passing knowledge of some area of hardware or computing can earn you the title “expert”. The past two weeks have been a non-stop barrage of really hard things happening and tough choices about how to deal with them. The humidity and heat haven’t helped, only making it harder for everyone to concentrate! But today the skies opened up, the rain poured down, and the work finally became manageable again!

In the corridors of CERN, no one can hear you scream!

In the past couple of weeks I’ve had awkward dealings with the IT admins. I have a series of computing tasks which must run each night for the Trigger Rates group, and if they don’t work we run into problems with the monitoring of the trigger rates. Blips happen and jobs fail from time to time, but when it happens for more than a few days people get unhappy. Unfortunately these jobs required large resources, and as the software migrated from one version to another this led to huge problems with the allocation of resources. The IT people noticed this and contacted me immediately! What could I do? If I continued to run the jobs the problems with resources would continue, and I’d get into some serious trouble. If I stopped the jobs we’d have problems with the rates monitoring system. That’s not a nice position to be in! After speaking to the experts it looks like there’s a solution, and today I got an E-mail telling me how to get the jobs to work without annoying anyone. That’s one crisis averted!

If things were looking bad for my service work, what about my analysis? That was looking a bit shaky as well. I needed to perform a study on the data to see how often we see QCD events. (These are events that produce huge numbers of quarks and gluons, and they dominate the data. Our poor simulations just aren’t vast enough to model these backgrounds properly!) The problem involved using two pieces of software which were written by different people. That’s usually okay; I’d just E-mail the software experts for help and arrange an informal meeting. Unfortunately neither of these experts was available for various reasons, and when they became available I was still handling the trigger rates problem. So I was submitting analysis jobs to the GRID (the huge internationally distributed network of computers designed for handling analysis jobs) and hoping they would work. At around 2am last night, after failing to get things to work for the fifth time, I gave up and went to sleep. Then I awoke suddenly at 7am with a revelation! All I had to do was run a simple search to find out where the problem came from and change a few lines, and it should work fine. I resubmitted the jobs and went back to bed. Then this afternoon, with a few precious hours left before the deadline, I had to get the output, rescale it properly and perform a fit to count the QCD events. This part had never worked for me in the past, and I’d already failed to show results of the study twice. With 10 minutes left before the start of the meeting I changed a line of code, pressed enter, and it worked perfectly! After more than two weeks of struggling with GRID jobs, ROOT fits and late nights, it all fell into place very quickly.

Science should always be this much fun!

Now, spending that time on a study of the QCD events meant that I couldn’t spend it on other areas of the analysis. I would have liked to see if I could get a better efficiency with a different choice of trigger (picking out events with b-jets instead of missing energy). I was speaking to a colleague and he was under the impression that this was an easy change to make and would give me better results for very little effort. In a sense he was right: I’d just change one line of code and get more events. But the problem comes back to working with other people. My group is just one small part of a larger collaboration, and most of our time is spent coordinating our efforts. We perform cross check upon cross check to make sure that we agree on various definitions and values. It’s necessarily tedious, but it makes sure that we get it right. So if I were to change the choice of trigger I’d have to convince the other groups that this is a good idea, perform all kinds of studies and then perform cross checks with other people. That means weeks of work, and the deadlines for the European Physical Society (EPS) conference are rapidly approaching! On the one hand I want to see our work presented at EPS, but on the other hand I want to spend time improving the analysis. There just isn’t enough time to do both, so I had to make the decision to pursue EPS. It’s frustrating all on its own, but when a colleague doesn’t see the dilemma it’s made all the worse. After all, physics is fun! We all love to play around with different ideas and see how we can best improve our results. But bureaucracy is a necessary part of getting a respectable publication, so we must practice some restraint and spend time getting it right. It’s a bit like going on a road trip, only to get stuck in traffic! We’re moving forward slowly and safely, but I want to speed up and see the exciting parts.

So now that these stumbling blocks have passed, can I relax and enjoy the fruits of my work? Well, I can for one evening. Tomorrow I need to get started on the next project: a high profile talk to the Trigger experts on another part of my service work. That involves all the things that make this work complicated: coordinating responses from lots of people, each with their own agenda and niche knowledge, and then running jobs on computers (without getting in trouble with the IT admins!). Don’t get me wrong, I enjoy working with people and I enjoy it when the computers work properly, but it can quickly lead to information overload, and when it’s for a high profile talk it can also be stressful. On the other hand, it indicates that this work is coming to an end and I can focus on something else, such as control room shifts (and more blogging!)

It’s hard and at times it’s frustrating, but when it works properly physics is amazing. Days like today make the weeks that precede them worthwhile. I just hope I get more days like today and fewer stressful ones this summer! With so many interesting results around the corner, I want to be able to enjoy them all and have time to make the most of one of the most exciting years in the history of high energy physics.
