

Archive for May, 2012

This essay was motivated by a question from an engineering colleague. It would be presumptuous to say “friend,” since scientists and engineers are in a state of “friendly” rivalry, though not to the same extent as with the arts. I once saw a sign in an engineering department hallway that read: Friends do not let friends study arts. Be that as it may, my colleague’s question was why scientists do not show the same order in all their work as they show in writing papers. That question I will attempt to answer in this essay.

Engineering is far older than science, being perhaps the second oldest profession, dating back at least to the building of the pyramids (Imhotep from the 27th century BCE is the oldest named engineer) and Stonehenge and probably back to when the first club was engineered.  Stonehenge is amazing as it was probably built without the documentation that is the hallmark of modern engineering practice. Unfortunately, that means we do not know what the initial requirements[1] were and this has led to much futile speculation as to its purpose.

Science and engineering are sibling disciplines, frequently mentioned together, and they have much in common. The main similarity is that they both deal with the observable universe and are judged by their ability to make correct predictions regarding its behaviour. For example, that the Higgs boson will be found at the Large Hadron Collider (LHC) or that the building will not collapse in an earthquake. Secondarily, they use similar techniques, placing high importance on analytic reasoning, to the extent that Asperger’s syndrome is sometimes called the engineer’s disease. The relation between Asperger’s syndrome and engineers or scientists may be an urban myth, but it does indicate the relation of extreme analytic thought to both science and engineering. Solving problems in both relies on the same skills: analytic thinking and mathematics. Do not let anyone tell you that either does not require a high degree of intellectual activity.

Science and engineering rely on each other. Behind every engineering project is a great deal of science, from the basic understanding of Newtonian mechanics in the building of a bridge to the advanced materials science in the construction of a cell phone. Actually, the cell phone is a good example of all the science needed: it depends on Newtonian mechanics (the construction of the cell phone towers), quantum mechanics (the operation of the transistors), classical electromagnetism i.e. Maxwell’s equations (the propagation of the signal from the tower to the cell phone), materials science (almost all the cell phone itself), and general and special relativity (the GPS timing that is necessary in some cell phone technologies).

Equally, science is beholden to engineering, from simple things like the buildings that house scientific equipment to complicated things like the ATLAS detector at the LHC. Making a building may seem simple but, as I see with the new ARIEL building at TRIUMF, nothing is simple and even something as basic as a laboratory building relies on engineering expertise. The ATLAS detector is another story. Its size and complexity are a marvel of engineering virtuosity. Back to TRIUMF: the IEEE has recognized the TRIUMF cyclotron, commissioned in 1974 and the main driver for much of TRIUMF’s science program, as an Engineering Milestone. Even the slide rule I used back in ancient history as an undergraduate[2] was an engineering achievement.

Despite the close relationship between science and engineering the two are different. The difference can be summarized in this statement: “In engineering you do not start a project unless you know the answer while in science you do not start a project if you know the answer.” Engineering is based on everything being predictable; you do not start building a bridge unless you know you can complete it. In science, the purpose of a project is to answer a question to which the answer is currently unknown. For example, if the properties of the Higgs boson were known, it would not have been necessary to build the LHC. Good engineering practice is based on order but at the center of science is chaos. We are exploring the unknown; great discoveries can come from serendipity. In science, something not working as expected can lead to the next big breakthrough. In engineering, something not working as expected can lead to the bridge collapsing. Advances in science are frequently due to creativity, not following rules.

This difference in perspective leads to very different cultures in the two disciplines. The engineer is much more concerned with process and following procedure; the scientist, with following up his most recent hunch—after all, it could lead to a Nobel Prize.  Engineering versus science: order versus creative chaos. This is clearly an oversimplification as there is no clean separation between engineering and science, but it is a good indication of the divergence between the two mindsets. Thus, although engineering and science are closely related and indeed intertwined, the two, in their heart of hearts, are very different; engineering uses science in order to build and science uses engineering in order to explore.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] Project management jargon alert: requirements used in technical project management sense.

[2] HP produced the first pocket calculator when I was an undergraduate student.


Needle in a haystack

Thursday, May 10th, 2012

We are back to discussing B physics today, with the observation of the rare decay: \(B^- \rightarrow \pi^- \mu^+ \mu^-\). So what is this decay? It’s a \(B^-\) meson (made of a b and an anti-u quark) decaying into a \(\pi^-\) meson (made of a d and an anti-u quark) and two muons. And why is it so rare? Well, it’s a flavour changing neutral current decay. Which means that there’s a change in quark flavour in the decay, but not charge. This type of decay is forbidden at tree level in the Standard Model and so has to proceed via a loop, which can be seen in the centre of the Feynman diagram below.

If you look closer at the loop, you can see that for the decay to occur, a b quark needs to change flavour to a t or c quark, which then needs to change to a d quark. This is another reason why this decay is so rare. Transitions in quark flavour are governed by the CKM matrix, which I illustrate on the right, where the larger squares indicate more likely transitions. So while the transition from b to t is likely, the transition from t to d is very unlikely, and the b to c and c to d transitions are both fairly unlikely. This means that whichever path is taken, the b to d quark transition is very, very unlikely.
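To get a feel for the numbers, here is a minimal sketch using rounded, approximate CKM magnitudes (the kind of values quoted by the Particle Data Group; the exact figures are not the point, only their relative sizes):

```python
# Rough magnitudes of the relevant CKM matrix elements (approximate,
# rounded values; the point is their relative sizes, not precision).
V_tb, V_td = 0.999, 0.009   # b -> t likely, t -> d very unlikely
V_cb, V_cd = 0.041, 0.22    # b -> c and c -> d both fairly unlikely

# The amplitude for each path through the loop carries the product of
# the two CKM factors along it.
via_top   = V_tb * V_td
via_charm = V_cb * V_cd

print(f"b -> t -> d factor: {via_top:.4f}")
print(f"b -> c -> d factor: {via_charm:.4f}")
```

Either way the amplitude picks up a factor of order 10⁻², and since the rate goes as the amplitude squared, the CKM factors alone suppress the decay by roughly four orders of magnitude.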

Okay, now to the LHCb result. Below I have a plot of the fitted invariant mass for selected \(\pi^-\mu^+ \mu^-\) candidates, showing a clear peak for \(B^-\) decays (green long dashed line). Also shown are the backgrounds from partially reconstructed decays (red dotted line) and misidentified \(K^-\mu^+ \mu^-\) decays (black dashed line). Candidates for which the \(\mu^+ \mu^-\) pair is consistent with coming from a \(J/\psi\) or \(\psi(2S)\) are excluded.

We see around 25 \(B^- \rightarrow \pi^- \mu^+ \mu^-\) events and measure a branching ratio of approximately 2 per 100 million decays. This result makes this decay the rarest \(B\) decay ever observed!


Whenever we come across a new result one of the first things we ask is “How many sigma is it?!” It’s a strange question, and one that deserves a good answer. What is a sigma? How do sigmas get (mis)used? How many sigmas is enough?

The name “sigma” refers to the symbol for the standard deviation, σ. When someone says “It’s a one sigma result!” what they really mean is “If you drew a graph and measured a curve that was one standard deviation away from the underlying model then this result would sit on that curve.” Or to use a simple analogy: the mean height of adult males in the USA is 178 cm, with a standard deviation of 8 cm. A man measuring 170 cm tall would be one standard deviation below the mean, and we could say that he’s a one sigma effect. As you can probably guess, saying something is a one sigma effect is not very impressive. We need to know a bit more about sigmas before we can say anything meaningful.
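In code, the “number of sigmas” is nothing more than the familiar z-score. A minimal sketch using the height example above:

```python
# "How many sigmas" is just the distance from the mean in units of
# the standard deviation: z = (x - mean) / sigma.
mean, sigma = 178.0, 8.0   # height example: mean and std. dev. in cm
x = 170.0                  # the man in the example

z = (x - mean) / sigma
print(z)  # -1.0, i.e. a one sigma (downward) effect
```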

The term sigma is usually used for the Gaussian (or normal) distribution, and the normal distribution looks like this:

The normal distribution

The area under the curve tells us the population in that region. We can color in the region that is more than one sigma away from the mean on the high side like this:

The normal distribution with the one sigma high tail shaded

This accounts for about one sixth of the total, so the probability of getting a one sigma fluctuation upward is about 16%. If we include the downward fluctuations (on the low side of the peak) as well, this becomes about 32%.

If we color in a few more sigmas, we can see that the probabilities of getting a two, three, four, or five sigma effect above the underlying distribution are about 2%, 0.1%, 0.003%, and 0.00003%, respectively. A five sigma result is much more than five times as impressive as a one sigma result!
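These tail probabilities are easy to reproduce. A short sketch using only the standard library, based on the fact that the one-sided Gaussian tail is ½·erfc(n/√2):

```python
from math import erfc, sqrt

def upper_tail(n_sigma):
    """Probability of a Gaussian fluctuation at least n_sigma
    above the mean (one-sided tail)."""
    return 0.5 * erfc(n_sigma / sqrt(2))

for n in range(1, 6):
    print(f"{n} sigma: {100 * upper_tail(n):.5f}%")
# 1 sigma: ~16%, 2 sigma: ~2.3%, 3 sigma: ~0.13%,
# 4 sigma: ~0.003%, 5 sigma: ~0.00003%
```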

The normal distribution with each sigma band shown in a different color. Within one sigma is green, two sigma is yellow, three sigma is... well can you see past the second sigma?

When confronted with a result that is (for example) three sigma above what we expect we have to accept one of two conclusions:

  1. the distribution shows a fluctuation that has a one in 500 chance of happening
  2. there is some effect that is not accounted for in the model (eg a new particle exists, perhaps a massive scalar boson!)

Unfortunately it’s not as simple as that, since we have to ask ourselves “What is the probability of getting a one sigma effect somewhere in the distribution?” rather than “What is the probability of getting a one sigma effect for a single data point?”. Let’s say we have a spectrum with 100 data points. The probability that every single one of those data points will be within the one sigma band (upward and downward fluctuations) is 68% to the power 100, or \(2\times 10^{-17}\), a tiny number! In fact, we should be expecting one sigma effects in every plot we see! By comparison, the probability that every point falls within the three sigma band is 76%, and for five sigma it’s so close to 100% it’s not even worth writing out.
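A sketch of the same arithmetic: the chance that every one of 100 points stays inside a ±n sigma band is the single-point probability raised to the 100th power.

```python
from math import erf, sqrt

def within_band(n_sigma):
    """Probability a single Gaussian point lies within +/- n_sigma."""
    return erf(n_sigma / sqrt(2))

n_points = 100
for n in (1, 2, 3, 5):
    p_all = within_band(n) ** n_points
    print(f"all {n_points} points inside +/-{n} sigma: {p_all:.3g}")
# 1 sigma: ~2e-17 (one sigma excursions are essentially guaranteed),
# 3 sigma: ~0.76, 5 sigma: indistinguishable from 1
```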

A typical distribution with a one sigma band drawn on it looks like the plot below. There are plenty of one and two sigma deviations. So whenever you hear someone say “It’s an X sigma effect!” ask them how many data points there are. Ask them what the probability of seeing an X sigma effect is. Three sigma is unlikely for 100 data points. Five sigma is pretty much unheard of for that many data points!

A typical distribution of simulated data with a one sigma band drawn.

So far we’ve only looked at statistical effects, and found the probability of getting an X sigma deviation due to fluctuations. Let’s consider what happens with systematic uncertainties. Suppose we have a spectrum that looks like this:

A sample distribution with a suspicious peak.

It seems like we have a two-to-three sigma effect at the fourth data point. But if we look more closely we can see that the fifth data point looks a little low. We can draw one of three conclusions here:

  1. the distribution shows a fluctuation that has a one in 50 chance of happening (when we take all the data points into account)
  2. there is some effect that is not accounted for in the model
  3. the model is correct, but something is causing events from one data point to “migrate” to another data point

In many cases the third conclusion will be correct. There are all kinds of non-trivial effects which can change the shape of the data points, push events around from one data point to another and create false peaks where really, there is nothing to discover. In fact I generated the distribution randomly and then manually moved 20 events from the 5th data point to the 4th data point. The correct distribution looks like this:

The sample distribution, corrected.
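The toy can be reproduced in a few lines. This is only a sketch of how I understand the plots above were made (a flat spectrum with Gaussian fluctuations of size √N, then 20 events moved by hand from the 5th point to the 4th); the seed and bin count here are arbitrary choices:

```python
import random
from math import sqrt

random.seed(1)  # arbitrary; any seed gives the same qualitative picture

# Flat toy spectrum: 10 data points, each expecting 100 events, with
# statistical fluctuations of size sqrt(100) = 10.
expected = 100.0
counts = [random.gauss(expected, sqrt(expected)) for _ in range(10)]

# The "systematic" effect: migrate 20 events from the 5th point to
# the 4th, creating a fake peak next to a suspicious dip.
counts[3] += 20
counts[4] -= 20

# Naive per-point significance against the flat model:
for i, c in enumerate(counts, start=1):
    z = (c - expected) / sqrt(expected)
    print(f"point {i:2d}: {c:6.1f} events  ({z:+.1f} sigma)")
```

The migration alone is worth +2 sigma at the fourth point and −2 sigma at the fifth before any statistical fluctuation is added, which is exactly the peak-plus-dip signature to be suspicious of.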

So when we throw around sigmas in conversation we should also ask people what the shape of the data points looks like. If there is a suspicious downward fluctuation in the vicinity of an upward fluctuation be careful! Similarly, if someone points to an upward fluctuation while ignoring a similarly sized downward fluctuation, be careful! Fluctuations happen all the time, because of statistical effects and systematic effects. Take X sigma with a pinch of salt. Ask for more details and look at the whole spectrum available. Ask for a probability that the effect is due to the underlying model.

Most of the time it’s a matter of “A sigma here, a sigma there, it all balances out in the end.” It’s only when the sigmas continue to pile up as we add more data that we should start to take things seriously. Right now I’d say we’re at the point where a potential Higgs discovery could go either way. There’s a good chance that there is a Higgs at 125 GeV, but there’s also a reasonable chance that it’s just a fluctuation. We’ve seen so many bumps and false alarms over the years that another one would not be a big surprise. Keep watching those sigmas! The magic number is five.


Hi All! Today marks the beginning of the Phenomenology 2012 Symposium, Pheno for short, or #Pheno2012 if you are into hashtags, here at the University of Pittsburgh.


Phenomenology 2012 Symposium Poster


It will definitely be an exciting three days because this conference is dedicated solely to promoting the partnership and collaboration between experimentalists and theorists. For experimentalists, this is a grand opportunity to learn about new theories that may actually be testable at the Large Hadron Collider; it is also a chance to learn about new ways to test well-known ideas. Similarly, for theorists, this is an opportunity to learn about the fine details of a particular search for new physics. It is one thing to rule out the existence of certain particles (like squarks!); it is an entirely separate situation if special caveats were assumed (like most every search for squarks!).

From Tokyo, to Hawaii, to Heidelberg, hundreds of particle physicists from around the world are assembling for what will be a great melding of minds. Even a couple of fellow QDers, including Flip Tanedo and Corrinne Mills, will be in attendance. In fact, Corrinne has the star-studded honor of giving the first talk and will be presenting the latest Standard Model results from the ATLAS and CMS experiments. (Good luck!)


Updates from ATLAS, CMS, and LHCb will definitely be available via #Pheno2012, and, as always, Happy Colliding.

– richard (@bravelittlemuon)

P.S. The detector experiments have already received 1 fb⁻¹ worth of proton-proton collisions.

CERN's Official LHC Luminosity Plots for 2012 proton-proton Run.


Getting out

Sunday, May 6th, 2012

I wrote this while I was working on my thesis, and never got a chance to polish and post it. Since then, I’ve survived my interviews, defended my thesis, accepted a postdoc, and started working on two new exciting experiments. I got to travel to China for one of them! This week I’m finishing my relocation and moving from the lab dorms to an actual apartment. Complete normalcy finally resumes.

When you type ‘getting in to’ in the Google search bar, one of the four auto-complete recommendations you get is ‘grad school.’ It’s appropriate considering it’s quite a big deal, both in terms of its significance – you’re starting the next stage of life – and in terms of how much effort it takes. It’s stressful because there is (almost) one and only one way to do it. There are strict deadlines; there is a set of tests you need to take, offered on one of two days; there is a set number of recommendation letters, and even a word or page limit on the application essay. To make it even more structured, this all takes place over the same few months each year, with applications submitted around winter break and offers made mid-spring for the term starting the following fall. It’s a lot to do in a short time and challenging for an undergraduate, so they tell you exactly how to do it. Professors, research advisors and graduate students are full of advice, sometimes unsolicited. There is an entire industry devoted to applying to graduate school, from test prep to rankings of different departments.

Getting out of graduate school is no smaller feat. This time, though, there is no one way to do it. “When should I start writing my thesis?” you ask, or “when should I start applying for postdocs?” and you get the same answer: “It’s never too early.” Right. For more specific questions everyone has a unique answer based on their experience. It only leaves you wondering what your answer will be in a couple of years when grad students are asking you.

It’s a scheduling game: finalizing the analysis, writing and defending a thesis, and lining up a job for when you’re done. These all overlap and are correlated, of course. You need to apply and interview for jobs, which involves giving a seminar. The analysis needs to be almost finalized and approved by the collaboration for public presentation before you can give a seminar. Applications take a negligible amount of time, but if you get an interview, which is what you want to happen, you have to make time to prepare and often travel for it, all on short notice.

If you get an offer you need to respond within a few weeks, but most likely there are other offers and interviews with non-overlapping response deadlines. You want to wait for other options but worry you’ll miss the one, and at the time only, job offer you have. While everyone’s telling you to make your own decision and choose what you really want to work on, the same people are also offering their opinion or pressing you to decide on their own offer. You frequently find yourself about to make a major career decision out of exhaustion.

If and when you accept an offer, it poses a hard deadline for when you must be finished. If, like me, you need a visa for the job, they require that you have your degree before they can obtain the visa. A bit of a cyclical problem, one I’ve yet to solve. The employer wants you to start as soon as possible; their detector is taking data. The university wants you to allow ample time for the defense committee to read the manuscript. The advisor wants you to write the best document possible. Add to all of this up to two intercontinental moves – one from your overseas experiment to your university, another for your next job – and you suffer a minor anxiety attack. (This last one is not the case for me, but the decision to only apply to local jobs was a deliberate one with pros and cons.)

I’m only partly done with all of this, and maybe my relaxation is the result of getting too used to being stressed, but I’ve come to the conclusion that I won’t fret it. I’ll work hard and stop worrying about things outside my control. I’ll even allow myself to dream about two months from now, with a fresh title and a new experiment. Stepping away from the computer and going for a run (or a pint depending on time of day) has been the best solution to my temporary misery throughout the process of getting out. If you’ve made it this far, you’ll make it all the way. It turns out there is no right way to do it, but there’s no wrong way either.

Type ‘getting out of’ into Google and it autocompletes with ‘debt’! Ironically, this is never too low on a 7th year graduate student’s to-do list. 🙂


A high-speed train is racing toward London King’s Cross through the countryside of eastern England, lit by the morning sun. Munching on a bad panini and sipping bad coffee, I am having a happy time.



This was my first visit to Durham University, but since it is home to Sutcliffe, who wrote the book on solitons with Manton, and to Ward, I was excited, being a soliton lover myself. I had never met Sutcliffe before, but the moment we met we plunged into a heated argument about the size of BPS solitons at large topological number. Our claims seemed to contradict each other, so things got quite exciting. In the end, though, we pinpointed exactly where the seemingly contradictory stories diverged, and we both grinned. Then we discussed research that has not yet been published. Three hours after first meeting, without exchanging a word about each other’s culture, upbringing, or life, we could throw at each other what each of us had been thinking about for years. This is precisely the real pleasure of theoretical physics.







In Defense of Jargon

Friday, May 4th, 2012

Jargon: even the name has a harsh ring to it. Can anyone but an author love a title like[1]: Walking near a Conformal Fixed Point: the 2-d O(3) Model at theta near pi as a Test Case? “How can anyone take science seriously when it uses so much jargon?” said the teamster[2] as he told his helper to fasten the traces to the whiffletree and check the tugs and hames straps. Jargon is everywhere and not unique to science.  While you may not understand what the teamster is talking about, my father would have understood instantly and then gone to get a jag of wood.

But back to jargon.  To the uninitiated the above title, like the teamster’s words, seems like so much gobbledygook.  But to the initiated, those working in the field, it is a precise statement and easily understood.  Trying to put the title, or the teamster’s words, in a form understandable to the layperson would have been a fool’s errand. In making it understandable to a more general audience, the precision would have been lost and we would probably never have gotten that jag of wood.  That would have been unfortunate as Nova Scotian winters can be cold.

One of the principles of all good writing is to tailor the communication to the intended audience.  When I am helping put together a report for TRIUMF, the instructions to the authors always include a statement about the intended audience.  Even then, the good authors frequently ask me to make the description of the intended audience more precise.  Life gets more complicated when a document has more than one intended audience. Then it is necessary to have a layered document where introductory sections are understandable by an intelligent layperson while the later sections are directed at the specialist. One is reminded of the old joke about the structure of a good seminar: the speaker starts at a level understandable by anyone, and then as the seminar progresses he becomes more technical and less understandable, so that by the end even the speaker does not know what he is talking about.  Well, perhaps that is getting a little carried away, but one can err on either side, by making the writing too technical for the audience or not technical enough.

Similarly, the reader has to realize that the writing may not be directed at him or her. We, as people with technical expertise, have to be careful not to judge non-technical writing too harshly because it does not capture all the subtle nuances we are aware of. Including them would lose the layperson. It is a fine line between not confusing the layman and misleading him. When I am reading an article directed at a general audience, on a topic I am an expert in, I find I have to translate the layman’s language back to the technical language before I can understand it. That is as it should be.

Conversely, in fields we are not experts in, we should not criticize technical writing as being too filled with jargon. This latter mistake is made frequently by politicians and commentators who criticize technical writing out of ignorance. Few have the wisdom of the former Canadian Prime Minister, Pierre Elliott Trudeau, who said on opening TRIUMF, “I do not know what a cyclotron is, but I am glad Canada has one.” It is a rare politician who has the confidence to admit ignorance.  As an undergraduate student, I picked up a copy of Rose’s book: Elementary Theory of Angular Momentum. That is when I learned one should be leery of books with elementary in the title[3]. If that is an elementary book, I would hate to have to read an advanced one. It is a good book but I, at that stage in my career, was not the intended audience.

Words only have meaning within the context they are used.  When used with a person possessing a similar background, the context does not have to be spelled out. Thus, in conversation with a colleague I have worked with for some time a lot is understood without being stated explicitly. Jargon speeds up communication and makes it less prone to misunderstanding. On the other hand, with people who are not acquainted with the field, we have to spell out the background assumptions and suppress the details that are only of interest to the expert.

In the end, it is quite unfortunate that jargon has been abused and hence has received a bad name.  In technical writing, jargon or technical terms are not only acceptable but necessary. So press on and employ jargon, but only where appropriate.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

[1] First title on the lattice archive the day I checked to get an example.

[2] The kind that drives horses.

[3] Books with elementary in the title are usually advanced while those with advanced in the title are usually elementary.


If you can read this text, it is thanks to the World Wide Web, a product of the basic research carried out at CERN. The web was invented at CERN to provide a means of communication for high-energy physicists scattered across different continents. Its impact on society is beyond doubt, changing forever the way we communicate and live.

But the web would still be unknown to the general public without knowledge transfer, an approach that aims to find applications for developments arising from basic research. The Knowledge Transfer group tries to multiply such examples, and its work is an integral part of CERN’s mission.

The laboratory’s main goal is to develop knowledge about the nature of the matter and the universe that surround us. But in doing so, we must constantly push the limits of technology beyond what currently exists, ceaselessly developing more capable tools. Nowadays, that also means doing so while respecting the environment and keeping costs down.

Every time new detectors or accelerators are built, they absolutely must outperform their predecessors. We either do it ourselves, calling on the hundreds of universities and institutes associated with CERN, or we ask commercial partners to take up the challenge: faster electronics, lighter materials, more efficient cooling or smarter algorithms.

By procuring equipment that does not yet exist, CERN drives technological development and encourages innovation among companies in the Member States. Alternatively, scientists who have developed an innovative idea find applications for it outside the field of particle physics. These inventors can then benefit from the support of CERN’s Knowledge Transfer group. The team advises and assists them on every aspect of intellectual property management, and offers its expertise in multidisciplinary activities for applications in the life sciences.

The Knowledge Transfer group must first establish whether the concept is new, then looks for potential external partners to develop and market the idea. Of course, when collaborating with the business world, CERN must play by its rules. A guarantee of exclusivity and an economic return are often the aspects that attract commercial partners. CERN then grants them a licence to exploit the technology.

By contrast, for the World Wide Web, no patent was ever filed, in order to ensure free and as wide a dissemination as possible. Today, CERN sometimes takes out patents to stimulate the interest of commercial partners. For certain technologies, this is the only way to attract industry and bring them to market. A third of the revenue generated in this way goes into the Knowledge Transfer Fund to develop new projects, and the rest returns to CERN’s technical and scientific departments.

Sometimes a partner is another research institute, as is the case with CIEMAT, the Spanish funding agency for science and technology. In partnership with CERN, the hope is to build particle accelerators called cyclotrons for producing micro-doses of the radioisotopes needed for medical imaging.

These radioisotopes are short-lived and must therefore be produced very close to where the medical examination takes place. The hope is that this cyclotron will be small enough to be installed in any hospital.

An important part of the activities of CERN’s Knowledge Transfer group is the promotion of multidisciplinary activities in the life sciences. CERN is involved in various projects related to medical imaging, hadron therapy, radiobiology and e-health, as well as the training of researchers in these multidisciplinary fields.

Applications in medical imaging are the most frequent spin-offs. Hardly surprising, since our detectors are nothing more nor less than ultra-sophisticated cameras that let us capture fleeting images of collisions of particles invisible to the naked eye. The techniques used in physics have therefore been widely adapted to improve the precision of medical diagnoses.

CERN has also supported the development of high-performance solar panels for producing hot water and heating. The assembly consists of pipes placed in front of cylindrical mirrors that reflect and concentrate solar radiation, including diffuse light. The pipes sit inside a vacuum enclosure, which eliminates much of the heat loss since the vacuum acts as an insulator (like a thermos bottle). A final special CERN touch: the vacuum enclosure is fitted with a getter pump, a device for capturing residual gas molecules, originally developed to improve the quality of the vacuum in CERN’s accelerators. Geneva Airport is installing 300 of these panels to heat its main terminal.

One of the best ways to spread ideas is through teachers. Every year, CERN welcomes more than a thousand teachers who come to meet scientists and visit the experiments and various laboratories, and who in turn will share their enthusiasm for basic research with hundreds of students in the years that follow.

Knowledge transfer is booming at CERN and will continue to promote initiatives aimed at maximizing the impact of basic research on different sectors of society and encouraging innovation.

Pauline Gagnon

To be notified when new blogs appear, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this mailing list



If you can read this right now, it is thanks to the World Wide Web, a product of basic research done at CERN. The web was invented at CERN to provide a communication tool for high-energy physicists working on different continents. Its impact on society has been tremendous, changing forever the way we communicate and even the way we live.

But the web would have remained an internal product without “knowledge transfer”, a process that aims at finding applications in other fields for developments coming from basic research. CERN’s Knowledge Transfer group tries to multiply such examples, and its work is an integral part of CERN’s mission.

The lab’s primary goal is to conduct scientific research to develop knowledge on the nature of matter and better understand the Universe we live in. But in the process of achieving this, we are constantly pushing technology beyond its current limits, developing ever higher-performing tools. In this day and age, this also means trying to do it in a cost- and resource-effective way, respectful of the environment.

Every time a new detector or a new accelerator is built, we must design the components that will allow us to do better than last time. We either do it ourselves in the hundreds of universities and institutes associated with CERN, or we work with industrial partners to develop equipment that will meet these challenging requirements: faster electronics, lighter materials, better cooling or smarter algorithms.

Procurement of novel equipment is one of the ways CERN drives technological development, promoting innovation within companies from CERN Member States. Another way is through scientists who develop new ideas and see practical applications for them outside high-energy physics. CERN inventors can then benefit from the support of the Knowledge Transfer group. The team advises on every aspect of technology transfer and intellectual property management, and provides expertise in multidisciplinary activities relevant to life sciences applications.

The Knowledge Transfer group first needs to establish whether the new concept is unique, then seeks potential external partners who could further develop and market the idea. Of course, when dealing with the business world, CERN must play by business-world rules. Guaranteeing exclusivity and an economic return is usually what interests business partners, so agreements are drafted in which CERN grants licenses to regulate the commercial exploitation of the technology.

Unlike with the World Wide Web, where no patent was taken so as to ensure free access for everybody, patents are sometimes filed for new technologies as a means of attracting commercial partners. For some technologies, this is the only way to attract industry and bring them to market. A third of the income generated is reinvested in a Knowledge Transfer Fund to develop new projects, while the remaining two thirds go to CERN’s technical and scientific departments.

Sometimes the partner is another research institute. This is the case right now with CIEMAT, the Spanish science and technology funding agency, which entered a partnership with CERN to develop particle accelerators called “cyclotrons” to produce micro-doses of radioisotopes needed for medical imaging.

Radioisotopes are short-lived and need to be produced at or near the medical centre. This cyclotron must therefore be small enough to fit within any hospital, making the production of single-patient doses possible.

An important part of CERN Knowledge Transfer is the active promotion of multidisciplinary activities in the field of life sciences. CERN is involved in various projects connected to medical imaging, particle therapy, radiobiology, e-health and training of young researchers in these multidisciplinary fields.

Applications to medical imaging are among the most obvious spin-offs, since CERN’s detectors are essentially high-tech cameras capable of catching what is invisible to the eye. Being good at taking pictures of extremely fleeting events, physicists can export their skills to improve medical imaging devices.

CERN also supported the development of highly efficient solar panels to produce hot water for heating and cooling purposes. These devices consist essentially of a water circuit placed in front of cylindrical mirrors that catch even diffuse light. The pipes are contained within a vacuum-sealed panel, eliminating heat losses since the vacuum acts as an insulator, rather like in a thermos. CERN’s special touch here is the addition in the collector of a “getter pump”, a device based on materials and thin-film coating technologies developed to improve the long-term vacuum quality in accelerator beam pipes by capturing residual gas molecules. Geneva International Airport is in the process of equipping its roof with roughly 300 such panels, which will heat the airport’s main building.

One of the best long-term means of knowledge transfer is through teachers. Every year, CERN welcomes over a thousand high-school teachers who get the opportunity to meet CERN’s scientists and visit different experiments and laboratories, and who, hopefully, will later share how exciting basic research can be with hundreds of their students in the years to follow.

Knowledge Transfer is thriving at CERN and will continue to promote initiatives to maximize the benefits of basic research to different sectors of society and drive innovation.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline, or sign up on this mailing list to receive an e-mail notification.