What are Sterile Neutrinos?

Sunday, July 27th, 2014

Sterile Neutrinos in Under 500 Words

Hi Folks,

In the Standard Model, we have three groups of particles: (i) force carriers, like photons and gluons; (ii) matter particles, like electrons, neutrinos and quarks; and (iii) the Higgs. Each force carrier is associated with a force. For example: photons are associated with electromagnetism, the W and Z bosons are associated with the weak nuclear force, and gluons are associated with the strong nuclear force. In principle, any particle (matter, force carrier, or the Higgs) can carry a charge associated with some force. Whenever that is the case, the charged particle can absorb or radiate the corresponding force carrier.

Credit: Wikipedia

As a concrete example, consider electrons and top quarks. Electrons carry an electric charge of −1 and a top quark carries an electric charge of +2/3. Both the electron and top quark can absorb/radiate photons, but since the magnitude of the top quark’s electric charge is smaller than the electron’s, it will not absorb/emit a photon as often as an electron. In a similar vein, the electron carries no “color charge”, the charge associated with the strong nuclear force, whereas the top quark does carry color and interacts via the strong nuclear force. Thus, electrons have no idea gluons even exist, but top quarks can readily emit/absorb them.

Neutrinos possess a weak nuclear charge and hypercharge, but no electric or color charge. This means that neutrinos can absorb/emit W and Z bosons and nothing else. Neutrinos are invisible to photons (the particles of light) as well as gluons (the particles of the color force). This is why it is so difficult to observe neutrinos: the only way to detect a neutrino is through the weak nuclear interactions, which are much feebler than electromagnetism or the strong nuclear force.

Sterile neutrinos are like regular neutrinos: they are massive, spin-1/2 matter particles that possess no electric or color charge. The difference, however, is that sterile neutrinos do not carry weak nuclear charge or hypercharge either. In fact, they do not carry any charge, for any force. This is why they are called “sterile”: they are free from the influence of the Standard Model forces.
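
To make the bookkeeping above concrete, here is a small illustrative sketch (not anything from an actual physics library): each particle gets yes/no flags for the charges discussed in this post, and a particle can emit or absorb a force carrier only if it carries the matching charge. The flags deliberately oversimplify the real electroweak quantum numbers.

```python
# Simplified illustration: which force carriers can a particle emit/absorb?
# Charges are reduced to yes/no flags; real electroweak charges are richer.
PARTICLES = {
    "electron":         {"electric": True,  "color": False, "weak": True},
    "top quark":        {"electric": True,  "color": True,  "weak": True},
    "neutrino":         {"electric": False, "color": False, "weak": True},
    "sterile neutrino": {"electric": False, "color": False, "weak": False},
}

CARRIER_FOR_CHARGE = {"electric": "photon", "color": "gluon", "weak": "W/Z boson"}

def couplings(name):
    """List the force carriers this particle can emit or absorb."""
    charges = PARTICLES[name]
    return [boson for charge, boson in CARRIER_FOR_CHARGE.items() if charges[charge]]

for name in PARTICLES:
    print(f"{name}: {couplings(name) or 'none (sterile)'}")
```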

Credit: somerandompearsonsblog.blogspot.com

The properties of sterile neutrinos are simply astonishing. For example: since they have no charge of any kind, they can in principle be their own antiparticles (the infamous “sterile Majorana neutrino”). As they are not tied to either the strong nuclear scale or the electroweak symmetry breaking scale, sterile neutrinos can, in principle, have an arbitrarily large or small mass. In fact, very heavy sterile neutrinos might even be dark matter, though this is probably not the case. However, since sterile neutrinos do have mass, and at low energies they act just like regular Standard Model neutrinos, they can participate in neutrino flavor oscillations. It is through this subtle effect that we hope to find sterile neutrinos, if they exist.
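
As a rough illustration of what an oscillation measurement probes, here is the standard two-flavor vacuum oscillation formula, P = sin²(2θ) sin²(1.27 Δm² L/E), in a few lines of Python; the Δm², mixing, baseline and energy values below are made-up placeholders, not measured sterile-neutrino parameters.

```python
import math

def oscillation_probability(delta_m2_eV2, sin2_2theta, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * math.sin(1.27 * delta_m2_eV2 * L_km / E_GeV) ** 2

# Made-up parameters, chosen only to land near an oscillation maximum.
print(oscillation_probability(delta_m2_eV2=1.0, sin2_2theta=0.1,
                              L_km=0.5, E_GeV=0.4))   # ~0.1
```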

Credit: Kamioka Observatory/ICRR/University of Tokyo

Until next time!

Happy Colliding,

Richard (@bravelittlemuon)

 


It’s Saturday, so I’m at the coffee shop working on my thesis again. It’s become a tradition over the last year that I meet a writer friend each week, we catch up, have something to drink, and sit down for a few hours of good-quality writing time.


The work desk at the coffee shop: laptop, steamed pork bun, and rosebud latte.

We’ve gotten to know the coffee shop really well over the course of this year. It’s pretty new in the neighborhood, but dark and hidden enough that business is slow, and we don’t feel bad keeping a table for several hours. We have our favorite menu items, but we’ve tried most everything by now. Some mornings, the owner’s family comes in, and the kids watch cartoons at another table.

I work on my thesis mostly, or sometimes I’ll work on analysis that spills over from the week, or I’ll check on some scheduled jobs running on the computing cluster.

My friend Jason writes short stories, works on revising his novel (magical realism in ancient Egypt in the reign of Rameses XI), or drafts posts for his blog about the puzzles of the British constitution. We trade tips on how to organize notes and citations, and how to stay motivated. So I’ve been hearing a lot about the cultural difference between academic work in the humanities and the sciences. One of the big differences is the level of citation that’s expected.

As a particle physicist, when I write a paper it’s very clear which experiment I’m writing about. I only write about one experiment at a time, and I typically focus on a very small topic. Because of that, I’ve learned that the standard for making new claims is that you usually make one new claim per paper, and it’s highlighted in the abstract, introduction, and conclusion with a clear phrase like “the new contribution of this work is…” It’s easy to separate which work you claim as your own and which work is from others, because anything outside “the new contribution of this work” belongs to others. A single citation for each external experiment should suffice.

For academic work in history, the standard is much different: the writing itself is much closer to the original research. As a start, you’ll need a citation for each quote, going to sources that are as primary as you can get your hands on. The stranger idea for me is that you also need a citation for every analytical idea that someone else has come up with, and that a statement without a citation is automatically claimed as original work. This shows up in the difference between Jason’s posts about modern constitutional issues and historical ones: the historical ones have huge source lists, while the modern ones are content with a few hyperlinks.

In both cases, things that are “common knowledge” don’t need to be cited, like the fact that TeV cosmic rays exist (they do) or the year that Elizabeth I ascended the throne (1558).

There’s a difference in the number of citations between modern physics research and history research. Is that because of the timing (historical versus modern) or the subject matter? Do they have different amounts of common knowledge? For modern topics in both physics and history, the sources are available online, so a hyperlink is a perfect reference, even in a formal post. By that standard, all Quantum Diaries posts should be fine with the hyperlink citation model. But even in those cases, Jason puts footnoted citations to modern articles in the JSTOR database, and uses more citations overall.

Another cool aspect of our coffee shop is that the music is sometimes ridiculous, and it interrupts my thoughts if I get stuck in some esoteric bog. There’s an oddly large sample of German covers of 30s and 40s showtunes. You haven’t lived until you’ve heard “The Lady is a Tramp” in German while calculating oscillation probabilities. I’m kidding. Mostly.

Jason has shown me a different way of handling citations, and I’ve taught him some of the basics of HTML, so now his citations can appear as hyperlinks to the references list!

As habits go, I’m proud of this social coffee shop habit. I default to getting stuff done, even if I’m feeling slightly off or uninspired.  The social reward of hanging out makes up for the slight activation energy of getting off my couch, and once I’m out of the house, it’s always easier to focus.  I miss prime Farmers’ Market time, but I could go before we meet. The friendship has been a wonderful supportive certainty over the last year, plus I get some perspective on my field compared to others.


This article appeared in Fermilab Today on July 24, 2014.

Fermilab engineer Jim Hoff has invented an electronic circuit that can guard against radiation damage. Photo: Hanae Armitage

Fermilab engineer Jim Hoff has received patent approval on a very tiny, very clever invention that could have an impact on the aerospace, agriculture and medical imaging industries.

Hoff has engineered a widely adaptable latch — an electronic circuit capable of remembering a logical state — that suppresses a commonly destructive circuit error caused by radiation.

There are two radiation-based errors that can damage a circuit: total dose and single-event upset. In the former, the entire circuit is doused in radiation and damaged; in an SEU, a single particle of radiation delivers its energy to the chip and alters a state of memory, which takes the form of 1s and 0s. An altered state of memory equates to an unintentional flip between logical 1 and logical 0 and ultimately leads to loss of data or imaging resolution. Hoff’s design is essentially a chip immunization, preemptively guarding against SEUs.

“There are a lot of applications,” Hoff said. “Anyone who needs to store data for a length of time and keep it in that same state, uncorrupted — anyone flying in a high-altitude plane, anyone using medical imaging technology — could use this.”

Past experimental data showed that, at any given total-ionizing radiation dose, the latch reduces single-event upsets by a factor of about 40. Hoff suspects that the invention’s newer configurations will yield at least two orders of magnitude of single-event upset reduction.

The invention is fondly referred to as SEUSS, which stands for single-event upset suppression system. It’s relatively inexpensive and designed to integrate easily with a multitude of circuits — all that’s needed is a compatible transistor.
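
The article doesn’t spell out the internals of SEUSS, so as a stand-in, here is a toy Monte Carlo of the oldest SEU-mitigation trick, triple modular redundancy with majority voting. It is only meant to illustrate why redundancy suppresses single-bit upsets, not to describe Hoff’s circuit; the upset probability and trial count are arbitrary.

```python
import random

def majority(a, b, c):
    """Majority vote over three stored copies of one bit."""
    return 1 if (a + b + c) >= 2 else 0

def simulate(n_trials=100_000, upset_prob=0.01, seed=1):
    """Toy comparison of a single latch vs. a triple-redundant, voted latch.
    Each stored copy is independently flipped with probability upset_prob
    (a stand-in for a single-event upset); the true stored bit is 1."""
    rng = random.Random(seed)
    single_errors = voted_errors = 0
    for _ in range(n_trials):
        flips = [rng.random() < upset_prob for _ in range(3)]
        single_errors += flips[0]                    # unprotected latch
        copies = [1 ^ int(f) for f in flips]         # redundant copies after upsets
        voted_errors += (majority(*copies) != 1)     # voted latch loses the bit
    return single_errors / n_trials, voted_errors / n_trials

print(simulate())  # roughly (0.01, 0.0003): the voted latch flips far less often
```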

Hoff’s line of work lies in chip development, and SEUSS is currently used in some Fermilab-developed chips such as FSSR, which is used in projects at Jefferson Lab, and Phoenix, which is used in the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.

The idea of SEUSS was born out of post-knee-surgery, bed-ridden boredom. On strict bed rest, Hoff’s mind naturally wandered to engineering.

“As I was lying there, leg in pain, back cramping, I started playing with designs of my most recent project at work,” he said. “At one point I stopped and thought, ‘Wow, I just made a single-event upset-tolerant SR flip-flop!’”

While this isn’t the world’s first SEU-tolerant latch, Hoff is the first to create a single-event upset suppression system that is also a set-reset flip-flop, meaning it can take the form of almost any latch. As a flip-flop, the adaptability of the latch is enormous and far exceeds that of its pre-existing latch brethren.

“That’s what makes this a truly special latch — its incredible versatility,” said Hoff.

From a broader vantage point, the invention is exciting for more than just Fermilab employees; it’s one of Fermilab’s first big efforts in pursuing potential licensees from industry.

Cherri Schmidt, head of Fermilab’s Office of Partnerships and Technology Transfer, with the assistance of intern Miguel Marchan, has been developing the marketing plan to reach out to companies who may be interested in licensing the technology for commercial application.

“We’re excited about this one because it could really affect a large number of industries and companies,” Schmidt said. “That, to me, is what makes this invention so interesting and exciting.”

Hanae Armitage


Welcome to Thesisland

Tuesday, July 22nd, 2014

When I joined Quantum Diaries, I did so with trepidation: while it was an exciting opportunity, I was worried that all I could write about was the process of writing a thesis and looking for postdoc jobs. I ended up telling the site admin exactly that: I only had time to work on a thesis and job hunt. I thought I was turning down the offer. But the reply I got was along the lines of “It’s great to know what topics you’ll write about! When can we expect a post?”. So, despite the fact that this is a very different topic from any recent QD posts, I’m starting a series about the process of writing a physics PhD thesis. Welcome.

The main thesis editing desk: laptop, external monitor, keyboard, mouse; coffee, water; notes; and lots of encouragement.

There are as many approaches to writing a PhD thesis as there are PhDs, but they can be broadly described along a spectrum.

On one end is the “constant documentation” approach: spend some fixed fraction of your time documenting every project you work on. In this approach, the writing phase is completely integrated with the research work, and it’s easy to remember the things you’re writing about. There is a big disadvantage: it’s really easy to write too much, to spend too much time writing and not enough doing, or to otherwise unbalance your time. If you keep a constant fraction of your schedule dedicated to writing, and that fraction is (in retrospect) too big, you’ve lost a lot of time. But you have documented everything, which everyone who comes after will be grateful for. If they ever see your work.

The other end of the spectrum is the “write like hell” approach (that is, write as fast as you can), where all the research is completed and approved before writing starts. This has the advantage that if you (and your committee) decide you’ve written enough, you immediately get a PhD! The disadvantage is that if you have to write about old projects, you’ll probably have forgotten a lot. So this approach typically leads to shorter theses.

These two extremes were first described to me (see the effect of thesis writing? It’s making my blog voice go all weird and passive) by two professors who were in grad school together and still work together. Each took one approach, and they both did fine, but the “constant documentation” thesis was at least twice (or was it three times?) as long as the “write like hell” thesis.

Somewhere between those extremes is the funny phenomenon of the “staple thesis”: a thesis primarily composed of all the papers you wrote in grad school, stapled together. A few of my friends have done this, but it’s not common in my research group because our collaboration is so large. I’ll discuss that in more detail later.

I’m going for something in the middle: as soon as I saw a light at the end of the tunnel, I wanted to start writing, so I downloaded the UW LaTeX template for PhD theses and started filling it in. It’s been about 14 months since then, with huge variations in the writing/research balance. To balance the two approaches, I’ve found it helpful to keep at least some notes about all the physics I do, but nothing too polished: it’s always easier to start from some notes, however minimal, than to start from nothing.

When I started writing, there were lots of topics available that needed some discussion: history and theory, my detector, all the calibration work I did for my master’s project. I could have gone full-time writing at that point and had plenty to do. But my main research project wasn’t done yet. So for me, it’s not just a matter of balancing “doing” with “documenting”; it’s also a question of balancing old documentation with current documentation. I’ve almost, *almost* finished writing the parts that don’t depend on my work from the last year or so. In the meantime, I’m still finishing the last bits of analysis work.

It’s all a very long process. How many readers are looking towards writing a thesis later on? How many have gone through this and found a method that served them well? If it was fast and relatively low-stress, would you tell me about it?


This article appeared in Fermilab Today on July 21, 2014.

Members of the prototype proton CT scanner collaboration move the detector into the CDH Proton Center in Warrenville. Photo: Reidar Hahn

A prototype proton CT scanner developed by Fermilab and Northern Illinois University could someday reduce the amount of radiation delivered to healthy tissue in a patient undergoing cancer treatment.

The proton CT scanner would better target radiation doses to the cancerous tumors during proton therapy treatment. Physicists recently started testing with beam at the CDH Proton Center in Warrenville.

To create a custom treatment plan for each proton therapy patient, radiation oncologists currently use X-ray CT scanners to develop 3-D images of patient anatomy, including the tumor, to determine the size, shape and density of all organs and tissues in the body. To make sure all the tumor cells are irradiated to the prescribed dose, doctors often set the targeting volume to include a minimal amount of healthy tissue just outside the tumor.

Collaborators believe that the prototype proton CT, which is essentially a particle detector, will provide a more precise 3-D map of the patient anatomy. This allows doctors to more precisely target beam delivery, reducing the amount of radiation to healthy tissue during the CT process and treatment.

“The dose to the patient with this method would be lower than using X-ray CTs while getting better precision on the imaging,” said Fermilab’s Peter Wilson, PPD associate head for engineering and support.

Fermilab became involved in the project in 2011 at the request of NIU’s high-energy physics team because of the laboratory’s detector building expertise.

The project’s goal was a tall order, Wilson explained. The group wanted to build a prototype device, imaging software and computing system that could collect data from 1 billion protons in less than 10 minutes and then produce a 3-D reconstructed image of a human head, also in less than 10 minutes. To do that, they needed to create a device that could read data very quickly, since every second data from 2 million protons would be sent from the device — which detects only one proton at a time — to a computer.
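
A quick sanity check on those numbers, using only the figures quoted in the article: at 2 million protons per second, collecting data from 1 billion protons takes about 500 seconds, comfortably under the 10-minute goal.

```python
# Sanity check of the data-collection goal using the figures quoted above.
protons_needed = 1_000_000_000     # 1 billion protons per scan
protons_per_second = 2_000_000     # readout rate quoted in the article

seconds = protons_needed / protons_per_second
print(f"{seconds:.0f} s  =  {seconds / 60:.1f} minutes")   # 500 s = 8.3 minutes
```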

NIU physicist Victor Rykalin recommended building a scintillating fiber tracker detector with silicon photomultipliers. A similar detector was used in the DZero experiment.

“The new prototype CT is a good example of the technical expertise of our staff in detector technology. Their expertise goes back 35 to 45 years and is really what makes it possible for us to do this,” Wilson said.

In the prototype CT, protons pass through two tracking stations, which track the particles’ trajectories in three dimensions. (See figure.) The protons then pass through the patient and finally through two more tracking stations before stopping in the energy detector, which is used to calculate the total energy loss through the patient. Devices called silicon photomultipliers pick up signals from the light resulting from these interactions and subsequently transmit electronic signals to a data acquisition system.

In the prototype proton CT scanner, protons enter from the left, passing through planes of fibers and the patient’s head. Data from the protons’ trajectories, including the energy deposited in the patient, is collected in a data acquisition system (right), which is then used to map the patient’s tissue. Image courtesy of George Coutrakon, NIU

Scientists use specialized software and a high-performance computer at NIU to accurately map the proton stopping powers in each cubic millimeter of the patient. From this map, visually displayed as conventional CT slices, the physician can outline the margins, dimensions and location of the tumor.

Elements of the prototype were developed at both NIU and Fermilab and then put together at Fermilab. NIU developed the software and computing systems. The teams at Fermilab worked on the design and construction of the tracker and the electronics to read the tracker and energy measurement. The scintillator plates, fibers and trackers were also prepared at Fermilab. A group of about eight NIU students, led by NIU’s Vishnu Zutshi, helped build the detector at Fermilab.

“A project like this requires collaboration across multiple areas of expertise,” said George Coutrakon, medical physicist and co-investigator for the project at NIU. “We’ve built on others’ previous work, and in that sense, the collaboration extends beyond NIU and Fermilab.”

Rhianna Wisniewski


This article appeared in symmetry on July 11, 2014.

Together, the three experiments will search for a variety of types of dark matter particles. Photo: NASA

Two US federal funding agencies announced today which experiments they will support in the next generation of the search for dark matter.

The Department of Energy and National Science Foundation will back the Super Cryogenic Dark Matter Search-SNOLAB, or SuperCDMS; the LUX-Zeplin experiment, or LZ; and the next iteration of the Axion Dark Matter eXperiment, ADMX-Gen2.

“We wanted to pool limited resources to put together the most optimal unified national dark matter program we could create,” says Michael Salamon, who manages DOE’s dark matter program.

Second-generation dark matter experiments are defined as experiments that will be at least 10 times as sensitive as the current crop of dark matter detectors.

Program directors from the two federal funding agencies decided which experiments to pursue based on the advice of a panel of outside experts. Both agencies have committed to working to develop the new projects as expeditiously as possible, says Jim Whitmore, program director for particle astrophysics in the division of physics at NSF.

Physicists have seen plenty of evidence of the existence of dark matter through its strong gravitational influence, but they do not know what it looks like as individual particles. That’s why the funding agencies put together a varied particle-hunting team.

Both LZ and SuperCDMS will look for a type of dark matter particle called the WIMP, or weakly interacting massive particle. ADMX-Gen2 will search for a different kind of dark matter particle called the axion.

LZ is capable of identifying WIMPs with a wide range of masses, including those much heavier than any particle the Large Hadron Collider at CERN could produce. SuperCDMS will specialize in looking for light WIMPs with masses lower than 10 GeV. (And of course both LZ and SuperCDMS are willing to stretch their boundaries a bit if called upon to double-check one another’s results.)

If a WIMP hits the LZ detector, a high-tech barrel of liquid xenon, it will produce quanta of light, called photons. If a WIMP hits the SuperCDMS detector, a collection of hockey-puck-sized integrated circuits made with silicon or germanium, it will produce quanta of sound, called phonons.

“But if you detect just one kind of signal, light or sound, you can be fooled,” says LZ spokesperson Harry Nelson of the University of California, Santa Barbara. “A number of things can fake it.”

SuperCDMS and LZ will be located underground—SuperCDMS at SNOLAB in Ontario, Canada, and LZ at the Sanford Underground Research Facility in South Dakota—to shield the detectors from some of the most common fakers: cosmic rays. But they will still need to deal with natural radiation from the decay of uranium and thorium in the rock around them: “One member of the decay chain, lead-210, has a half-life of 22 years,” says SuperCDMS spokesperson Blas Cabrera of Stanford University. “It’s a little hard to wait that one out.”

To combat this, both experiments collect a second signal, in addition to light or sound—charge. The ratio of the two signals lets them know whether the light or sound came from a dark matter particle or something else.
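
The article doesn’t give the experiments’ actual discrimination variables, but the idea can be sketched with a toy cut on the charge-to-primary-signal ratio; the threshold and event numbers below are invented for illustration only (real analyses use calibrated, energy-dependent bands).

```python
def classify(primary_signal, charge_signal, ratio_cut=0.5):
    """Toy discriminator: events whose charge-to-primary-signal ratio is high
    look like ordinary radioactivity (electron recoils); a low ratio is more
    consistent with a nuclear recoil, as a WIMP would produce.
    'primary_signal' stands in for light (LZ) or phonons (SuperCDMS)."""
    return "background-like" if charge_signal / primary_signal > ratio_cut else "signal-like"

# Invented example events: (label, primary signal, charge signal)
events = [("candidate A", 10.0, 2.0), ("gamma", 10.0, 8.0), ("candidate B", 7.0, 2.5)]
for label, primary, charge in events:
    print(label, "->", classify(primary, charge))
```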

SuperCDMS will be especially skilled at this kind of differentiation, which is why the experiment should excel at searching for hard-to-hear low-mass particles.

LZ’s strength, on the other hand, stems from its size.

Dark matter particles are constantly flowing through the Earth, so their interaction points in a dark matter detector should be distributed evenly throughout. Quanta of radiation, however, can be stopped by much less significant barriers—alpha particles by a piece of paper, beta particles by a sandwich. Even gamma ray particles, which are harder to stop, cannot reach the center of LZ’s 7-ton detector. When a particle with the right characteristics interacts in the center of LZ, scientists will know to get excited.

The ADMX detector, on the other hand, approaches the dark matter search with a more delicate touch. The dark matter axions ADMX scientists are looking for are too light for even SuperCDMS to find.

If an axion passed through a magnetic field, it could convert into a photon. The ADMX team encourages this subtle transformation by placing their detector within a strong magnetic field, and then tries to detect the change.

“It’s a lot like an AM radio,” says ADMX-Gen2 co-spokesperson Gray Rybka of the University of Washington in Seattle.

The experiment slowly turns the dial, tuning itself to watch for one axion mass at a time. Its main background noise is heat.

“The more noise there is, the harder it is to hear and the slower you have to tune,” Rybka says.

In its current iteration, it would take around 100 years for the experiment to get through all of the possible channels. But with the addition of a super-cooling refrigerator, ADMX-Gen2 will be able to search all of its current channels, plus many more, in the span of just three years.

With SuperCDMS, LZ and ADMX-Gen2 in the works, the next several years of the dark matter search could be some of its most interesting.

Kathryn Jepsen



Two anomalies worth noticing

Monday, July 14th, 2014

The 37th International Conference on High Energy Physics just finished in Valencia, Spain. This year, no big surprises were announced: no new boson, no signs of new particles or clear phenomena revealing the nature of dark matter or of new theories such as supersymmetry. But as always, a few small anomalies were reported.

Looking for deviations from the theoretical predictions is precisely how experimentalists are trying to reveal “new physics”. It would help uncover a more encompassing theory, since everybody realises that the current theoretical model, the Standard Model, has its limits and must be superseded by something else. However, all physicists know that small deviations often come and go. All measurements made in physics follow statistical laws. Deviations of one standard deviation from the expected value occur in about three measurements out of ten. Larger deviations are less common but still possible: a two-standard-deviation difference happens about 5% of the time. Then there are systematic uncertainties related to the experimental equipment. These are not purely statistical, but they can be reduced with a better understanding of the detectors. The total experimental uncertainty quoted with each result corresponds to one standard deviation. Here are two small anomalies reported at this conference that attracted attention this year.
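
Those percentages are just properties of the Gaussian distribution; for reference, they can be reproduced in a couple of lines with the complementary error function (no experiment-specific code involved):

```python
import math

def two_sided_tail(n_sigma):
    """Probability that a Gaussian-distributed measurement deviates from the
    true value by more than n_sigma standard deviations, in either direction."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (1, 2, 3):
    print(f"{n} sigma: {two_sided_tail(n):.3f}")
# 1 sigma: 0.317  (about three measurements out of ten)
# 2 sigma: 0.046  (the ~5% quoted above)
# 3 sigma: 0.003
```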

The ATLAS Collaboration showed its preliminary result on the production of a pair of W bosons. Measuring this rate provides excellent checks of the Standard Model since theorists can predict how often pairs of W bosons are produced when protons collide in the Large Hadron Collider (LHC). The production rate depends on the energy released during these collisions. So far, two measurements can be made since the LHC operated at two different energies, namely 7 TeV and 8 TeV.

CMS and ATLAS had already released their results on the 7 TeV data. The measured rates slightly exceeded the theoretical prediction but were both well within their experimental errors, with deviations of 1.0 and 1.4 standard deviations, respectively. CMS had also published results based on about 20% of all data collected at 8 TeV; that measurement exceeded the theoretical prediction by 1.7 standard deviations. The latest ATLAS result adds one more element to the picture. It is based on the full 8 TeV data sample, and ATLAS now reports a slightly stronger deviation for this rate at 8 TeV: 2.1 standard deviations from the theoretical prediction.

The four experimental measurements for the WW production rate (black dots) with the experimental uncertainty (horizontal bar) as well as the current theoretical prediction (blue triangle) with its own uncertainty (blue strip). One can see that all measurements are higher than the current prediction, indicating that the theoretical calculation fails to include everything.

The four individual measurements are each reasonably consistent with expectation, but the fact that all four measurements lie above the predictions becomes intriguing. Most likely, this means that theorists have not yet taken into account all the small corrections required by the Standard Model to precisely determine this rate. This would be like having forgotten a few small expenses in one’s budget, leading to an unexplained deficit at the end of the month. Moreover, there could be common factors in the experimental uncertainties, which would lower the overall significance of this anomaly. But if the theoretical predictions remain what they are even when adding all possible little corrections, it could indicate the existence of new phenomena, which would be exciting. It would then be something to watch for when the LHC resumes operation in 2015 at 13 TeV.
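
A crude way to see why the pattern draws attention (ignoring the correlations from common experimental factors just mentioned, which would weaken the argument): if each of the four measurements were equally likely to fluctuate above or below the prediction, the chance of all four landing above would be only about 6%.

```python
# Chance that four independent measurements all fluctuate above the prediction,
# assuming each is equally likely to land above or below the true value.
# This ignores correlated systematic uncertainties, so it is only indicative.
p_all_above = 0.5 ** 4
print(f"{p_all_above:.4f}")   # 0.0625, i.e. about 6%
```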

The CMS Collaboration presented another intriguing result. They found some events consistent with coming from a decay of a Higgs boson into a tau and a muon. Such decays are forbidden in the Standard Model since they violate lepton flavour conservation. There are three “flavours” or types of charged leptons (a category of fundamental particles): the electron, the muon and the tau. Each one comes with its own type of neutrino. According to all observations made so far, leptons are always produced either with their own neutrino or with their antiparticle. Hence, the decay of a Higgs boson into leptons should always produce a charged lepton and its antiparticle, but never two charged leptons of different flavours. Violating a conservation law like this is simply not allowed within the framework of the Standard Model.

This needs to be scrutinised with more data, which will be possible when the LHC resumes next year. Lepton flavour violation is, however, allowed in various models beyond the Standard Model, such as models with more than one Higgs doublet, composite Higgs models, or Randall-Sundrum models with extra dimensions. So if both ATLAS and CMS confirm this trend as a real effect, it would be a small revolution.

The results obtained by the CMS Collaboration, showing that six different channels all give a non-zero value for the decay rate of the Higgs boson into pairs of tau and muon.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.

 


ICHEP at a distance

Friday, July 11th, 2014

I didn’t go to ICHEP this year.  In principle I could have, especially given that I have been resident at CERN for the past year, but we’re coming down to the end of our stay here and I didn’t want to squeeze in one more work trip during a week that turned out to be a pretty good opportunity for one last family vacation in Europe.  So this time I just kept track of it from my office, where I plowed through the huge volume of slides shown in the plenary sessions earlier this week.  It was a rather different experience for me from ICHEP 2012, which I attended in person in Melbourne and where we had the first look at the Higgs boson.  (I’d have to say it was also probably the pinnacle of my career as a blogger!)

Seth’s expectations turned out to be correct — there were no earth-shattering announcements at this year’s ICHEP, but still a lot to chew on.  The Standard Model of particle physics stands stronger than ever.  As Pauline wrote earlier today, the particle thought to be the Higgs boson two years ago still seems to be the Higgs boson, to the best of our abilities to characterize it.  The LHC experiments are starting to move beyond measurements of the “expected” properties — the dominant production and decay modes — into searches for unexpected, low-rate behavior.  While there are anomalous results here and there, there’s nothing that looks like more than a fluctuation.  Beyond the Higgs, all sectors of particle physics look much as predicted, and some fluctuations, such as the infamous forward-backward asymmetry of top-antitop production at the Tevatron, appear to have subsided.  Perhaps the only ambiguous result out there is that of the BICEP2 experiment which might have observed gravitational waves, or maybe not.  We’re all hoping that further data from that experiment and others will resolve the question by the end of the year.  (See the nice talk on the subject of particle physics and cosmology by Alan Guth, one of the parents of that field.)

This success of the Standard Model is both good and bad news.  It’s good that we do have a model that has stood up so well to every experimental test that we have thrown at it, in some cases to startling precision.  You want models to have predictive power.  But at the same time, we know that the model is almost surely incomplete.  Even if it can continue to work at higher energy scales than we have yet explored, at the very least we seem to be missing some particles (those that make up the dark matter we know exists from astrophysical measurements) and it also fails to explain some basic observations (the clear dominance of matter over antimatter in the universe).  We have high hopes for the next run of the LHC, which will start in Spring 2015, in which we will have higher beam energies and collision rates, and a greater chance of observing new particles (should they exist).

It was also nice to see the conference focus on the longer-term future of the field.  Since the last ICHEP, every region of the world has completed long-range strategic planning exercises, driven by recent discoveries (including that of the Higgs boson, but also of various neutrino properties) and anchored by realistic funding scenarios for the field.  There were several presentations about these plans during the conference, and a panel discussion featuring leaders of the field from around the world.  It appears that we are having a nice sorting out of which region wants to host which future facility, and when, in such a way that we can carry on our international efforts in a straightforward way.  Time will tell if we can bring all of these plans to fruition.

I’ll admit that I felt a little left out by not attending ICHEP this year.  But here’s the good news: ICHEP 2016 is in Chicago, one of the few places in the world that I can reach on a single plane flight from Lincoln.  I have marked my calendar!


Several hundred physicists ended the day of July 4th by singing “Happy Birthday”, slightly out of tune but in great spirits, at the 37th International Conference on High Energy Physics, held in Valencia, Spain, from July 2 to 9. Two years ago, the ATLAS and CMS experiments had announced the discovery of the Higgs boson on the eve of the same conference, then held in Melbourne, Australia. Many people traded memories of where they were at the time of that historic announcement.


Barely two years later, the two experiments have already gathered an impressive amount of knowledge about the Higgs boson. Both groups have now measured its mass with high precision, as well as how it is produced and how it decays. ATLAS presented its recently published result for the combined Higgs boson mass, 125.36 ± 0.41 GeV, in excellent agreement with the value of 125.03 ± 0.30 GeV presented for the first time at this conference by CMS.
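
As a quick check of how compatible the two numbers quoted above are, one can compare their difference to the combined uncertainty (treating the two uncertainties as uncorrelated, which is a simplification):

```python
import math

# Combined Higgs mass measurements quoted above (GeV)
atlas_mass, atlas_err = 125.36, 0.41
cms_mass, cms_err = 125.03, 0.30

# Difference in units of the combined uncertainty, assuming the two
# uncertainties are uncorrelated (a simplification).
n_sigma = abs(atlas_mass - cms_mass) / math.sqrt(atlas_err**2 + cms_err**2)
print(f"difference = {n_sigma:.1f} standard deviations")   # ~0.6, i.e. fully compatible
```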

With its final result on Higgs boson decays into two photons, the CMS Collaboration has now completed the analysis of all the data collected so far. The combined value for the signal strength, a quantity measuring the number of Higgs bosons observed compared to the number predicted by theory, is 1.00 ± 0.13. ATLAS obtains 1.3 ± 0.18. Both measurements indicate that, within the current experimental precision, this boson is compatible with the one predicted by the Standard Model.

Its spin and parity are also known, two characteristics of fundamental particles that act as their fingerprints. Determining them reveals a particle’s identity, and that is how we know that the boson discovered two years ago really is a Higgs boson.

It remains to be seen whether it is the single Higgs boson predicted in 1964 by Robert Brout, François Englert and Peter Higgs within the current theory, the Standard Model. This boson could also be the lightest of the five Higgs bosons predicted by one of the more encompassing theories, such as supersymmetry, proposed to fix several shortcomings of the Standard Model. Such a discovery would open the door to what is commonly called “new physics”.

Several ATLAS measurements of the signal strength, i.e. a quantity measuring the number of Higgs bosons produced in different channels and decaying into different particles, compared to the number predicted by theory. The result should therefore equal 1.0 if the theory is correct. The black “+” symbol indicates the predicted theoretical value, while the various contours delimit the regions where the true value is expected to lie at the 68% or 95% confidence level.

Nearly all of the data collected up to the end of 2012, before the technical shutdown of the Large Hadron Collider (LHC) for maintenance and consolidation, have now been analysed. Everything measured so far agrees with the predictions of the Standard Model within the margins of error. Not only have the experiments improved the precision of most measurements, they also keep examining new aspects. For example, the CMS and ATLAS experiments have also shown the momentum distributions of the Higgs boson and of its decay products. All these measurements test the Standard Model with increasing precision. Physicists are looking for the smallest deviation from the theoretical predictions, in the hope of finding the crack that would reveal what “new physics”, the physics beyond the Standard Model, consists of.

A series of signal-strength measurements for different decay modes obtained by the CMS Collaboration. None of the measured values reveals any deviation from the value of 1.0 predicted by the Standard Model, at least within the current experimental margins of error. A deviation would suggest the manifestation of something beyond the Standard Model.

But none of the many direct attempts to find particles linked to this new physics has been successful so far. Although hundreds of possibilities have been checked, corresponding to as many different scenarios involving hypothetical supersymmetric particles, no sign of their presence has yet been detected.

All of this is much like an archaeological dig: one often has to shovel for a long time before unearthing something special. Each completed analysis corresponds to one bucket of soil removed, and each little piece of information gathered helps build the overall picture. Today, thanks to the dozens of new results presented at the conference, theorists are in a much better position to draw general conclusions, eliminate flawed models and find the right solution.

Everyone is now eagerly awaiting the restart of the LHC, scheduled for early 2015, to collect new data at higher energy and explore a whole world of new possibilities. All hopes of discovering new physics will then be renewed.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.

 
