
Archive for February, 2014

B Decays Get More Interesting

Friday, February 28th, 2014

While flavor physics often offers a multitude of witty jokes (read as bad puns), I think I’ll skip one just this time and let the analysis speak for itself. Just recently, at the Lake Louise Winter Institute, a new result was released for the analysis looking for \( b\to s\gamma\) transitions. This is a flavor-changing neutral current, which cannot occur at tree level in the standard model. Therefore, the lowest-order diagram by which this decay can proceed is the one-loop penguin shown below to the right.

One-loop penguin diagram representing the transition \(b \to s \gamma\).

From quantum mechanics, photons can have either left handed or right handed circular polarization. In the standard model, the photon in the decay \(b\to s\gamma\) is primarily left handed, due to spin and angular momentum conservation. However, models beyond the standard model, including some minimally super symmetric models (MSSM) predict a larger than standard model right handed component to the photon polarization. So even though the decay rates observed for \(b\to s\gamma\) agree with those predicted by the standard model, the photon polarization itself is sensitive to new physics scenarios.

As it turns out, the decays \(B^\pm \to K^\pm \pi^\mp \pi^\pm \gamma \) are well suited to exploring the photon polarization after playing a few tricks. In order to understand why, the easiest way is to consider a picture.

Picture defining the angle \(\theta\) in the analysis of \(B^\pm\to K^\pm \pi^\mp \pi^\pm \gamma\). From the Lake Louise Conference Talk.

In the picture above, we consider the rest frame of a possible resonance which decays into \(K^\pm \pi^\mp \pi^\pm\). It is then possible to form the triple product \(p_\gamma\cdot(p_{\pi,slow}\times p_{\pi,fast})\), which effectively defines the angle \(\theta\) shown in the picture.
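As a concrete illustration, the triple product and \(\cos\theta\) can be computed directly from the three momenta in the \(K\pi\pi\) rest frame. This is only a sketch with made-up momenta (the exact sign and angle conventions are those of the analysis note, not this snippet); here \(\theta\) is simply taken as the angle between the photon and the normal to the two-pion plane:

    import numpy as np

    # Hypothetical three-momenta (GeV) in the K pi pi rest frame -- illustrative values only
    p_gamma   = np.array([0.0, 0.0, 1.2])
    p_pi_slow = np.array([0.3, 0.1, -0.4])
    p_pi_fast = np.array([-0.5, 0.2, -0.6])

    # Normal to the two-pion plane and the parity-odd triple product
    normal = np.cross(p_pi_slow, p_pi_fast)
    triple_product = np.dot(p_gamma, normal)

    # cos(theta): photon direction relative to the normal of the pi-pi plane;
    # its sign distinguishes "above" from "below" the plane
    cos_theta = triple_product / (np.linalg.norm(p_gamma) * np.linalg.norm(normal))
    print(cos_theta)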

Now for the trick: Photon polarization is odd under parity transformation, and so is the triple product defined above. Defining the decay rate as a function of this angle, we find:

\(\frac{d\Gamma}{d\cos\theta}\propto \sum_{i=0,2,4}a_i \cos^i\theta + \lambda_\gamma\sum_{j=1,3} a_j \cos^j\theta\)

This is an expansion in Legendre polynomials up to fourth order. Here \(\lambda_\gamma\) is the photon polarization, and the odd moments are the ones sensitive to it. Therefore, by looking at the decay rate as a function of this angle, we can directly access the photon polarization. However, another way to access the same information is to take the asymmetry between the decay rate for events where the photon is above the \(K\pi\pi\) decay plane (\(\cos\theta>0\)) and those where it is below (\(\cos\theta<0\)). This is also proportional to the photon polarization and allows for a direct statistical calculation. We will call this the up-down asymmetry, or \(A_{ud}\). For more information, a useful theory paper is found here.
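To see explicitly why this asymmetry isolates the polarization, integrate the expansion above separately over the two hemispheres (a short derivation using the same coefficients \(a_i\), not taken from the analysis note):

\(A_{ud} \equiv \frac{\int_0^1 \frac{d\Gamma}{d\cos\theta}\,d\cos\theta - \int_{-1}^0 \frac{d\Gamma}{d\cos\theta}\,d\cos\theta}{\int_{-1}^{1} \frac{d\Gamma}{d\cos\theta}\,d\cos\theta} = \lambda_\gamma\,\frac{a_1/2 + a_3/4}{a_0 + a_2/3 + a_4/5}\)

The even powers of \(\cos\theta\) cancel in the numerator and the odd powers cancel in the denominator, so a non-zero \(A_{ud}\) directly implies a non-zero photon polarization \(\lambda_\gamma\).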

Enter LHCb. With the 3 fb\(^{-1}\) collected over 2011 and 2012 containing ~14,000 signal events, the up-down asymmetry was measured.


Up-down asymmetry for the analysis of \(b\to s\gamma\). From the Lake Louise Conference Talk

In bins of invariant mass of the \(K \pi \pi\) system, we see the asymmetry is clearly non-zero, and it varies across the mass range. As seen in the note posted to the arXiv, the shapes of the fits to the Legendre moments are not the same in the different mass bins, either. This corresponds to a 5.2\(\sigma\) observation of photon polarization in this channel. What this means for new physics models has not yet been interpreted, though I’m sure the arXiv will be full of explanations within about a week.


This Fermilab press release was published on February 24.

Matteo Cremonesi, left, of the University of Oxford and the CDF collaboration and Reinhard Schwienhorst of Michigan State University and the DZero collaboration present their joint discovery at a forum at Fermilab on Friday, Feb. 21. The two collaborations have observed the production of single top quarks in the s-channel, as seen in data collected from the Tevatron. Photo: Cindy Arnold


Scientists on the CDF and DZero experiments at the U.S. Department of Energy’s Fermi National Accelerator Laboratory have announced that they have found the final predicted way of creating a top quark, completing a picture of this particle nearly 20 years in the making.

The two collaborations jointly announced on Friday, Feb. 21, that they had observed one of the rarest methods of producing the elementary particle – creating a single top quark through the weak nuclear force, in what is called the s-channel. For this analysis, scientists from the CDF and DZero collaborations sifted through data from more than 500 trillion proton-antiproton collisions produced by the Tevatron from 2001 to 2011. They identified about 40 particle collisions in which the weak nuclear force produced single top quarks in conjunction with single bottom quarks.

Top quarks are the heaviest and among the most puzzling elementary particles. They weigh even more than the Higgs boson – as much as an atom of gold – and only two machines have ever produced them: Fermilab’s Tevatron and the Large Hadron Collider at CERN. There are several ways to produce them, as predicted by the theoretical framework known as the Standard Model, and the most common one was the first one discovered: a collision in which the strong nuclear force creates a pair consisting of a top quark and its antimatter cousin, the anti-top quark.

Collisions that produce a single top quark through the weak nuclear force are rarer, and the process scientists on the Tevatron experiments have just announced is the most challenging of these to detect. This method of producing single top quarks is among the rarest interactions allowed by the laws of physics. The detection of this process was one of the ultimate goals of the Tevatron, which for 25 years was the most powerful particle collider in the world.

“This is an important discovery that provides a valuable addition to the picture of the Standard Model universe,” said James Siegrist, DOE associate director of science for high energy physics. “It completes a portrait of one of the fundamental particles of our universe by showing us one of the rarest ways to create them.”

Searching for single top quarks is like looking for a needle in billions of haystacks. Only one in every 50 billion Tevatron collisions produced a single s-channel top quark, and the CDF and DZero collaborations only selected a small fraction of those to separate them from background, which is why the number of observed occurrences of this particular channel is so small. However, the statistical significance of the CDF and DZero data exceeds that required to claim a discovery.
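A quick back-of-the-envelope check of those numbers, using only the figures quoted in this release (the actual selection is of course far more involved):

    collisions = 500e12        # proton-antiproton collisions analyzed
    rate = 1 / 50e9            # s-channel single top quarks per collision
    observed = 40              # candidate events identified

    produced = collisions * rate
    print(produced)            # ~10,000 s-channel single tops produced in total
    print(observed / produced) # ~0.4% survive the trigger, reconstruction and selection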

“Kudos to the CDF and DZero collaborations for their work in discovering this process,” said Saul Gonzalez, program director for the National Science Foundation. “Researchers from around the world, including dozens of universities in the United States, contributed to this important find.”

The CDF and DZero experiments first observed particle collisions that created single top quarks through a different process of the weak nuclear force in 2009. This observation was later confirmed by scientists using the Large Hadron Collider.

Scientists from 27 countries collaborated on the Tevatron CDF and DZero experiments and continue to study the reams of data produced during the collider’s run, using ever more sophisticated techniques and computing methods.

“I’m pleased that the CDF and DZero collaborations have brought their study of the top quark full circle,” said Fermilab Director Nigel Lockyer. “The legacy of the Tevatron is indelible, and this discovery makes the breadth of that research even more remarkable.”

Fermilab is America’s national laboratory for particle physics research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


A second chance at sight

Monday, February 17th, 2014

This article appeared in symmetry on February 4, 2014.

Silicon microstrip detectors, a staple in particle physics experiments, provide information that may be critical to restoring vision to some who lost it.


In 1995, physicist Alan Litke co-wrote a particularly prescient article for Scientific American about potential uses for an emerging technology called the silicon microstrip detector. With its unprecedented precision, this technology was already helping scientists search for the top quark and, Litke wrote, it could help discover the elusive Higgs boson. He further speculated that it could perhaps also begin to uncover some of the many mysteries of the brain.

As the article went to press, physicists at Fermilab announced the discovery of the top quark, using those very same silicon detectors. In 2012, the world celebrated the discovery of the Higgs boson, aided by silicon microstrip detectors at CERN. Now Litke’s third premonition is also coming true: his work with silicon microstrip detectors and slices of retinal tissue is leading to developments in neurobiology that are starting to help people with certain kinds of damage to their vision see again.

“The starting point and the motivation was fundamental physics,” says Litke, who splits his time between University of California, Santa Cruz, and CERN. “But once you have this wonderful technology, you can think about applying it to many other fields.”

Silicon microstrip detectors use a thin slab of silicon, implanted with an array of diode strips, to detect charged particles. As a particle passes through the silicon, a localized current is generated. This current can be detected on the nearby strips and measured with high spatial resolution and accuracy.
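A rough illustration of how those strip signals become a position measurement is a charge-weighted centroid over the strips that fire. This is only a sketch with invented strip pitch and pulse heights, not any experiment’s actual reconstruction code:

    import numpy as np

    pitch = 0.005                              # strip pitch in cm (50 microns), illustrative
    strip_index  = np.array([101, 102, 103])   # strips above threshold
    pulse_height = np.array([0.2, 1.0, 0.4])   # collected charge on each strip (arbitrary units)

    # Charge-weighted centroid: interpolating between strips gives a position
    # resolution finer than the strip pitch itself
    x_hit = pitch * np.sum(strip_index * pulse_height) / np.sum(pulse_height)
    print(x_hit)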

Litke and collaborators with expertise in, and inspiration from, the development of silicon microstrip detectors, fabricated two-dimensional arrays of microscopic electrodes to study the complex circuitry of the retina. In the experiments, a slice of retinal tissue is placed on top of one of the arrays. Then a movie—a variety of visual stimuli including flashing checkerboards and moving bars—is focused on the input neurons of the retina, and the electrical signals generated by hundreds of the retina’s output neurons are simultaneously recorded. This electrical activity is what would normally be sent as signals to the brain and translated into visual perceptions.
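One standard way to turn such recordings into a picture of what each output neuron responds to is a spike-triggered average: average the stimulus frames that preceded each recorded spike. The sketch below only illustrates that general idea with made-up data; it is not the analysis pipeline actually used by Litke’s group:

    import numpy as np

    # Toy data: a random checkerboard movie and one output neuron's spike times
    rng = np.random.default_rng(0)
    n_frames, height, width = 1000, 8, 8
    movie  = rng.choice([-1.0, 1.0], size=(n_frames, height, width))
    spikes = rng.integers(low=10, high=n_frames, size=200)   # frame indices of recorded spikes

    # Spike-triggered average: mean stimulus a few frames before each spike
    lag = 3
    sta = np.mean([movie[t - lag] for t in spikes], axis=0)
    print(sta.shape)   # an 8x8 map approximating the neuron's receptive field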

This process allowed Litke and his collaborators to help decipher the retina’s coded messages to the brain and to create a functional connectivity map of the retina, showing the strengths of connections between the input and output neurons. That in itself was important to neurobiology, but Litke wanted to take this research further, to not just record neural activity but also to stimulate it. Litke and his team designed a system in which they stimulate retinal and brain tissue with precise electrical signals and study the kinds of signals the tissue produces in response.

Such observations have led to an outpouring of new neurobiology and biomedical applications, including studies for the design of a retinal prosthesis, a device that can restore sight. In a disease like retinitis pigmentosa or age-related macular degeneration, the eye’s output system to the brain is fine, but the input system has degraded.

In one version of a retinal prosthesis, a patient could wear a small video camera—something similar to Google Glass. A small computer would process the collected images and generate a pattern of electrical signals that would, in turn, stimulate the retina’s output neurons. In this way, the pattern of electrical signals that a naturally functioning eye would create could be replicated. The studies with the stimulation/recording system are being carried out in collaboration with neurobiologist E. J. Chichilnisky (Salk Institute and Stanford University) and physicist Pawel Hottowy (AGH University of Science and Technology, Krakow). The interdisciplinary and international character of the research highlights its origins in high energy physics.

In another approach, the degraded input neurons—the neurons that convert light into electrical signals—are functionally replaced by a two-dimensional array of silicon photodiodes. Daniel Palanker, an associate professor at Stanford University, has been using Litke’s arrays, in collaboration with Alexander Sher, an assistant professor at UCSC, who completed his postdoctoral work with Litke, to study how a prosthesis of this type would interact with a retina. Palanker and Sher are also researching retinal plasticity and have discovered that, in patients whose eyes have been treated with lasers, which can cause scar tissue, healthy cells sometimes migrate into an area where cells have died.

“I’m not sure we would be able to get this kind of information without these arrays,” Palanker says. “We use them all the time. It’s absolutely brilliant technology.”

Litke’s physics-inspired technology is continuing to play a role in the development of neurobiology. In 2013, President Obama announced the BRAIN—Brain Research through Advancing Innovative Neurotechnologies—Initiative, with the aim of mapping the entire neural circuitry of the human brain. A Nature Methods paper laying out the initiative’s scientific priorities noted that “advances in the last decade have made it possible to measure neural activities in large ensembles of neurons,” citing Litke’s arrays.

“The technology has enabled initial studies that now have contributed to this BRAIN Initiative,” Litke says. “That comes from the Higgs boson. That’s an amazing chain.”


Fermilab released this press release on Feb. 11, 2014.

Workers at the NOvA hall in northern Minnesota assemble the final block of the far detector in early February 2014, with the nearly completed detector in the background. Each block of the detector measures about 50 feet by 50 feet by 6 feet and is made up of 384 plastic PVC modules, assembled flat on a massive pivoting machine. Photo courtesy of NOvA collaboration


Scientists on the world’s longest-distance neutrino experiment announced today that they have seen their first neutrinos.

The NOvA experiment consists of two huge particle detectors placed 500 miles apart, and its job is to explore the properties of an intense beam of ghostly particles called neutrinos. Neutrinos are abundant in nature, but they very rarely interact with other matter. Studying them could yield crucial information about the early moments of the universe.

“NOvA represents a new generation of neutrino experiments,” said Fermilab Director Nigel Lockyer. “We are proud to reach this important milestone on our way to learning more about these fundamental particles.”

Scientists generate a beam of the particles for the NOvA experiment using one of the world’s largest accelerators, located at the Department of Energy’s Fermi National Accelerator Laboratory near Chicago. They aim this beam in the direction of the two particle detectors, one near the source at Fermilab and the other in Ash River, Minn., near the Canadian border. The detector in Ash River is operated by the University of Minnesota under a cooperative agreement with the Department of Energy’s Office of Science.

Billions of those particles are sent through the earth every two seconds, aimed at the massive detectors. Once the experiment is fully operational, scientists will catch a precious few each day.

Neutrinos are curious particles. They come in three types, called flavors, and change between them as they travel. The two detectors of the NOvA experiment are placed so far apart to give the neutrinos the time to oscillate from one flavor to another while traveling at nearly the speed of light. Even though only a fraction of the experiment’s larger detector, called the far detector, is fully built, filled with scintillator and wired with electronics at this point, the experiment has already used it to record signals from its first neutrinos.
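The scale of the effect can be sketched with the standard two-flavor oscillation formula. This is only an approximation: NOvA’s actual analysis uses the full three-flavor treatment, and the parameter values below are typical published numbers rather than NOvA measurements:

    import numpy as np

    L = 810.0            # km, roughly the Fermilab-to-Ash River baseline ("500 miles")
    E = 2.0              # GeV, typical energy of the NOvA neutrino beam
    dm2 = 2.4e-3         # eV^2, approximate atmospheric mass splitting
    sin2_2theta = 0.95   # illustrative mixing-angle value

    # Standard two-flavor oscillation probability (L in km, E in GeV, dm2 in eV^2)
    prob = sin2_2theta * np.sin(1.27 * dm2 * L / E) ** 2
    print(prob)          # order one: the baseline is chosen so the oscillation is near maximal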

“That the first neutrinos have been detected even before the NOvA far detector installation is complete is a real tribute to everyone involved. That includes the staff at Fermilab, Ash River Lab and the University of Minnesota module facility, the NOvA scientists, and all of the professionals and students building this detector,” said University of Minnesota physicist Marvin Marshak, Ash River Laboratory director. “This early result suggests that the NOvA collaboration will make important contributions to our knowledge of these particles in the not so distant future.”

Once completed, NOvA’s near and far detectors will weigh 300 and 14,000 tons, respectively. Crews will put into place the last module of the far detector early this spring and will finish outfitting both detectors with electronics in the summer.

“The first neutrinos mean we’re on our way,” said Harvard physicist Gary Feldman, who has been a co-leader of the experiment from the beginning. “We started meeting more than 10 years ago to discuss how to design this experiment, so we are eager to get under way.”

The NOvA collaboration is made up of 208 scientists from 38 institutions in the United States, Brazil, the Czech Republic, Greece, India, Russia and the United Kingdom. The experiment receives funding from the U.S. Department of Energy, the National Science Foundation and other funding agencies.

The NOvA experiment is scheduled to run for six years. Because neutrinos interact with matter so rarely, scientists expect to catch just about 5,000 neutrinos or antineutrinos during that time. Scientists can study the timing, direction and energy of the particles that interact in their detectors to determine whether they came from Fermilab or elsewhere.

Fermilab creates a beam of neutrinos by smashing protons into a graphite target, which releases a variety of particles. Scientists use magnets to steer the charged particles that emerge from the energy of the collision into a beam. Some of those particles decay into neutrinos, and the scientists filter the non-neutrinos from the beam.

Fermilab started sending a beam of neutrinos through the detectors in September, after 16 months of work by about 300 people to upgrade the lab’s accelerator complex.

“It is great to see the first neutrinos from the upgraded complex,” said Fermilab physicist Paul Derwent, who led the accelerator upgrade project. “It is the culmination of a lot of hard work to get the program up and running again.”

Different types of neutrinos have different masses, but scientists do not know how these masses compare to one another. A goal of the NOvA experiment is to determine the order of the neutrino masses, known as the mass hierarchy, which will help scientists narrow their list of possible theories about how neutrinos work.

“Seeing neutrinos in the first modules of the detector in Minnesota is a major milestone,” said Fermilab physicist Rick Tesarek, deputy project leader for NOvA. “Now we can start doing physics.”

Note: NOvA stands for NuMI Off-Axis Electron Neutrino Appearance. NuMI is itself an acronym, standing for Neutrinos from the Main Injector, Fermilab’s flagship accelerator.


In August I moved away from CERN, and I’ve been back and forth between CERN and Brussels quite a lot since then. In fact, right now I’m sitting in Building 40, where people go to drink coffee and have meetings, and I can see the ATLAS Higgs convener sitting at the next table. All this leaves me feeling a little detached from what is really happening at CERN, as if it’s not “my” lab anymore, and that actually sums up how many people think about particle physics at the moment. With LHC Run I we found the Higgs boson. It was what most people expected to see, and by a large margin it was the most probable thing we would have discovered. Things will be different for Run II. Nobody has a good idea about what to expect in terms of new particles (and if they say they do have a good idea, they’re lying). In that sense it’s not “our” dataset; it’s whatever nature decides it should be. All we can do is say what is possible, not what is probable. (Although we can probably say one scenario is more probable than another.)

The problem we now face is that there is no longer an obvious piece that’s missing, but there are still many unanswered questions, which means we have to move from an era of well-constrained searches to an era of phenomenology, of looking for new effects in the data. That’s not a transition I’m entirely comfortable with, for several reasons. It’s often said that nature is not spiteful, but it is subtle and indifferent to our expectations. There’s no reason to think that there “should” be new physics for us to discover as we increase the energy of the LHC, and we could be unlucky enough not to find anything new in the Run II dataset. A phenomenological search also means that we’d be overly sensitive to statistical bumps and dips in the data. Every time there’s a new peak that we don’t expect, we have to exercise caution and skepticism, almost to the point where it stops being fun. Suppose we find an excess in a dijet spectrum. We may conclude that this is due to a new particle, but if we’re going to be phenomenologists about it we must remain open-minded, so we can’t necessarily expect to see the same particle in a dimuon final state. It would then be prudent to ask whether such a peak comes from a poorly understood effect, such as the jet energy scale, and those kinds of effects can be hard to untangle if we don’t have a good control sample in data. At least with the discovery of the Higgs boson, the top quark, and the W and Z bosons we knew what final states to expect and what ratios they should exhibit. There’s also something a little unsettling about not having a roadmap of what to expect. When asked to pick between several alternative scenarios that are neither favoured by evidence nor disfavoured by lack of evidence, it’s hard to decide what to prioritise.
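To make the point about bumps concrete, here is a toy estimate of the local significance of an excess in a single mass bin, with invented counts; the look-elsewhere effect, which makes unexpected bumps even less trustworthy, is not included:

    from scipy.stats import norm, poisson

    expected = 100.0   # predicted background in one dijet mass bin (invented number)
    observed = 130     # observed count in that bin (invented number)

    # Local p-value for at least this many events, converted to a z-score
    p_local = poisson.sf(observed - 1, expected)
    z_local = norm.isf(p_local)
    print(p_local, z_local)   # roughly 3 sigma locally -- before any look-elsewhere correction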


Take your pick of new physics! Each scenario will have new phase space to explore in LHC Run II [CMS]

On the other hand there is reason to be excited. Since we don’t know what to expect in LHC Run II, anything we do discover will change our views considerably and will lead to a paradigm shift. If we do discover a new particle, or even better, a new sector of particles, it could help frame the Standard Model as a subset of something more elegant and unified. If that’s the case then we can look forward to decades of intense and exciting research that would make the Higgs discovery look like small potatoes. So the next few years at the LHC could be either the most boring or the most exciting time in the history of particle physics, and we won’t know until we look at the data. Will nature tantalise us with hints of something novel, will it give us irrefutable evidence of a new resonance, or will it leave us with nothing new at all? For my part I’m taking on the dilepton final states. These are quick, clean, simple, and versatile signatures of something new that are not tied down to a specific model. That’s the best search I can perform in an environment of such uncertainty and with a lack of coherent direction. Let’s hope it pays off, and paves the way for even more discoveries.


What’s happening at 325 GeV at CDF? Only more data can tell us. Based on what the LHC has seen, this is probably a statistical fluctuation. (CDF)


If there were only one credible interpretation of quantum mechanics, then we could take it as a reliable representation of reality. But when there are many, it destroys the credibility of all of them. The plethora of interpretations of quantum mechanics lends credence to the thesis that science tells us nothing about the ultimate nature of reality.

Quantum mechanics, in its essence, is a mathematical formalism with an algorithm for how to connect the formalism to observation or experiments. When relativistic extensions are included, it provides the framework for all of physics[1] and the underlying foundation for chemistry. For macroscopic objects (things like footballs), it reduces to classical mechanics through some rather subtle mathematics, but it still provides the underlying framework even there. Despite its empirical success, quantum mechanics is not consistent with our common sense ideas of how the world should work. It is inherently probabilistic despite the best efforts of motivated and ingenious people to make it deterministic. It has superposition and interference of the different states of particles, something not seen for macroscopic objects. If it is weird to us, just imagine how weird it must have seemed to the people who invented it. They were trained in the classical system until it was second nature and then nature itself said, “Fooled you, that is not how things are.” Some, like Albert Einstein (1879 – 1955), resisted it to their dying days.

The developers of quantum mechanics, in their efforts to come to grips with quantum weirdness, invented interpretations that tried to understand quantum mechanics in a way that was less disturbing to common sense and their classical training. In my classes in quantum mechanics, there were hand waving discussions of the Copenhagen interpretation, but I could never see what they added to mathematical formalism. I am not convinced my lecturers could either, although the term Copenhagen interpretation was uttered with much reverence. Then I heard a lecture by Sir Rudolf Peierls[2] (1907 – 1995) claiming that the conscious mind caused the collapse of the wave function. That was an interesting take on quantum mechanics, which was also espoused by John von Neumann (1903 – 1957) and Eugene Wigner (1902 –1995) for part of their careers.

So does consciousness play a crucial role in quantum mechanics? Not according to Hugh Everett III (1930 – 1982) who invented the many-worlds interpretation. In this interpretation, the wave function corresponds to physical reality, and each time a measurement is made the universe splits into many different universes corresponding to each possible outcome of the quantum measurement process. Physicists are nothing if not imaginative. This interpretation also offers the promise of eternal life.  The claim is that in all the possible quantum universes there must be one in which you will live forever. Eventually that will be the only one you will be aware of. But as with the Greek legend of Tithonus, there is no promise of eternal youth. The results may not be pretty.

If you do not like either of those interpretations of quantum mechanics, well, have I got an interpretation for you. It goes under the title of the relational interpretation. Here the wave function is simply the information a given observer has about the quantum system and may be different for different observers; nothing mystical here and no multiplicity of worlds. Then there is the theological interpretation. This I first heard from Stephen Hawking (b. 1942), although I doubt he believed it. In this interpretation, God uses quantum indeterminacy to hide his direct involvement in the unfolding of the universe. He simply manipulates the results of quantum measurements to suit his own goals. Well, He does work in mysterious ways after all.

I will not bore you with all possible interpretations and their permutations. Life is too short for that, but we are still left with the overarching question: which interpretation is the one true interpretation? What is the nature of reality implied by quantum mechanics? Does the universe split into many? Does consciousness play a central role? Is the wave function simply information? Does God hide in quantum indeterminacy?

Experiment cannot sort this out, since all the interpretations pretty much agree on the results of experiments (even this is subject to debate), but science has one other criterion: parsimony. We eliminate unnecessary assumptions. When applied to interpretations of quantum mechanics, parsimony seems to favour the relational interpretation. But, in fact, parsimony, carefully applied, favours something else: the instrumentalist approach. That is: don’t worry about the interpretations, just shut up and calculate. All the interpretations have additional assumptions not required by observations.

But what about the ultimate nature of reality? There is no theorem that says reality, itself, must be simple. So quantum mechanics implies very little about the ultimate nature of reality. I guess we will have to leave that discussion to the philosophers and theologians. More power to them.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Although quantum gravity is still a big problem.

[2] A major player in the development of quantum many body theory and nuclear physics.


CERN will be celebrating its 60th anniversary this year. That means 60 years of pioneering scientific research and exciting discoveries. Two Italian physicists, Maria and Giuseppe Fidecaro, remember nearly all of it since they arrived in 1956. Most impressively, they are still hard at work, every day!

The couple is easy to spot, even in the cafeteria during busy lunchtimes, where they usually engage in the liveliest discussions. “We argue quite a lot,” Maria tells me with a big smile. “We have very different styles.” “But in general, in the end, we agree,” adds Giuseppe.

Photo credit: Anna Pantelia, CERN

In October 1954, Giuseppe went to the University of Liverpool as a CERN Fellow to do research with their brand new synchrocyclotron. Maria also joined, having obtained a fellowship from the International Federation of University Women. After getting married in July 1955, they carried out experiments on pions, Giuseppe with a lead glass Cerenkov counter, Maria with a diffusion chamber.

In summer 1956, both moved to Geneva, and Maria got a CERN fellowship. “There were only about 300, maybe 400 people at CERN then”, explains Maria. A beautiful mansion called “Villa de Cointrin” housed the administrative offices on the airport premises, while physicists had their offices in nearby barracks.

Giuseppe was assigned to the Synchrocyclotron Division.  This was the first accelerator built at CERN and was operated from 1957 until 1990. Giuseppe set up a group and prepared the basic equipment for experiments  that was used in 1958 for a successful search for pions decaying into an electron and a neutrino. This was a hot topic at the time and was the first experiment involving a CERN accelerator. “The news went all over the world overnight”, recalls Giuseppe. Recently refurbished, the synchrocyclotron will soon become a permanent exhibit at CERN.

Meanwhile, Maria worked on a novel method to provide polarised proton beams.  As she recalls: “It was just a mere 10 years after the end of the war. The war feelings were still very much there”. “But it was really easy to work with each other,” Giuseppe adds, “everybody got along; we all had a common goal.”

Although there were very few women when she started, Maria feels she was respected by her peers. “In my group, I was simply one of them”, she comments.

Today, long after most have retired, they have both chosen to remain active and are still doing research but of another style.  Giuseppe delves in the history of physics while Maria is happy to revisit some of her past work, making sure she did not overlook any important detail. “In the heat of the moment, with the beams on and everything, there was no time to have a broad view”, she explains. “It’s a pleasure to go back and gain a deeper insight, and put our work in perspective with respect to what was going on at CERN and elsewhere”.

Both agree: every moment was good. “Having gone through all of it for 60 years is what has been best,” Maria says. “It was great to be able to pioneer so many different experiments,” adds Giuseppe, “and to share work with so many interesting people.” Maria confirms: “Life has been kind to us.”

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.

 


CERN will celebrate its 60th anniversary this year. That means 60 years of scientific research and exciting discoveries. Two Italian physicists remember nearly all of it: Maria and Giuseppe Fidecaro started working at CERN in 1956. Even more surprisingly, they can still be found there every day!

The couple is easy to spot, even when the cafeteria is packed at lunchtime: they are always engaged in lively discussions. “We argue quite a lot,” Maria tells me with a big smile. “We have very different styles.” “But in general, in the end, we agree,” adds Giuseppe.

Photo: Anna Pantelia, CERN

In October 1954, Giuseppe left for the University of Liverpool to work with their brand new synchrocyclotron. Maria did the same, having obtained a fellowship from the International Federation of University Women. After their marriage in July 1955, they worked on pions: Giuseppe with a lead-glass Cherenkov detector, and Maria with a diffusion chamber. In the summer of 1956, the two researchers settled in Geneva, where Maria had obtained a CERN fellowship. “There were only about 300, maybe 400 people at CERN then,” she recalls. Near the airport, a beautiful mansion called the “Villa de Cointrin” housed the administrative offices, while the physicists had their offices in nearby barracks.

At CERN, Giuseppe was assigned to the Synchrocyclotron Division. This was the Laboratory’s first accelerator, and it was operated from 1957 to 1990. Giuseppe set up a group there and prepared the basic experimental equipment. The synchrocyclotron was used in 1958 for a successful search for pion decays into an electron and a neutrino. This decay mode had never been observed, which raised many questions at the time. It was the first experiment involving an accelerator at CERN. “The news went around the world!” Giuseppe still recalls happily. The synchrocyclotron has since been refurbished and will soon become a permanent exhibit.

Maria, for her part, was working at the time on a new method for providing polarised proton beams. “It was barely ten years after the end of the war,” she remembers. “Memories of the conflict were still very present.” “But it was really easy to work with one another. Everybody got along. We had a common goal,” adds Giuseppe. Although women were very few in number when she started, Maria felt respected by her colleagues. “In my group, I was simply one scientist among others,” she points out.

While most of their colleagues retired long ago, the two researchers have chosen to remain active and are still doing research, but in a different style. Giuseppe is interested in the history of physics, while Maria is happy to revisit part of her past work, making sure she did not overlook any important details. “In the heat of the action, with the beams and everything else, we didn’t have time to step back from our research,” she explains. “It’s a pleasure today to go back, gain a deeper insight and put our work back in the context of the time, at CERN and elsewhere.”

Both agree: every moment was good. “The best part is having been able to take part in all of it for 60 years,” says Maria. Giuseppe adds: “It was great to be able to take part as pioneers in so many different experiments, and to have shared the work of so many interesting people.” And Maria concludes: “Life has been kind to us.”

Pauline Gagnon

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline, or add your name to this distribution list to receive an e-mail notification.

 


This article appeared in symmetry on January 30, 2014.

A video from Fermilab highlights some of the many steps needed to build the largest neutrino experiment in the United States.


Coordinating the construction of an international particle physics experiment is never an easy task.

This is indeed the case for NOvA, a US-based physics experiment that studies a beam of hard-to-catch particles sent an unprecedented 500 miles through the Earth toward a 14,000-ton particle detector. Building the experiment has required harmonizing the efforts of several dozen laboratories, universities and companies from the United States, Brazil, the Czech Republic, Greece, India, Japan, Russia and the United Kingdom.

“It sinks in,” says John Perko, a construction technician at the NOvA facility in Ash River, Minnesota, in a new video about the process of building the NOvA detector. “It makes you feel that the whole world’s watching.”

The scientists on the NOvA collaboration have come together to study neutrinos, particles that are abundant in nature but that physicists still don’t quite understand. They are mysteriously lightweight, leading physicists to wonder if something other than the Higgs boson gives them their masses. Neutrinos come in three types, and they morph from one to another. Scientists think they might hold clues to what caused the imbalance between matter and antimatter in our universe.

To study these elusive particles, scientists on the NOvA collaboration designed a set of two detectors—a 300-ton one located near the source of the neutrino beam and a 14,000-ton one located in Ash River, Minnesota.

Fermilab recently posted a video highlighting some of the many steps required to build these detectors, from extruding 50-foot-long plastic tubes at a company in Manitowoc, Wisconsin, to assembling them into modules at a facility staffed by students at the University of Minnesota, to putting together the world’s largest free-standing plastic structure.

“I’m familiar with all the neutrino projects that are going on, and getting to actually be a part of one of those projects is pretty exciting,” University of Minnesota physics student Nicole Olsen says in the video.

Workers are scheduled to finish building the detectors this spring, and they plan to finish outfitting them with electronics in the summer. They have already begun to take data with portions of the experiment, and their capabilities will only improve as they get closer to completing construction.

Kathryn Jepsen


The discovery of the Higgs boson was a triumph for particle physics. It completes the tremendously successful Standard Model of particle physics. Of course, we know there are other phenomena — like dark matter, the dominance of matter over antimatter, the mass of neutrinos, etc. — that aren’t explained by the Standard Model. However, the Higgs itself is the source of one of the deepest mysteries of particle physics: the fine-tuning problem.

The fine-tuning problem is related to the slippery concept of naturalness, and has driven the bulk of theoretical work for the last several decades.  Unfortunately, it is notoriously difficult to explain.  I took on this topic recently for a public lecture and came up with an analogy that I would like to share.

Why we take our theory seriously

Before discussing the fine tuning, we need a few prerequisites. The first thing to know is that the Standard Model (and most other theories we are testing) is based on a conceptual framework called Relativistic Quantum Field Theory (QFT). As you might guess from the name, it’s based on the pillars of relativity, quantum mechanics, and field theory. The key point here is that relativistic quantum field theory goes beyond the initial formulation of quantum mechanics. To illustrate this difference, let’s consider a property of the electron and muon called the “g-factor”, which relates its magnetic moment and spin [more]. In standard quantum mechanics, the prediction is that g = 2; however, with relativistic quantum field theory we expect corrections. Those corrections are shown pictorially in the Feynman diagrams below.

Feynman diagrams showing the quantum corrections to the g-factor.

It turns out that this correction is small — about one part in a thousand.  But we can calculate it to an exquisite accuracy (about ten digits).  Moreover, we can measure it to a comparable accuracy.  The current result for the muon is

g = 2.0023318416 ± 0.000000001

This is a real tour de force for relativistic quantum field theory and represents one of the most stringent tests of any theory in the history of science [more]. To put it into perspective, it’s slightly better than hitting a hole in one from New York to China (that distance is about 10,000 km = 1 billion cm).
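A quick back-of-the-envelope version of that comparison, using only the numbers quoted above (my own rough arithmetic):

    # Relative precision of the quoted g-factor measurement
    g, dg = 2.0023318416, 1e-9
    rel = dg / g
    print(rel)          # ~5e-10

    # Over the ~10,000 km (1e9 cm) New York-to-China shot, that relative precision
    # corresponds to localizing the ball to about half a centimetre
    print(rel * 1e9)    # ~0.5 cm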

It is because of tests like these that we take the predictions of this conceptual framework very seriously.


The Higgs, fine tuning, and an analogy

It turns out that all quantities that we can predict receive similar quantum corrections, even the mass of the Higgs boson.    In the Standard Model, there is a free parameter that can be thought of as an initial estimate for the Higgs mass, let’s call it M₀.  There will also be corrections, let’s call them ΔM (where Δ is pronounced “delta” and it indicates “change to”).   The physical mass that we observe is this initial estimate plus the corrections.  [For the aficionados: usually physicists talk about the mass squared instead of the mass, but that does not change the essential message].

The funny thing about the mass of the Higgs is that the corrections are not small. In fact, the naive size of the corrections is enormously larger than the 126 GeV mass that we observe!

Confused?  Now is a good time to bring in the analogy.  Let’s think about the budget of a large country like the U.S.  We will think of positive contributions to the Higgs mass as income (taxes) and negative contributions to the Higgs mass as spending.  The physical Higgs mass that we measure corresponds to the budget surplus.

Now imagine that there is no coordination between the raising of taxes and government spending (maybe it’s not that hard). Wouldn’t you be surprised if a large economy of trillions of dollars had a budget balanced to better than a penny? Wouldn’t it be unnatural to expect such a fine tuning between income and spending if they are just independent quantities?

This is exactly the case we find ourselves in with the Standard Model… and we don’t like it.  With the discovery of the Higgs, the Standard Model is now complete.  It is also the first theory we have had that can be extrapolated to very high energies (we say that it is renormalizable). But it has a severe fine tuning problem and does not seem natural.


The analogy can be fleshed out a bit more.  It turns out that the size of the corrections to the Higgs mass is related to something we call the cutoff, which is the  energy scale where the theory is no longer a valid approximation because some other phenomena become important.  For example, in a grand unified theory the strong force and the electroweak force would unify at approximately 10¹⁶ GeV (10 quadrillion GeV), and we would expect the corrections to be of a similar size.  Another common energy scale for the cutoff is the Planck Scale — 10¹⁹ GeV — where the quantum effects of gravity become important.  In the analogy, the cutoff energy corresponds to the fiscal year.  As time goes on, the budget grows and the chance of balancing the budget so precisely seems more and more unnatural.
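A crude way to put numbers on how well this budget has to balance, under the usual assumption that the correction to the mass-squared is of order the cutoff squared:

    m_higgs = 126.0    # GeV, observed Higgs boson mass

    for name, cutoff in [("GUT scale", 1e16), ("Planck scale", 1e19)]:
        # If the correction to the mass-squared is ~ cutoff^2, the required
        # cancellation is about (m_higgs / cutoff)^2
        tuning = (m_higgs / cutoff) ** 2
        print(name, tuning)   # roughly one part in 10^28 and one part in 10^34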

Going even further, I can’t resist pointing out that the analogy also offers a nice way to think about one of the most enigmatic concepts in quantum field theory, called renormalization. We often use this term to describe how fundamental constants aren’t really constant. For example, the charge of an electron depends on the energy you use to probe the electron. In the analogy, renormalization is like adjusting for inflation. We know that a dollar today isn’t comparable to a dollar fifty years ago.
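As a concrete example of a "constant" that runs, here is the textbook one-loop QED expression for the electromagnetic coupling, keeping only the electron loop (an illustration of the idea, not a precision formula):

    import math

    alpha_0 = 1 / 137.036   # fine-structure constant measured at low energy
    m_e = 0.000511          # GeV, electron mass

    def alpha(Q):
        """One-loop running of the QED coupling with only the electron in the loop."""
        return alpha_0 / (1 - (alpha_0 / (3 * math.pi)) * math.log(Q ** 2 / m_e ** 2))

    print(alpha(91.2))      # at the Z mass the coupling is already a couple of percent larger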

Breaking down the budget

The first thing one wants to understand before attempting to balance the budget is where the money is going. In the U.S. the big budget items are the military and social programs like Social Security and Medicare. In the case of the Higgs, the biggest corrections come from the top quark (the top diagrams on the right). Of course the big budget items get most of the attention, and so it is with physics as well. Most of the thinking that goes into solving the fine-tuning problem is related to the top quark.

The budget of quantum corrections to the Higgs mass, dominated by the top quark.

Searching for a principle to balance the budget

Maybe it’s not a miraculous coincidence that the budget is balanced so well.  Maybe there is some underlying principle.  Maybe someone came to Washington DC and passed a law to balance the budget that says that for every dollar of spending there must be a dollar of revenue.  This is an excellent analogy for supersymmetry.  In supersymmetry, there is an underlying principle — a symmetry — that relates two types of particles (fermions and bosons).  These two types of particles give corrections to the Higgs mass with opposite signs.  If this symmetry was perfect, the budget would be perfectly balanced, and it would not be unnatural for the Higgs to be 126 GeV.
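In equations, the cancellation looks schematically like this (textbook leading-order expressions, quoted up to factors of order one and not tied to any particular analysis): the top loop gives \(\Delta M^2 \approx -\frac{3 y_t^2}{8\pi^2}\Lambda^2\), its scalar partners (the stops) give \(+\frac{3 y_t^2}{8\pi^2}\Lambda^2\), and the leftover correction is only of order \(\frac{3 y_t^2}{8\pi^2}\, m_{\tilde t}^2 \ln(\Lambda^2/m_{\tilde t}^2)\). If the stop masses \(m_{\tilde t}\) are not far above the top mass, that residual is naturally of order the observed Higgs mass; the heavier the superpartners, the more the budget must again be tuned by hand.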

That is one of the reasons that supersymmetry is so highly motivated, and there is an enormous effort to search for signs of supersymmetry in the LHC data. Unfortunately, we haven’t seen any evidence for supersymmetry thus far. In the analogy, that is a bit like saying that if there is some sort of law to balance the budget, it allows for some wiggle room between spending and taxes. If the law allows for too much wiggle room, then it may still be a law, but it isn’t explaining why the budget is balanced as well as it is. The current state of the LHC experiments indicates that the budget is balanced about 10-100 times better than the wiggle room allows — which is better than we would expect, but not so much better that it seems unnatural. However, if we don’t see supersymmetry in the next run of the LHC the situation will be worse. And if we were to build a 100 TeV collider and not see evidence of supersymmetry, then the level of fine tuning would be high enough that most physicists probably would consider the situation unnatural and abandon supersymmetry as the solution to the fine-tuning problem.


Since the fine-tuning problem was first recognized, there have been essentially two proposed solutions. One of them is supersymmetry, which I discussed above. The second is often referred to as strong dynamics or compositeness. The idea there is that maybe the Higgs is not a fundamental particle, but instead a composite of some more fundamental particles. My colleague Jamison Galloway and I tried to think through the analogy in that situation. In that case, one must start to think of different kinds of currencies… say the dollar for the Higgs boson and some other currency, like bitcoin, for the more fundamental particles. You would imagine that as time goes on (energy increases) there is a transition from one currency to another. At early times the budget is described entirely in terms of dollars, but at later times the budget is described in terms of bitcoin. That transition can be very complicated, but if it happened at a time when the total budget in dollars wasn’t too large, then a well-balanced budget wouldn’t seem too unnatural. Trying to explain the rest of the compositeness story took us from a simple analogy to the basis for a series of sci-fi fantasy books, and I will spare you from that.

There are a number of examples where this aesthetic notion of naturalness has been a good guide, which is partially why physicists hold it so dear. However, another line of thinking is that maybe the theory is unnatural, and maybe it is random chance that the budget is balanced so well. That thinking is bolstered by the idea that there may be a huge number of universes that are part of a larger complex we call the multiverse. In most of these universes the budget wouldn’t be balanced, and the Higgs mass would be very different. In fact, most universes would not form atoms, would not form stars, and would not support life. Of course, we are here to observe our universe, and the conditions necessary to support life select very special universes out of the larger multiverse. Maybe it is this requirement that explains why our universe seems so finely tuned. This reasoning is called the anthropic principle, and it is one of the most controversial topics in theoretical physics. Many consider it to be giving up on a more fundamental theory that would explain why nature is as it is. The very fact that we are resorting to this type of reasoning is evidence that the fine-tuning problem is a big deal. I discuss this at the end of the public lecture (starting around the 30-minute mark) with another analogy for the multiverse, but maybe I will leave that for another post.

Nota bene: After developing this analogy I learned about a similar one from Tommaso Dorigo. Both use the idea of money, but the budget analogy goes a bit further.
