
This article appeared in Fermilab Today on June 22, 2015.

Steve Gould of the Fermilab Technical Division prepares a cold test of a short quadrupole coil. The coil is of the type that would go into the High-Luminosity LHC. Photo: Reidar Hahn

Last month, a group collaborating across four national laboratories completed the first successful tests of a superconducting coil in preparation for the future high-luminosity upgrade of the Large Hadron Collider, or HL-LHC. These tests indicate that the magnet design may be adequate for its intended use.

Physicists, engineers and technicians of the U.S. LHC Accelerator Research Program (LARP) are working to produce the powerful magnets that will become part of the HL-LHC, scheduled to start up around 2025. The plan for this upgrade is to increase the particle collision rate, or luminosity, by approximately a factor of 10, expanding the collider’s physics reach by delivering 10 times more data.

“The upgrade will help us get closer to new physics. If we see something with the current run, we’ll need more data to get a clear picture. If we don’t find anything, more data may help us to see something new,” said Technical Division’s Giorgio Ambrosio, leader of the LARP magnet effort.

LARP is developing more advanced quadrupole magnets, which are used to focus particle beams. These magnets will have larger beam apertures and the ability to produce higher magnetic fields than those at the current LHC.

The Department of Energy established LARP in 2003 to contribute to LHC commissioning and prepare for upgrades. LARP includes Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory and SLAC. Its members began developing the technology for advanced large-aperture quadrupole magnets around 2004.

The superconducting magnets currently in use at the LHC are made from niobium titanium, which has proven to be a very effective material to date. However, they will not be able to support the higher magnetic fields and larger apertures the collider needs to achieve higher luminosities. To push these limits, LARP scientists and engineers turned to a different material, niobium tin.

Niobium tin was discovered before niobium titanium. However, it has not yet been used in accelerators because, unlike niobium titanium, niobium tin is very brittle, making it susceptible to mechanical damage. To be used in high-energy accelerators, these magnets need to withstand large amounts of force, making them difficult to engineer.

LARP worked on this challenge for almost 10 years and went through a number of model magnets before it successfully started the fabrication of coils for 150-millimeter-aperture quadrupoles. Four coils are required for each quadrupole.

LARP and CERN collaborated closely on the design of the coils. After the first coil was built in the United States earlier this year, the LARP team successfully tested it in a magnetic mirror structure. The mirror structure makes it possible to test individual coils under magnetic field conditions similar to those of a quadrupole magnet. At 1.9 Kelvin, the coil exceeded 19 kiloamps, 15 percent above the operating current.

The team also demonstrated that the coil was protected from the stresses and heat generated during a quench, the rapid transition from superconducting to normal state.

“The fact that the very first test of the magnet was successful was based on the experience of many years,” said TD’s Guram Chlachidze, test coordinator for the magnets. “This knowledge and experience is well recognized by the magnet world.”

Over the next few months, LARP members plan to test the completed quadrupole magnet.

“This was a success for both the people building the magnets and the people testing the magnets,” said Fermilab scientist Giorgio Apollinari, head of LARP. “We still have a mountain to climb, but now we know we have all the right equipment at our disposal and that the first step was in the right direction.”

Diana Kwon


I know what you are thinking. The LHC is back in action, at the highest energies ever! Where are the results? Where are all the blog posts?

Back in action, yes, but restarting the LHC is a very measured process. For one thing, when running at the highest beam energies ever achieved, we have to be very careful about how we operate the machine, lest we inadvertently damage it with beams that are mis-steered for whatever reason. The intensity of the beams — how many particles are circulating — is being incrementally increased with successive fills of the machine. Remember that the beam is bunched — the proton beams aren’t continuous streams of protons, but collections that are just a few centimeters long, spaced out by at least 750 centimeters. The LHC started last week with only three proton bunches in each beam, only two of which were actually colliding at an interaction point. Since then, the LHC team has gone to 13 bunches per beam, and then 39 bunches per beam. Full-on operations will be more like 1380 bunches per beam. So at the moment, the beams are of very low intensity, meaning that there are not that many collisions happening, and not that much physics to do.
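The numbers in that paragraph are easy to check yourself. Here is a quick sketch; the 25-nanosecond bunch spacing is the LHC design value, used here as an assumption:

```python
# Toy check of the bunch structure described above: bunches a few cm long,
# spaced by at least 750 cm, with intensity ramped up fill by fill.
C = 2.998e8            # speed of light, m/s
spacing_ns = 25.0      # design bunch spacing in nanoseconds (assumption)
spacing_m = C * spacing_ns * 1e-9
print(f"bunch spacing: {spacing_m:.1f} m")   # ~7.5 m, i.e. ~750 cm

# The collision rate scales roughly with the number of colliding bunch pairs,
# so the early fills deliver only a tiny fraction of the full-intensity rate:
for n in (3, 13, 39, 1380):
    print(n, "bunches ->", f"{n / 1380:.4f}", "of full-intensity rate")
```

This is why "not that many collisions" is an understatement: three bunches per beam is roughly a factor of 500 below full-on operations even before accounting for the number of protons per bunch.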

What’s more, the experiments have much to do also to prepare for the higher collision rates. In particular, there is the matter of “timing in” all the detectors. Information coming from each individual component of a large experiment such as CMS takes some time to reach the data acquisition system, and it’s important to understand how long that time is, and to get all of the components synchronized. If you don’t have this right, then you might not be getting the optimal information out of each component, or worse still, you could end up mixing up information from different bunch crossings, which would be disastrous. This, along with other calibration work, is an important focus during this period of low-intensity beams.
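To illustrate the idea of "timing in", here is a deliberately simplified sketch. The component names and latency values are invented for illustration and bear no relation to any real detector's readout:

```python
# Hypothetical sketch: each detector component has a fixed readout latency,
# and subtracting it maps a recorded timestamp back to the 25 ns
# bunch-crossing slot the signal actually came from. Getting these offsets
# wrong mixes up information from different crossings.
BUNCH_SPACING_NS = 25.0

latency_ns = {"tracker": 110.0, "calorimeter": 85.0, "muon_system": 160.0}

def bunch_crossing(component: str, recorded_t_ns: float) -> int:
    """Assign a recorded hit time to a bunch-crossing index."""
    corrected = recorded_t_ns - latency_ns[component]
    return round(corrected / BUNCH_SPACING_NS)

# The same physical crossing (t = 100 ns, index 4) seen by two components:
print(bunch_crossing("tracker", 210.0))       # -> 4
print(bunch_crossing("calorimeter", 185.0))   # -> 4
```

In reality the offsets are measured channel by channel with real beam, which is exactly the kind of calibration work the low-intensity period is for.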

But even if all these things were working right out of the box, we’d still have a long way to go until we had some scientific results. As noted already, the beam intensities have been low, so there aren’t that many collisions to examine. There is much work to do yet in understanding the basics in a revised detector operating at a higher beam energy, such as how to identify electrons and muons once again. And even once that’s done, it will take a while to make measurements and fully vet them before they could be made public in any way.

So, be patient, everyone! The accelerator scientists and the experimenters are hard at work to bring you a great LHC run! Next week, the LHC takes a break for maintenance work, and that will be followed by a “scrubbing run”, the goal of which is to improve the vacuum in the LHC beam pipe. That will allow higher-intensity beams, and position us to take data that will get the science moving once again.


[Image: LHC Page 1 status display, 3 June 2015]

Today begins the second operation period of the Large Hadron Collider (LHC) at CERN. By declaring “stable beams”, the LHC operators signal to physicists that it is now safe to turn all their detectors on. After more than two years of intensive repair and consolidation work, the LHC now operates at higher energy. What do we hope to achieve?

The discovery of the Higgs boson in July 2012 completed the Standard Model of particle physics. This theoretical model describes all matter seen around us, both on Earth and in all stars and galaxies. But this is precisely the problem: this model only applies to what is visible in the Universe, namely 5% of its content in matter and energy. The rest consists of dark matter (27%) and dark energy (68%), two absolutely unknown substances. Hence the need for a more encompassing theory. But what is it and how can it be reached?

By operating the LHC at 13 TeV, we now have much more energy available to produce new particles than during the 2010-2012 period, when the proton collisions occurred at 8 TeV. Given that energy and mass are two forms of the same essence, the energy released during these collisions materialises, producing new particles. Having more energy means one can now produce heavier particles. It is as if one’s budget just went from 8000 euro to 13000 euro. We can “afford” bigger particles if they exist in Nature.
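The budget analogy can be made concrete with a couple of lines of arithmetic. This is just the energy bookkeeping (masses in TeV, natural units), not a production-rate calculation:

```python
# Energy "budget" comparison between the two LHC run periods described above.
# Via E = mc^2, the collision energy caps the total mass of new particles
# that could in principle be produced in one collision.
E_RUN1_TEV = 8.0
E_RUN2_TEV = 13.0

print(f"budget increase: {E_RUN2_TEV / E_RUN1_TEV:.2f}x")  # about 1.6x

# For scale: the Higgs boson mass is ~0.125 TeV. In practice only a fraction
# of the beam energy goes into each quark/gluon collision, but the headroom
# above all known particles is what opens the door to heavier ones.
print(E_RUN2_TEV / 0.125)   # -> 104.0 Higgs masses' worth of energy
```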

The Standard Model tells us that all matter is built from twelve basic particles, just like a construction set consisting of twelve basic building blocks and some “connectors” linking them together. These connectors are other particles associated with the fundamental forces. Since none of these particles has the properties of dark matter, there must still be undiscovered particles.

Which theory will allow us to go beyond the Standard Model? Will it be Supersymmetry, one of the numerous theoretical hypotheses currently under study? This theory would unify the particles of matter with the particles associated with the fundamental forces. But Supersymmetry implies the existence of numerous new particles, none of which has been found yet.

Will the LHC operating at 13 TeV allow us to produce some of these supersymmetric particles? Or will the entrance of the secret passage towards this “new physics” be revealed by meticulously studying a plethora of quantities, such as the properties of the Higgs boson? Will we discover that it establishes a link between ordinary matter (everything described by the Standard Model) and dark matter?

These are some of the many questions the LHC could clarify in the coming years. An experimental discovery would reveal the new physics. We might very well be on the verge of a huge scientific revolution.

For more information about particle physics and my book, see my website


This article appeared in Fermilab Today on May 27, 2015.

The future Dark Energy Spectroscopic Instrument will be mounted on the Mayall 4-meter telescope. It will be used to create a 3-D map of the universe for studies of dark energy. Photo courtesy of NOAO

Dark energy makes up about 70 percent of the universe and is causing its accelerating expansion. But what it is or how it works remains a mystery.

The Dark Energy Spectroscopic Instrument (DESI) will study the origins and effects of dark energy by creating the largest 3-D map of the universe to date. It will produce a map of the northern sky that will span 11 billion light-years and measure around 25 million galaxies and quasars, extending back to when the universe was a mere 3 billion years old.

Once construction is complete, DESI will sit atop the Mayall 4-Meter Telescope in Arizona and take data for five years.

DESI will work by collecting light using optical fibers that look through the instrument’s lenses and can be wiggled around to point precisely at galaxies. With 5,000 fibers, it can collect light from 5,000 galaxies at a time. These fibers will pass the galaxy light to a spectrograph, and researchers will use this information to precisely determine each galaxy’s three-dimensional position in the universe.
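As a rough sketch of that last step, here is how a sky position plus a spectroscopically measured redshift becomes a 3-D location. This uses the low-redshift Hubble-law approximation; the real analysis integrates a full cosmological model, and the Hubble constant value here is an assumption:

```python
import math

# Sketch: sky coordinates + redshift -> 3-D position, using d ~ cz / H0.
C_KM_S = 299792.458      # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s/Mpc (assumed value)

def galaxy_xyz(ra_deg, dec_deg, z):
    """Approximate Cartesian position (Mpc) of a galaxy from its spectrum."""
    d = C_KM_S * z / H0                       # distance from redshift
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (d * math.cos(dec) * math.cos(ra),
            d * math.cos(dec) * math.sin(ra),
            d * math.sin(dec))

x, y, z3 = galaxy_xyz(ra_deg=180.0, dec_deg=0.0, z=0.1)
print(round(x), round(y), round(z3))   # -> -428 0 0 (Mpc): on the -x axis
```

The redshift, in other words, supplies the third dimension that an imaging survey alone cannot.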

Lawrence Berkeley National Laboratory is managing the DESI experiment, and Fermilab is making four main contributions: building the instrument’s barrel, packaging and testing charge-coupled devices, or CCDs, developing an online database and building the software that will tell the fibers exactly where to point.

The barrel is a structure that will hold DESI’s six lenses. Once complete, it will be around 2.5 meters tall and a meter wide, about the size of a telephone booth. Fermilab is assembling both the barrel and the structures that will hold it on the telescope.

“It’s a big object that needs to be built very precisely,” said Gaston Gutierrez, a Fermilab scientist managing the barrel construction. “It’s very important to position the lenses very accurately, otherwise the image will be blurred.”

DESI’s spectrograph will use CCDs, sensors that work by converting light collected from distant galaxies into electrons, then to digital values for analysis. Fermilab is responsible for packaging and testing these CCDs before they can be assembled into the spectrograph.
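A minimal sketch of the tail end of that chain, collected electrons to digital values; the gain figure is invented for illustration:

```python
# Hypothetical CCD digitization step: the readout amplifier converts a
# pixel's accumulated photoelectrons into digital counts (ADU) through a
# fixed gain. The gain value below is an assumption, not a DESI spec.
GAIN_E_PER_ADU = 4.0     # electrons per digital count (assumed)

def electrons_to_adu(n_electrons: int) -> int:
    """Digitize the charge collected in one pixel."""
    return int(n_electrons / GAIN_E_PER_ADU)

print(electrons_to_adu(10000))   # -> 2500 counts
```

Testing the CCDs amounts to verifying, device by device, that this conversion is linear, low-noise and uniform enough for precision spectroscopy.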

Fermilab is also creating a database that will store information required to operate DESI’s online systems, which direct the position of the telescope, control and read the CCDs, and ensure proper functioning of the spectrograph.

Lastly, Fermilab is developing the software that will convert the known positions of interesting galaxies and quasars to coordinates for the fiber positioning system.

Fermilab completed these same tasks when it built the Dark Energy Camera (DECam), an instrument that currently sits on the Victor Blanco Telescope in Chile, imaging the universe. Many of the same scientists and engineers are bringing that expertise to DESI.

“DESI is the next step. DECam is going to precisely measure the sky in 2-D, and getting to the third dimension is a natural progression,” said Fermilab’s Brenna Flaugher, project manager for DECam and one of the leading scientists on DESI.

These four contributions are set to be completed by 2018, and DESI is expected to see first light in 2019.

“This is a great opportunity for students to learn the technology and participate in a nice instrumentation project,” said Juan Estrada, a Fermilab scientist leading the DESI CCD effort.

DESI is funded largely by the Department of Energy with significant contributions from non-U.S. and private funding sources. It is currently undergoing the DOE CD-2 review and approval process.

“We’re really appreciative of the strong technical and scientific support from Fermilab,” said Berkeley Lab’s Michael Levi, DESI project director.

Diana Kwon


All those super low energy jets that the LHC cannot see? The LHC can still see them.

Hi Folks,

Particle colliders like the Large Hadron Collider (LHC) are, in a sense, very powerful microscopes. The higher the collision energy, the smaller the distances we can study. Using less than 0.01% of the total LHC energy (13 TeV), we see that the proton is really just a bag of smaller objects called quarks and gluons.

[Image: the proton as a bag of quarks and gluons, via Prof. Matt Strassler]

This means that when two protons collide things are sprayed about and get very messy.

[Image: ATLAS event display of a proton-proton collision]

One of the most important processes that occurs in proton collisions is the Drell-Yan process. When a quark, e.g., a down quark d, from one proton and an antiquark, e.g., a down antiquark d, from an oncoming proton collide, they can annihilate into a virtual photon (γ) or Z boson if the net electric charge is zero (or a W boson if the net electric charge is one). After briefly propagating, the photon/Z can split into a lepton and its antiparticle partner, for example into a muon and antimuon or an electron-positron pair! In pictures, quark-antiquark annihilation into a lepton-antilepton pair (the Drell-Yan process) looks like this:

[Figure: Feynman diagram of the simplest Drell-Yan process]

By the conservation of momentum, the sum of the muon and antimuon momenta will add up to the photon/Z boson momentum. In experiments like ATLAS and CMS, this gives a very cool-looking distribution:
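That conservation-of-momentum statement is exactly how the plotted quantity is computed: combine the two muons' four-momenta and take the invariant mass. A minimal sketch in natural units (GeV), with muon masses neglected:

```python
import math

# Invariant mass of a lepton pair: m^2 = (E1+E2)^2 - |p1+p2|^2.
# Four-momenta are (E, px, py, pz) tuples in GeV.
def invariant_mass(p1, p2):
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(e**2 - px**2 - py**2 - pz**2)

# Two back-to-back 45.6 GeV muons from a Z decay at rest:
mu_plus  = (45.6, 0.0, 0.0,  45.6)
mu_minus = (45.6, 0.0, 0.0, -45.6)
print(invariant_mass(mu_plus, mu_minus))   # ~91.2 GeV: the Z boson peak
```

Histogramming this quantity over millions of muon pairs is what produces the peaks in the distribution below.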

[Figure: dimuon invariant mass distribution measured by CMS at 7 TeV]

Plotted is the invariant mass distribution for any muon-antimuon pair produced in proton collisions at the 7 TeV LHC. The rightmost peak, at about 90 GeV (about 90 times the proton’s mass!), corresponds to the production of Z bosons. The other peaks represent the production of similarly well-known particles in the particle zoo that have decayed into a muon-antimuon pair. The clarity of each peak, and the fact that this plot uses only about 0.2% of the total data collected during the first LHC data collection period (Run I), means that the Drell-Yan process is very useful for calibrating the experiments. If the experiments are able to see the Z boson, the rho meson, etc., at their correct energies, then we have confidence that the experiments are working well enough to study nature at energies never before explored in a laboratory.

However, in real life, the Drell-Yan process is not as simple as drawn above. Real collisions include the remnants of the scattered protons. Remember: the proton is a bag filled with lots of quarks and gluons.

[Figure: Drell-Yan process with additional gluon radiation]

Gluons are what hold quarks together to make protons; they mediate the strong nuclear force, described by quantum chromodynamics (QCD). The strong force is so named because it requires a lot of energy and effort to overcome. Before annihilating, the quark and antiquark pair that participate in the Drell-Yan process will have radiated lots of gluons. It is very easy for objects that experience the strong force to radiate gluons. In fact, the antiquark in the Drell-Yan process originates from an energetic gluon that split into a quark-antiquark pair. Though less common, every once in a while two or even three energetic quarks or gluons (collectively called jets) will be produced alongside a Z boson.

[Figure: Drell-Yan process accompanied by three jets]

Here is a real life Drell-Yan (Z boson) event with three very energetic jets. The blue lines are the muons. The red, orange and green “sprays” of particles are jets.

[Image: ATLAS event display of a Z → μμ event with three jets]


As likely or unlikely as it may be for a Drell-Yan process to occur with additional energetic jets, the frequency at which they do occur appears to match our theoretical predictions very well. The plot below shows the likelihood (“production cross section”) of a W or Z boson with at least 0, 1, 2, 3, or 4(!) very energetic jets. The blue bars are the theoretical predictions and the red circles are data. Producing a W or Z boson with more energetic jets is less likely than producing one with fewer jets. The more jets identified, the smaller the production rate (“cross section”).
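The "fewer events with more jets" pattern is often summarized as a roughly constant suppression per extra jet. A toy illustration; the normalization and the 0.2 ratio are invented numbers, not the measured values from the plot:

```python
# Toy model of jet-multiplicity scaling: each additional energetic jet
# costs roughly a constant factor in the production cross section.
BASE_XSEC = 1000.0   # arbitrary units for Z + >= 0 jets (invented)
RATIO = 0.2          # assumed suppression per extra jet (invented)

for njets in range(5):
    print(f"Z + >= {njets} jets: {BASE_XSEC * RATIO**njets:.1f}")
```

Each extra jet drops the rate by the same factor, which is why the measured cross sections line up on a staircase when plotted on a log scale.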

[Figure: CMS measurements of W/Z + n jets production cross sections, data vs. theory]

How about low energy jets? These are difficult to observe because experiments have high thresholds for any part of a collision to be recorded. The ATLAS and CMS experiments, for example, are insensitive to very low energy objects, so not every piece of an LHC proton collision will be recorded. In short: sometimes a jet or a photon is too “dim” for us to detect it. But unlike high energy jets, it is very, very easy for Drell-Yan processes to be accompanied with low energy jets.

[Figure: Drell-Yan process with many low energy gluon emissions]

There is a subtlety here. Our standard tools and tricks for calculating the probability of something happening in a proton collision (perturbation theory) assume that we are studying objects with much higher energies than the proton at rest. Radiation of very low energy gluons is a special situation where our usual calculation methods do not work. The solution is rather cool.

As we said, the Z boson produced in the quark-antiquark annihilation has much more energy than any of the low energy gluons that are radiated, so emitting a low energy gluon should not affect the system much. This is like a massive freight train pulling coal and dropping one or two pieces of coal. The train carries so much momentum and the coal is so light that dropping even a dozen pieces of coal will have only a negligible effect on the train’s motion. (Dropping all the coal, on the other hand, would not only drastically change the train’s motion but likely also be a terrible environmental hazard.) We can now make certain approximations in our calculation of radiating a low energy gluon, called “soft gluon factorization”. The result is remarkably simple, so simple we can generalize it to an arbitrary number of gluon emissions. This process is called “soft gluon resummation” and was formulated in 1985 by Collins, Soper, and Sterman.

Low energy gluons, even if they cannot be individually identified, still have an effect. They carry away energy, and by momentum conservation this will slightly push and kick the system in different directions.
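The freight-train picture can be made quantitative with a toy Monte Carlo. Each soft emission gives the Z boson a small, randomly directed transverse kick; the kick size and counts below are invented for illustration, not a real calculation:

```python
import math
import random

# Toy model of many soft gluon emissions: randomly directed small kicks
# mostly cancel, so the Z boson ends up with low transverse momentum --
# exactly the regime where soft gluon resummation matters.
random.seed(42)

def z_boson_pt(n_gluons, kick_gev=0.5):
    """Net transverse recoil of the Z after n_gluons soft emissions."""
    px = py = 0.0
    for _ in range(n_gluons):
        phi = random.uniform(0.0, 2.0 * math.pi)
        px -= kick_gev * math.cos(phi)   # recoil against each emission
        py -= kick_gev * math.sin(phi)
    return math.hypot(px, py)

# The average recoil grows only like sqrt(N), not N:
trials = [z_boson_pt(100) for _ in range(2000)]
print(sum(trials) / len(trials))   # a few GeV, far below 100 * 0.5 = 50 GeV
```

A hundred half-GeV kicks leave the Z with only a few GeV of transverse momentum, which is why the train barely notices the dropped coal.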

[Figure: Drell-Yan process with many low energy gluon emissions, annotated]


If we look at Z bosons with low momentum from the CDF and DZero experiments, we see that the data and theory agree very well! In fact, in the DZero (lower) plot, the “pQCD” (perturbative QCD) prediction curve, which does not include resummation, disagrees with data. Thus, soft gluon resummation, which accounts for the emission of an arbitrary number of low energy radiations, is important and observable.

[Figures: Z boson transverse momentum spectra from CDF (top) and DZero (bottom)]

In summary, Drell-Yan processes are very important at high energy proton colliders like the Large Hadron Collider. They serve as a standard candle for experiments as well as a test of high precision predictions. The LHC Run II program has just begun, and you can count on lots of rich physics in need of study.

Happy Colliding,

Richard (@bravelittlemuon)



This past month in Geneva a conference took place bringing together the world’s foremost experiments in cosmic ray physics and indirect dark matter detection: “AMS Days at CERN”. I took a break from thesis-writing, grabbed a bag of popcorn, and sat down to watch a couple of the lectures via webcast. There was a stellar lineup, including but not limited to talks from IceCube, the Pierre Auger Observatory, H.E.S.S. and CTA, Fermi-LAT, and CREAM. The Alpha Magnetic Spectrometer (AMS) experiment was, of course, the star of the show. It is the AMS and its latest results that I’d like to focus on now.

But first, I’d like to give a brief introduction to cosmic rays, since that’s what AMS studies.

It turns out that space is not as empty as one might think. The Earth is constantly being bombarded by extremely-high-energy particles from all directions.  These cosmic rays were discovered in the early twentieth century by the Austrian physicist Victor Hess. Hess made several balloon-borne measurements of the Earth’s natural radiation at various altitudes and observed that the incidence of ionizing radiation actually increased with ascent, the exact opposite of what you would expect if all radioactivity came from the earth.

Fig. 1: An artist’s rendition of cosmic rays. Image from http://apod.nasa.gov/apod/ap060814.html.

The word “ray” is actually something of a misnomer – cosmic rays are primarily charged matter particles rather than electromagnetic radiation. Their makeup goes as follows: approximately 98% are nuclei, of which 90% are protons, 9% are alpha particles (helium nuclei), and only a small proportion are heavier nuclei; the remaining 2% are electrons and positrons. Only very small trace amounts (less than one ten-thousandth the number of protons) of antimatter are present, and all of it is positrons and antiprotons – not a single antihelium or heavier anti-nucleus has been discovered. There are two types of cosmic rays: primary rays, which come directly from extrasolar sources, and secondary rays, which come from primary rays crashing into the interstellar medium and forming new particles through processes such as nuclear spallation. Particles resulting from cosmic ray collisions with the Earth’s atmosphere are also considered secondary cosmic rays – these include particles like pions, kaons, and muons, and their decay products.

Fig. 2: Cosmic ray flux vs. particle energy. Image from http://science.nasa.gov/science-news/science-at-nasa/2001/ast15jan_1/

Despite being discovered over a hundred years ago, cosmic rays remain in a lot of ways a big mystery. For one thing, we don’t know exactly where they come from. Because cosmic rays are generally electrically charged, they don’t travel to us straight from the source. Rather, they are accelerated this way and that by magnetic fields in space so that when they finally reach us they could be coming from any direction at all. Indeed, the cosmic ray flux that we see is completely isotropic, or the same in all directions.

Not only do they not come straight from the source, but we don’t even know what that source is. These particles move orders of magnitude faster than particles in our most powerful accelerators on Earth. Astronomers’ best guess is that cosmic rays are accelerated by magnetic shocks from supernovae. But even supernovae aren’t enough to accelerate the highest-energy cosmic rays. Moreover, there are features in the cosmic ray energy spectrum that we just don’t understand (see Fig. 2). Two kinks, a “knee” at about 10^16 eV and an “ankle” at about 10^18 eV, could indicate the turning on or off of some astrophysical process. Experiments like the Pierre Auger Observatory were designed to study these ultra-high-energy particles and hopefully will tell us a little bit more about them in the next few years.
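The knee and ankle can be sketched as kinks in a broken power law. The spectral indices below are approximate textbook values, used here as assumptions, and the normalization is arbitrary:

```python
# Sketch of the cosmic ray spectrum as a broken power law: the flux falls
# roughly as E^-2.7 below the "knee", steepens to ~E^-3.1 between the knee
# and the "ankle", then flattens again above the ankle.
KNEE = 1e16    # eV, as quoted in the text
ANKLE = 1e18   # eV

def flux(e_ev):
    """Relative cosmic ray flux (arbitrary normalization)."""
    if e_ev < KNEE:
        return (e_ev / KNEE) ** -2.7
    if e_ev < ANKLE:
        return (e_ev / KNEE) ** -3.1
    return (ANKLE / KNEE) ** -3.1 * (e_ev / ANKLE) ** -2.7

# The fall-off is brutal: one decade down in energy buys ~500x more flux
# below the knee, and the spectrum drops even faster above it.
print(flux(1e15) / flux(1e16))   # ~500
print(flux(1e17) / flux(1e16))   # ~0.0008
```

This steepness is why the highest-energy particles are so rare that detecting them takes instruments the size of the Pierre Auger Observatory.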

The AMS is primarily interested in lower-energy cosmic rays. For four years, ever since its launch up to the International Space Station, it’s been cruising the skies and collecting cosmic rays by the tens of billions. I will not address the experimental design and software here. Instead I refer the reader to one of my previous articles, “Dark Skies II- Indirect Detection and the Quest for the Smoking Gun”.

In addition to precision studies of the composition and flux of cosmic rays, the AMS has three main science goals: (1) Investigating the matter-antimatter asymmetry by searching for primordial antimatter. (2) Searching for dark matter annihilation products amidst the cosmic rays. And (3), looking for strangelets and other exotic forms of matter.

The very small fraction of cosmic rays made up of antimatter is relevant not just for the first goal but for the second as well. Not many processes that we know about can produce positrons and antiprotons, but as I mention in “Dark Skies II”, dark matter annihilations into Standard Model particles could be one of those processes. Any blips or features in the cosmic ray antimatter spectrum could indicate dark matter annihilations at work.

Fig. 3. The positron fraction measured by AMS. Image from L. Accardo et al. (AMS Collaboration), September 2014.

On April 14 at “AMS Days at CERN”, Professor Andrei Kounine of MIT presented the latest results from AMS.

The first part of Kounine’s talk focused on a precise characterization of the positron fraction presented by the AMS collaboration in September 2014 and a discussion of the relevant systematics. In the absence of new physics processes, we expect the positron fraction to be smooth and decreasing with energy. As you can see in Fig. 3, however, the positron fraction starts rising at approximately 8 GeV and increases steadily up to about 250 GeV. The curve hits a maximum at about 275 GeV and then appears to begin to turn over, although at these energies the measurements are limited by statistics and more data is needed to determine exactly what happens beyond this point. Models of dark matter annihilation predict a much steeper drop-off than do models where the positron excess is produced by, say, pulsars. Five possible sources of systematic error were identified, all of which have been heavily investigated. These included a small asymmetry in positron and electron acceptance due to slight differences in some of the bits of the tracker; variations in efficiency with respect to energy of the incoming particle; binning errors, which are mitigated due to high experimental resolution; low statistics at the tails of the electron and positron distributions; and “charge confusion”, or the misidentification of electrons as positrons, which happens only in a very small number of cases.

Kounine also presented a never-before-seen, not-yet-published measurement of the antiproton-proton ratio as measured by AMS, which you can see in Fig. 4. This curve represents 290,000 antiprotons selected out of a total of 54 billion events collected by AMS over the past four years. Many of the same systematics (acceptance asymmetry, charge confusion, and so on) as in the positron measurement are relevant here. Work on the antiproton analysis is ongoing, however, and according to Kounine it’s too soon to try to match models to the data.

Fig. 4. AMS’s latest antiproton-proton ratio measurement, from Prof. Andrei Kounine’s presentation at “AMS Days at CERN”.

As a dark matter physicist, the question in my mind is, do these measurements represent dark matter annihilations? Professor Subir Sarkar of Oxford and the Niels Bohr Institute in Copenhagen thinks not. In his talk at “AMS Days”, Sarkar argues that the dark matter annihilation cross-section necessary to match the positron flux seen by AMS and other experiments such as Fermi-LAT and PAMELA needs to be so large that by all rights the dark matter in the universe should have all annihilated away already. This is inconsistent with the observed dark matter density in our galaxy. You can get around this with theoretical models that incorporate new kinds of long-range forces. However, the observed antiproton flux, according to Sarkar, is consistent with background. Therefore dark matter would have to be able to annihilate into leptons (electrons and positrons, muons, neutrinos, and so on) but not quarks. Such models exist, but now we’re starting to severely restrict our model space. Moreover, dark matter annihilating in the early universe near the time of recombination should leave visible imprints in the Cosmic Microwave Background (CMB), which have not yet been seen. CMB experiments such as Planck therefore disfavor a dark matter explanation for the observed peak in positron fraction.

Sarkar then goes on to present an alternate model where secondary cosmic ray particles such as positrons are accelerated by the same mechanisms (magnetic shocks from supernovae, pulsars, and other cosmic accelerators) that accelerate primary cosmic rays. Then, if there are invisible accelerators in our nearby galactic neighborhood, as seems likely because electrons and positrons can’t propagate very far without losing energy due to interactions with starlight and the CMB, it could be possible to get very large fluctuations in the cosmic ray flux due purely to the randomness of how these accelerators are distributed around us.

Regardless of whether or not the AMS has actually seen a dark matter signal, the data are finally beginning to be precise enough that we can start really pinning down how cosmic ray backgrounds are created and propagated. I encourage you to check out some of the webcasts from “AMS Days at CERN” for yourself. Although the event is over, the webcasts are still available in the CERN document archive here.


This article appeared in Fermilab Today on May 5, 2015.

Technicians John Cornele, Pat Healey and Skyler Sherwin have been crucial in preparing the LArIAT detector for beam. The liquid-argon-filled detector saw first beam on Thursday. Photo: Jen Raaf

Fermilab’s Test Beam Facility (FTBF) now runs a second beamline to provide particles for R&D experiments. The MCenter beamline came back to life last year after an eight-year slumber to join the facility’s other beamline, MTest.

On Thursday, April 30, accelerator operators began using the revived beamline to send particles to its first major experiment, Liquid Argon TPC in a Test Beam (LArIAT), which will help advance particle detector technologies for neutrino experiments.

The FTBF provides experiments with different types of particle beams with a range of energies. Its main purpose is the research and development of particle detectors. It is one of only two sites in the world that provides this service with high-energy hadrons, which are particles made of quarks. Since 2005, the FTBF, with its distinctive orange and blue corrugated-steel roof, has staged more than 50 experiments, conducted by scientists from more than 170 institutions in 30 countries.

“We’re very busy and fully subscribed,” said JJ Schmidt, deputy facility manager at FTBF. “The existence of two beams allows us to serve a broader class of experiments.”

Not only does the new beamline allow FTBF to serve a larger number of users, it also provides room for a greater diversity of experiments. While MTest is aimed at experiments with a turnover of about one to four weeks, MCenter caters to more long-term experiments like LArIAT that will last for months, or even years.

Beautiful tracks at first try
LArIAT is a liquid-argon time projection chamber. Charged particles traveling through the sea of liquid argon ionize the argon atoms, and an electric field causes liberated electrons to drift toward the detector readout. Different particles cause different amounts of ionization, allowing researchers to distinguish between particles such as pions, kaons and protons.
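The reconstruction step described above, timing how long the liberated electrons take to drift to the readout, amounts to a simple multiplication. A minimal sketch, assuming a typical liquid-argon drift velocity of about 1.6 mm/µs at a 500 V/cm field (an illustrative textbook value, not a LArIAT parameter):

```python
# Minimal liquid-argon TPC sketch: convert a hit's measured drift time into
# a drift coordinate. The drift velocity is a typical value for liquid argon
# at a 500 V/cm field, used here for illustration only; it is not a
# LArIAT-specific parameter.

DRIFT_VELOCITY_MM_PER_US = 1.6  # mm per microsecond (illustrative)

def drift_coordinate_mm(drift_time_us: float) -> float:
    """Distance from the readout plane implied by the measured drift time."""
    return DRIFT_VELOCITY_MM_PER_US * drift_time_us

# An ionization electron that drifts for 250 microseconds started ~400 mm
# from the readout plane:
print(drift_coordinate_mm(250.0))
```

Combining this drift coordinate with the wire position of each hit is what produces the two track views shown below.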

This plot shows LArIAT’s first tracks: two views of a charged particle interacting inside the LArIAT detector, which is filled with liquid argon.

The first spill of particles delivered to LArIAT led to immediate success. The detector recorded picture-perfect tracks of charged particles.

Like the test beam, LArIAT will act as a research and development vehicle for future projects. Because neutrinos can be studied only through the particles produced when they interact with material inside a particle detector, being able to reliably characterize these other particles is of great importance.

“This is going to be fantastic not only for LArIAT but all the neutrino experiments that will use its results,” said Jen Raaf, co-spokesperson for LArIAT.

LArIAT will take beam 24 hours a day while experimenters collect data. The first run will last about three months, after which the detector’s cryogenic system will undergo upgrades to prepare for longer follow-up runs.

“It’s great that we have a facility where a small experiment can take beam over a long term,” said Brian Rebel, a scientist involved in LArIAT.

About 75 people from 22 institutions from the United States, Europe and Japan work on this experiment.

“Most are young postdocs and Ph.D. students that are enthusiastically doing a great job,” said Flavio Cavanna, LArIAT co-spokesperson.

“It’s an exciting combination of many years of work by the Accelerator, Particle Physics, Neutrino and Scientific Computing divisions to have the capability to do research that is important for making this the premier neutrino laboratory in the world,” Schmidt said.

Diana Kwon

This article appeared in Fermilab Today on May 1, 2015.

Fermilab Director Nigel Lockyer shakes hands with Jefferson Lab Director Hugh Montgomery by a superconducting coil and its development and fabrication team at Fermilab. Six coils have been made and shipped to Jefferson Lab for use in the CLAS12 experiment. Photo: Reidar Hahn

A group of Fermilab physicists and engineers was faced with a unique challenge when Jefferson Lab asked them to make the superconducting coils for an upgrade to their CEBAF Large Acceptance Spectrometer experiments. These are some of the largest coils Fermilab has ever built.

Despite obstacles, the sixth coil was completed, packed on a truck and sent to Jefferson Lab to become the last piece of the torus magnet in the lab’s CLAS detector. It arrived on Thursday.

The CLAS detector’s upgrade (CLAS12) will allow it to accept electron beams of up to 11 GeV, matching the beam energy of the Virginia laboratory’s CEBAF electron accelerator after five passes. These improvements will allow Jefferson Lab to more accurately study the properties of atomic nuclei.
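The 11 GeV figure is just the per-pass energy gain accumulated over five recirculating passes. A back-of-the-envelope check, assuming the nominal 2.2 GeV-per-pass gain of the upgraded machine (used here purely for illustration):

```python
# Back-of-the-envelope check of the 11 GeV beam energy quoted above. CEBAF is
# a recirculating linac, so the delivered energy is roughly the energy gain
# per pass times the number of passes. The 2.2 GeV-per-pass figure is the
# nominal value for the upgraded machine, used here as an illustration.

GAIN_PER_PASS_GEV = 2.2
N_PASSES = 5

beam_energy = GAIN_PER_PASS_GEV * N_PASSES
print(f"{beam_energy:.0f} GeV")
```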

A major component of the enhanced detector is the torus magnet, which will be made from the six superconducting coils created at Fermilab. Aside from cleaning, insulating and winding the coils, one of the most important parts of the process is vacuum epoxy impregnation. During this step, air and water vapor are removed from the coils and replaced with an epoxy.

This process is particularly difficult when you’re working on magnets as big as the CLAS12 coils, which are 14 feet long and seven feet wide. Fermilab’s Magnet Systems Department fabrication team, the group responsible for making these massive coils, encountered a major obstacle at the end of March 2014 after finishing the first practice coil.

What they found were dry areas within the coil where the epoxy couldn’t penetrate. These were places where the coils weren’t fixed into place, meaning they could move and generate heat and resistance. This can lead to magnet quench, the transition from superconducting to a normal state — a highly undesirable consequence.

The Fermilab group and Jefferson Lab staff collaborated to come up with a solution. By trying new materials and new temperature profiles, and by adjusting how long the epoxy was left to sit and be absorbed, the team was able to prevent the dry areas from forming.

Fred Nobrega, the lead engineer at Fermilab for the CLAS12 coil project, joined the effort last August.

“It was rewarding for me to join the project near its low point, be able to help get through the hurdle and see this completed,” he said.

Production has been steady since December, with Fermilab sending roughly one coil a month to Jefferson Lab. Although the sixth coil will become the last piece of the torus magnet, the project isn’t complete just yet — the ultimate goal is to make eight identical coils, the six for the magnet and two spares.

“We’re succeeding because we have great people and a productive collaboration with Jefferson Lab, who helped us at difficult moments,” said George Velev, head of the Magnet Systems Department. “We worked together on a tough problem and now we see the results.”

Diana Kwon

This article appeared in symmetry on April 22, 2015.

The world’s largest liquid-argon neutrino detector will help with the search for sterile neutrinos at Fermilab. Photo: INFN

Mysterious particles called neutrinos seem to come in three varieties. However, peculiar findings in experiments over the past two decades make scientists wonder if a fourth is lurking just out of sight.

To help solve this mystery, a group of scientists spearheaded by Nobel laureate Carlo Rubbia plans to bring ICARUS, the world’s largest liquid-argon neutrino detector, across the Atlantic Ocean to the United States. The detector is currently being refurbished at CERN, where it is the first beneficiary of a new test facility for neutrino detectors.

Neutrinos are some of the most abundant and yet also most mysterious particles in the universe. They have tiny masses, but no one is sure why—or where those masses come from. They interact so rarely that they can pass through the entire Earth as if it weren’t there. They oscillate from one type to another, so that even if you start out with one kind of neutrino, it might change to another kind by the time you detect it.

Many theories in particle physics predict the existence of a sterile neutrino, which would behave differently from the three known types of neutrino.

“Finding a fourth type of neutrino would change the whole picture we’re trying to address with current and future experiments,” says Peter Wilson, a scientist at Fermi National Accelerator Laboratory.

The Program Advisory Committee at Fermilab recently endorsed a plan, managed by Wilson, to place a suite of three detectors in a neutrino beam at the laboratory to study neutrinos—and determine whether sterile neutrinos exist.

Over the last 20 years, experiments have seen clues pointing to the possible existence of sterile neutrinos. Their influence may have caused two different types of unexpected neutrino behavior seen at the Liquid Scintillator Neutrino Detector experiment at Los Alamos National Laboratory in New Mexico and the MiniBooNE experiment at Fermilab.

Both experiments saw indications that a surprisingly large number of neutrinos may be morphing from one kind to another a short distance from a neutrino source. The existence of a fourth type of neutrino could encourage this fast transition.

The new three-detector formation at Fermilab could provide the answer to this mystery.

In the suite of experiments, a 260-ton detector called Short Baseline Neutrino Detector will sit closest to the source of the beam, so close that it will be able to detect the neutrinos before they’ve had a chance to change from one type into another. This will give scientists a baseline to compare with results from the other two detectors. SBND is under construction by a team of scientists and engineers from universities in the United Kingdom, the United States and Switzerland, working with several national laboratories in Europe and the US.

The SBND detector will be filled with liquid argon, which gives off flashes of light when other particles pass through it.

“Liquid argon is an extremely exciting technology to make precision measurements with neutrinos,” says University of Manchester physicist Stefan Soldner-Rembold, who leads the UK project building a large section of the detector. “It’s the technology we’ll be using for the next 20 to 30 years of neutrino research.”

Farther from the beam will be the existing 170-ton MicroBooNE detector, which is complete and will begin operation at Fermilab this year. The MicroBooNE detector was designed to find out whether the excess of particles seen by MiniBooNE was caused by a new type of neutrino or a new type of background. Identifying either would have major implications for future neutrino experiments.

Finally, farthest from the beam would be a liquid-argon detector more than four times the size of MicroBooNE. The 760-ton detector was used in the ICARUS experiment, which studied neutrino oscillations at Gran Sasso Laboratory in Italy using a beam of neutrinos produced at CERN from 2010 to 2014.

Its original beam at CERN is not optimized for the next stage of the sterile neutrino search. “The Fermilab beamline is the only game in town for this type of experiment,” says physicist Steve Brice, deputy head of Fermilab’s Neutrino Division.

And the ICARUS detector “is the best detector in the world to detect this kind of particle,” says Alberto Scaramelli, the former technical director of Gran Sasso National Laboratory. “We should use it.”

Rubbia, who initiated construction of ICARUS and leads the ICARUS collaboration, proposed bringing the detector to Fermilab in August 2013. Since then, the ICARUS, MicroBooNE and SBND groups have banded together to create the current proposal. The updated plan received approval from the Fermilab Program Advisory Committee in February.

“The end product was really great because it went through the full scrutiny of three different collaborations,” says MicroBooNE co-leader Sam Zeller. “The detectors all have complementary strengths.”

In December, scientists shipped the ICARUS detector from the Gran Sasso laboratory to CERN, where it is currently undergoing upgrades. The three-detector short-baseline neutrino program at Fermilab is scheduled to begin operation in 2018.

Kathryn Jepsen

This article appeared in Fermilab Today on April 21, 2015.

Fermilab’s Mu2e groundbreaking ceremony took place on Saturday, April 18. From left: Alan Stone (DOE Office of High Energy Physics), Nigel Lockyer (Fermilab director), Jim Siegrist (DOE Office of High Energy Physics director), Ron Ray (Mu2e project manager), Paul Philp (Mu2e federal project director at the Fermi Site Office), Jim Miller (Mu2e co-spokesperson), Doug Glenzinski (Mu2e co-spokesperson), Martha Michels (Fermilab ESH&Q head), Mike Shrader (Middough architecture firm), Julie Whitmore (Mu2e deputy project manager), Jason Whittaker (Whittaker Construction), Tom Lackowski (FESS). Photo: Reidar Hahn

This weekend, members of the Mu2e collaboration dug their shovels into the ground of Fermilab’s Muon Campus for the experiment that will search for the direct conversion of a muon into an electron in the hunt for new physics.

For decades, the Standard Model has stood as the best explanation of the subatomic world, describing the properties of the basic building blocks of matter and the forces that govern them. However, challenges remain, including that of unifying gravity with the other fundamental forces or explaining the matter-antimatter asymmetry that allows our universe to exist. Physicists have since developed new models, and detecting the direct conversion of a muon to an electron would provide evidence for many of these alternative theories.

“There’s a real possibility that we’ll see a signal because so many theories beyond the Standard Model naturally allow muon-to-electron conversion,” said Jim Miller, a co-spokesperson for Mu2e. “It’ll also be exciting if we don’t see anything, since it will greatly constrain the parameters of these models.”

Muons and electrons are two different flavors in the charged-lepton family. Muons are about 200 times more massive than electrons and decay quickly into lighter particles, while electrons are stable and live forever. Most of the time, a muon decays into an electron and two neutrinos, but physicists have reason to believe that once in a blue moon, a muon will convert directly into an electron without releasing any neutrinos. This is physics beyond the Standard Model.

Under the Standard Model, the muon-to-electron direct conversion happens too rarely to ever observe. In more sophisticated models, however, this occurs just frequently enough for an extremely sensitive machine to detect.

The Mu2e detector, when complete, will be the instrument to do this. The 92-foot-long apparatus will have three sections, each with its own superconducting magnet. Its unique S-shape was designed to capture as many slow muons as possible with an aluminum target. The direct conversion of a muon to an electron in an aluminum nucleus would release exactly 105 million electronvolts of energy, which means that if it occurs, the signal in the detector will be unmistakable. Scientists expect Mu2e to be 10,000 times more sensitive than previous attempts to see this process.
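The “exactly 105 million electronvolts” follows from the muon’s rest energy minus two small corrections: the muon’s atomic binding energy in aluminum and the recoil of the nucleus. A minimal sketch with approximate values (the two correction sizes are illustrative round numbers):

```python
# Why the Mu2e signal is so distinctive: a muon converting in an aluminum
# nucleus yields a single monoenergetic electron carrying the muon rest
# energy minus two small corrections. The correction values below are
# approximate round numbers used for illustration.

MUON_REST_ENERGY_MEV = 105.658  # muon rest energy
BINDING_ENERGY_MEV = 0.48       # ~1s binding energy of a muon in Al (approx.)
NUCLEAR_RECOIL_MEV = 0.21       # ~recoil energy of the Al nucleus (approx.)

signal = MUON_REST_ENERGY_MEV - BINDING_ENERGY_MEV - NUCLEAR_RECOIL_MEV
print(f"{signal:.2f} MeV")  # ~105 MeV in round numbers
```

Because ordinary muon decay shares energy with two neutrinos, no background process deposits this full energy in a single electron, which is what makes the line unmistakable.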

Construction will now begin on a new experimental hall for Mu2e. This hall will eventually house the detector and the infrastructure needed to conduct the experiment, such as the cryogenic systems to cool the superconducting magnets and the power systems to keep the machine running.

“What’s nice about the groundbreaking is that it becomes a real thing. It’s a long haul, but we’ll get there eventually, and this is a start,” said Julie Whitmore, deputy project manager for Mu2e.

The detector hall will be complete in late 2016. The experiment, funded mainly by the Department of Energy Office of Science, is expected to begin in 2020 and run for three years until peak sensitivity is reached.

“This is a project that will be moving along for many years. It won’t just be one shot,” said Stefano Miscetti, the leader of the Italian INFN group, Mu2e’s largest international collaborator. “If we observe something, we will want to measure it better. If we don’t, we will want to increase the sensitivity.”

Physicists around the world are working to extend the frontiers of the Standard Model. One hundred seventy-eight people from 31 institutions are coming together for Mu2e to make a significant impact on this venture.

“We’re sensitive to the same new physics that scientists are searching for at the Large Hadron Collider, we just look for it in a complementary way,” said Ron Ray, Mu2e project manager. “Even if the LHC doesn’t see new physics, we could see new physics here.”

Diana Kwon

See a two-minute video on the ceremony
