
Archive for May, 2015

This article appeared in Fermilab Today on May 27, 2015.

The future Dark Energy Spectroscopic Instrument will be mounted on the Mayall 4-meter telescope. It will be used to create a 3-D map of the universe for studies of dark energy. Photo courtesy of NOAO

Dark energy makes up about 70 percent of the universe and is causing its accelerating expansion. But what it is or how it works remains a mystery.

The Dark Energy Spectroscopic Instrument (DESI) will study the origins and effects of dark energy by creating the largest 3-D map of the universe to date. It will produce a map of the northern sky that will span 11 billion light-years and measure around 25 million galaxies and quasars, extending back to when the universe was a mere 3 billion years old.

Once construction is complete, DESI will sit atop the Mayall 4-Meter Telescope in Arizona and take data for five years.

DESI will work by collecting light using optical fibers that look through the instrument’s lenses and can be wiggled around to point precisely at galaxies. With 5,000 fibers, it can collect light from 5,000 galaxies at a time. These fibers will pass the galaxy light to a spectrograph, and researchers will use this information to precisely determine each galaxy’s three-dimensional position in the universe.
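To make that last step concrete, here is a minimal sketch (in Python, not DESI's actual pipeline) of how a galaxy's sky position plus its spectroscopically measured redshift fix a 3-D position: the redshift acts as the radial coordinate. The cosmological parameters are illustrative round numbers, not DESI's adopted values.

```python
# A minimal sketch (not DESI's pipeline) of turning (RA, Dec, redshift)
# into a 3-D position. Cosmological parameters are illustrative.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299_792.458        # speed of light [km/s]
H0 = 70.0                   # Hubble constant [km/s/Mpc] (assumed value)
OMEGA_M, OMEGA_L = 0.3, 0.7 # matter / dark energy fractions (assumed)

def hubble(z):
    """Hubble parameter H(z) for a flat LCDM universe [km/s/Mpc]."""
    return H0 * np.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def comoving_distance(z):
    """Line-of-sight comoving distance to redshift z [Mpc]."""
    integral, _ = quad(lambda zp: C_KM_S / hubble(zp), 0.0, z)
    return integral

def position_3d(ra_deg, dec_deg, z):
    """Cartesian comoving position from sky coordinates and redshift."""
    r = comoving_distance(z)
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return r * np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

print(position_3d(ra_deg=150.0, dec_deg=2.2, z=1.0))  # ~3300 Mpc away
```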

Lawrence Berkeley National Laboratory is managing the DESI experiment, and Fermilab is making four main contributions: building the instrument’s barrel, packaging and testing charge-coupled devices, or CCDs, developing an online database and building the software that will tell the fibers exactly where to point.

The barrel is a structure that will hold DESI’s six lenses. Once complete, it will be around 2.5 meters tall and a meter wide, about the size of a telephone booth. Fermilab is assembling both the barrel and the structures that will hold it on the telescope.

“It’s a big object that needs to be built very precisely,” said Gaston Gutierrez, a Fermilab scientist managing the barrel construction. “It’s very important to position the lenses very accurately, otherwise the image will be blurred.”

DESI’s spectrograph will use CCDs, sensors that work by converting light collected from distant galaxies into electrons, then to digital values for analysis. Fermilab is responsible for packaging and testing these CCDs before they can be assembled into the spectrograph.
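As a toy illustration of that conversion chain (photons to electrons to digital values), here is a hedged single-pixel readout sketch; the quantum efficiency, gain, and noise numbers are made up for illustration and are not the DESI CCD specifications.

```python
# Toy model of a CCD pixel: photons -> photoelectrons (quantum efficiency)
# -> digital counts (amplifier gain), with Gaussian read noise.
# All parameter values are illustrative, not DESI specs.
import numpy as np

rng = np.random.default_rng(0)

def ccd_readout(n_photons, qe=0.9, gain_e_per_adu=1.5, read_noise_e=3.0):
    """Simulate one pixel: Poisson photoelectrons plus read noise,
    converted to analog-to-digital units (ADU)."""
    electrons = rng.poisson(qe * n_photons)          # photon shot noise
    electrons = electrons + rng.normal(0.0, read_noise_e)  # readout noise
    return max(int(electrons / gain_e_per_adu), 0)

print(ccd_readout(1000))  # ~600 ADU for 1000 incident photons
```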

Fermilab is also creating a database that will store information required to operate DESI’s online systems, which direct the position of the telescope, control and read the CCDs, and ensure proper functioning of the spectrograph.

Lastly, Fermilab is developing the software that will convert the known positions of interesting galaxies and quasars to coordinates for the fiber positioning system.
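The heart of such a conversion is a projection from celestial coordinates onto the flat focal plane. Below is a sketch of a standard gnomonic (tangent-plane) projection; the real DESI software must also model optical distortion, atmospheric refraction, and so on, and the focal length used here is an arbitrary illustrative value.

```python
# A sketch of a gnomonic (tangent-plane) projection from sky coordinates
# to focal-plane coordinates. The focal length is illustrative only.
import numpy as np

def gnomonic(ra_deg, dec_deg, ra0_deg, dec0_deg, focal_length_mm=3600.0):
    """Project a sky position onto a tangent plane centered on the
    telescope pointing (ra0, dec0); returns (x, y) in mm."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    ra0, dec0 = np.radians(ra0_deg), np.radians(dec0_deg)
    cos_c = (np.sin(dec0) * np.sin(dec)
             + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    x = np.cos(dec) * np.sin(ra - ra0) / cos_c
    y = (np.cos(dec0) * np.sin(dec)
         - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    return focal_length_mm * x, focal_length_mm * y

# A target 0.5 degrees from the field center lands ~31 mm off-axis:
print(gnomonic(150.5, 2.2, 150.0, 2.2))
```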

Fermilab completed these same tasks when it built the Dark Energy Camera (DECam), an instrument that currently sits on the Victor Blanco Telescope in Chile, imaging the universe. Many of the scientists and engineers who worked on DECam are bringing that expertise to DESI.

“DESI is the next step. DECam is going to precisely measure the sky in 2-D, and getting to the third dimension is a natural progression,” said Fermilab’s Brenna Flaugher, project manager for DECam and one of the leading scientists on DESI.

These four contributions are set to be completed by 2018, and DESI is expected to see first light in 2019.

“This is a great opportunity for students to learn the technology and participate in a nice instrumentation project,” said Juan Estrada, a Fermilab scientist leading the DESI CCD effort.

DESI is funded largely by the Department of Energy with significant contributions from non-U.S. and private funding sources. It is currently undergoing the DOE CD-2 review and approval process.

“We’re really appreciative of the strong technical and scientific support from Fermilab,” said Berkeley Lab’s Michael Levi, DESI project director.

Diana Kwon


All those super low energy jets that the LHC cannot see? The LHC can still see them.

Hi Folks,

Particle colliders like the Large Hadron Collider (LHC) are, in a sense, very powerful microscopes. The higher the collision energy, the smaller the distances we can study. Using less than 0.01% of the total LHC energy (13 TeV), we see that the proton is really just a bag of smaller objects called quarks and gluons.
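The microscope analogy can be made quantitative with a standard de Broglie-style estimate (a textbook relation, not specific to this post): the distance a probe can resolve shrinks inversely with its energy,

$$ \lambda \;\sim\; \frac{\hbar c}{E} \;\approx\; \frac{2\times 10^{-16}\ \mathrm{m}\cdot\mathrm{GeV}}{E\ [\mathrm{GeV}]}, $$

so even a GeV or so of collision energy already resolves structure below the proton's roughly 10^-15 m radius, which is why the quarks and gluons inside become visible.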

[Image: the proton as a bag of quarks and gluons, via Prof. Matt Strassler]

This means that when two protons collide things are sprayed about and get very messy.

[Image: event display of a proton-proton collision in the ATLAS detector]

One of the most important processes that occurs in proton collisions is the Drell-Yan process. When a quark, e.g., a down quark d, from one proton and an antiquark, e.g., a down antiquark d̄, from an oncoming proton collide, they can annihilate into a virtual photon (γ) or Z boson if the net electric charge is zero (or a W boson if the net electric charge is one). After briefly propagating, the photon/Z can split into a lepton and its antiparticle partner, for example into a muon and antimuon or an electron-positron pair! In pictures, quark-antiquark annihilation into a lepton-antilepton pair (the Drell-Yan process) looks like this:

[Image: Feynman diagram of the Drell-Yan process]

By the conservation of momentum, the sum of the muon and antimuon momenta will add up to the photon/Z boson momentum. In experiments like ATLAS and CMS, this gives a very cool-looking distribution:

[Image: CMS dimuon invariant mass spectrum at 7 TeV]

Plotted is the invariant mass distribution for any muon-antimuon pair produced in proton collisions at the 7 TeV LHC. The rightmost peak, at about 90 GeV (about 90 times the proton’s mass!), corresponds to the production of Z bosons. The other peaks represent the production of similarly well-known particles in the particle zoo that have decayed into a muon-antimuon pair. The clarity of each peak, and the fact that this plot uses only about 0.2% of the total data collected during the first LHC data collection period (Run I), means that the Drell-Yan process is very useful for calibrating the experiments. If the experiments are able to see the Z boson, the rho meson, etc., at their correct energies, then we have confidence that the experiments are working well enough to study nature at energies never before explored in a laboratory.
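The reconstruction behind this plot is just special relativity: in natural units the invariant mass of the pair, M² = (E₁+E₂)² − |p⃗₁+p⃗₂|², equals the mass of whatever particle decayed. A minimal sketch with illustrative numbers (not real ATLAS/CMS data):

```python
# Invariant mass of a muon-antimuon pair, M^2 = (E1+E2)^2 - |p1+p2|^2,
# in natural units (GeV). Numbers chosen to land on the Z peak.
import numpy as np

def invariant_mass(p1, p2):
    """p1, p2: four-momenta (E, px, py, pz) in GeV. Returns M in GeV."""
    E = p1[0] + p2[0]
    p = np.array(p1[1:]) + np.array(p2[1:])
    return np.sqrt(max(E**2 - np.dot(p, p), 0.0))

# Back-to-back 45.6 GeV muons (the muon mass is negligible here):
mu_plus  = (45.6,  45.6, 0.0, 0.0)
mu_minus = (45.6, -45.6, 0.0, 0.0)
print(invariant_mass(mu_plus, mu_minus))  # ~91.2 GeV, the Z boson mass
```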

However, in real life, the Drell-Yan process is not as simple as drawn above. Real collisions include the remnants of the scattered protons. Remember: the proton is a bag filled with lots of quarks and gluons.

[Image: Drell-Yan Feynman diagram with additional gluon radiation]

Gluons are what hold quarks together to make protons; they mediate the strong nuclear force, which is described by quantum chromodynamics (QCD). The strong force is aptly named because it requires a lot of energy and effort to overcome. Before annihilating, the quark and antiquark that participate in the Drell-Yan process will have radiated lots of gluons. It is very easy for objects that experience the strong force to radiate gluons. In fact, the antiquark in the Drell-Yan process originates from an energetic gluon that split into a quark-antiquark pair. Though less common, every once in a while two or even three energetic quarks or gluons (collectively called jets) will be produced alongside a Z boson.

[Image: Drell-Yan Feynman diagram with three jets]

Here is a real-life Drell-Yan (Z boson) event with three very energetic jets. The blue lines are the muons. The red, orange and green “sprays” of particles are jets.

[Image: ATLAS event display of a Z → μμ event with three jets]

As likely or unlikely as it may be for a Drell-Yan process to occur with additional energetic jets, the frequency at which it does occur appears to match our theoretical predictions very well. The plot below shows the likelihood (“production cross section”) of producing a W or Z boson with at least 0, 1, 2, 3, or 4(!) very energetic jets. The blue bars are the theoretical predictions and the red circles are data. Producing a W or Z boson with more energetic jets is less likely than producing one with fewer jets: the more jets identified, the smaller the production rate (“cross section”).

[Image: CMS measurements of W/Z + jets production cross sections]

How about low energy jets? These are difficult to observe because experiments have high thresholds for recording any part of a collision. The ATLAS and CMS experiments, for example, are insensitive to very low energy objects, so not every piece of an LHC proton collision will be recorded. In short: sometimes a jet or a photon is too “dim” for us to detect it. But unlike high energy jets, it is very, very easy for Drell-Yan processes to be accompanied by low energy jets.

[Image: Drell-Yan Feynman diagram with many low energy gluon emissions]

There is a subtlety here. Our standard tools and tricks for calculating the probability of something happening in a proton collision (perturbation theory) assume that we are studying objects with much more energy than a proton at rest. The radiation of very low energy gluons is a special situation where our usual calculation methods do not work. The solution is rather cool.

As we said, the Z boson produced in the quark-antiquark annihilation has much more energy than any of the low energy gluons that are radiated, so emitting a low energy gluon should not affect the system much. This is like a massive freight train pulling coal and dropping one or two pieces of coal. The train carries so much momentum and the coal is so light that dropping even a dozen pieces of coal will have only a negligible effect on the train’s motion. (Dropping all the coal, on the other hand, would not only drastically change the train’s motion but likely also be a terrible environmental hazard.) We can now make certain approximations in our calculation of radiating a low energy gluon, called “soft gluon factorization”. The result is remarkably simple, so simple that we can generalize it to an arbitrary number of gluon emissions. This procedure is called “soft gluon resummation” and was formulated in 1985 by Collins, Soper, and Sterman.
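For the curious, the resummed result is usually quoted schematically as a Fourier transform over the impact parameter b of a Sudakov exponential. This is the textbook shape of the Collins-Soper-Sterman formula with parton distributions and non-perturbative factors suppressed; A and B are perturbative series in the strong coupling, and b₀ = 2e^(−γ_E) is a conventional constant:

$$ \frac{d\sigma}{dq_T^2} \;\sim\; \int d^2\vec{b}\; e^{\,i\,\vec{q}_T\cdot\vec{b}}\; e^{-S(b,Q)}, \qquad S(b,Q) \;=\; \int_{b_0^2/b^2}^{Q^2}\frac{d\mu^2}{\mu^2}\left[A\big(\alpha_s(\mu)\big)\ln\frac{Q^2}{\mu^2} + B\big(\alpha_s(\mu)\big)\right] $$

The large logarithms that spoil fixed-order perturbation theory at small transverse momentum q_T are collected and exponentiated inside S, which tames them.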

Low energy gluons, even if they cannot be individually identified, still have an effect. They carry away energy, and by momentum conservation this will slightly push and kick the system in different directions.

[Image: annotated Drell-Yan diagram showing soft gluon emissions recoiling against the Z boson]

If we look at low-momentum Z bosons measured by the CDF and DZero experiments, we see that the data and theory agree very well! In fact, in the DZero (lower) plot, the “pQCD” (perturbative QCD) prediction curve, which does not include resummation, disagrees with the data. Thus, soft gluon resummation, which accounts for the emission of an arbitrary number of low energy radiations, is important and observable.

[Images: Z boson transverse momentum spectra measured by CDF (upper) and DZero (lower)]

In summary, Drell-Yan processes are very important at high energy proton colliders like the Large Hadron Collider. They serve as a standard candle for the experiments as well as a test of high precision predictions. The LHC Run II program has just begun, and you can count on lots of rich physics in need of study.

Happy Colliding,

Richard (@bravelittlemuon)

This past month in Geneva a conference took place bringing together the world’s foremost experiments in cosmic ray physics and indirect dark matter detection: “AMS Days at CERN”. I took a break from thesis-writing, grabbed a bag of popcorn, and sat down to watch a couple of the lectures via webcast. There was a stellar lineup, including but not limited to talks from IceCube, the Pierre Auger Observatory, H.E.S.S. and CTA, Fermi-LAT, and CREAM. The Alpha Magnetic Spectrometer (AMS) experiment was, of course, the star of the show. It is the AMS and its latest results that I’d like to focus on now.

But first, I’d like to give a brief introduction to cosmic rays, since that’s what AMS studies.

It turns out that space is not as empty as one might think. The Earth is constantly being bombarded by extremely high-energy particles from all directions. These cosmic rays were discovered in the early twentieth century by the Austrian physicist Victor Hess, who made several balloon-borne measurements of the Earth’s natural radiation at various altitudes and observed that the incidence of ionizing radiation actually increased with ascent, the exact opposite of what you would expect if all radioactivity came from the earth.

Fig. 1: An artist's rendition of cosmic rays. Image from http://apod.nasa.gov/apod/ap060814.html.

The word “ray” is actually something of a misnomer – cosmic rays are primarily charged matter particles rather than electromagnetic radiation. Their makeup goes as follows: approximately 98% are nuclei, of which 90% are protons, 9% are alpha particles (helium nuclei), and only a small proportion are heavier nuclei; the remaining 2% or so are electrons and positrons. Only very small trace amounts (less than one ten-thousandth the number of protons) of antimatter are present, all of it positrons and antiprotons – not a single antihelium or heavier anti-nucleus has been discovered. There are two types of cosmic rays: primary rays, which come directly from extrasolar sources, and secondary rays, which come from primary rays crashing into the interstellar medium and forming new particles through processes such as nuclear spallation. Particles resulting from cosmic ray collisions with the Earth’s atmosphere are also considered secondary cosmic rays – these include particles like pions, kaons, and muons, and their decay products.

Fig. 2: Cosmic ray flux vs. particle energy. Image from http://science.nasa.gov/science-news/science-at-nasa/2001/ast15jan_1/

Despite being discovered over a hundred years ago, cosmic rays remain in a lot of ways a big mystery. For one thing, we don’t know exactly where they come from. Because cosmic rays are generally electrically charged, they don’t travel to us straight from the source. Rather, they are accelerated this way and that by magnetic fields in space so that when they finally reach us they could be coming from any direction at all. Indeed, the cosmic ray flux that we see is completely isotropic, or the same in all directions.

Not only do they not come straight from the source, but we don’t even know what that source is. These particles carry orders of magnitude more energy than particles in our most powerful accelerators on Earth. Astronomers’ best guess is that cosmic rays are accelerated by magnetic shocks from supernovae. But even supernovae aren’t enough to accelerate the highest-energy cosmic rays. Moreover, there are features in the cosmic ray energy spectrum that we just don’t understand (see Fig. 2). Two kinks, a “knee” at about 10^16 eV and an “ankle” at about 10^18 eV, could indicate the turning on or off of some astrophysical process. Experiments like the Pierre Auger Observatory were designed to study these ultra-high-energy particles and hopefully will tell us a little bit more about them in the next few years.
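For a feel of what the “knee” and “ankle” mean quantitatively, here is a toy broken power law using the break energies quoted above. The spectral indices (roughly −2.7 steepening to about −3.1, then flattening again to about −2.6) are approximate textbook values, not a fit to any particular data set.

```python
# Toy broken power-law shape of the cosmic ray spectrum, continuous
# across the "knee" and "ankle". Indices are approximate textbook values.

KNEE, ANKLE = 1e16, 1e18  # eV, as quoted in the text

def flux(E, norm=1.0):
    """Relative cosmic ray flux vs energy E [eV], arbitrary normalization."""
    if E < KNEE:
        return norm * E**-2.7
    elif E < ANKLE:
        return norm * KNEE**0.4 * E**-3.1          # continuous at the knee
    else:
        return norm * KNEE**0.4 * ANKLE**-0.5 * E**-2.6  # continuous at the ankle

for E in [1e14, 1e16, 1e18, 1e20]:
    print(f"E = {E:.0e} eV  ->  relative flux = {flux(E):.3e}")
```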

The AMS is primarily interested in lower-energy cosmic rays. For four years, ever since its launch to the International Space Station, it’s been cruising the skies and collecting cosmic rays by the tens of billions. I will not address the experimental design and software here. Instead I refer the reader to one of my previous articles, “Dark Skies II – Indirect Detection and the Quest for the Smoking Gun”.

In addition to precision studies of the composition and flux of cosmic rays, the AMS has three main science goals: (1) investigating the matter-antimatter asymmetry by searching for primordial antimatter; (2) searching for dark matter annihilation products amidst the cosmic rays; and (3) looking for strangelets and other exotic forms of matter.

The very small fraction of cosmic rays made up of antimatter is relevant not just for the first goal but for the second as well. Not many processes that we know about can produce positrons and antiprotons, but as I mention in “Dark Skies II”, dark matter annihilations into Standard Model particles could be one of those processes. Any blips or features in the cosmic ray antimatter spectrum could indicate dark matter annihilations at work.

Fig. 3. The positron fraction measured by AMS. Image from L. Accardo et al. (AMS Collaboration), September 2014.

On April 14 at “AMS Days at CERN”, Professor Andrei Kounine of MIT presented the latest results from AMS.

The first part of Kounine’s talk focused on a precise characterization of the positron fraction presented by the AMS collaboration in September 2014 and a discussion of the relevant systematics. In the absence of new physics processes, we expect the positron fraction to be smooth and decreasing with energy. As you can see in Fig. 3, however, the positron fraction starts rising at approximately 8 GeV and increases steadily up to about 250 GeV. The curve hits a maximum at about 275 GeV and then appears to begin to turn over, although at these energies the measurements are limited by statistics and more data is needed to determine exactly what happens beyond this point. Models of dark matter annihilation predict a much steeper drop-off than do models where the positron excess is produced by, say, pulsars.

Five possible sources of systematic error were identified, all of which have been heavily investigated. These included a small asymmetry in positron and electron acceptance due to slight differences in some of the bits of the tracker; variations in efficiency with respect to energy of the incoming particle; binning errors, which are mitigated due to high experimental resolution; low statistics at the tails of the electron and positron distributions; and “charge confusion”, or the misidentification of electrons as positrons, which happens only in a very small number of cases.

Kounine also presented a never-before-seen, not-yet-published measurement of the antiproton-proton ratio as measured by AMS, which you can see in Fig. 4. This curve represents a total of 290,000 antiprotons selected out of a total of 54 billion events collected by AMS over the past four years. Many of the same systematics (acceptance asymmetry, charge confusion, and so on) as in the positron measurement are relevant here. Work on the antiproton analysis is ongoing, however, and according to Kounine it’s too soon to try to match models to the data.

Fig. 4. AMS’s latest antiproton-proton ratio measurement, from Prof. Andrei Kounine’s presentation at “AMS Days at CERN”.

As a dark matter physicist, the question in my mind is, do these measurements represent dark matter annihilations? Professor Subir Sarkar of Oxford and the Niels Bohr Institute in Copenhagen thinks not. In his talk at “AMS Days”, Sarkar argues that the dark matter annihilation cross-section necessary to match the positron flux seen by AMS and other experiments such as Fermi-LAT and PAMELA needs to be so large that by all rights the dark matter in the universe should have all annihilated away already. This is inconsistent with the observed dark matter density in our galaxy. You can get around this with theoretical models that incorporate new kinds of long-range forces. However, the observed antiproton flux, according to Sarkar, is consistent with background. Therefore dark matter would have to be able to annihilate into leptons (electrons and positrons, muons, neutrinos, and so on) but not quarks. Such models exist, but now we’re starting to severely restrict our model space. Moreover, dark matter annihilating in the early universe near the time of recombination should leave visible imprints in the Cosmic Microwave Background (CMB), which have not yet been seen. CMB experiments such as Planck therefore disfavor a dark matter explanation for the observed peak in positron fraction.
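The quantitative backbone of Sarkar’s first point is the standard thermal freeze-out estimate (a textbook relation, not a result from the talk), which ties today’s dark matter abundance to the annihilation cross-section:

$$ \Omega_{\rm DM} h^2 \;\approx\; \frac{3\times 10^{-27}\ {\rm cm}^3\,{\rm s}^{-1}}{\langle\sigma v\rangle} $$

The measured abundance, Ω_DM h² ≈ 0.12, corresponds to ⟨σv⟩ ≈ 3×10⁻²⁶ cm³/s. A present-day annihilation cross-section much larger than this, as the positron excess would require, is hard to reconcile with there being any dark matter left to annihilate, unless the cross-section was boosted at late times by something like the long-range forces mentioned above.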

Sarkar then goes on to present an alternate model where secondary cosmic ray particles such as positrons are accelerated by the same mechanisms (magnetic shocks from supernovae, pulsars, and other cosmic accelerators) that accelerate primary cosmic rays. Then, if there are invisible accelerators in our nearby galactic neighborhood, as seems likely because electrons and positrons can’t propagate very far without losing energy due to interactions with starlight and the CMB, it could be possible to get very large fluctuations in the cosmic ray flux due purely to the randomness of how these accelerators are distributed around us.

Regardless of whether or not the AMS has actually seen a dark matter signal, the data are finally beginning to be precise enough that we can start really pinning down how cosmic ray backgrounds are created and propagated. I encourage you to check out some of the webcasts from “AMS Days at CERN” for yourself. Although the event is over, the webcasts are still available in the CERN document archive.


This article appeared in Fermilab Today on May 5, 2015.

Technicians John Cornele, Pat Healey and Skyler Sherwin have been crucial in preparing the LArIAT detector for beam. The liquid-argon-filled detector saw first beam on Thursday. Photo: Jen Raaf

Fermilab’s Test Beam Facility (FTBF) now runs a second beamline to provide particles for R&D experiments. The MCenter beamline came back to life last year after an eight-year slumber to join the facility’s other beamline, MTest.

On Thursday, April 30, accelerator operators began using the revived beamline to send particles to its first major experiment, Liquid Argon TPC in a Test Beam (LArIAT), which will help advance particle detector technologies for neutrino experiments.

The FTBF provides experiments with different types of particle beams with a range of energies. Its main purpose is the research and development of particle detectors. It is one of only two sites in the world that provides this service with high-energy hadrons, which are particles made of quarks. Since 2005, the FTBF, with its distinctive orange and blue corrugated-steel roof, has staged more than 50 experiments, conducted by scientists from more than 170 institutions in 30 countries.

“We’re very busy and fully subscribed,” said JJ Schmidt, deputy facility manager at FTBF. “The existence of two beams allows us to serve a broader class of experiments.”

Not only does the new beamline allow FTBF to serve a larger number of users, it also provides room for a greater diversity of experiments. While MTest is aimed at experiments with a turnover of about one to four weeks, MCenter caters to more long-term experiments like LArIAT that will last for months, or even years.

Beautiful tracks at first try
LArIAT is a liquid-argon time projection chamber. Charged particles traveling through the sea of liquid argon ionize the argon atoms, and an electric field causes liberated electrons to drift toward the detector readout. Different particles cause different amounts of ionization, allowing researchers to distinguish between particles such as pions, kaons and protons.
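To put rough numbers on that drift, here is a back-of-the-envelope sketch. The drift velocity is a typical liquid-argon value at a field of about 500 V/cm, and the drift length is an illustrative figure rather than LArIAT’s exact geometry.

```python
# Back-of-the-envelope drift time in a liquid-argon TPC.
# v_drift ~ 1.6 mm/us at ~500 V/cm is a typical LAr value (approximate);
# the drift length below is illustrative, not LArIAT's exact dimension.
DRIFT_VELOCITY_MM_PER_US = 1.6   # electron drift velocity [mm/us]
DRIFT_LENGTH_MM = 470.0          # illustrative drift distance (~47 cm)

drift_time_us = DRIFT_LENGTH_MM / DRIFT_VELOCITY_MM_PER_US
print(f"Maximum drift time: {drift_time_us:.0f} microseconds")  # ~294 us
```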

This plot shows LArIAT's first tracks: two views of a charged particle interacting inside the LArIAT detector, which is filled with liquid argon.

The first spill of particles delivered to LArIAT led to immediate success. The detector recorded picture-perfect tracks of charged particles.

Like the test beam, LArIAT will act as a research and development vehicle for future projects. Because neutrinos can be studied only through the particles produced when they interact with material inside a particle detector, being able to reliably characterize these other particles is of great importance.

“This is going to be fantastic not only for LArIAT but all the neutrino experiments that will use its results,” said Jen Raaf, co-spokesperson for LArIAT.

LArIAT will run the test beam for 24 hours a day while experimenters take data. The first run will last about three months, after which the detector’s cryogenic system will undergo upgrades to prepare for longer follow-up runs.

“It’s great that we have a facility where a small experiment can take beam over a long term,” said Brian Rebel, a scientist involved in LArIAT.

About 75 people from 22 institutions from the United States, Europe and Japan work on this experiment.

“Most are young postdocs and Ph.D. students that are enthusiastically doing a great job,” said Flavio Cavanna, LArIAT co-spokesperson.

“It’s an exciting combination of many years of work by the Accelerator, Particle Physics, Neutrino and Scientific Computing divisions to have the capability to do research that is important for making this the premier neutrino laboratory in the world,” Schmidt said.

Diana Kwon


This article appeared in Fermilab Today on May 1, 2015.

Fermilab Director Nigel Lockyer shakes hands with Jefferson Lab Director Hugh Montgomery by a superconducting coil and its development and fabrication team at Fermilab. Six coils have been made and shipped to Jefferson Lab for use in the CLAS12 experiment. Photo: Reidar Hahn

A group of Fermilab physicists and engineers was faced with a unique challenge when Jefferson Lab asked them to make the superconducting coils for an upgrade to their CEBAF Large Acceptance Spectrometer experiments. These are some of the largest coils Fermilab has ever built.

Despite obstacles, the sixth coil was completed, packed on a truck and sent to Jefferson Lab to become the last piece of the torus magnet in the lab’s CLAS detector. It arrived on Thursday.

The CLAS detector’s upgrade (CLAS12) will allow it to accept electron beams of up to 11 GeV, matching the beam energy of the Virginia laboratory’s CEBAF electron accelerator after five passes. These improvements will allow Jefferson Lab to more accurately study the properties of atomic nuclei.

A major component of the enhanced detector is the torus magnet, which will be made from the six superconducting coils created at Fermilab. Aside from cleaning, insulating and winding the coils, one of the most important parts of the process is vacuum epoxy impregnation. During this step, air and water vapor are removed from the coils and replaced with an epoxy.

This process is particularly difficult when you’re working on magnets as big as the CLAS12 coils, which are 14 feet long and seven feet wide. Fermilab’s Magnet Systems Department fabrication team, the group responsible for making these massive coils, encountered a major obstacle at the end of March 2014 after finishing the first practice coil.

What they found were dry areas within the coil where the epoxy couldn’t penetrate. These were places where the coils weren’t fixed into place, meaning they could move and generate heat and resistance. This can lead to magnet quench, the transition from superconducting to a normal state — a highly undesirable consequence.

The Fermilab group and Jefferson Lab staff collaborated to come up with a solution. By trying new materials and new temperature profiles, and by adjusting the time that the epoxy was left to sit and be absorbed, the team was able to prevent the dry areas from forming.

Fred Nobrega, the lead engineer at Fermilab for the CLAS12 coil project, joined the effort last August.

“It was rewarding for me to join the project near its low point, be able to help get through the hurdle and see this completed,” he said.

Production has been steady since December, with Fermilab sending roughly one coil a month to Jefferson Lab. Although the sixth coil will become the last piece of the torus magnet, the project isn’t complete just yet — the ultimate goal is to make eight identical coils, the six for the magnet and two spares.

“We’re succeeding because we have great people and a productive collaboration with Jefferson Lab, who helped us at difficult moments,” said George Velev, head of the Magnet Systems Department. “We worked together on a tough problem and now we see the results.”

Diana Kwon
