Archive for June, 2014

There’s a software tool I use almost every day, for almost any work situation. It’s good for designing event selections, for brainstorming about systematic errors, and for mesmerizing kids at outreach events. It’s good anytime you want to build intuition about the detector. It’s our event viewer. In this post, I explain a bit about how I use our event viewer, and also share the perspective of code architect Steve Jackson, who put the code together.

Steamshovel event viewer showing the event Mr. Snuffleupagus

The IceCube detector is buried in the glacier under the South Pole. The signals can only be read out electronically; there’s no way to reach the detector modules after the ice freezes around them. In designing the detector, we carefully considered what readout we would need to describe what happens in the ice, and now we’re at the stage of interpreting that data. A signal from one detector module might tell us the time, amplitude, and duration of light arriving at that detector, and we put those together into a picture of the detector. From five thousand points of light (or darkness), we have to answer: where did this particle come from? Does the random detector noise act the way we think it acts? Is the disruption from dust in the ice the same in all directions? All these questions are answerable, but the answers take some teasing out.

To help build our intuition, we use event viewer software to make animated views of interesting events. It’s one of our most useful tools as physicist-programmers. Like all bits of our software, it’s written within the collaboration, based on lots of open-source software, and unique to our experiment. It’s called “steamshovel,” a joke on the idea that you use it to dig through ice (actually, dig through IceCube data – but that’s the joke).

Meet Steve Jackson and Steamshovel

IceCube data from the event Mr. Snuffleupagus

Steve Jackson’s job on IceCube was originally maintaining the central software, a very broad job description. His background is in software including visualizations, and he’s worked as The Software Guy in several different physics contexts, including medical, nuclear, and astrophysics. After becoming acquainted with IceCube software needs, he narrowed his focus to building an upgraded version of the event viewer from scratch.

The idea of the new viewer, Steamshovel, was to write a general core in the programming language C++, and then higher-level functionality in Python. This splits up the problem of drawing physics in the detector into two smaller problems: how to translate physics into easily describable shapes, like spheres and lines, and how to draw those spheres and lines in the most useful way. Separating these two levels makes the code easier to maintain, makes the core easier to update, and makes it easier for other people to add new physics ideas, but it doesn’t make the viewer any easier to write in the first place. (I’ll add: that’s why we hire a professional!) Steve says the process took about as long as he could have expected, considering Hofstadter’s Law, and he’s happy with the final product.

A Layer of Indirection 

As Steve told me, “Every problem in computer science can be addressed by adding a layer of indirection: some sort of intermediate layer where you abstract the relevant concepts into a higher level.” The extra level here is the set of lines and spheres that get passed from the Python code to the C++ code. By separating the defining from the drawing, this intermediate level makes it simpler to define new kinds of objects to draw.

A solid backbone, written with OpenGL in C++, empowers the average grad student to write software visualization “artists” as Python classes. These artists can connect novel physics ideas, written in Python, to the C++ backbone, without the grad student having to get into the details of OpenGL or, hopefully, any C++.
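To make the separation concrete, here is a minimal sketch of what such an artist might look like. The class names, dictionary keys, and numbers below are invented for illustration and are not the real Steamshovel API; the point is simply that the Python layer decides what primitives to draw, and the OpenGL backbone decides how to draw them.

```python
# Hypothetical sketch of the "artist" idea (not the actual Steamshovel API).
# The Python layer turns physics objects (hits) into simple drawable primitives;
# a separate C++/OpenGL backbone renders whatever primitives it receives.

class Sphere:
    def __init__(self, position, radius, color):
        self.position = position   # (x, y, z) in detector coordinates
        self.radius = radius       # visual size of the sphere
        self.color = color         # RGB tuple

def time_to_color(t, t_min=0.0, t_max=10000.0):
    """Map a hit time (ns) onto a simple red-to-blue gradient."""
    f = min(max((t - t_min) / (t_max - t_min), 0.0), 1.0)
    return (1.0 - f, 0.0, f)

class HitArtist:
    """Turn detector hits into drawable primitives for the rendering backbone."""

    def create(self, hits):
        primitives = []
        for hit in hits:
            primitives.append(Sphere(
                position=hit["dom_position"],       # location of the optical module
                radius=0.5 + 0.1 * hit["charge"],   # bigger sphere = more collected light
                color=time_to_color(hit["time"]),   # hue encodes arrival time
            ))
        return primitives
```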

Here’s a test of that simplicity: as part of our week-long, whirlwind introduction to IceCube software, we taught new students how to write a new Steamshovel artist. With just a week of software training, they were able to produce working artists, a testament to the usability of the Steamshovel backbone.

This separation also lets the backbone include important design details that might not occur to the average grad student, but that make the final product more elegant. One such detail is that the user can specify zoom levels much more easily, so graphics are not limited to the size of your computer screen. Making high-resolution graphics suitable for publication is possible and easy. Using these new views, we’ve made magazine covers, t-shirts, even temporary tattoos.

Many Platforms, Many People

IceCube is in an interesting situation: we support (and have users running) our software on many different UNIX-like operating systems: Mac, Ubuntu, Red Hat, Fedora, Scientific Linux, even FreeBSD. But we don’t test our software on Windows, which is the standard for many complex visualization packages: yet another good reason to use the simpler OpenGL. “For cross-platform 3D graphics,” Steve says, “OpenGL is the low-level drawing API.”

As visualization software goes, the IceCube case is relatively simple. You can describe all the interesting things with lines and spheres, like dots for detector modules, lines and cylinders for the cables connecting them or for particle tracks, and spheres of configurable color and size for hits within the detector. There’s relatively little motion beyond appearing, disappearing, and changing sizes. The light source never moves. I would add that this is nothing – nothing! – like Pixar. These simplifications mean that the more complex software packages that Steve had the option to use were unnecessarily complex, full of options that he would never use, and the simple, open OpenGL standard was perfectly sufficient.

The process of writing Steamshovel wasn’t just a one-man job (even though I only talked to one person for this post). Steve solicited, and received, ideas for features from all over the collaboration. I personally remember that when he started working here, he took the diligent and kind step of sitting and talking to several of us while we used the old event viewer, just to see what the workflow was like, the good parts and the bad. One particularly collaborative sub-project started when one IceCube grad student, Jakob, had the clever idea of displaying Monte Carlo true Cherenkov cones. We know where the simulated light emissions are, and how the light travels through the ice – could we display the light cone arriving at the detector modules and see whether a particular hit occurred at the same time? Putting together the code to make this happen involved several people (mainly Jakob and Steve), and wouldn’t have been possible coding in isolation.

Visual Cortex Processing

The moment that best captured the purpose of a good event viewer, Steve says, was when he animated an event for the first time. Specifically, he made the observed phototube pulses disappear as the charge died away, letting him see what happens on a phototube after the first signal. Animating the signal pulses made the afterpulsing “blindingly obvious.”

We know, on an intellectual level, that phototubes display afterpulsing, and it’s especially strong and likely after a strong signal pulse. But there’s a difference between knowing, intellectually, that a certain fraction of pulses will produce afterpulses and seeing those afterpulses displayed. We process information very differently if we can see it directly than if we have to construct a model in our heads based on interpreting numbers, or even graphs. An animation connects more deeply to our intuition and natural instinctive processes.

As Steve put it: “It brings to sharp relief something you only knew about in sort of a complex, long thought out way. The cool thing about visualization is that you can get things onto a screen that your brain will notice pre-cognitively; you don’t even have to consciously think to distinguish between a red square and a blue square. So even if you know that two things are different, from having looked carefully through the math, if you see those things in a picture, the difference jumps out without you even having to think about it. Your visual cortex does the work for you. […] That was one of the coolest moments for me, when these people who understood the physics in a deep way nonetheless were able to get new insights on it just by seeing the data displayed in a new way. ”

And that’s why we need event viewers.


This article appeared in Fermilab Today on June 27, 2014.

The Milky Way rises over the Cerro Tololo Inter-American Observatory in northern Chile. The Dark Energy Survey operates from the largest telescope at the observatory, the 4-meter Victor M. Blanco Telescope (left). Photo courtesy of Andreas Papadopoulos

The first images taken by the Dark Energy Survey after it began in August 2013 have revealed a rare, “superluminous” supernova (SLSN) that erupted in a galaxy 7.8 billion light-years away. The stellar explosion, called DES13S2cmm, easily outshines most galaxies in the universe and could still be seen in the data six months later, at the end of the first of what will be five years of observing by DES.

Supernovae are very bright, shining anywhere from 100 million to a few billion times brighter than the sun for weeks on end. Thousands of these brilliant stellar deaths have been discovered over the last two decades, and the word “supernova” itself was coined 80 years ago. Type Ia supernovae, the most well-known class of supernovae, are used by cosmologists to measure the expansion rate of the universe.

But SLSNe are a recent discovery, recognized as a distinct class of objects only in the past five years. Although they are 10 to 50 times brighter at their peak than type Ia supernovae, fewer than 50 have ever been found. Their rareness means each new discovery brings the potential for greater understanding — or more surprises.

Before (left) and after (center) images of the region where DES13S2cmm was discovered. On the right is a subtraction of these two images, showing a bright new object at the center — a supernova. Image: Dark Energy Survey

It turns out that even within this select group of SLSNe, DES13S2cmm is unusual. The rate at which it is fading away over time is much slower than for most other SLSNe that have been observed to date. This change in brightness over time, or light curve, gives information on the mechanisms that caused the explosion and the composition of the material ejected. DES can constrain the potential energy source for DES13S2cmm thanks to the exceptional photometric data quality available. Only about 10 SLSNe are known that have been similarly well-studied.

Although they are believed to come from the death of massive stars, the explosive origin of SLSNe remains a mystery. The DES team tried to explain the luminosity of DES13S2cmm as a result of the decay of the radioactive isotope nickel-56, known to power normal supernovae. They found that, to match the peak brightness, the explosion would need to produce more than three times the mass of our sun in nickel-56. However, the model is then unable to reproduce the rate at which DES13S2cmm brightened and faded.
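A rough back-of-envelope number shows why so much nickel is needed. Near peak, the luminosity of a nickel-powered supernova is roughly the nickel mass times the radioactive heating rate per gram; the heating rate below is the approximate value commonly quoted in the supernova literature (about 1.7 MeV released per decay, with a mean life of about 8.8 days):

\[
L_{\mathrm{peak}} \sim M_{\mathrm{Ni}}\,\epsilon_{\mathrm{Ni}},
\qquad
\epsilon_{\mathrm{Ni}} = \frac{N_A}{56}\,\frac{Q_{\mathrm{Ni}}}{\tau_{\mathrm{Ni}}} \approx 3.9\times10^{10}\ \mathrm{erg\ g^{-1}\ s^{-1}},
\]

so a superluminous peak of order \(10^{44}\text{–}10^{45}\ \mathrm{erg\ s^{-1}}\) requires a nickel mass of several times \(10^{33}\ \mathrm{g}\), i.e. a few solar masses or more, in line with the estimate above.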

The DES13S2cmm superluminous supernova was discovered by Andreas Papadopoulos (right), a graduate student at the University of Portsmouth and lead author on a forthcoming paper about the supernova. Chris D’Andrea (left) is a postdoctoral researcher at Portsmouth and leads the DES supernova spectroscopic follow-up program. Photo courtesy of Andreas Papadopoulos

A model that is more highly favored in the literature for SLSNe involves a magnetar: a neutron star that rotates once every millisecond and generates extreme magnetic fields. Produced as the remnant of a massive supernova, the magnetar begins to “spin down” and inject energy into the supernova, making the supernova exceptionally bright. This model is better able to produce the behavior of DES13S2cmm, although neither scenario could be called a good fit to the data.

DES13S2cmm was the only confirmed SLSN from the first season of DES, but several other promising candidates were found that could not be confirmed at the time. More are expected in the coming seasons. The goal is to discover and monitor enough of these rare objects to enable them to be understood as a population.

Although designed for studying the evolution of the universe, DES will be a powerful probe for understanding superluminous supernovae.

Chris D’Andrea and Andreas Papadopoulos, Institute of Cosmology and Gravitation, University of Portsmouth


Fermilab published a version of this press release on June 24, 2014.

The 30-ton MicroBooNE neutrino detector is gently lowered into the Liquid-Argon Test Facility at Fermilab on Monday, June 23. The detector will become the centerpiece of the MicroBooNE experiment, which will study ghostly particles called neutrinos. Photo: Fermilab

On Monday, June 23, the next phase of neutrino physics at Fermilab fell (gently) into place.

The MicroBooNE detector – a 30-ton, 40-foot-long cylindrical metal tank designed to detect ghostly particles called neutrinos – was carefully transported by truck across the U.S. Department of Energy’s Fermilab site, from the assembly building where it was constructed to the experimental hall three miles away.

The massive detector was then hoisted up with a crane, lowered through the open roof of the building and placed into its permanent home, directly in the path of Fermilab’s beam of neutrinos. There it will become the centerpiece of the MicroBooNE experiment, which will study those elusive particles to crack several big mysteries of the universe.

The MicroBooNE detector has been under construction for nearly two years. The tank contains a 32-foot-long “time projection chamber,” the largest ever built in the United States, equipped with 8,256 delicate gilded wires, which took the MicroBooNE team two months to attach by hand. This machine will allow scientists to further study the properties of neutrinos, particles that may hold the key to understanding many unexplained mysteries of the universe.

“This is a huge day for the MicroBooNE experiment,” said Fermilab’s Regina Rameika, project manager for the MicroBooNE experiment. “We’ve worked hard to create the best scientific instrument that we can. To see it moved into place was a thrill for the entire team.”

The MicroBooNE detector will now be filled with 170 tons of liquid argon, a heavy liquid that will release charged particles when neutrinos interact with it. The detector’s three layers of wires will then capture pictures of these interactions at different points in time and send that information to the experiment’s computers.

Using one of the most sophisticated processing programs ever designed for a neutrino experiment, those computers will sift through the thousands of interactions that will occur every day and create stunning 3-D images of the most interesting ones. The MicroBooNE team will use that data to learn more about how neutrinos change from one type (or “flavor”) to another, and narrow the search for a hypothesized (but as yet unobserved) fourth type of neutrino.
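As a rough illustration of how a liquid-argon time projection chamber turns wire signals into those images, here is a schematic sketch. It is not MicroBooNE’s actual reconstruction software, and the drift velocity and wire pitch are assumed, typical-scale values:

```python
# Schematic of LArTPC coordinates: each wire gives one coordinate along the
# wire plane, and the electron drift time gives the coordinate along the drift
# direction. Matching hits across the three differently oriented wire planes
# then yields full 3-D points. Numbers here are illustrative assumptions.

DRIFT_VELOCITY_MM_PER_US = 1.6   # assumed, roughly right for ~500 V/cm in liquid argon
WIRE_PITCH_MM = 3.0              # assumed wire spacing

def hit_to_point(wire_number, drift_time_us, t0_us=0.0):
    """Convert a (wire, drift-time) hit on one plane into (drift, wire) coordinates in mm."""
    x_drift = (drift_time_us - t0_us) * DRIFT_VELOCITY_MM_PER_US
    y_wire = wire_number * WIRE_PITCH_MM
    return x_drift, y_wire

# Example: a hit on wire 1200 arriving 800 microseconds after the trigger
print(hit_to_point(1200, 800.0))   # -> (1280.0, 3600.0)
```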

“The scientific potential of MicroBooNE is really exciting,” said Yale University’s Bonnie Fleming, co-spokesperson for the MicroBooNE experiment. “After a long time spent designing and building the detector, we are thrilled to start taking data later this year.”

MicroBooNE is a cornerstone of Fermilab’s short-baseline neutrino program, which studies neutrinos traveling over shorter distances. (MINOS and NOvA, which send neutrinos through the Earth to Minnesota, are examples of long-baseline experiments.) In its recent report, the Particle Physics Project Prioritization Panel (P5) expressed strong support for the short-baseline neutrino program at Fermilab.

The P5 panel was composed of members of the high-energy physics community. Their report was commissioned by the High Energy Physics Advisory Panel, which advises both the Department of Energy and the National Science Foundation on funding priorities.

The detector technology used in designing and building MicroBooNE will serve as a prototype for a much larger long-baseline neutrino facility planned for the United States, to be hosted at Fermilab. The P5 report also strongly supports this larger experiment, which will be designed and funded through a global collaboration.

Read the P5 report.

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


by Karen McNulty Walsh

With the discovery of the long-sought Higgs boson at the Large Hadron Collider (LHC), the world’s largest and most powerful particle collider, folks unfamiliar with the intricacies of particle physics might think the field has reached its end. But physicists gathered at the Large Hadron Collider Physics Conference in New York City June 2-7 say they are eager to move forward. Even amid discussions of tight budgets that make some proposed projects appear impossible, the general tenor, as expressed by leaders in the field, is that the future holds great potential for even more significant discoveries.

Physicist panel

Physicists joined New York Times science correspondent Dennis Overbye for a discussion on the future of the field.

At a session devoted to reflection and the future of the field, held Friday, June 6, Fabiola Gianotti, a particle physicist at Europe’s CERN laboratory (home of the LHC) and spokesperson for the LHC’s ATLAS experiment at the time of the Higgs discovery, said, “There is challenging work for everyone to make the impossible possible.” In fact, said James Siegrist, Associate Director of the Office of High Energy Physics within the U.S. Department of Energy’s (DOE) Office of Science, “I think the promise of the physics has never been greater.”

Co-sponsored by DOE’s Brookhaven National Laboratory and Columbia University, the week-long meeting featured updates on key findings from the LHC’s four experiments (including a possible hint of new physics), advances in theory, plans for future upgrades, and even future colliders—as well as a panel discussion moderated by Dennis Overbye, a science correspondent for the New York Times.

“We had a very successful conference with more than 300 participants discussing an impressive array of results from the recent LHC run,” said Brookhaven physicist Srini Rajagopalan, U.S. ATLAS Operations Program Manager and a co-organizer of the meeting. He also noted the extremely positive response to an open-to-the-public screening of Particle Fever, a documentary film that follows six scientists during the years leading up to the discovery of the Higgs boson. “I was simply amazed at the public interest in what we do. From young school students to senior citizens, people thronged to watch the movie and continued to ask questions late into the night.”

What keeps you up at night?

At Friday’s panel session, the Times’ Overbye had some questions of his own, perhaps more pointed than the public’s. He asked whether particle physicists’ streak of discoveries could be continued, whether the “glory days” for the U.S. were over, and what keeps physicists up at night. The panelists were realistic about challenges and the need for smart choices and greater globalization. But a spirit of optimism prevailed.

Natalie Roe, Director of the Physics Division at DOE’s Lawrence Berkeley National Laboratory—the first to respond—said, “I’m going to flip the question [of what keeps me up at night] and answer what gets me up in the morning.” Following a long period of experimental and theoretical successes, including the discovery of the Higgs, she said, “this is a very exciting time. There are still a few remaining details … dark matter and dark energy. And these are more than details; they are 95 percent of the universe!” With a long list of techniques available to get answers, she said, there is much work to be done.

University of California, Santa Cruz, physicist Steve Ritz, who recently chaired the Particle Physics Project Prioritization Panel (P5) and presented its recommendations for the future of the field, emphasized the importance of “telling our story,” staging and prioritizing future projects, and “aspiring to a greater program” that continues investments in crucial research and development to lay the foundation for future facilities.

Great technology progress, great challenges

In an overview talk that preceded the panel discussion, Gianotti presented a range of such future projects, including two possible linear accelerators, one in Japan and the other at CERN, and two possible circular colliders, one in China and one at CERN. The latter, dubbed FCC, would be a proton-proton collider 80-100 kilometers in circumference—on the scale of the Superconducting Super Collider (SSC) once planned for and later cancelled in the U.S. Such a machine would push beyond the research limits of even the most ambitious upgrades proposed for the LHC.

Those upgrades, planned for data taking in Phase I in 2020 and Phase II in 2025, will begin exploring the couplings of the Higgs to other particles, probing the mechanism by which the Higgs generates mass (“electroweak symmetry breaking”), and searching for new physics beyond the Standard Model, including the realm of dark matter.

But, to really get at the heart of those questions and possibly reveal unknown physics, the scientists say the need for even higher precision and higher energy is clear.

Journey to the dark side

“Our elders had it easy compared to our students,” said Siegrist, describing the physics challenges now open to exploration. He likened this moment in time to the end of a video game his son had played where, “at the end of the game, you end up on ‘the dark side’ and have to start again.” In physics, he said, the dark sector—exploring dark matter and dark energy—is going to be every bit as challenging as everything that has come before.

To those who say building the future machines needed for this journey is impossible, Gianotti says, “didn’t the LHC also look close to impossible in the 1980s?” The path forward, she emphasized, is to innovate.

“Accelerator R&D is very important,” said Ritz, noting that, “anything we can do to design these machines to cost less” in terms of construction and operation should be done. “We need to be impatient about this,” he said. “We need to ask more and jump in more.”

Panelist Nima Arkani-Hamed, a theorist at the Institute for Advanced Study in Princeton and Director of the Center for Future High Energy Physics in Beijing, China, likely agrees. He acknowledges the difficult task facing U.S. leadership in high-energy physics. “They are trying to make do with a budget that’s two or three times less than what our vision and this country deserves, and they are doing a good job,” he said. “But I worry that our generation will be viewed as the one that dropped the ball.”

“The sequence of steps for the next few decades is possible,” he added later. “It’s just a matter of will, not technology.”

But because of the scale and cost of future projects, he, like others, emphasized that “we will need the whole world and new pockets of resources and talent.”

The value of collaboration, competition, and globalization

Sergio Bertolucci, Director for Research and Computing at CERN, agreed. “We have been international, but we need to be truly global.”

Such cooperation and competition among nations is good for the field, Ritz emphasized. “We are intensely competitive. We want to be the ones to discover [something new]. But we are also cooperative because we can’t do it alone.”

Panelist Jerry Blazey, Assistant Director for Physical Sciences in the Office of Science and Technology Policy, DOE’s Siegrist, and others agreed that the LHC is a great model for the field to stand behind and emulate for future collaborative projects. Blazey and Siegrist said OSTP and DOE would work together to discuss ways to smooth the process for such future multinational collaborations and to implement the recommendations of the P5 report.

These include future U.S. work at the LHC, an internationalized Long Baseline Neutrino Facility located at Fermi National Accelerator Laboratory, and a role in Japan’s proposed linear collider, as well as continued investments in the technologies needed for future experiments. Said University of California, Irvine, physicist Andrew Lankford, chair of the High Energy Physics Advisory Panel (HEPAP) to whom the report was delivered, the P5 report describes a field optimized for scientific progress. “It’s a ten year strategic plan—way more than a collection of cool experiments,” he said.

And it emphasizes the value of international competition and cooperation—perhaps one of the biggest successes of particle physics, aside from the breathtaking discoveries. Turning again to the example of the LHC collaborations, Ritz said, “50 years ago some of these people were in countries that were trying to kill one another. Now we don’t even think about what country they are from.”

As Brookhaven’s Rajagopalan summed up, “It is an exciting time for our field as we plan to move forward with ambitious global projects to address the fundamental questions of nature.”

Brookhaven Lab’s particle physics research is supported by the DOE Office of Science.


Karen McNulty Walsh is a science writer in the Media & Communications Office at Brookhaven National Laboratory.


Protons are the brave casualties in the search for new physics, but sometimes everybody lives.

Hi Folks,

The CERN Large Hadron Collider (LHC), history’s largest and most energetic proton collider, is currently being tuned up for another round of new discoveries. A Higgs boson, incredibly rare B meson decays, and evidence for vector boson fusion have already been identified, so there is great anticipation on what we may find during Run II.

Our discoveries, however, come at the cost of protons. During 2012, the ATLAS, CMS, and LHCb experiments collected a combined 48 fb⁻¹ (48 “inverse femtobarns”) of data, and another 12 fb⁻¹ in 2011. To translate, an “inverse femtobarn” is a measure of proton collisions and is the equivalent of about 70 trillion proton-proton collisions. Hence, 60 fb⁻¹ is equivalent to about 4,200 trillion proton-proton collisions, or 8,400 trillion protons. We hope to generate almost twice as much data per year when everything starts back up in winter/spring 2015. You know, suddenly, @LHCproton’s many fears about its day job make sense.
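Here is the arithmetic behind those numbers written out as a short script. The only physics input is the inelastic proton-proton cross section, taken here to be roughly 70 millibarns, close to the value measured at the LHC:

```python
# Back-of-envelope check of the collision counts quoted above.
# N_collisions = (inelastic cross section) x (integrated luminosity)

MB_TO_CM2 = 1e-27            # 1 millibarn in cm^2
INV_FB_TO_PER_CM2 = 1e39     # 1 inverse femtobarn in cm^-2

sigma_inelastic_cm2 = 70 * MB_TO_CM2          # ~7e-26 cm^2, assumed value
one_inverse_fb = 1 * INV_FB_TO_PER_CM2        # integrated luminosity of 1 fb^-1

collisions_per_inv_fb = sigma_inelastic_cm2 * one_inverse_fb
print(f"{collisions_per_inv_fb:.1e} collisions per inverse femtobarn")  # ~7e13 (~70 trillion)

total_collisions = 60 * collisions_per_inv_fb   # 2011 + 2012 combined
print(f"{total_collisions:.1e} collisions in 60 fb^-1")                 # ~4.2e15 (~4,200 trillion)
```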

With so many protons spent in the name of science, one can reasonably ask

Is it possible to find new physics at the LHC without destroying a proton?

The answer is

Yes.

Sometimes, just sometimes, if we are very lucky, two protons can pass each other, interact and make new particles, but remain intact and unbroken. It sounds mind-boggling, but it has been one of the best tests of quantum electrodynamics (QED), the theory of light and matter at small distances and large energies.

From Maxwell to Photon Beams at the LHC

The idea is simple, the consequences are huge, and it goes like this: Protons, like electrons, muons and W bosons, are electrically charged, so they can absorb and emit light. Protons, like electrons, do not just radiate light at random. Light is emitted following very specific rules and travels in very specific directions, dictated by Maxwell’s equations of electrodynamics. However, the rules of quantum mechanics state that at large enough energies and small enough distances, in other words an environment like we have at the LHC, particles of light (called photons) will interact with each other, with a predicted probability. Yes, you read that correctly: quantum mechanics states that light interacts with itself at small enough distances. The more protons we accelerate in the LHC, the more photons are radiated from protons that remain intact, and the more likely it is that two photons will interact with each other, producing matter we can observe with detectors! An example of such a process that has already been observed is the pair production of muons from photons:

Muon pair production from photon scattering via elastic photon emission from protons. Credit: CMS, JHEP 1201 (2012) 052

To understand this more, let’s take a look at Maxwell’s equations, named after the Scottish physicist J. C. Maxwell but really representing the seminal contributions of several people. Do not worry about the calculus; we will not be working out any equations here, only discussing their physical interpretation. Without further ado, here are the four laws of electrodynamics:

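In SI units and differential form (the standard textbook presentation), they read:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\]
\[
\nabla \cdot \mathbf{B} = 0
\]
\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\]
\[
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]

That is, from top to bottom: Gauss’s law, Gauss’s law for magnetism, Faraday’s law, and Ampère’s law with Maxwell’s correction.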

The very first law, Gauss’ law, tells us that since the proton has an electric charge, it is also the source of an electric field (E). The bottom equation, Ampere’s Law, tells us that if we have a moving electric charge (a proton circling the LHC ring for example), then both the moving electric field (E) and the electric current (J) will generate a magnetic field (B). In the LHC, however, we do not just have a moving beam of protons, but an accelerating beam of protons. This means that the magnetic field is changing with time as the proton circles around the collider. The third equation, Faraday’s law, tells us that when a magnetic field (B) changes with time, an electric field (E) is generated. But since we already have an electric field, the two fields add together into something that also changes with time, and we end up back at Ampere’s law (the bottom equation). This is when something special happens. Whenever a charged particle is accelerated, the electric and magnetic fields that are generated feed into each other and create a sort of perpetual feedback. We call it electromagnetic radiation, or light. Accelerating charged particles emit light.


Schematic representation of the strength of the electric (blue) and magnetic (red) fields as light propagates through space. Credit: Wikipedia

Maxwell’s equations in fact tell us a bit more. They also tell us the direction in which light is emitted. The crosses and dots tell us whether things are perpendicular (at right angles) or parallel to each other. Specifically, they tell us that the generated magnetic field is always at right angles to both the electric field and the direction the proton is travelling, and that light travels perpendicular to both the electric and magnetic field. Since protons are travelling in a circle at the LHC, their tangential velocity, which always points forward, and their radial acceleration, which always points toward the center of the LHC ring, are always at right angles to each other. This crucial bit fixes the direction of the emitted light. As the protons travel in a circle, the generated electric field points in the direction of acceleration (the center); the generated magnetic field is perpendicular to both of these, so it points upward if the proton is travelling in a counter-clockwise direction, or downward if the proton is travelling in a clockwise direction. The light must then always travel parallel to the proton! Alongside the LHC proton beam is a hyper-focused light beam! Technically speaking, this is called synchrotron radiation.


(a) Relative orientation of an electrically charged particle travelling in a circle and its electromagnetic field lines. (b) Synchrotron radiation emitted tangent to a circular path traversed by an electrically charged particle. Credit: Wikipedia

The last but still important step is to remember that all of this is happening at distances the size of a proton and smaller. In other words, at distances where quantum mechanics is important. At these small distances, it is appropriate to talk about individual pieces (quanta) of light, called photons. That beam of synchrotron radiation travelling parallel to the proton beam can appropriately be identified as a beam of photons. In summary, alongside the LHC proton beam is the LHC photon beam! This photon beam is radiated from the protons in the proton beam, but the protons remain intact and do not rupture as long as the momentum transfer to the photon beam is not too large. A very important note I want to make is that the photon beams do not travel in a circle; they travel in straight lines and are constantly leaving the proton beam. Synchrotron radiation continuously drains the LHC beams of energy, which is why the LHC beam must be continuously fed with more energy.
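For the curious, the textbook expression for the energy a charged particle radiates per turn on a circular orbit of bending radius ρ is

\[
\Delta E_{\mathrm{turn}} = \frac{e^{2}\,\beta^{3}\,\gamma^{4}}{3\,\varepsilon_{0}\,\rho},
\qquad \gamma = \frac{E}{mc^{2}}.
\]

Because γ = E/mc², the γ⁴ factor punishes light particles: electrons, some 1,800 times lighter than protons, radiated enormously more at LEP than protons do at the LHC. For LHC protons at design energy the loss is roughly a few keV per turn, tiny compared to the beam energy but still something the machine must continuously put back.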

Making Matter from Light Since 1994

Synchrotron radiation has been around for quite some time. Despite recent claims, the first evidence for direct production of matter from photon beams came in 1994 from the DELPHI experiment at the Large Electron Positron (LEP) Collider, the LHC’s predecessor at CERN. There are earlier reports of photon-photon scattering at colliders, but I have been unable to track down the appropriate citations. Since 1994, evidence for photon-photon scattering has been observed by Fermilab’s CDF experiment at the Tevatron, as reported by the CERN Courier, and there is even evidence for the pair production of muons and W bosons at the CMS Experiment. Excitingly, there has also been some research into a potential Higgs factory using a dedicated photon collider. This image shows a few photon-photon scattering processes that result in final-state bottom quark and anti-bottom quark pairs.

Various photon-photon scattering processes that result in final-state bottom quark and anti-bottom quark pairs. Credit: Phys.Rev. D79 (2009) 033002

 

We can expect to see much more from the LHC on this matter because photon beams offer a good handle on understanding the stability of the proton beams themselves and are a potential avenue for new physics.

Until next time, happy colliding.

– Richard (@BraveLittleMuon)


I recently saw this comic from Twisted Doodles, which I think poses quite a conundrum for our usual simple picture of how science is studied and brought forth into the public:

From Twisted Doodles (http://www.twisteddoodles.com/post/86414780702/working-in-science). Used in this post with permission.

If you are a non-scientist reading this blog, your idea of what science is for, and what it’s good for, is probably something like the left column – and in fact, I hope it is! But as someone who works day-to-day on understanding LHC data, I have a lot of sympathy with the right column. So how can they be reconciled?

Science takes hard work from a lot of people, and it’s an open process. Its ultimate goal is to produce a big picture understanding of a wide range of phenomena, which is what you’re reading about when you think all the good thoughts in the left-hand column. But that big picture is made of lots of individual pieces of work. For example, my colleagues and I worked for months and months on searching for the Higgs boson decaying to bottom quarks. We saw more bottom quarks than you would expect if the Higgs boson weren’t there, but not enough that we could be sure that we had seen any extra. So if you asked me, as an analyzer of detector data, if the Higgs boson existed, all I could say would be, “Well, we have a modest excess in this decay channel.” I might also have said, while I was working on it, “Wow, I’m tired, and I have lots of bugs in my code that still need to be fixed!” That’s the right-hand column.

The gap is bridged by something that’s sometimes called the scientific consensus, in which we put together all the analyses and conclude something like, “Yes, we found a Higgs boson!” There isn’t a single paper that proves it. Whatever our results, the fact that we’re sure we found something comes from the fact that ATLAS and CMS have independently produced the same discovery. The many bits of hard work come together to build a composite picture that we all agree on; the exhausted trees step back to take a broader perspective and see the happy forest.

So which is right? Both are, but not in the same way. The very specific results of individual papers don’t change unless there’s a mistake in them. But the way they’re interpreted can change over time; where once physicists were excited and puzzled by the discovery of new mesons, now we know they’re “just” different ways of putting quarks together.

So we expect the scientific consensus to change, it’s definitely not infallible, and any part of it can be challenged by new discoveries. But you might find that scientists like me are a bit impatient with casual, uninformed challenges to that consensus — it’s based, after all, on a lot of experts thinking and talking about all the evidence available. At the same time, scientific consensus can sometimes be muddled, and newspapers often present the latest tree as a whole new forest. Whether you are a scientist, or just read about science, keep in mind the difference between the forest and the trees. Try to understand which you’re reading about. And remember, ultimately, that the process of doing science is all the things in that comic, all at once.


A version of this press release came out on June 12, 2014.

Pi poles are part of a new exhibit for kids at Fermilab’s Lederman Science Center, an educational center that houses resources for K-12 teachers and hosts science activities for students. Photo: Cindy Arnold

If you want to get children interested in the fundamentals of science, there’s nothing like letting them experience the phenomena first-hand. If you can make it fun at the same time, you have a formula for success.

That’s the thinking behind Fermilab’s in-progress outdoor physics exhibits, located near the Lederman Science Center. The Lederman Science Center is an educational center that houses science resources for K-12 teachers and hosts science activities for students. The Fermilab Education Office has just unveiled the latest exhibits, which allow kids to learn about basic principles of physics while playing in the sunshine.

The two new exhibits, called Wave Like a Particle and Swing Like Neutrinos, are combined into one newly built structure consisting of two poles shaped like the Greek letter Pi. Kids can make waves of various sizes by moving the rope that stretches between the two poles, thereby learning about wave propagation, one of the primary concepts of particle physics.

Children can also use the Swing Like Neutrinos portion of the exhibit – a pair of pendulums hanging from one of the Pi-shaped poles – to learn about coupled oscillations, a basic physics principle.

“Kids learn in different ways,” said Spencer Pasero of Fermilab’s Education Office. “The idea of the outdoor exhibits is to instill a love of learning into kids who respond to hands-on, fun activities.”

The Wave Like a Particle and Swing Like Neutrinos exhibits were built with funds through Fermilab Friends for Science Education, an Illinois not-for-profit organization supporting the Fermilab Education Office. Contributions were received from an anonymous donor and a grant from the Community Foundation of the Fox River Valley.

The new exhibits join the Run Like a Proton accelerator path, which opened in May 2013. Using this feature, kids can mimic protons and antiprotons as they race along Fermilab’s accelerator chain.

“We hope this series of exhibits will activate kids’ imaginations and that they immerse themselves in the physics we’ve been doing at Fermilab for decades,” Pasero said.

Fermilab is located 35 miles outside Chicago, Illinois. The Lederman Science Center is open to the public Monday to Friday from 8:30 a.m. to 4:30 p.m. and on Saturdays from 9 a.m. to 3 p.m.

The Community Foundation of the Fox River Valley is a non-profit philanthropic organization based in Aurora, Illinois that administers individual charitable funds from which grants and scholarships are distributed to benefit the citizens of the Greater Aurora Area, the TriCities and Kendall County Illinois. For more information, please see www.communityfoundationfrv.org.

Fermilab is America’s national laboratory for particle physics research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


Theoretical physics, simplicity. Surely the two words do not go together. Theoretical physics has been the archetypal example of complicated since its invention. So what did Frank Wilczek (b. 1951) mean by that statement[1] quoted in the title? It is the scientist’s trick of taking a well-defined word, such as simplicity, and giving it a technical meaning. In this case, the meaning is from algorithmic information theory. That theory defines complexity (Kolmogorov complexity[2]) as the minimum length of a computer program needed to reproduce a string of numbers. Simplicity, as used in the title, is the opposite of this complexity. Science, not just theoretical physics, is driven, in part but only in part, by the quest for this simplicity.

How is that, you might ask? This is best described by Greg Chaitin (b. 1947), a founder of algorithmic information theory. To quote: This idea of program-size complexity is also connected with the philosophy of the scientific method. You’ve heard of Occam’s razor, of the idea that the simplest theory is best? Well, what’s a theory? It’s a computer program for predicting observations. And the idea that the simplest theory is best translates into saying that a concise computer program is the best theory. What if there is no concise theory, what if the most concise program or the best theory for reproducing a given set of experimental data is the same size as the data? Then the theory is no good, it’s cooked up, and the data is incomprehensible, it’s random. In that case the theory isn’t doing a useful job. A theory is good to the extent that it compresses the data into a much smaller set of theoretical assumptions. The greater the compression, the better!—That’s the idea…

In many ways this is quite nice; the best theory is the one that compresses the most empirical information into the shortest description or computer program.  It provides an algorithmic method to decide which of two competing theories is best (but not an algorithm for generating the best theory). With this definition of best, a computer could do science: generate programs to describe data and check which is the shortest. It is not clear, with this definition, that Copernicus was better than Ptolemy. The two approaches to planetary motion had a similar number of parameters and accuracy.
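Kolmogorov complexity itself is uncomputable, but an ordinary compressor gives a crude, computable stand-in for the idea. The sketch below (my illustration, using Python’s zlib as the stand-in “theory finder”) shows that data generated by a simple rule compresses dramatically, while random data barely compresses at all:

```python
# Crude illustration of "theory as compression". True Kolmogorov complexity is
# uncomputable; zlib is only a rough, computable proxy for it.
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the data after zlib compression at the highest level."""
    return len(zlib.compress(data, 9))

# "Lawful" data: a short rule (a repeating pattern) generates the whole string.
lawful = bytes(i % 7 for i in range(10_000))

# "Random" data: no rule shorter than the data itself.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

print(compressed_size(lawful))  # a few dozen bytes: highly compressible
print(compressed_size(noisy))   # about 10,000 bytes or more: essentially incompressible
```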

There are many interesting aspects of this approach. Consider compressibility and quantum mechanics. The uncertainty principle and the probabilistic nature of quantum mechanics put limits on the extent to which empirical data can be compressed. This is the main difference between classical mechanics and quantum mechanics. Given the initial conditions and the laws of motion, classically the empirical data is compressible to just that input. In quantum mechanics, it is not. The time at which each individual atom in a collection of radioactive atoms decays is unpredictable, and the measured results are largely incompressible. Interpretations of quantum mechanics may make the theory deterministic, but they cannot make the empirical data more compressible.

Compressibility highlights a significant property of initial conditions. While the data describing the motion of the planets can be compressed using Newton’s laws of motion and gravity, the initial conditions that started the planets on their orbits cannot be. This incompressibility tends to be a characteristic of initial conditions. Even the initial conditions of the universe, as reflected in the cosmic microwave background, have a large random non-compressible component – the cosmic variance. If it wasn’t for quantum uncertainty, we could probably take the lack of compressibility as a definition of initial conditions. For the universe, the two are the same since the lack of compressibility in the initial conditions is due to quantum fluctuations, but that is not always the case.

The algorithmic information approach makes Occam’s razor, the idea that one should minimize assumptions, basic to science. If one considers that each character in a minimal computer program is a separate assumption, then the shortest program does indeed have the fewest assumptions. But you might object that some of the characters in a program can be predicted from other characters. However, if that is true the program can probably be made shorter. This is all a bit counterintuitive since one generally does not take such a fine-grained approach to what one considers an assumption.

The algorithmic information approach to science, however, does have a major shortcoming. This definition of the best theory leaves out the importance of predictions. A good model must not only compress known data, it must predict new results that are not predicted by competing models. Hence, as noted in the introduction, simplicity is only part of the story.

The idea of reducing science to just a collection of computer programs is rather frightening. Science is about more than computer programs[3]. It is, and should be, a human endeavour. As people, we want models of how the universe works that humans, not just computers, can comprehend and share with others. A collection of bits on a computer drive does not do this.

To receive a notice of future posts follow me on Twitter: @musquod.



[1] From “This Explains Everything”, ed. John Brockman, Harper Perennial, New York, 2013.

[2] Also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity.

[3] In this regard, I have a sinking feeling that I am fighting a rearguard action against the inevitable.


This article appeared in symmetry on June 4, 2014.

Data collected at the long-running MINOS experiment stacks evidence against the existence of these theoretical particles. Photo: Reidar Hahn

If you’re searching for something that may not exist, and can pass right through matter if it does, then knowing where to look is essential.

That’s why the search for so-called sterile neutrinos is a process of elimination. Experiments like Fermilab’s MiniBooNE and the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory have published results consistent with the existence of these theoretical particles. But a new result from the long-running MINOS experiment announced this week severely limits the area in which they could be found and casts more doubt on whether they exist at all.

Scientists have observed three types or “flavors” of neutrinos—muon, electron and tau neutrinos—through their interactions with matter. If there are other types, as some scientists have theorized, they do not interact with matter, and the search for them has become one of the hottest and most contentious topics in neutrino physics. MINOS, located at Fermilab with a far detector in northern Minnesota, has been studying neutrinos since 2005, with an eye toward collecting data on neutrino oscillation over long distances.

MINOS uses a beam of muon neutrinos generated at Fermilab. As that beam travels 500 miles through the earth to Minnesota, those muon neutrinos can change into other flavors.

MINOS looks at two types of neutrino interactions: neutral current and charged current. Since MINOS can see the neutral current interactions of all three known flavors of neutrino, scientists can tell if fewer of those interactions occur than they should, which would be evidence that the muon neutrinos have changed into a particle that does not interact. In addition, through charged current interactions, MINOS looks specifically at muon neutrino disappearance, which allows for a much more precise measurement of neutrino energies, according to João Coelho of Tufts University.

“Disappearance with an energy profile not described by the standard three-neutrino model would be evidence for the existence of an additional sterile neutrino,” Coelho says.
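To see roughly how such a search works, consider the simplest two-flavor approximation (a textbook sketch, not the full fit MINOS performs). The probability that a muon neutrino of energy E survives after travelling a distance L is

\[
P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^{2}(2\theta)\,\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^{2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right).
\]

A sterile neutrino with its own mass splitting would carve an extra dip into this survival probability at a different L/E, so the shape of the measured energy spectrum, not just the overall rate, is what constrains or excludes sterile-neutrino parameters.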

The new MINOS result, announced today at the Neutrino 2014 conference in Boston, excludes a large and previously unexplored region for sterile neutrinos. To directly compare the new results with previous results from LSND and MiniBooNE, MINOS combined its data with previous measurements of electron antineutrinos from the Bugey nuclear reactor in France. The combined result, says Justin Evans of the University of Manchester, “provides a strong constraint on the existence of sterile neutrinos.”

“The case for sterile neutrinos is still not closed,” Evans says, “but there is now a lot less space left for them to hide.”

Andre Salles

The vertical axis shows the possible mass regions for the sterile neutrinos. The horizontal axis shows how likely it is that a muon neutrino will turn into a sterile neutrino as it travels. The new MINOS result excludes everything to the right of the black line. The colored areas show limits by previous experiments. Image courtesy of MINOS collaboration

This graph shows the combined MINOS/Bugey result (the red line) in comparison with the results from LSND and MiniBooNE (the green areas). The vertical axis shows the possible mass regions for sterile neutrinos. The new MINOS/Bugey result excludes everything to the right of the red line. Image courtesy of MINOS collaboration


Today at the Neutrino2014 conference in Boston, the IceCube collaboration showed an analysis looking for standard atmospheric neutrino oscillations in the 20-30 GeV region. Although IceCube has seen oscillations before, and reported them in a poster at the last Neutrino conference, in 2012, this plenary talk showed the first analysis where the IceCube error bands are becoming competitive with other oscillation experiments.

Neutrino oscillation is a phenomenon where neutrinos change from one flavor to another as they travel; it’s a purely quantum phenomenon. It has been observed in several contexts, including particle accelerators, nuclear reactors, cosmic rays hitting the atmosphere, and neutrinos traveling from our Sun. This is the first widely accepted phenomenon in particle physics that requires an extension to the Standard Model, the capstone of which was the observation of the Higgs boson at CERN. Neutrinos and neutrino oscillations represent the next stage of particle physics, beyond the Higgs.

Of the parameters used to describe neutrino oscillations, most have been previously measured. The mixing angles that describe oscillations are the most recent focus of measurement. Just two years ago, the last of the neutrino mixing angles was measured by the Daya Bay experiment. Of these mixing angles, the atmospheric angle accessible to IceCube remains the least constrained by experimental measurements.

IceCube, because of its size, is in a unique position to measure the atmospheric mixing angle. Considering neutrinos that traverse the diameter of the Earth, the oscillation effect is the strongest in the energy region from 20 to 30 GeV, and an experiment that can contain a 20 GeV neutrino interaction must be very large. The Super Kamiokande experiment in Japan, for example, also measures atmospheric oscillations, but because of its small size relative to IceCube, Super Kamiokande can’t resolve energies above a few GeV. At any higher energies, the detector is simply saturated. Other experiments can measure the same mixing angle using accelerator beamlines, like the MINOS experiment that sends neutrinos from Fermilab to Minnesota. Corroborating these observations from several experimental methods and separate experiments proves the strength of the oscillation framework.
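A quick estimate shows why that energy range comes out where it does. Using a representative atmospheric mass splitting of \(\Delta m^{2}_{32}\approx2.4\times10^{-3}\ \mathrm{eV}^{2}\) (a standard ballpark value, not IceCube’s fitted number), the first oscillation maximum for neutrinos crossing the full diameter of the Earth, L ≈ 12,700 km, sits at

\[
\frac{1.27\,\Delta m^{2}_{32}\,L}{E} = \frac{\pi}{2}
\quad\Longrightarrow\quad
E \approx \frac{2\times1.27\times(2.4\times10^{-3}\ \mathrm{eV}^{2})\times(1.27\times10^{4}\ \mathrm{km})}{\pi} \approx 25\ \mathrm{GeV},
\]

squarely in the 20-30 GeV window that only a detector as large as IceCube can fully contain.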

The sheer size of IceCube means that neutrinos have many chances to interact and be observed within the detector, giving IceCube a statistical advantage over other oscillation experiments. Even after selecting only the best reconstructed events, the remaining experimental sample still has over five thousand events from three years of data. Previous atmospheric oscillation experiments based their analyses on hundreds of events or fewer, counting instead on a precise understanding of systematic effects.

The IceCube collaboration is composed of more than 250 scientists from about 40 institutions around the world, mostly from the United States and Europe. The current results are possible because of decades of planning and construction, dedicated detector operations, and precise calibrations from all over the IceCube collaboration.

IceCube has several major talks at the Neutrino conference this year, the first time that the collaboration has had such a prominent presence. In addition to the new oscillations result, Gary Hill spoke in the opening session about the high energy astrophysical neutrinos observed over the last few years. Darren Grant spoke about the proposed PINGU infill array, which was officially encouraged in the recent P5 report. IceCube contributed nine posters on far-ranging topics from calibration and reconstruction methods to a neutrino-GRB correlation search. The conference-inspired display at the MIT museum is about half IceCube material, including an 8-foot tall LED model of the detector. One of three public museum talks on Saturday will be from (yours truly) Laura Gladstone about the basics of IceCube science and life at the South Pole.

One new aspect of the new oscillation analysis is that it uses an energy reconstruction designed for the low end of the energy range available to IceCube, in the tens-of-GeV range. In this range, only a handful of hits are visible for each event, and reconstructing directional information can be tricky. “We took a simple but very clever idea from the ANTARES Collaboration, and rehashed it to tackle one of our biggest uncertainties: the optical properties of the ice. It turned out to work surprisingly well,” says IceCuber Juan Pablo Yanez Garza, who brought the new reconstruction to IceCube, and presented the result in Boston.  By considering only the detector hits that arrive without scattering, the reconstruction algorithm is more robust against systematic errors in the understanding of the glacial ice in which IceCube is built. 
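To give a flavor of that idea, here is a schematic of a “direct hit” selection. The names, the assumed light speed in ice, and the time window are all illustrative choices of mine, not the actual IceCube or ANTARES code:

```python
# Schematic "direct hit" selection: keep only hits whose arrival time is close
# to what straight-line (unscattered) light propagation predicts, so that
# scattering in the glacial ice, a leading systematic, matters less.

C_ICE_M_PER_NS = 0.3 / 1.33   # rough group velocity of light in ice, ~c/n

def expected_time(t0_ns, vertex, dom_position):
    """Arrival time at a module for light travelling straight from the vertex."""
    distance = sum((a - b) ** 2 for a, b in zip(vertex, dom_position)) ** 0.5
    return t0_ns + distance / C_ICE_M_PER_NS

def direct_hits(hits, vertex, t0_ns, window_ns=(-15.0, 75.0)):
    """Keep hits whose time residual falls inside a narrow window."""
    selected = []
    for hit in hits:
        residual = hit["time"] - expected_time(t0_ns, vertex, hit["dom_position"])
        if window_ns[0] < residual < window_ns[1]:
            selected.append(hit)
    return selected
```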
