Archive for May, 2014

Matter and energy have a very curious property. They interact with each other in predictable ways, and the more energy an object has, the smaller the length scales it can interact with. This leads to some very interesting and beautiful results, which are best illustrated with some simple quantum electrodynamics (QED).

QED is the framework for describing the interactions of charged leptons with photons, and for now let’s limit things to electrons, positrons and photons. An electron is a negatively charged fundamental particle, and a positron is the same particle, but with a positive charge. A photon is a neutral fundamental particle of light and it interacts with anything that has a charge.

That means that we can draw a diagram of an interaction like the one below:

An electron radiating a photon

In this diagram, time flows from left to right, and the paths of the particles in space are represented in the up-down direction (and two additional directions if you have a good enough imagination to think in four dimensions!) The straight line with the arrow to the right is an electron, and the wavy line is a photon. In this diagram an electron emits a photon, which is a very simple process.

Let’s make something more complicated:

An electron and positron make friends by exchanging a photon

In this diagram the line with the arrow to the left is a positron, and the electron and positron exchange a photon.

Things become more interesting when we join up the electron and positron lines like this:

An electron and positron get a little too close and annihilate

Here an electron and positron annihilate to form a photon.

Now it turns out in quantum mechanics that we can’t just consider a single process, we have to consider all possible processes and sum up their contributions. So far only the second diagram we’ve considered actually reflects a real process, because the other two violate conservation of energy. So let’s look at electron-positron scattering. We have an electron and a positron in the initial state (the left hand side of the diagram) and in the final state (the right hand side of the diagram):

What happens in the middle? According to quantum mechanics, everything possible!

There are two easy ways to join up the lines in this diagram to get the following contributions:

Two possible diagrams for electron-positron scattering

There’s a multiplicative weight (on the order of a percent) associated with each photon interaction, so we can count up the photons and determine the contribution each process has. In this case, there are two photon interactions in each diagram, so each one contributes roughly equally. (You may ask why we bother calculating the contributions for a given pair of initial and final states. In fact what we find interesting is the ratio of contributions for two different pairs of initial and final states so that we can make predictions about rates of interactions.)
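
To make the counting a bit more concrete, here is a toy sketch of the bookkeeping (my own illustration, not part of the original argument), assuming each photon interaction contributes a factor of roughly the fine-structure constant (about 1/137) to a diagram's weight:

```python
# Toy bookkeeping, not a real QED calculation: assume each photon interaction
# (vertex) multiplies a diagram's weight by roughly the fine-structure constant.
ALPHA = 1 / 137.036  # the "order of a percent" weight mentioned above

def diagram_weight(n_photon_interactions):
    """Rough relative weight of a diagram with the given number of photon interactions."""
    return ALPHA ** n_photon_interactions

# The two electron-positron scattering diagrams above each have two photon
# interactions, so they contribute at roughly the same order; each extra photon
# adds two more interactions and suppresses the diagram by another factor of ~1/137^2.
for n in (2, 4, 6):
    print(f"{n} interactions: relative weight ~ {diagram_weight(n):.1e}")
```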

Let’s add a photon to the diagram, just for fun. We can connect any two parts of electron and positron lines to create a photon, like so:

Taking up the complexity a notch, by adding a photon

A fun game to play if you’re bored in a lecture is to see how many unique ways you can add a photon to a diagram.

So how do we turn this into a fractal? Well we start off with an electron moving through space (now omitting the particle labels for a cleaner diagram):

A lonely electron 🙁

Then we add a photon or two to the diagram:

An electron with a photon

An electron hanging out with two photons

An electron going on an adventure with two photons

Similarly let’s start with a photon:

A boring photon being boring

And add an electron-positron pair:

Ah, that’s a bit more interesting

This is all we need to get started. Every time we see an electron or positron line, we can replace it with a line that emits and absorbs a photon. Every time we see a photon we can add an electron-positron pair. We can keep repeating this process as much as we like until we end up with arbitrarily complex diagrams, each new step adding more refinement to the overall contributions:

A very busy electron
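
For the programmatically minded, here is a tiny toy model of that replacement game (entirely my own bookkeeping, not how real calculations are organised): it just counts how many fermion and photon lines appear after each round of substitutions, which is enough to see how quickly the diagrams get busy.

```python
# Toy version of the replacement rules above (not a real diagram generator):
#   * every electron/positron line sprouts one photon that it emits and reabsorbs,
#   * every photon line has an electron-positron pair inserted into it.
# We only track the number of lines of each kind at every refinement step.

def refine(counts):
    """Apply one round of replacements to {'fermion': n, 'photon': m} counts."""
    fermions, photons = counts["fermion"], counts["photon"]
    return {
        "fermion": fermions + 2 * photons,  # each photon gains an electron and a positron
        "photon": photons + fermions,       # each fermion line adds one photon
    }

state = {"fermion": 1, "photon": 0}  # start from a lonely electron
for step in range(5):
    print(f"step {step}: {state}")
    state = refine(state)
```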

At each step the distance we consider is smaller than the one before it, and the energy needed to probe this distance is larger. When we talk about an electron we usually think of a simple line, but real electrons are actually made of a mess of virtual particles that swarm around the central electron. The more energy we put into probing the electron’s structure (or lack of structure), the more particles we liberate in the process. There are many diagrams we can draw and we can’t pick out a single one of these diagrams as the “real” electron, as they all contribute. We have to take all of them together to get a real feel for what something as simple as an electron is.

As usual, things are even more complicated in reality than this simple picture. To get a complete understanding we should add the other particles to the diagrams. After all, that’s how we can get a Higgs boson out of a proton: in some sense the Higgs boson was “already there” inside the proton and we just liberated it by adding a huge amount of energy. If things are tricky for the electron, they are even more complicated for the proton. Hadrons are bound states of quarks and gluons, and while we can see an individual electron, it’s impossible to see an individual quark. Quarks are always found in groups, so we have to take the huge fractal into account when we look inside a proton and try to simulate what happens. This is an intractable problem, so we need a lot of help from the experimental data to get it right, such as the dedicated deep inelastic scattering experiments at the DESY laboratory.

The view inside a proton might look a little like this (where the arrows represent quarks):

The crazy inner life of the proton

Except those extra bits would go on forever to the left and right, as indicated by the dotted lines, and instead of happening in one spatial dimension it happens in three. To make matters worse, the valence quarks are not just straight lines as I’ve drawn them here; they meander to and fro, changing their characteristic properties as they exchange other particles with each other.

Each time we reach a new energy range in our experiments, we get to probe deeper into this fractal structure of matter, and as we go to higher energies we also liberate higher mass particles. The fractals for quarks interact strongly, so they are dense and have high discovery potential. The fractals for neutrinos are very sparse and their interactions can spread over huge distances. Since all particles can interact with each other directly or through intermediaries, all these fractals interact with each other too. Each proton inside your body contains three valence quarks, surrounded by a fractal mess of quarks and gluons, exactly the same as those in the protons that fly around the LHC. All we’ve done at the LHC is probe further into those fractals to look for something new. At the same time, since the protons are indistinguishable they are very weakly connected to each other via quantum mechanics. In effect the fractals that surround every valence particle join up to make one cosmological fractal, and the valence particles are just excitations of that fractal that managed to break free from their (anti-)matter counterparts.

The astute reader will remember that the title of the post was the seemingly fractal nature of matter. Everything that has been described so far fulfils the requirements of any fractal: self-similarity, increased complexity with depth, and so on. What is it that makes matter unlike a fractal? We don’t exactly know the answer to that question, but we do know that eventually the levels of complexity have to stop. We can’t keep splitting space up into smaller and smaller chunks and finding more and more complex arrangements of the same particles over and over again. This is because eventually we would reach the Planck scale, which is where the quantum effects of gravity become important and it becomes very difficult to keep track of spatial distances.

Meanwhile, deep inside an electron’s fractal, causality breaks down and something weird happens at the Planck scale

Nobody knows what lies at the Planck scale, although there are several interesting hypotheses. Perhaps the world is made of superstrings, and the particles we see are merely excitations of those strings. Some models propose a unification of all known forces into a single force. We know that the Planck scale is about fifteen orders of magnitude higher in energy than the LHC, so we’ll never reach the energy and length scales needed to answer these questions completely. However, we’ve scratched the surface with the formulation of the Standard Model, and so far it’s been a frustratingly good model to work with. The interactions we know of are simple, elegant, and very subtle. The most precise tests of the Standard Model come from adding up just a handful of these fractal-like diagrams (at the cost of a huge amount of labour, calculations and experimental time).
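
As a back-of-the-envelope check of that “fifteen orders of magnitude” figure (round numbers only; the Planck energy is about 1.2 x 10^19 GeV and the LHC’s design collision energy is 14 TeV):

```python
# Rough estimate of the gap between LHC energies and the Planck scale.
import math

E_PLANCK_GEV = 1.2e19  # Planck energy, roughly 1.2 x 10^19 GeV
E_LHC_GEV = 1.4e4      # LHC design collision energy, 14 TeV = 1.4 x 10^4 GeV

gap = math.log10(E_PLANCK_GEV / E_LHC_GEV)
print(f"The Planck scale sits roughly {gap:.0f} orders of magnitude above the LHC")
```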

I find it mind-boggling how such simple ideas can result in so much beauty, and yet it’s still somehow flawed. Whatever the reality is, it must be even more beautiful than what I described here, and we’ll probably never know its true nature.

(As a footnote, to please the pedants: To get a positron from an electron you also need to invert the coordinate axes to flip the spin. There are three distinct diagrams that contribute to electron-positron scattering, but the crossed diagram is a small detail that might confuse someone new to these ideas.)

Among the rain
and lights
I saw the figure 5
in gold
on a red
firetruck
moving
tense
unheeded
to gong clangs
siren howls
and wheels rumbling
through the dark city.

William Carlos Williams, “The Great Figure”, 1921

Ever since the Particle Physics Project Prioritization Panel (P5) report was released on Thursday, May 22, I have been thinking very hard about the number five. Five is in the name of the panel, it is embedded in the science that the report describes, and in my opinion, the panel has figured out how to manipulate a fifth dimension. Please give me a chance to explain.

Having had a chance to read the report, let me say that I personally am very impressed by it and very supportive of the conclusions drawn and the recommendations made. The charge to P5 was to develop “an updated strategic plan for the U.S. that can be executed over a ten-year timescale, in the context of a twenty-year global vision for the field.” Perhaps the key phrase here is “can be executed”: this must be a plan that is workable under funding scenarios that are more limited than we might wish. It requires making some hard decisions about priorities, and these priorities must be set by the scientific questions that we are trying to address through the techniques of particle physics.

Using input from the Snowmass workshop studies that engaged a broad swath of the particle-physics community, P5 has done a nice job of distilling the intellectual breadth of our field into a small number of “science drivers”. How many? Well, five of course:

• Use the Higgs boson as a new tool for discovery
• Pursue the physics associated with neutrino mass
• Identify the new physics of dark matter
• Understand cosmic acceleration: dark energy and inflation
• Explore the unknown: new particles, interactions, and physical principles

I would claim that four of the drivers represent imperatives that are driven by recent and mostly unexpected discoveries — exactly how science should work. (The fifth and last listed is really the eternal question of particle physics.) While the discovery of the Higgs boson two years ago was dramatic and received a tremendous amount of publicity, it was not totally unexpected. The Higgs is part of the standard model, and all indirect evidence was pointing to its existence; now we can use it to look for things that actually are unexpected. The observation of the Higgs was not the end of an era, but the start of a new one. Meanwhile, neutrino masses, dark matter and dark energy are all outside our current theories, and they demand explanation that can only come through further experimentation. We now have the technical abilities to do these experiments. These science drivers are asking exciting, fundamental questions about how the universe came to be, what it is made of and how it all interacts, and they are questions that, finally, can be addressed in our time.

But, how to explore these questions in a realistic funding environment? Is it even possible? The answer from P5 is yes, if we are clever about how we do things. I will focus here on the largest projects that the P5 report addresses, the ones that cost at least $200M to construct; the report also discusses many medium-size and small efforts, and recommends hard choices on which we should continue to pursue and which, despite having merit, simply cannot fit into realistic funding scenarios. The three biggest projects are the LHC and its high-luminosity upgrade, which should be completed about ten years from now; a long-baseline neutrino experiment that would create neutrinos at Fermilab and observe them in South Dakota; and a high-energy electron-positron collider, the International Linear Collider (ILC), that could do precision studies of the Higgs boson but is at least ten years away from realization. They are all interesting projects that each address at least two of the science drivers, but is it possible for the U.S. to take a meaningful role in all three? The answer is yes…if you understand how to use the fifth dimension.

The high-luminosity LHC emerged as “the first high-priority large-category project” in the program recommended by P5, and it is to be executed regardless of budget scenario. (See below about the use of the word “first” here.) As an LHC experimenter who writes for the U.S. LHC blog, I am of course a bit biased, but I think this is a good choice. The LHC is an accelerator that we have in hand; there is nothing else that could be built in the next ten years that can do anything like it, and we must fully exploit its potential. It can address three of the science drivers — the Higgs, dark matter, and the unknown. U.S. physicists form the largest national contingent in each of the two big multi-purpose experiments, ATLAS and CMS, and the projects depend on U.S. participation and expertise for their success. While we can never make any guarantees of discovery, I personally think that the LHC gives us as good a chance as anything, and that it will be an exciting environment to work in over the coming years.

P5 handled the long-baseline neutrino experiment by presenting some interesting challenges to the U.S. and global particle physics communities. While there is already a plan to build this project, in the form of a proposed experiment called LBNE, it was considered to be inadequate for the importance of the science. The currently proposed LBNE detector in South Dakota would be too small to collect enough data on a timescale that would give interesting and conclusive results. Even the proponents of LBNE recognized these limitations. So, P5 recommends that the entire project “should be reformulated under the auspices of a new international collaboration, as an internationally coordinated and internationally funded program, with Fermilab as the host,” that will truly meet the scientific demands. It wouldn’t just be a single experiment, but a facility — the Long-Baseline Neutrino Facility (LBNF).

This is a remarkable strategic step. First, it makes the statement that if we are going to do the science, we must do it well. LBNF would be bigger than LBNE, and also much better in terms of its capabilities. It also fully integrates the U.S. program into the international community of particle physics — it would commit the U.S. to hosting a major facility that would draw world-wide collaboration and participation. The U.S. will hold up its end of the efforts to build particle-physics facilities that scientists from all over the world can take part in, just as CERN has successfully done with the LHC. To organize this new facility will take some time, such that the peak costs of building LBNF will be pushed to a time later than the peak costs of upgrading the LHC.

One of the important ideas of special relativity is that the three dimensions of space and one dimension of time are placed on an equal footing. Two events in space-time that have given spatial and time separations in one frame of reference will have different spatial and time separations in a different frame. With LBNF, P5 has postulated a fifth dimension that must be considered: cost. If we were to try to upgrade the LHC and build LBNF at the same time, the cost would be more than we could afford, even with international participation. But by spacing out these two events in time, doing the HL-LHC first and LBNF second, the cost per year of these projects has become smaller; time and cost have been put on a more equal footing. Why didn’t Einstein think of that?
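
(For readers who want the formula behind “equal footing”: for a frame moving with speed v along x, the time and space separations between two events mix under the standard Lorentz transformation while the spacetime interval stays fixed. This is just a reminder of the textbook relation, not something from the P5 report.)

```latex
% Standard Lorentz boost along x, with \gamma = 1/\sqrt{1 - v^2/c^2}:
% \Delta t and \Delta x change between frames, but the interval s^2 does not.
\begin{aligned}
  c\,\Delta t' &= \gamma\left(c\,\Delta t - \tfrac{v}{c}\,\Delta x\right), \\
  \Delta x'    &= \gamma\left(\Delta x - \tfrac{v}{c}\,c\,\Delta t\right), \\
  s^2 &= (c\,\Delta t)^2 - (\Delta x)^2 = (c\,\Delta t')^2 - (\Delta x')^2 .
\end{aligned}
```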

Thus, it is straightforward to set the international LBNF as “the highest-priority large project in its timeframe.” The title of the P5 report is “Building for Discovery”; LBNF will be the major project that the U.S. will build for discoveries in the areas of neutrino masses and exploration of the unknown.

As for the ILC, which Japan has expressed an interest in building, the scientific case for it is strong enough that “the U.S. should engage in modest and appropriate levels of ILC accelerator and detector design” no matter what the funding scenario. How much involvement there will be will depend on the funds available, and on whether the project actually goes forward. We will understand this better within the next few years. If the ILC is built, it will be a complement to the LHC and let us explore the properties of the Higgs and other particles in precise detail. With that, P5 has found a way for the U.S. to participate in all three major projects on the horizon, if we are careful about the timing of the projects and accept reasonable bounds on what we do with each.

These are the headlines from the report, but there is much more to it. The panel emphasizes the importance of maintaining a balance between the funds spent to build new facilities, to operate those facilities, and to do the actual research that leads to scientific discovery at the facilities. In recent years, there have been few building projects in the pipeline, and the fraction of the U.S. particle-physics budget devoted to new projects has languished at around 15%. P5 proposes that this be raised to the 20-25% level and maintained there, so that there will always be a push to create facilities that can address the scientific drivers — building for discovery. The research program is what funds graduate students and postdoctoral researchers, the future leaders of the field, and is where many exciting new physics ideas come from. Research has also been under financial pressure lately, and P5 proposes that it should not receive less than 40% of the budget. In addition, long-standing calls to invest in research and development that could lead to cheaper particle accelerators, more sensitive instrumentation, and revolutionary computational techniques are repeated.

This strategic vision is laid out in the context of three different funding scenarios. The most constrained scenario imagines flat budgets through 2018, and then annual increases of 2%, which is likely below the rate of inflation and thus would represent effectively shrinking budgets. The program described could be carried out, but it would be very challenging. LBNF could still be built, but it would be delayed. Various other projects would be cancelled, reduced or delayed. The research program would lose some of its capabilities. It would make it difficult for the U.S. to be a full international partner in particle physics, one that would be capable of hosting a large project and thus being a global leader in the field. Can we do better than that? Can we instead have a budget that grows at 3% per year, closer to the rate of inflation? The answer is ultimately up to our elected leaders. But I hope that we will be able to convince them, and you, that the scientific opportunities are exciting, and that the broad-based particle-physics community’s response to them is visionary while also being realistic.

Finally, I would like to offer some words on the use of logos. Since the last P5 report, in 2008, the U.S. particle physics program has relied on a logo that represented three “frontiers” of scientific exploration:

The three-frontiers logo

It is a fine way to classify the kinds of experiments and projects that we pursue, but I have to say that the community has chafed a bit under this scheme. These frontiers represent different experimental approaches, but a single physics question can be addressed through multiple approaches. (Only the lack of time has kept me from writing a blog post titled “The tyranny of Venn diagrams.”) Indeed, in his summary presentation about the Energy Frontier for the Snowmass workshop, Chip Brock of Michigan State University suggested a logo that represented the interconnectedness of these approaches:

Chip Brock’s interconnected-rings logo

“Building for Discovery” brings us a new logo, one that represents the five science drivers as five interlocked crescents:

The new P5 logo: five interlocked crescents

I hope that this logo does an even better job of emphasizing the interconnectedness not just of experimental approaches to particle physics, but also of the five (!) scientific questions that will drive research in our field over the next ten to twenty years.

Of course, I’m also sufficiently old that this logo reminded me of something else entirely:

The American Revolution Bicentennial logo

Maybe we can celebrate the P5 report as the start of an American revolution in particle physics? But I must admit that with P5, 5 science drivers and 5 dimensions, I still see the figure 5 in gold:

"I Saw the Figure 5 in Gold", Charles Demuth, 1928

“I Saw the Figure 5 in Gold”, Charles Demuth, 1928

Building for Discovery

Thursday, May 22nd, 2014

After years in the making — from the earliest plans in 2011 for an extended Snowmass workshop that started in October 2012 and culminated in August 2013, to the appointment of a HEPAP subpanel in September, to today — we have now received the report of the Particle Physics Project Prioritization Panel, or P5. As has been discussed elsewhere, this is a major report outlining the strategic plan for United States participation in the global enterprise of particle physics for the next two decades.

As I write this, Steve Ritz of UC Santa Cruz, the chair of the panel, is making his presentation on the report, which has the title “Building for Discovery: Strategic Plan for U.S. Particle Physics in the Global Context.” While at CERN, I am watching remotely (or trying to; the system must be heavily loaded, and it sounds like there are technical difficulties in the meeting room). I am restraining myself from live-blogging the presentation, as I want to take the time to read the report carefully before discussing it. (The report will be available in a couple of hours, but the executive summary is ready now.) Anything this important takes some time for proper digestion! If you are reading this, you are already a fan of particle physics, so I invite you to read it also and see what you think. I hope to discuss the matter further in a post next week.

But in any case, a huge thank-you to the hard-working members of P5 who developed this report!

I know that the majority of the posts I’ve written have focused on physics issues and results, specifically those related to LHCb. I’d like to take this opportunity, however, to focus on the development of the field of High Energy Physics (HEP) and beyond.

As some of you know, in 2013, we witnessed an effectively year-long conversation about the state of our field, called Snowmass. This process is meant to collect scientists in the field, young and old alike, and ask them what the pressing issues for the development of our field are. In essence, it’s a “hey, stop working on your analysis for a second and let’s talk about the big issues” meeting. They came out with a comprehensive list of questions and also a bunch of working papers about the discussions. If you’re interested, go look at the website. The process was separated into “frontiers,” or groups that the US funding agencies put together to divide the field into the groups that they saw fit. I’ll keep my personal views on the “frontiers” language for a different day, and instead share a much more apt interpretation of the frontiers, which emerged from Jonathan Asaadi, of Snowmass Young and Quantum Diaries. He emphasizes that we are coming together to tackle the biggest problems as a team, as opposed to dividing into groups, illustrated as Voltron in his slide below.

Slide from presentation of Jonathan Asaadi at the USLUO (now USLUA) 2013 annual meeting in Madison, Wisconsin. The point here is collaboration between frontiers to solve the biggest problems, rather than division into separate groups.

And that’s just what happened. While I willingly admit that I had zero involvement in this process aside from taking the Snowmass Young survey, I still agree with the conclusions which were reached about what the future of our field should look like. Again, I highly encourage you to go look at the outcome.

Usually, this would be the end of the story, but this year, the recommendations from Snowmass were passed to a group called P5 (Particle Physics Project Prioritization Panel). The point of this panel was to review the findings of Snowmass and come up with a larger plan about how the future of HEP will proceed. The big ideas had effectively been gathered; now the hard questions about which projects can pursue these questions effectively are being asked. This specifically focuses on what the game plan will be for HEP over the next 10-20 years, and identifies the distinct physics reach in a variety of budget situations. Their recommendation will be passed to HEPAP (High Energy Physics Advisory Panel), which reviews the findings, then passes its recommendation to the US government and funding agencies. The P5 findings will be presented to HEPAP on May 22nd, 2014 at 10 AM EST. I invite you to listen to the presentation live here. The preliminary executive report and white paper can be found after 10 AM EST on the 22nd of May on the same site, as I understand.

This is a big deal.

There are two main points here. First, 10-20 years is a long time, and any sort of recommendation about the future of the field over such a long period will be a hard one. P5 has gone through the hard numbers under many different budget scenarios to maximize the science reach that the US is capable of. Looking at the larger political picture, in 2013 the US also entered the Sequester, which cut spending across the board and had wide implications not only for the US but worldwide. This is a testament to the tight budget constraints that we are working in now, and will most certainly face in the future. Even considering such a process as P5 shows that the HEP community recognizes this point, and understands that without well defined goals and tough considerations of how to achieve them, we will endanger the future funding of any project in the US or with US involvement.

Without this process, we will endanger future funding of US HEP.

We can take this one step further with a more concrete example. The majority of HEP work is done through international collaboration, experiment and theory alike. If any member of such a collaboration does not pull their weight, it puts the entire project into jeopardy. Take, for example, the US ATLAS and CMS programs, which have 23% and 33% involvement from the US, respectively, in both analysis and detector R&D. If these projects were cut drastically over the next years, there would have to be a massive rethinking of the strategies for their upgrades, not to mention a possible lack of manpower. Not only would this delay one of the goals outlined by Snowmass, to use the Higgs as a discovery tool, but it would also put into question the role of the US in the future of HEP. This is a simple example, but it is not outside the realm of possibility.

The second point is how to make sure a situation like this does not happen.

I cannot say that communication of the importance of this process has been stellar. A quick Google search yields no mainstream news articles about the process or its impact. In my opinion, this is a travesty, and that’s the reason why I am writing this post. Symmetry Magazine also, just today, came out with an article about the process. Young members of our community who were not necessarily involved in Snowmass, but seem to know about Snowmass, do not really know about P5 or HEPAP. I may be wrong, but I draw this conclusion from a number of conversations I’ve had at CERN with US postdocs and students. Nonetheless, people are quite adamant about making sure that the US does continue to play a role in the future of HEP. This is true across HEP, the funding agencies and the members of Congress. (I can say this as I went on a trip with USLUO, FNAL and SLAC representatives to lobby Congress on behalf of HEP in March of this year, and this is the sentiment I encountered.) So the first step is informing the public about what we’re doing and why.

The stuff we do is really cool! We’re all organized around how to solve the biggest issues facing physics! Getting the word out about this is key.

Go talk to your neighbor!

Go talk to your local physicist!

Go talk to your congressperson!

Just talk about physics! Talk about why it excites you and talk about why it’s interesting to explore! Maybe leave out the CLs plots, though. If you didn’t know, there’s also a whole mess of things that HEP is good for besides colliding particles! See this site for a few.

The final step is understanding the process. The biggest worry I have is what happens after HEPAP reviews the P5 recommendations. We, as a community, have to be willing to endure the pains of this process. Good science will be excluded. However, there are not infinite funds, nor was a guarantee of funding ever given. Recognizing this, while focusing on the big problems at hand and thinking about how to work within the means allowed, is *the point* of the conversation. The better question is, will we emerge from the process unified or split? Will we get behind the Snowmass process and answer the questions posed to us, or fight about how to answer them? I certainly hope the answer is that we will unify, as we unified for Snowmass.

An allegorical example comes from a slide from Nima Arkani-Hamed’s presentation at Pheno2014, shown below.

One slide from Nima Arkani-Hamed’s presentation at Pheno2014

 

The take home point is this: If we went through the exercise of Snowmass, and cannot pull our efforts together to the wishes of the community, are we going to survive? I would prefer to ask a different question: Will we not, as a community, take the opportunity to answer the biggest questions facing physics today?

We’ll see on the 22nd and beyond.

 

*********************************************

Update: May 27, 2014

*********************************************

As posted in the comments, the full report can be found here, the presentation given by Steve Ritz, chair of P5, can be found here, and the full P5 report can be found here. Additionally, Symmetry Magazine has a very nice piece on the report itself. As they state in the update at the bottom of the page, HEPAP voted to accept the report.

Dan Yocum, left, formerly of Fermilab, shakes hands with Google’s Brian Fitzpatrick in front of a quadrupole magnet’s new home in Google’s Chicago offices. Photo: Troy Dawson

Fermilab does a good job of recycling — from the ubiquitous blue trash cans to electromagnets to — in my case — employees. I myself left Fermilab in 1999 only to recycle back to the Experimental Astrophysics Group in 2000 to work on the Sloan Digital Sky Survey before leaving again in 2012.

When news of the Tevatron’s decommissioning reached Brian Fitzpatrick, head of software engineering in the Chicago offices of Google, he sent me a short email lamenting the Tevatron closure. He included a request for a souvenir to display in Google’s Chicago offices. Brian and I met when he came to Fermilab to give a computing seminar talk on MapReduce and BigTable several years ago. We have remained in touch ever since, so I gladly accepted the challenge.

My next stop was the office of Accelerator Division head Roger Dixon. We discussed the possibility of acquiring something from the Tevatron for Google and conferred briefly with scientist Todd Johnson. We settled on a quadrupole steering magnet.

But getting a magnet out of the Tevatron was out of the question since the magnet would be slightly radioactive. As a rule, Fermilab’s safety section and the Department of Energy never let even slightly activated material leave the site to be recycled. But hope was not lost, and Roger suggested I speak with Dave Harding, then deputy head of the Technical Division, to see if there were any spare magnets in storage. Off I went to find Dave.

Dave determined that there were indeed several magnets that were clean and in storage because they had been determined to be flawed during post-manufacture testing. One man’s trash is another man’s treasure. I had hit pay dirt!

Roger had also warned that I would have to walk through a labyrinth of people in the Directorate, Business Services, Environmental Health and Safety and DOE before the magnet could be released. Over several months I proceeded to meet and speak with many folks. I list them here so they know how much I appreciate them: Gerald Annala, Dave Augustine, Jose Cardona, Debra Cobb, Shannon Fugman, Jack Kelly, Scott McCormick, Dean Still and John Zweibohmer.

After many emails of clarification, justification and negotiation, everything was signed off and the plan was approved.

Success! Or so I thought. I was already starting to feel a bit like Odysseus trying to get home after the Trojan War when I spoke with Jack Kelly in the Property Department: We had one more bit of stormy water to navigate. Luckily, Jack was an able guide, shepherding the paperwork and the magnet through not one but three online auctions for the DOE labs, the universities and, finally, eBay. He put the big shiny blue “Buy it Now” button on the final eBay page, where Google’s Brian Fitzpatrick clicked and paid $150 for a piece of Tevatron history. How did they come up with the price? That figure was based on the magnet’s estimated scrap metal value. But instead of being turned into scrap, it now proudly resides in Google’s Chicago offices.

On September 28, 2012, after 349 days of navigating a quagmire of paperwork, we had recycled a Tevatron quadrupole magnet and found a new home for it.

The magnet is the centerpiece amongst a myriad of historical scientific and computing items at the Google office. There’s even a Sloan Digital Sky Survey spectroscopic plug plate to keep it company.

Former Fermilab employee Paul Rossman, who works at Google, says, “It’s nice to pass an awesome piece of technology like the quadrupole magnet on the way to my desk. It’s almost like I got to take a little something with me from Fermilab.” Nice, indeed.

I’d like to express my sincerest appreciation to all the people named in this article. You are some of the best of Fermilab. Thank you.

Dan Yocum

It’s sort of a recurring theme for me, but a recent Washington Post article on the BICEP2 result, among others, has me wanting to repeat the idea, and keep it short and sweet:

Screenshot of the Washington Post article on the BICEP2 result

The issue at hand is whether BICEP2 has really observed the remnants of cosmic inflation, or if in fact they have misinterpreted their results or made a mistake in the corrections to their measurement. It’s frustrating that the normal process of the scientific method – that is, other experts reviewing a result, trying to reproduce it, and looking for holes – is being dramatized as “backlash.” But let’s not worry today about whether we can ever stop the “science news cycle” from being over-sensationalized, because we probably can’t. You and I can still remember a few simple things about science:

1. If scientists think they’ve found something, they should publish it. They should say what they think it means, even if they might be wrong.
2. Other researchers try to replicate the result, or find flaws with it. If flaws are found or it can’t be reproduced, the original scientists have to go back and figure out what’s going on. If other researchers find the same thing, it’s probably right. If lots of other researchers find the same thing, we can agree it’s almost certainly right and move on to the next level of questions.
3. Science makes progress when you say what you know and the certainty with which you know it. If everything you say is always right, you might be being too timid and delaying the process of other researchers building on your results!

But I think Big Bird says it best of all:

I attended the Australian Accelerator School in January of this year.  Better late than never, I recount some of my experiences below.

It’s Day 1 of the Australian Accelerator School and Melbourne is the hottest city on Earth with temperatures soaring above 40°C – which is a bit much when one has just arrived from a soggy UK winter. Fortunately, the Australian Synchrotron is housed in a beautifully air-conditioned building located in the suburbs of Melbourne, just next door to Monash University.

The Australian Synchrotron, which opened in 2007, is the largest stand-alone piece of scientific infrastructure in the southern hemisphere and provides a source of highly intense light which is used for a wide range of research purposes. It is situated on a modern site with the circular synchrotron at its focus, surrounded by several other buildings.

Beampipe: getting acquainted with the Australian Synchrotron.

The School has gathered 23 students, mainly from Australia and New Zealand, and an impressive panel of experts. Phil Burrows of Oxford University is the keynote lecturer and will provide a step-by-step guide on the physics and maths underpinning particle accelerators. Ralph Steinhagen of CERN is armed with over 700 slides on the technical aspects of accelerator operation. Toshi Mitsuhashi of KEK, aka the “Master”, will share his vast experience on the optics of accelerators, while Jeff Corbett of SLAC will lead laboratory exercises. And not forgetting Roger Rassool of Melbourne University, who will be present to add his irrepressible energy and enthusiasm to proceedings.

Mornings are to be spent in lectures and afternoons in the lab. In labs we will have the chance to develop practical skills, such as soldering, using oscilloscopes, programming Arduino chips, and modelling electronic circuits and particle accelerators. There are also various international conferences running through the fortnight, some sessions of which we will be attending. The programme will conclude with a group project, to be presented to the experts on the final day of the school. And there’s the odd social event to attend too.

In true Aussie fashion we are welcomed with a barbecue – although we all feel more cooked than sausages after a few minutes outdoors. And we’ll be kept sweating over the next 12 days…

Stay tuned for wine + DIY particle physics…

Day 1 labs: getting familiar with oscilloscopes.

Saving the Feynman van

Monday, May 12th, 2014

A version of this article appeared in symmetry on May 8, 2014.

A team of Richard Feynman’s friends and fans banded together to restore the Nobel laureate’s most famous vehicle. Image courtesy of Seamus Blackley

“The game I play is a very interesting one,” says Nobel Laureate Richard Feynman in a low-resolution video posted to YouTube. “It’s imagination in a tight straitjacket.”

Feynman is describing his job as a theoretical physicist: to lay out what humanity knows about how the world works, and to search the spaces in between for what we might have missed.

The video shows more than Feynman’s way with words. It shows his approachability. One of the greatest minds that particle physics has ever known stands barefoot, lecturing in a distinct Queens, New York, accent for an audience lounging casually on the floor at the new-age Esalen Institute in Big Sur, California.

In a way, Feynman remains approachable to this day for all of the snippets of his personality left behind in books, letters and recordings of formal and informal lectures and interviews.

Recently, a more concrete bit of Feynman history came out of retirement: A small team has brought back to life the so-called “Feynman van.”

One camper, special order

In 1975, Feynman and his wife, Gweneth Howarth, bought a Dodge Tradesman Maxivan and had it painted with Feynman diagrams, symbols Feynman had invented to express complicated particle interactions through simple lines and loops.

It might seem arrogant to drive around in a van covered in reminders of one’s own intellectual prowess. But Feynman’s daughter, Michelle, thinks the decorations represented something else: a love of physics.

“My dad was pretty low-key about himself,” she says. “I think decorating the van was more to celebrate the diagrams than to celebrate himself.”

Michelle’s parents put a lot of thought into the design of the vehicle, which they primarily used for camping, Michelle says. It was outfitted with a small hammock for Michelle to use in case the family of four needed to sleep inside during inclement weather.

“I don’t think that they had ever done anything like that with a car purchase before,” Michelle says. “It was always: Go to the dealer and find something—it doesn’t really matter what color it is—and you’ll have it for a million years.”

The Feynman family took the van to Canada, Mexico and dozens of US campsites in between, often traveling with a couple of other families, often leaving the paved road for the unknown.

Michelle began driving the van to school after she turned 16.

“I thought it was kind of embarrassing,” she says. “But at a certain point I kind of got over it. If you want to drive at that age, you’ll drive anything.”

After Michelle’s first couple of years in college, one of her father’s friends, film producer Ralph Leighton—Feynman’s drumming partner in another famous fuzzy YouTube clip—bought the van and put it into storage, where it began to rust and fade.

Saving the Feynman van

When video game designer Seamus Blackley, known as the father of the Xbox, got ahold of the van in 2012, “it was just about too late,” Blackley says.

Blackley has a history with particle physics. He was in his early 20s, working on his PhD thesis at Tufts University and Fermi National Accelerator Laboratory, when he saw his plans for the future disintegrate with the defunding of the planned Superconducting Super Collider.

“I found out on CNN,” he says.

He changed course and wound up taking a job working on some of the first computer games with 3D graphics. He designed the physics of the game environments, “keeping things from going into other things.” He has helped shape the world of video games in a variety of different roles since.

Game design takes the same type of thinking Feynman described in his talk at the Esalen Institute, Blackley says. A designer must creatively solve problems without breaking the rules that keep the environment realistic—“and then you have to have a lot of intuition about how to make it fun.”

In 2005, Blackley moved to Pasadena, California, just miles from where Leighton was keeping Feynman’s van. Oblivious to his proximity to the famous camper, Blackley nonetheless began to make a hobby of restoring classic Italian cars.

It was fellow Pasadena resident Michael Shermer, founder of the Skeptics Society, who told him about the van in 2012. Blackley knew right away that he had to help save it.

“The universe is telling me I’ve gotta do this,” he says.

With the help of Leighton and Shermer, along with a donation from Feynman fan and world-class designer Edward Tufte, Blackley registered the van as a historic vehicle and brought it to his preferred restoration specialists in Los Angeles.

The van’s Feynman diagrams, which were painted poorly in the first place, turned out to be too degraded for restoration. So a pinstriper re-painted them, taking care to replicate the quality of the original work.

“It looks like this crappy job again,” Blackley says with a smile. “You see the brush marks and everything.”

After the restoration, Blackley prepared to ship the van across the country for a Feynman-themed exhibit by Tufte, held at Fermilab.

Keeping the Feynman spirit alive

The test of whether the specialists had stayed true to the original came when Blackley invited Michelle to come see the van before its next big trip. She came with her 11- and 13-year-old children.

It didn’t look brand new, Michelle says, but it was as if it had been rewound 30 years, back to the days when her father was still in the driver’s seat. She told Blackley and her kids about the times her father slept on the floor below her hammock.

“As a father now, you appreciate what that means,” says Blackley, who has an 11-year-old son.

A camping van incongruously covered in physics notations seems to be a fitting symbol for a man who couldn’t seem to help thinking about particle physics, Michelle says.

“I think it was impossible for him to turn it off,” she says. “I remember in the car there was a Kleenex box, and the back of it had been used for equations. Every little piece of paper and every waking moment was fair game.”

Richard Feynman died of cancer in 1988 at the age of 69. But projects like the van restoration keep his memory alive, Michelle says.

“He would’ve been an amazing grandfather, and he never had the opportunity,” she says. “So I’m thrilled that there are so many people around who want to share his spirit and his life so my kids can get a sense of who he is.”

Kathryn Jepsen

This article appeared in symmetry on May 1, 2014.

Scientists stay inspired in their sometimes tedious task of inspecting photographs taken in the Dark Energy Survey’s ambitious cataloging of one-eighth of the sky. Image courtesy of Dark Energy Survey

Physicists working on the Dark Energy Survey can expect to pull many an all-nighter. The international collaboration of more than 120 scientists aims to take about 100,000 photographs peering deep into the night sky. Scientists must personally review many of these photos to make sure the experiment is working well, and they’ve come up with ways to stay motivated while doing so.

DES scientists collected almost 14,000 photographs from August 2013 to February 2014, in the first of five seasons they plan to operate their sophisticated Dark Energy Camera. Even for those of us who aren’t trying to take the most detailed survey of the universe, it might not come as a surprise that complications can occur during operation. For example, the telescope may not always sync up with the natural movement of the night sky, and passing airplanes can create trails in the images. Software bugs can also cause issues.

Two of the DES researchers, Erin Sheldon of Brookhaven National Laboratory and Peter Melchior of The Ohio State University, created the DES Exposure Checker, an online gallery of images from the telescope. Team members use the photo repository as a way to spot imperfections and other issues with the images so they can fix problems as quickly as possible.

“These problems are easier for an actual person to see rather than some automated program,” Sheldon says. “And then we can create an inventory to help diagnose troubles that may occur with future images.”

When reviewing photos, DES scientists flag the ones that show symptoms of different problems, such as long streaks from satellites; unwanted reflections, called ghosts; or marks left by cosmic rays. But the process can get overwhelming with thousands of photos to look over. So the DES researchers decided to add a positive classification to the mix—an “Awesome!” category. When someone sees an incredible photo, they can mark it as such in the database.
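
As a rough illustration of the kind of bookkeeping such a tool needs (the category names come from the article; the code itself is a made-up sketch, not the real DES Exposure Checker), the reviewers’ flags can be tallied into the inventory Sheldon describes:

```python
# Hypothetical sketch of tallying reviewers' flags into an inventory of image problems.
# Category names follow the article; everything else is invented for illustration.
from collections import Counter

FLAGS = {"satellite streak", "ghost", "cosmic ray", "awesome"}

reports = [  # (exposure id, flag chosen by a reviewer) -- made-up example data
    ("exposure_0001", "ghost"),
    ("exposure_0002", "cosmic ray"),
    ("exposure_0003", "ghost"),
    ("exposure_0004", "awesome"),
]

inventory = Counter(flag for _, flag in reports if flag in FLAGS)
for flag, count in inventory.most_common():
    print(f"{flag}: {count}")
```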

Sheldon points out one of his favorite images, one that captured a passing comet. “It was just so serendipitous. We couldn’t find that if we pointed the telescope in the same place at any other time,” he says.

Steve Kent, Fermilab scientist and head of the experimental astrophysics group, says one of his favorite images from the survey shows a dying star. In the color photo, a bright blue oxygen haze surrounds the hot remnant of what was formerly a giant red star.

A second way to encourage team members classifying images is the leader board posted on the DES Exposure Checker website, honoring individuals who have categorized the most photos. Researchers compete to see their names at the top.

But more than friendly competition drives the DES team to categorize images. They’re also seeking answers to questions about the past and future of our universe such as: Has the density of dark energy changed over time? Why is the expansion of the universe speeding up?

“For me, it’s a mystery,” Sheldon says. “I have this question, and I have to find out the answer.”

Amanda Solliday

The empirical sciences, like physics and chemistry, are partially invented and partially discovered. Although the empirical observations are surely discovered, the models that describe them are invented through human ingenuity. But what about mathematics which is based on pure thought? Are its results invented or discovered?

Not surprisingly there are different views on this topic. Some people maintain that mathematical results are invented, others claim that they are discovered. Is there a universe of mathematical results just waiting to be discovered, or are mathematical results invented by the mathematician, destined to disappear, like a fairy tale, when mathematicians vanish in the heat death of the universe, when all the available energy is used up? Invented or discovered? Perhaps some results are invented and others discovered. There is, however, a third view, namely that mathematics is a game played by manipulating symbols according to well defined rules. At some level this is probably true. All those who prefer Monopoly®, put up your hands!

What are the foundations of logic? Bertrand Russell (1872 – 1970) and Alfred Whitehead (1861 – 1947) tried to derive mathematics from logic. The result was the book Principia Mathematica (1910), a real tour de force. Their derivation still required axioms or assumptions beyond pure logic, and it has been questioned on other grounds. An alternative to this approach is set theory, in particular based on the Zermelo–Fraenkel axioms, with the axiom of choice. And an alternative to that is category theory. Whatever all that is. It is certainly very technical. The quest for foundations of mathematics and even logic, like the quest for the Holy Grail, is probably never ending. But the question remains: Were logic and set (category) theory, themselves, invented or discovered?

Let us look at things more simply. Historically, mathematics probably arose empirically: two stones plus two stones equals one stone plus three stones. Then it was realized that this holds for any tokens: stones, bushels of wheat or sheep. The generalization from specific examples to the generic 2+2=1+3 could be considered an early example of the scientific method: generalizing from specific examples to a general rule. But one plus one does not always equal two. Consider a litre of liquid plus a litre of liquid. If one is water and the other alcohol, the result is less than two litres if they are put in the same container. Adding one litre of water to one litre of concentrated sulfuric acid is even more interesting[1].

Multiplication is also easy to demonstrate with counters. Division is a bit more problematic, but if we think of dividing a bushel of wheat into equal parts the idea of fractions is quite natural. Dividing a sheep is messier. Subtraction, however, leads to a problem: negative numbers. Naively, we cannot have fewer than zero stones, but subtraction can lead to that idea. So were negative numbers invented or discovered? We can finesse the problem of negative numbers by saying that negative numbers correspond to what we owe. If I have minus three stones it means I owe someone three stones.

Thus, thinking of stones and bushels of wheat, we can understand the rational numbers, numbers written as the ratio of two whole numbers. The Pythagoreans in ancient Greece would have claimed that is all there is. Then came the thorny problem of the square root of two. This arises in connection with the Pythagorean theorem. Some poor sod showed that the square root of two could not be written as the ratio of two whole numbers and was thus irrational. He was thrown into the sea for his efforts[2]. The square root of two does not exist in the universe of numbers discovered using stones, sheep, and bushels of wheat. Is it possible to have the square root of two stones? Was it invented to make the Pythagorean theorem work, or was it discovered?

The example of the square root of minus one is even more perplexing. We can think of the square root of two as an extra number inserted between 1.414 and 1.415. But there is no place to insert the square root of minus one. So again the question arises: Was it invented or discovered? Perhaps it is best to say it was assumed: Assume the square root of minus one can be treated like a normal number[3] and see what happens. A lot of good things, as it turned out, but does that mean it exists in any real sense? Perhaps it is just a useful fiction.
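
Just to show what “treat it like a normal number and see what happens” looks like in practice, here is a small illustration (my addition, using Python’s built-in complex numbers, where 1j plays the role of the square root of minus one):

```python
# Treating the square root of minus one like a normal number and seeing what happens.
import cmath

i = 1j                          # Python's notation for the square root of minus one
print(i * i)                    # (-1+0j): squaring it really does give -1
print(i + i)                    # 2j: footnote [3]'s rule, sqrt(-1) + sqrt(-1) = 2*sqrt(-1)
print(cmath.sqrt(-1))           # 1j: the standard library agrees
print(cmath.exp(i * cmath.pi))  # approximately -1: Euler's identity, one of the
                                # "good things" that fall out of the assumption
```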

Nevertheless, mathematics has developed, discovering or inventing new results. As a phenomenologist, I would say we do not have enough information to assert if mathematics was invented or discovered. If we could contact extra-terrestrial mathematicians, it would be interesting to see if their mathematics was different or the same as ours. If it was different, that would be a strong indication that mathematics is invented. Or less black and white, the difference between terrestrial and extra-terrestrial mathematics would tell us the extent to which mathematics is discovered or invented.

In any event mathematics is a very interesting game, whether based on set theory or category theory, whether discovered or invented, and certainly more profitable than Monopoly®[4] in the long run.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Do not try this at home.

[2] At least that is the legend.

[3] √(-1)+√(-1) = 2 √(-1) , etc.

[4] On the other hand, oligarchy, as any large multinational will tell you, is very profitable.
