Posts Tagged ‘physics’

Nobel Week 2015

Monday, October 5th, 2015

So, once again, the Nobel week is upon us. And one of the topics of conversation for the “water cooler chat” in physics departments around the world is speculation about who (besides the infamous Hungarian “physicist” — sorry for the insider joke, I can elaborate on that if asked) will get the Nobel Prize in physics this year. What is your prediction?

With the invention of various metrics for “measuring scientific performance,” one can make some educated guesses — and even put such predictions on an industrial footing — see the Thomson Reuters predictions based on citation counts (they did get the Englert-Higgs prize right, but are almost always off). Or you can even try your luck with on-line betting (sorry, no link here — I don’t encourage this). So there are plenty of ways to get interested.

My predictions for 2015: Vera Rubin for Dark Matter or Deborah Jin for fermionic condensates. But you must remember that my record is no better than that of Thomson Reuters.


Protons and neutrons (alias, the nucleons) constitute the building blocks of matter, accounting for almost all the mass of our world. Even if we are still far from understanding their physical inner structure, many efforts have been made to deepen our knowledge about them.

Over the past few years, thanks to a fruitful synergy of theoretical and experimental progress, we have started to open up the study of new multi-dimensional images of the structure of the proton, investigating the behavior of its fundamental constituents, the quarks and gluons.

When we look into nucleons with extremely high resolution, we are in the regime of perturbative QCD (in other words, a regime where we can really work out mathematical calculations) and quarks and gluons appear almost free. With the due caveats, we can compare the situation to observing water at extreme magnifications, and seeing quasi-free water molecules. As we reduce the magnification, we realize that the molecules clump together in heavier, composite droplets. Eventually, at low magnification they form a single object, like the proton.

Pursuing the analogy, when we are looking at a proton at rest (not smashed inside a collider, for example) it is as if we were unable to describe water starting from the dynamics of molecules. This is because confinement, the reason for quarks and gluons being inescapably bound inside a proton, is left without any rigorous mathematical justification. Confinement is the most crucial characteristic of the theory and represents one of the hardest physics problems of today.

What we can do is simply describe this jam of quarks and gluons giving rise to a proton through mathematical objects specifically introduced in order to “parametrize” our ignorance about its structure: these are the so-called parton distribution functions (PDFs), which encode the probability of finding quarks and gluons within a proton.

The knowledge of the multi-dimensional structure of protons allows the analysis of properties otherwise inaccessible. The situation may be compared to diagnostic studies: electrocardiography, for example, gives us one-dimensional information about the heart’s activity. It is of fundamental importance, but it does not give detailed information about the multidimensional inner structure. More important for this purpose are multi-dimensional tomographies of heart activity (MRI, CT and others). The enormous advantages of medical diagnostic imaging literally revolutionized medicine and surgery. In a similar way, the latest “multi-dimensional” pictures of the nucleon obtained with QCD phenomenology can improve the current status of hadronic physics and help us better understand particle physics in general.

Although one-dimensional (collinear) parton distribution functions are extremely useful for studying any process involving hadrons (including the proton-proton collisions taking place at the LHC), from the point of view of nucleon tomography they are rather limited, because they describe the distribution of partons in a single dimension.
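For reference (a standard QCD statement, not something spelled out in the post), collinear PDFs are constrained by sum rules; the momentum sum rule, for instance, says that the longitudinal momentum fractions carried by all partons must add up to the full proton momentum:

\( \sum_a \int_0^1 dx \, x \, f_a(x, \mu^2) = 1 , \)

where \(f_a(x,\mu^2)\) is the probability density for finding a parton of flavor \(a\) carrying a fraction \(x\) of the proton momentum when probed at the resolution scale \(\mu\).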

More informative distributions are the so-called transverse-momentum-dependent distributions (TMDs). They represent pictures of three-dimensional probabilities in momentum space. The distributions change depending on the energy scale at which they are probed (in a way that is calculable using evolution equations from perturbative QCD) and on the value of the longitudinal fractional momentum.


Partons (quarks and gluons) are like fish confined inside a fishbowl (the proton). Each parton has its own collinear and transverse velocity, indicated by black and colored arrows respectively. Different colors indicate different flavors of quarks, and external excitations (like photons) can extract partons from inside the proton. (credit: A. Signori)

There are many nontrivial questions concerning TMDs that do not have an answer yet, like their most truthful mathematical representation. At present, we know that experimental data from proton-proton and electron-proton collisions point towards Gaussian shapes (if the spin of quarks is neglected), but other forms could do the job as well.
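To make the statement about Gaussian shapes a bit more concrete, here is a minimal sketch of the kind of ansatz commonly used in the literature (my own paraphrase of a common parametrization, not the specific form used in the author’s analysis):

\( f_1^a(x, k_T^2) \simeq f_1^a(x) \, \frac{e^{-k_T^2/\langle k_T^2 \rangle_a}}{\pi \langle k_T^2 \rangle_a} , \)

where \(f_1^a(x)\) is the collinear PDF for flavor \(a\), \(k_T\) is the transverse momentum of the parton, and the width \(\langle k_T^2 \rangle_a\) sets how fast partons of that flavor typically move in the transverse plane. Allowing the width to depend on the flavor \(a\) is exactly the question raised next.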

An important question concerns the flavor dependence of transverse-momentum-dependent distributions: are up quarks moving in the nucleon with greater velocity than the down ones, or vice versa? What about sea quarks? Are they faster than the others? Part of my research activity is devoted to the investigation of this topic, which could be quite relevant from both the theoretical and the experimental point of view. After a lot of struggling with data analysis, we now know that sea quarks are likely to be faster than up quarks, which are in turn faster than down ones.

The statistical analysis of the huge amount of data collected at hadron colliders like the Tevatron and the LHC strongly relies on the detailed knowledge of parton distribution functions, both in 1D and 3D (the TMDs!). Until now, data analyses have been carried out assuming that all quarks have the same velocity, but we now know that this is not the case! This means that it will be important to refine our knowledge of quark (and, in the future, gluon) velocities, in order to improve the accuracy and reliability of data analysis. That’s what a PhD student can do during his/her amazing time in scientific research!


Geometry and interactions

Tuesday, November 25th, 2014

Or, how do we mathematically describe the interaction of particles?

In my previous post, I addressed some questions concerning the nature of the wavefunction, the most truthful mathematical representation of a particle. Now let us make this simple idea more complete, getting closer to the deep mathematical structure of particle physics. This post is a bit more “mathematical” than the last, and will likely make the most sense to those who have taken a calculus course. But if you bear with me, you may also come to discover that this makes particle interactions even more attractive!

The field theory approach considers wavefunctions as fields. In the same way as the temperature field \(T(x,t)\) gives the value of the temperature in a room at space \(x\) and time \(t\), the wavefunction \(\phi (x,t)\) quantifies the probability of presence of a particle at space point \(x\) and time \(t\).
Cool! But if this sounds too abstract to you, then you should remember what Max Planck said concerning the rise of quantum physics: “The increasing distance between the image of the physical world and our common-sense perception of it simply indicates that we are gradually getting closer to reality”.

Almost all current studies in particle physics focus on interactions and decays of particles. How does the concept of interaction fit into the mathematical scheme?

The mother of all the properties of particles is called the Lagrangian function. Through this object a lot of properties of the theory can be computed. Here let’s consider the Lagrangian function for a complex scalar field without mass (one of the simplest available), representing particles with electric charge and no spin:

\(L(x) = \partial_\mu \phi(x)^* \partial^\mu \phi(x) \).

Mmm… Is it just a bunch of derivatives of fields? Not really. What do we mean when we read \(\phi(x)\)? Mathematically, we are considering \(\phi\) as a vector living in a vector space “attached” to the space-time point \(x\). For the nerds of geometry, we are dealing with fiber bundles, structures that can be represented pictorially in this way:

[Figure: pictorial representation of a fiber bundle]

The important consequence is that, if \(x\) and \(y\) are two different space-time points, a field \(\phi(x)\) lives in a different vector space (fiber) with respect to \(\phi(y)\)! For this reason, we are not allowed to perform operations with them, like taking their sum or difference (it’s like comparing a pear with an apple… either sum two apples or two pears, please). This feature is highly non-trivial, because it changes the way we need to think about derivatives.

In the \(L\) function we have terms containing derivatives of the field \(\phi(x)\). Doing this, we are actually taking the difference of the value of the field at two different space-time points. But … we just outlined that we are not allowed to do it! How can we solve this issue?

If we want to compare fields pertaining to the same vector space, we need to slightly modify the notion of derivative introducing the covariant derivative \(D\):

\( D_\mu = \partial_\mu + ig A_\mu(x) \).

Here, on top of the derivative \(\partial\), there is the action of the “connection” \(A(x)\), a structure which takes care of “moving” all the fields into the same vector space, and eventually allows us to compare apples with apples and pears with pears.
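As a side remark (standard textbook material, not spelled out in the original post), the reason this works can be stated compactly: under a local phase rotation of the field, the connection shifts in just the right way for the covariant derivative to transform like the field itself,

\( \phi(x) \to e^{i\alpha(x)} \phi(x), \qquad A_\mu(x) \to A_\mu(x) - \frac{1}{g}\,\partial_\mu \alpha(x), \qquad D_\mu \phi(x) \to e^{i\alpha(x)} D_\mu \phi(x), \)

so expressions built out of \(D_\mu \phi\) compare fields consistently, fiber by fiber.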
So, a better way to write down the Lagrangian function is:

\(L(x) = D_\mu \phi(x)^* D^\mu \phi(x) \).

If we expand \(D\) in terms of the derivative and the connection, \(L\) reads:

\(L(x) = \partial_\mu \phi(x)^* \partial^\mu \phi(x) + ig A_\mu (\partial^\mu \phi^* \phi - \phi^* \partial^\mu \phi) + g^2 A^2 \phi^* \phi \).

Do you recognize the role of these three terms? The first one represents the propagation of the field \(\phi\). The last two are responsible for the interactions between the fields \(\phi, \phi^*\) and the \(A\) field, referred to as the “photon” in this context.

[Figure: the interaction terms between the fields \(\phi\), \(\phi^*\) and the photon field \(A\)]

This slightly hand-waving argument involving fields and space-time is a simple handle to understand how the interactions among particles emerge as a geometric feature of the theory.

If we consider more sophisticated fields with spin and color charges, the argument doesn’t change. We need to consider a more refined “connection” \(A\), and we could see the physical interactions among quarks and gluons (namely QCD, Quantum Chromo Dynamics) emerging just from the mathematics.

Probably my undergraduate geometry professor would call this explanation “Spaghetti Mathematics,” but I think it can give you a flavor of the mathematical subtleties involved in the theory of particle physics.


This blog is all about particle physics and particle physicists. We can all agree, I suppose, on the notion of the particle physicist, right? There are even plenty of nice pictures up here! But do we really know, or are we even aware of, what a particle is? This fundamental question tantalized me from the very beginning of my studies, and before addressing more involved topics I think it is worth spending some time on this concept. Through the years I have probably changed my opinion several times, according to the philosophy underlying the topic I was investigating. Moreover, there is probably not a single answer to this question.

  1. The Standard Model: from geometry to detectors

The human mind conceived the Standard Model of Particle Physics to give a shape on the blackboard to the basic ingredients of particle physics: it is a field theory with quantization rules, namely a quantum field theory, and its roots go deep down into differential geometry.
But we know that “particles” like the Higgs boson have been discovered through complex detectors, relying on sophisticated electronic systems, tons of Monte Carlo simulations and data analysis. Quite far away from geometry, isn’t it?
So the question is: how do we fill this gap between theory and experiment? What do theoreticians think about, and what do experimentalists see through their detectors? Furthermore, does a particle’s essence change from its creation to its detection?

  2. Essence and representation: the wavefunction

 Let’s start with simple objects, like an electron. Can we imagine it as a tiny thing floating here and there? Mmm. Quantum mechanics already taught us that it is something more: it does not rotate around an atomic nucleus like the Earth around the Sun (see, e.g., Bohr’s model). The electron is more like a delocalized “presence” around the nucleus quantified by its “wavefunction”, a mathematical function that gives the probability of finding the electron at a certain place and time.
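In symbols (a standard statement of quantum mechanics, added here for concreteness rather than taken from the post): if \(\psi(x,t)\) is the electron’s wavefunction, then

\( P(x,t) = |\psi(x,t)|^2, \qquad \int |\psi(x,t)|^2 \, dx = 1 , \)

i.e. the squared modulus gives the probability density of finding the electron at position \(x\) at time \(t\), and the total probability over all space is one.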
Let’s think about it: I just wrote that the electron is not a localized entity but it is spread in space and time through its wavefunction. Fine, but I still did not say what an electron is.

I have had long and intensive discussions about this question. In particular I remember one with my housemate (another theoretical physicist) that was about to end badly, with frying pans being waved at each other. It’s still not clear to me whether we agreed or not, but we still live together, at least.

Back to the electron: we could agree on considering its essence to be its abstract definition, namely being one of the leptons in the Standard Model. But the impossibility of directly accessing it forces me to identify it with its most truthful representation, namely the wavefunction. I know its essence, but I cannot directly (i.e. with my senses) experience it. My human powers stop at the physical manifestation of its mathematical representation: I cannot go further.
René Magritte represented the difference between the representation of an object and the object itself in a famous painting, “The Treachery of Images”:

[Image: René Magritte, “The Treachery of Images”]

“Ceci n’est pas une pipe”, it says, namely “This is not a pipe”. He is right: the picture is only its representation. The pipe is defined as “A device for smoking, consisting of a tube of wood, clay, or other material with a small bowl at one end” and we can directly experience it. So its representation is not the pipe itself.

As I explained, this is somehow different in the case of the electron or other particles, where experience stops at the representation. So, according to my “humanity”, the electron is its wavefunction. But, to be consistent with what I just claimed: can we directly feel its wavefunction? Yes, we can. For example, we can see its trace in a cloud chamber, or in more elaborate detectors. Moreover, electricity and magnetism are (partly) manifestations of electron clouds in matter, and we experience those in everyday life.


You may wonder why I go through all these mental wanderings: just write down your formulas, calculate and be happy with (hopefully!) discoveries.

I do it because philosophy matters. And it is nice. And now that we are a bit more aware of the essence of the things we are investigating, we can move a step forward and start addressing Quantum Chromo Dynamics (QCD), from its basic foundations to the latest results released by the community. I hope I have sufficiently stimulated your curiosity to follow me during the next steps!

Again, I want to stress that this is my own perspective, and maybe someone else would answer these questions in a different way. For example, what do you think?


I feel it mine

Tuesday, October 21st, 2014

On Saturday, 4 October, Nikhef – the Dutch National Institute for Subatomic Physics, where I spend long days and much effort – opened its doors, labs and facilities to the public. In addition to Nikhef, all the other institutes located in the so-called “Science Park” – the scientific district located in the east part of Amsterdam – welcomed people all day long.

It’s the second “Open Day” that I’ve attended, both as a guest and as a guide. Together with my fellow theoreticians, I provided answers and explanations to people’s questions and curiosities, standing in the “Big Bang Theory Corner” of the main hall. Each department in Nikhef arranged its own stand and activities, and there were plenty of things to be amazed at, enough to fill the entire day.

The research institutes in Science Park (and outside it) offer a good overview of the concept of research, looking for what is beyond the current status of knowledge. “Verder kijken”, or looking further, is the motto of Vrije Universiteit Amsterdam, my Dutch alma mater.

I deeply like this attitude of research, the willingness to investigate what’s around the corner. As they like to define themselves, Dutch people are “future oriented”: this is manifest in several things, from the way they read the clock (“half past seven” becomes “half before eight” in Dutch) to some peculiarities of the city itself, like the presence of a lot of cultural and research institutes.

This abundance of institutes, museums, exhibitions, public libraries, music festivals, art spaces, and independent cinemas makes me feel that this city is a cultural place. People interact with culture in its many manifestations and are connected to it in a more dynamic way than if they were merely surrounded by historical and artistic heritage.

Back to the Open Day and Nikhef, I was pleased to see lots of people, families with kids running here and there, checking out delicate instruments with their curious hands, and groups of guys and girls (also someone who looked like he had come straight from a skate-park) stopping by and looking around as if it were their own courtyard.

The following pictures give some examples of the ongoing activities:

We had a model of the ATLAS detector built with Legos: amazing!

[Photo: a model of the ATLAS detector built with Legos. Copyright Nikhef]

And not only toy models. We also had real detectors, like a cloud chamber that allowed visitors to see the traces of particles passing by!

[Photo: the cloud chamber. Copyright Nikhef]

Weak force and anti-matter are also cool, right?

[Photo: the weak force and antimatter exhibit. Copyright Nikhef]

The majority of people here (not me) are blond and/or tall, but not tall enough to see cosmic rays with just their eyes… So, please ask the experts!

[Photo: the cosmic-ray setup. Copyright Nikhef]

I think I can summarize the huge impact and the benefit of such a cool day with the words of one man who stopped by one of the experimental setups. He listened to the careful (but a bit fuzzy) explanation provided by one of the students, and said “Thanks. Now I feel it mine too.”

Many more photos are available here: enjoy!


Why pure research?

Thursday, October 2nd, 2014

With my first post on Quantum Diaries I will not address a technical topic; instead, I would like to talk about the act (or art) of “studying” itself. In particular, why do we care about fundamental research, pure knowledge without any practical purpose or immediate application?

In 1939, A. Flexner authored a contribution to Harper’s Magazine (issue 179) titled “The Usefulness of Useless Knowledge”. He opens the discussion with an interesting question: “Is it not a curious fact that in a world steeped in irrational hatreds which threaten civilization itself, men and women – old and young – detach themselves wholly or partly from the angry current of daily life to devote themselves to the cultivation of beauty, to the extension of knowledge […] ?”

This question is still with us today, and the need for a satisfactory answer is probably even stronger.

From a pragmatic point of view, we can argue that there are many important applications and spin-offs of theoretical investigations into the deep structure of Nature that did not arise immediately after the scientific discoveries. This is, for example, the case of QED and antimatter: the theories date back to the 1920s and are nowadays exploited in hospitals for imaging purposes (as in PET, positron emission tomography). The most important discoveries affecting our everyday life, from electricity to the energy bound in the atom, came from completely pure and theoretical studies: electricity and magnetism, summarized in Maxwell’s equations, and quantum mechanics are shining examples.

It may seem that it is just a matter of time: “Wait long enough, and something useful will eventually pop out of these abstract studies!” True. But that would not be the most important answer. To me, the most important answer is: “Pure research is important because it generates knowledge and education”. It is our own contribution to the understanding of Nature, a short but important step in a marvelous challenge set up by the human mind.

Personally, I find that research into the yet unknown aspects of Nature responds to some partly conscious and partly unconscious desires. Intellectual achievements provide a genuine ‘spiritual’ satisfaction, peculiar to the art of studying. For the sake of truth, I must say that there are also a lot of dark sides: frustration, stress, graduate-depression effects, geographical and economic instability and so on. But leaving all these troubles aside for a while, I think I am pretty lucky to be doing this job.


Books, the source of my knowledge

During difficult times from the economic point of view, it is legitimate to ask also “Why spend a lot of money on expensive experiments like the Large Hadron Collider?” or “Why fund abstract research in labs and universities instead of investing in more socially useful studies?”

We could answer by stressing again the fact that many of the best innovations came from the fuzziest studies. But in my mind the ultimate answer, once and for all, lies in the power of generating culture, and education through its diffusion. Everything occurs within our possibilities and limitations. A willingness to learn, a passion for teaching, blackboards, books and (super)computers: these are our tools.

Citing again Flexner’s paper: “The mere fact that spiritual and intellectual freedoms bring satisfaction to an individual soul bent upon its own purification and elevation is all the justification that they need. […] A poem, a symphony, a painting, a mathematical truth, a new scientific fact, all bear in themselves all the justification that universities, colleges and institutes of research need or require.”

Last but not least, it is remarkable to think about how many people from different parts of the world may have met and collaborated while questing together after knowledge. This may seem a drop in the ocean, but research contributes daily to generating a culture of peace and cooperation among people with different cultural backgrounds. And that is surely one of the most important practical spin-offs.


Matter and energy have a very curious property. They interact with each other in predictable ways and the more energy an object has, the smaller length scales it can interact with. This leads to some very interesting and beautiful results, which are best illustrated with some simple quantum electrodynamics (QED).

QED is the framework for describing the interactions of charged leptons with photons, and for now let’s limit things to electrons, positrons and photons. An electron is a negatively charged fundamental particle, and a positron is the same particle, but with a positive charge. A photon is a neutral fundamental particle of light and it interacts with anything that has a charge.

That means that we can draw a diagram of an interaction like the one below:


An electron radiating a photon

In this diagram, time flows from left to right, and the paths of the particles in space are represented in the up-down direction (and two additional directions if you have a good enough imagination to think in four dimensions!) The straight line with the arrow to the right is an electron, and the wavy line is a photon. In this diagram an electron emits a photon, which is a very simple process.

Let’s make something more complicated:


An electron and positron make friends by exchanging a photon

In this diagram the line with the arrow to the left is a positron, and the electron and positron exchange a photon.

Things become more interesting when we join up the electron and positron lines like this:


An electron and positron get a little too close and annihilate

Here an electron and positron annihilate to form a photon.

Now it turns out in quantum mechanics that we can’t just consider a single process, we have to consider all possible processes and sum up their contributions. So far only the second diagram we’ve considered actually reflects a real process, because the other two violate conservation of energy. So let’s look at electron-positron scattering. We have an electron and a positron in the initial state (the left hand side of the diagram) and in the final state (the right hand side of the diagram):


What happens in the middle? According to quantum mechanics, everything possible!

There are two easy ways to join up the lines in this diagram to get the following contributions:


Two possible diagrams for electron-positron scattering

There’s a multiplicative weight (on the order of a percent) associated with each photon interaction, so we can count up the photons and determine the contribution each process has. In this case, there are two photon interactions in each diagram, so each one contributes roughly equally. (You may ask why we bother calculating the contributions for a given pair of initial and final states. In fact what we find interesting is the ratio of contributions for two different pairs of initial and final states so that we can make predictions about rates of interactions.)
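The “multiplicative weight” here is presumably the fine-structure constant (this numerical aside is mine, not the author’s): each vertex carries a factor of the electric charge \(e\) in the amplitude, and since rates go as the amplitude squared, every photon interaction costs roughly one power of

\( \alpha = \frac{e^2}{4\pi \varepsilon_0 \hbar c} \approx \frac{1}{137} \approx 0.7\% , \)

so a diagram with \(n\) vertices contributes at order \(\alpha^n\). That is why the two-vertex diagrams above contribute roughly equally, and why adding extra photons only refines the answer.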

Let’s add a photon to the diagram, just for fun. We can connect any two parts of electron and positron lines to create a photon, like so:


Taking up the complexity a notch, by adding a photon

A fun game to play if you’re bored in a lecture is to see how many unique ways you can add a photon to a diagram.

So how do we turn this into a fractal? Well we start off with an electron moving through space (now omitting the particle labels for a cleaner diagram):


A lonely electron 🙁

Then we add a photon or two to the diagram:


An electron with a photon


An electron hanging out with two photons


An electron going on an adventure with two photons

Similarly let’s start with a photon:


A boring photon being boring

And add an electron-positron pair:


Ah, that’s a bit more interesting

This is all we need to get started. Every time we see an electron or positron line, we can replace it with a line that emits and absorbs a photon. Every time we see a photon we can add an electron-positron pair. We can keep repeating this process as much as we like until we end up with arbitrarily complex diagrams, each new step adding more refinement to the overall contributions:


A very busy electron
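To make the rewriting procedure concrete, here is a toy sketch in Python (purely illustrative bookkeeping of my own, not how real QED calculations are organized): an electron line may be replaced by one that emits and reabsorbs a photon, a photon line may be replaced by an electron-positron pair, and repeating the rules shows how quickly the diagrams grow.

# Toy model of the diagram-refinement rules described above.
# "e" stands for any charged lepton line (electron or positron),
# "y" stands for a photon line. This is illustrative bookkeeping only.
RULES = {
    "e": ["e", "y", "e"],  # an electron line emits and reabsorbs a photon
    "y": ["e", "e"],       # a photon line splits into an electron-positron pair
}

def refine(lines):
    """Apply one round of replacements to every line in the diagram."""
    refined = []
    for line in lines:
        refined.extend(RULES[line])
    return refined

diagram = ["e"]  # start from a single, lonely electron
for step in range(1, 5):
    diagram = refine(diagram)
    print(f"after step {step}: {len(diagram)} lines in the diagram")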

At each step the distance we consider is smaller than the one before it, and the energy needed to probe this distance is larger. When we talk about an electron we usually think of a simple line, but real electrons are actually made of a mess of virtual particles that swarm around the central electron. The more energy we put into probing the electron’s structure (or lack of structure) the more particles we liberate in the process. There are many diagrams we can draw and we can’t pick out a single one of these diagrams as the “real” electron, as they all contribute. We have to take everything into account to get a real feel for what something as simple as an electron is.

As usual, things are even more complicated in reality than in this simple picture. To get a complete understanding we should add the other particles to the diagrams. After all, that’s how we can get a Higgs boson out of a proton: in some sense the Higgs boson was “already there” inside the proton and we just liberated it by adding a huge amount of energy. If things are tricky for the electron, they are even more complicated for the proton. Hadrons are bound states of quarks and gluons, and while we can see an individual electron, it’s impossible to see an individual quark. Quarks are always found in groups, so we have to take the huge fractal into account when we look inside a proton and try to simulate what happens. This is an intractable problem, so we need a lot of help from the experimental data to get it right, such as the dedicated deep inelastic scattering experiments at the DESY laboratory.

The view inside a proton might look a little like this (where the arrows represent quarks):


The crazy inner life of the proton

Except those extra bits would go on forever to the left and right, as indicated by the dotted lines, and instead of happening in one spatial dimension it happens in three. To make matters worse, the valence quarks are not just straight lines as I’ve drawn them here; they meander to and fro, changing their characteristic properties as they exchange other particles with each other.

Each time we reach a new energy range in our experiments, we get to probe deeper into this fractal structure of matter, and as we go to higher energies we also liberate higher mass particles. The fractals for quarks interact strongly, so they are dense and have high discovery potential. The fractals for neutrinos are very sparse and their interactions can spread over huge distances. Since all particles can interact with each other directly or through intermediaries, all these fractals interact with each other too. Each proton inside your body contains three valence quarks, surrounded by a fractal mess of quarks and gluons, exactly the same as those in the protons that fly around the LHC. All we’ve done at the LHC is probe further into those fractals to look for something new. At the same time, since the protons are indistinguishable they are very weakly connected to each other via quantum mechanics. In effect the fractals that surround every valence particle join up to make one cosmological fractal, and the valence particles are just excitations of that fractal that managed to break free from their (anti-)matter counterparts.

The astute reader will remember that the title of the post was the seemingly fractal nature of matter. Everything that has been described so far fulfils the requirements of a fractal: self-similarity, increased complexity with depth, and so on. What is it that makes matter unlike a fractal? We don’t exactly know the answer to that question, but we do know that eventually the levels of complexity have to stop. We can’t keep splitting space up into smaller and smaller chunks and finding more and more complex arrangements of the same particles over and over again. This is because eventually we would reach the Planck scale, which is where the quantum effects of gravity become important and it becomes very difficult to keep track of spatial distances.


Meanwhile, deep inside an electron’s fractal, causality breaks down and something weird happens at the Planck scale

Nobody knows what lies at the Planck scale, although there are several interesting hypotheses. Perhaps the world is made of superstrings, and the particles we see are merely excitations of those strings. Some models propose a unification of all known forces into a single force. We know that the Planck scale is about fifteen orders of magnitude higher in energy than the LHC, so we’ll never reach the energy and length scales needed to answer these questions completely. However we’ve scratched the surface with the formulation of the Standard Model, and so far it’s been a frustratingly good model to work with. The interactions we know of are simple, elegant, and very subtle. The most precise tests of the Standard Model come from adding up just a handful of these fractal-like diagrams (at the cost of a huge amount of labour, calculations and experimental time.)
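As a quick back-of-the-envelope check of the “fifteen orders of magnitude” (my own arithmetic, using standard values): the Planck energy is

\( E_{\rm Planck} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\ \mathrm{GeV} , \)

while the LHC collides protons at about \(1.3 \times 10^{4}\ \mathrm{GeV}\) (13 TeV), so the ratio is indeed roughly \(10^{15}\).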

I find it mind boggling how such simple ideas can result in so much beauty, and yet it’s still somehow flawed. Whatever the reality is, it must be even more beautiful than what I described here, and we’ll probably never know its true nature.

(As a footnote, to please the pedants: to get a positron from an electron you also need to invert the coordinate axes to flip the spin. There are three distinct diagrams that contribute to electron-positron scattering, but the crossed diagram is a small detail that might confuse someone new to these ideas.)


I know that the majority of the posts I’ve written have focused on physics issues and results, specifically those related to LHCb. I’d like to take this opportunity, however, to focus on the development of the field of High Energy Physics (HEP) and beyond.

As some of you know, in 2013 we witnessed an effectively year-long conversation about the state of our field, called Snowmass. This process is meant to gather scientists in the field, young and old alike, and ask them what the pressing issues for the development of our field are. In essence, it’s a “hey, stop working on your analysis for a second and let’s talk about the big issues” meeting. They came out with a comprehensive list of questions and also a bunch of working papers about the discussions. If you’re interested, go look at the website. The process was separated into “frontiers,” or groups that the US funding agencies put together to divide the field as they saw fit. I’ll keep my personal views on the “frontiers” language for a different day, and instead share a much more apt interpretation of the frontiers, which emerged from Jonathan Asaadi, of Snowmass Young and Quantum Diaries. He emphasizes that we are coming together to tackle the biggest problems as a team, as opposed to dividing into groups, illustrated as Voltron in his slide below.


Slide from presentation of Jonathan Asaadi at the USLUO (now USLUA) 2013 annual meeting in Madison, Wisconsin. The point here is collaboration between frontiers to solve the biggest problems, rather than division into separate groups.

And that’s just what happened. While I willingly admit that I had zero involvement in this process aside from taking the Snowmass Young survey, I still agree with the conclusions which were reached about what the future of our field should look like. Again, I highly encourage you to go look at the outcome.

Usually, this would be the end of the story, but this year, the recommendations from Snowmass were passed to a group called P5 (Particle Physics Project Prioritization Panel). The point of this panel was to review the findings of Snowmass and come up with a larger plan for how the future of HEP will proceed. The big ideas have effectively been gathered; now the hard questions about which projects can pursue these questions effectively are being asked. This specifically focuses on what the game plan will be for HEP over the next 10-20 years, and identifies the distinct physics reach in a variety of budget situations. Their recommendation will be passed to HEPAP (High Energy Physics Advisory Panel), which reviews the findings, then passes its recommendation to the US government and funding agencies. The P5 findings will be presented to HEPAP on May 22nd, 2014 at 10 AM EST. I invite you to listen to the presentation live here. The preliminary executive report and white paper can be found after 10 AM EST on May 22nd on the same site, as I understand.

This is a big deal.

There are two main points here. First, 10-20 years is a long time, and any sort of recommendation about the future of the field over such a long period will be a hard one. P5 has gone through the hard numbers under many different budget scenarios to maximize the science reach that the US is capable of. Looking at the larger political picture, in 2013 the US also entered the Sequester, which cut spending across the board and had wide implications not only for the US but worldwide. This is a testament to the tight budget constraints that we are working under now, and will most certainly face in the future. The very existence of a process like P5 shows that the HEP community recognizes this point, and understands that without well-defined goals and tough considerations of how to achieve them, we will endanger the future funding of any project in the US or with US involvement.

Without this process, we will endanger future funding of US HEP.

We can take this one step further with a more concrete example. The majority of HEP work is done through international collaboration, in both experiment and theory. If any member of such a collaboration does not pull their weight, it puts the entire project in jeopardy. Take, for example, the US ATLAS and CMS programs, which have 23% and 33% involvement from the US, respectively, in both analysis and detector R&D. If these projects were cut drastically over the next years, there would have to be a massive rethinking of the strategies for their upgrades, not to mention a possible lack of manpower. Not only would this delay one of the goals outlined by Snowmass, to use the Higgs as a discovery tool, but it would also put into question the role of the US in the future of HEP. This is a simple example, but it is not outside the realm of possibility.

The second point is how to make sure a situation like this does not happen.

I cannot say that communication of the importance of this process has been stellar. A quick Google search yields no mainstream news articles about the process or its impact. In my opinion, this is a travesty, and that’s the reason why I am writing this post. Symmetry Magazine also, just today, came out with an article about the process. Young members of our community who were not necessarily involved in Snowmass, but seem to know about Snowmass, do not really know about P5 or HEPAP. I may be wrong, but I draw this conclusion from a number of conversations I’ve had at CERN with US postdocs and students. Nonetheless, people are quite adamant about making sure that the US does continue to play a role in the future of HEP. This is true across HEP, the funding agencies and the members of Congress. (I can say this because I went on a trip with the USLUO, FNAL and SLAC representatives to lobby Congress on behalf of HEP in March of this year, and this is the sentiment that I received.) So the first step is informing the public about what we’re doing and why.

The stuff we do is really cool! We’re all organized around how to solve the biggest issues facing physics! Getting the word out about this is key.

Go talk to your neighbor!

Go talk to your local physicist!

Go talk to your congressperson!

Just talk about physics! Talk about why it excites you and talk about why it’s interesting to explore! Maybe leave out the CLs plots, though. If you didn’t know, there’s also a whole mess of things that HEP is good for besides colliding particles! See this site for a few.

The final step is understanding the process. The biggest worry I have is what happens after HEPAP reviews the P5 recommendations. We, as a community, have to be willing to endure the pains of this process. Good science will be excluded. However, there are not infinite funds, nor was a guarantee of funding ever given. Recognizing this, while focusing on the big problems at hand and thinking about how to work within the means allowed, is *the point* of the conversation. The better question is, will we emerge from the process unified or split? Will we get behind the Snowmass process and answer the questions posed to us, or fight about how to answer them? I certainly hope the answer is that we will unify, as we unified for Snowmass.

An allegorical example is from a slide from Nima Arkani-Hamed at Pheno2014, shown in the picture.


One slide from Nima Arkani-Hamed’s presentation at Pheno2014

 

The take-home point is this: if we went through the exercise of Snowmass and cannot pull our efforts together according to the wishes of the community, are we going to survive? I would prefer to ask a different question: will we not, as a community, take the opportunity to answer the biggest questions facing physics today?

We’ll see on the 22nd and beyond.

 

*********************************************

Update: May 27, 2014

*********************************************

As posted in the comments, the full report can be found here, the presentation given by Steve Ritz, chair of P5, can be found here, and the full P5 report can be found here. Additionally, Symmetry Magazine has a very nice piece on the report itself. As they state in the update at the bottom of the page, HEPAP voted to accept the report.


— by T.I. Meyer, Head of Strategic Planning & Communication

This past Saturday, I attended a “celebration of life” for Erich W. Vogt, one of the founders of TRIUMF and perhaps the last of the generation of “Renaissance-man” style leaders who helped shape the modern era of particle and nuclear physics.

“Celebration of life” is North American politeness for memorial service. Erich passed away on February 19, 2014, at the age of 84. He was with family and friends until the very end, and each day he would tell us a new historical anecdote, hilarious and penetrating as always, and then comment on his intentions to return to work at TRIUMF the next morning.

The service itself was spectacular with about 400 people packed into the former faculty club on the UBC campus. We were regaled with a litany of precise, powerful speeches that mirrored Erich’s personality in so many ways: witty, thoughtful, provocative, and unabashed. The collected wisdom and life experience in the room was stupefying, perhaps an even larger testament to the impact that Erich had on all of us—and the entire world.

I went with my wife and our three-month old daughter. I told people that I was hoping she’d be inspired by the legacy and soak up some of the aura of longevity and greatness.

But that got me to thinking. Erich was one of “those” scientists, the ones who were shrewd, sharp-witted, and educated in everything from particle physics and international politics to porcelain plateware and the development of the modern piano. In his spare time, he met Einstein, befriended prime ministers, raised money for and founded a laboratory in Israel, wrote an authoritative history of his family and its origins, and helped articulate and lead the vision for a national subatomic-physics laboratory in Canada that became TRIUMF.

We can look through the records and the recollections of those who knew Erich to trace out how he became who he was. But I often wonder where the next generation of Erichs is coming from. Are they here and I just don’t see them? Is our society still inspiring and retaining people like this? Is there still a valuable role for these types of “Renaissance” people? Moreover, are they needed, or is there even a place for them in our 21st century culture?

It does seem that the best and brightest of any generation tend to seek their personal, financial, and intellectual fortunes at the edgy frontiers. Some people argue that science has faded from the position of being The Most Exciting and Challenging Frontier and is now replaced by entrepreneurship, social expression, and so on. These people would argue that the next generation of “Renaissance” types are still there, but they are no longer flocking to science, or even more specifically, to physics. They are simply going elsewhere.

Others will argue that the modern system of measuring achievement works against the Renaissance individual. In the 20th century, the ambitious intellectual was able to develop mastery in multiple fields and to vigorously pursue multiple interests in an environment that placed fewer burdens on them. The culture allowed—and even encouraged—such a person to seek greatness. But in today’s landscape, to be successful one needs to be increasingly specialized and spend more time writing grants, reviewing articles, and attending soft-skills training classes. It is said that we’ve moved into the era where “Jack of all trades, master of none” holds true, and that is how we dismiss the Renaissance person.

But are we in a society that no longer allows these broad-minded, passionate individuals to blossom and flourish? Has there been a recalibration of culture in which these types are no longer as important as the focused specialist? Or perhaps the world is so complicated and fractured that a classical approach to mastery is simply ineffective?

In my view, the truth is somewhere in the middle. The 21st century is going to require a new type of individual to make pivotal contributions. The qualities of leadership and greatness do last more than one generation, but they evolve perhaps every three or four generations. Instead of wishing for the leaders of the last era, our task is to look at the world today: who is making an impact, what are they bringing to the table, and how can we make more of that happen?

And in our world of networks (virtual and social) and complexities, greatness can emerge more easily from the combined contributions of dozens or even hundreds of people. For instance, a select few physicists won the Nobel Prize for the experimental work that discovered the electron, the neutrino, and so on. For the Higgs boson, however, the Nobel Prize went to the two surviving theorists who posited its existence, in part because the discovery-in-reality was the product of a cast of 10,000 people. It would be silly to try and select just two or three people that made it happen. It took everyone! Now, and perhaps for the 21st century, that is greatness.

Looking across the frontiers of science, who are the leaders today? Are there common characteristics? How do they distinguish themselves?

Tell me what you see!


Grad School in the sciences is a life-changing endeavour, so do not be afraid to ask questions.

Hi Folks,

Quantum Diaries is not just a place to learn the latest news in particle physics; it is also a resource. It is a forum for sharing ideas and experiences.

In science, it is almost always necessary to have a PhD, but what is a PhD? It is a certification that the holder has demonstrated unambiguously her or his ability to thoroughly carry out an independent investigation addressing a well-defined question. Unsurprisingly, the journey to earning a PhD is never light work, but nor should it be. Scientists undertake painstaking work to learn about nature, its underpinnings, and all the wonderful phenomena that occur in everyday life. This journey, however, is also filled with unexpected consequences, disappointment, and sometimes even heartbreak.

It is also that time of year again when people start compiling their CVs, resumes, research statements, and personal statements, that time of year when people begin applying for graduate programs. For this post, I have asked a number of good friends and colleagues, from current graduate students to current postdocs, what questions they wished they had asked when applying for graduate school, selecting a school, and selecting a research group.

However, if you are interested in applying for PhD programs, you should always first ask yourself, “Why do I want a research degree like a PhD?”

If you have an experience, question, or thought that you would like to share, comment below! A longer list only provides more information for applicants.

As Always, Happy Colliding

– Richard (@bravelittlemuon)

PS I would like to thank Adam, Amy, John, Josh, Lauren, Mike, Riti, and Sam for their contributions.

Applying to Graduate School:

“When scouting for grad schools, I investigated the top 40 schools in my program of interest. For chemistry, research primarily occurs in one or two research labs, so for each school, I investigated the faculty list and group research pages. I eliminated any school where there were fewer than two faculty members whose fields I could see myself pursuing. This narrowed down my list to about a dozen schools. I then filtered based on location: I enjoy being near a big city, so I removed any school in a non-ideal location. This left me with half a dozen schools, to which I applied.” – Adam Weingarten, Chemistry, Northwestern

“If there is faculty member you are interested in working for, ask both the professor and especially the students separately about the average length of time it takes students to graduate, and how long financial support might be available.” – Lauren Jarocha, Chemistry, UNC

“My university has a pretty small physics program that, presently, only specializes in a few areas. A great deal of the research from my lab happens in conjunction with other local institutes (such as NIST and NIH) or with members of the chemistry or biology departments. If you are interested in a smaller department, ask professors about Institutes and interdisciplinary studies that they might have some connection to, be it within academia or industry.” – Marguerite Brown, Physics, Georgetown

“If you can afford the application fees and the time, apply as broadly as you can.  It’s good to have options when it comes time to make final decisions about where to go. That said, don’t aim too high (you want to make sure you have realistic schools on your list, whatever “realistic” means given your grades and experience), and don’t aim too low (don’t waste time and money applying to a school that you wouldn’t go to even if it was the only school that accepted you, whether because of academics, location, or anything else).  Be as honest as possible with yourself on that front and get input from trusted older students and professors.  On the flip side, if you don’t get rejected from at least one or two schools, you didn’t aim high enough.  You want a blend of reach schools and realistic schools.” – Amy Lowitz, Physics, Wisconsin

Choosing a School

“One of the most common mistakes I see prospective graduate students make is choosing their institution based on wanting to work with a specific professor without getting a clear enough idea of the funding situation in that lab.  Don’t just ask the professor about funding.  Also ask their graduate students when the professor isn’t present.  Even then, you may have to read between the lines; funding can be a delicate subject, especially when it is lacking.” – Amy Lowitz, Physics, Wisconsin

“If you have a particular subfield/group you *know* you are interested in, check how many profs/postdocs/grads are in these groups, check if there are likely to be open slots, and if there are only 1 or 2 open slots make sure you know how to secure one. If they tell you there are currently no open slots, take this to mean that this group is probably closed except under the most exceptional circumstances, and do not take that group into account when making your decision.” – Samuel Ducatman, Physics, Wisconsin

“When choosing a school, I based my decision on how happy the grad students seemed, how energetic/curious the faculty appeared, and if the location would allow me to have extracurricular pursuits (such as writing, improv, playing games with people, going to the movies…basically a location where I could live in for 4-6 years).” – Adam Weingarten, Chemistry, Northwestern

“At the visitor weekend, pay attention to how happy the [current] grads seem. Remember they are likely to be primarily 1st years, who generally are the most happy, but still check. Pay attention to the other students visiting, some of them will be in your incoming class. Make sure there is a good social vibe.” – Samuel Ducatman, Physics, Wisconsin

“When I was visiting as a prospective grad student, there was a professor at one university whose research I was really interested in, but the university would only allow tuition support for 5 years. When I asked his students about graduation rates and times, however, the answer I got was, ‘Anyone who graduates in 5 years hasn’t actually learned anything; it takes at least 7 or 8 years before people should really graduate anyway. Seven years is average for our group.’ In some fields, there is a stigma associated with longer graduation times and a financial burden that you may have to plan for in advance.” – Lauren Jarocha, Chemistry, UNC

Choosing a Group

“When considering a sub-field, look for what interests you of course, but bear in mind that many people change their focus, many don’t know exactly what they want to do immediately upon entering grad school, and your picture of the different areas of research may change over time. Ask around among your contemporaries and older students, especially when it comes to particular advisers.” – Joshua Sayre, PhD, Physics, Pittsburgh

“If you know that you’re interested in an academic career that is more teaching oriented or research oriented, ask about teaching or grant writing opportunities, respectively. I know plenty of fellow students who didn’t start asking about teaching opportunities until their 4th or 5th year of their program, and often by then it was too late. If you know that finding funding will be a big part of your future, joining a group where the students take an active part in writing grants and grant renewals is invaluable experience.” – Lauren Jarocha, Chemistry, UNC

“For choosing groups, I attended group and subgroup meetings, met with faculty to discuss research and ideas, and read several recent publications from each group of interest. What I did not do (and wish I had) was talk with the graduate students to see how they and the group operated. For example, I am very motivated and curious to try new ideas, so in my current research group my PI plays a minimal role in my life. The most important aspect is how well one’s working style fits with the group mentality, followed by research interest. There’s a ton of cool, exciting research going on, but finding a group with fun, happy, motivated people will make or break the PhD experience.” – Adam Weingarten, Chemistry, Northwestern

“I went into [Condensed Matter Theory] and not [X] because (1) in the summer of my first year I had no research, and I came close to having no income because of this. I realized I needed someone who could promise me research/funding and real advising. The [X] group was pretty filled up (and there were some politics), so it was impossible to get more than this. (2) I thought the professors in CMT treated me with more respect than the [X] profs I talked to.” – John Doe, Physics

“I believe that choosing which grad schools to apply to should primarily be about the research, so this question is more for after you’ve (hopefully) been accepted to a couple of schools. If you are going into theoretical physics, and if you don’t have some sort of fellowship from them or an outside agency, ask them how much their theory students [teach]. Do they have to TA every semester for their funding? Do they at least get summers off? Or do they only have to TA for the first one or two years? This shouldn’t be the primary factor in deciding where to go – research always is – but it’s not something that should be ignored completely. Teaching is usually somewhat rewarding in my experience, but it adds absolutely no benefit to your career if you are focused on a professorship at a research university. Every hour you spend teaching is an hour someone else is researching and you aren’t. And 10-20 hours a week of teaching adds up.” – Michael Saelim, Physics, Cornell