Andrew Adare | USLHC | USA


Finding structure in hadronic collisions

Wednesday, September 7th, 2011

When you see event displays from head-on lead-lead collisions like this one,

you might be skeptical that we can learn much of anything about the quark-gluon plasma from such a dense profusion of tracks. And the skepticism is justifiable: getting at the physics of the strong interaction by studying individual events is a hard problem. Fortunately, hundreds or thousands of events like this can be collected each second during the November LHC heavy-ion running period. ALICE recorded tens of millions of collisions last year, and armed with statistical techniques like the two-particle correlations described below, we find patterns that would never clearly emerge from direct examination of individual events.

When the quarks or gluons within nucleons are scattered off one another at high energies, QCD confinement causes the outgoing particles to fragment into di-jets, as explained nicely by Brian Dorney in this entry. In proton-proton collisions, the picture is fairly clear: events where a hard scattering occurs tend to show up in the detector with a characteristic back-to-back signature (again from Brian’s entry). Now consider a central (i.e. head-on) collision between two lead nuclei, each with 208 nucleons, at an energy of several TeV. What happens to the di-jets then? Can you just scale up the number of jet fragments from a proton-proton collision by the number of binary \((2 \to 2)\) collisions that occurred?

That naive scaling in fact serves as our baseline expectation for \(R_{AA}\), the quintessential heavy-ion observable: the ratio of the yield from nuclear collisions to the binary-scaled yield from proton-proton collisions,

\(R_{AA} = \frac{dN_{AA}/dp_T}{\langle N_{coll} \rangle \, dN_{pp}/dp_T}.\)

If \(R_{AA} = 1\), either nuclear collisions behave like a superposition of independent hard scatterings, or multiple effects are canceling each other just right to make it look that way.

But since RHIC started up more than ten years ago, we have seen that \(R_{AA}\) is not 1; it’s more like 1/5 for the hadronic fragments we measure at a few GeV/c. That answer hasn’t changed much at the LHC, although we are learning more about what’s happening with the higher-momentum jet fragments. So there’s a big suppression of particles compared to the independent superposition expectation. What does it mean? The conventional interpretation is that the outgoing particles are losing energy in the nuclear material; the jets are being “quenched” like bullets passing through water. See this recent Courier article for a bit more info.
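The bookkeeping behind \(R_{AA}\) is simple division. Here is a minimal sketch with invented per-event yields, tuned so the ratio comes out near the factor-of-five suppression just described (nothing below is real data):

```python
# R_AA(pT) = (per-event yield in Pb+Pb) / (N_coll * per-event yield in p+p).
# All numbers are invented for illustration, chosen to give R_AA near 1/5.

n_coll = 1600.0  # illustrative average number of binary nucleon-nucleon
                 # collisions in a central Pb+Pb event

pt_bins  = [2.0, 4.0, 6.0, 8.0]                 # GeV/c
yield_pp = [1.2e-2, 8.0e-4, 1.1e-4, 2.5e-5]     # per-event dN/dpT, p+p
yield_aa = [4.0, 2.6e-1, 3.5e-2, 8.0e-3]        # per-event dN/dpT, Pb+Pb

r_aa = [aa / (n_coll * pp) for aa, pp in zip(yield_aa, yield_pp)]
for pt, r in zip(pt_bins, r_aa):
    print(f"pT = {pt:.0f} GeV/c: R_AA = {r:.2f}")  # all near 0.2
```

A real measurement replaces these toy numbers with efficiency-corrected spectra and a Glauber-model estimate of \(\langle N_{coll} \rangle\), with systematic uncertainties on each ingredient.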

\(R_{AA}\) is an important result, but it can be complicated to interpret because it reflects a combination of several things: there’s the initial hard-scattering cross-section, the interaction between the outgoing partons and the highly dynamic nuclear medium, and the subsequent (or contemporaneous?) jet fragmentation. The theorists have to make assumptions to model the situation, and unfortunately, many different physical pictures lead to equally good matches with the data.

One step towards a more specific measurement is to systematically pair up particles within each event, accumulating over many events a well-populated distribution of the angles between their momentum vectors. We correlate a set of “trigger” particles in a specific transverse momentum \((p_T)\) range, say 8–10 GeV/c, with a set of “associated” or “partner” particles in another \(p_T\) range.

These two-particle correlation functions contain rich information about the underlying physics, especially when we compare correlations from proton-proton vs. nucleus-nucleus collisions. For example, in p+p collisions, particles clump together azimuthally near 0 and 180 degrees because of di-jets.
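That p+p picture, a near-side peak at 0 and an away-side peak at 180 degrees, can be sketched with a toy Monte Carlo (everything here is invented for illustration; a real analysis uses measured tracks and corrects for detector acceptance):

```python
import math
import random

# Toy two-particle azimuthal correlation. Each fake "event" has a di-jet
# axis; the trigger particle sits near the axis, and partners sit either
# near it (same jet) or opposite it (recoil jet). All widths and counts
# are invented.

random.seed(1)

def toy_event():
    axis = random.uniform(0.0, 2.0 * math.pi)
    triggers = [axis + random.gauss(0.0, 0.2)]                                # high-pT trigger
    partners  = [axis + random.gauss(0.0, 0.3) for _ in range(3)]             # near side
    partners += [axis + math.pi + random.gauss(0.0, 0.4) for _ in range(3)]   # away side
    return triggers, partners

def delta_phi(a, b):
    # Fold the angle difference into [-pi/2, 3*pi/2), the usual convention,
    # so the near-side peak lands at 0 and the away-side peak at pi.
    d = (a - b) % (2.0 * math.pi)
    return d - 2.0 * math.pi if d >= 1.5 * math.pi else d

n_bins = 24
hist = [0] * n_bins
for _ in range(2000):
    triggers, partners = toy_event()
    for t in triggers:
        for p in partners:
            b = int((delta_phi(t, p) + 0.5 * math.pi) / (2.0 * math.pi) * n_bins)
            hist[min(b, n_bins - 1)] += 1

# hist should peak near delta_phi = 0 (bin 6) and delta_phi = pi (bin 18)
```

Dividing such a histogram by a mixed-event distribution (pairs drawn from different events) is the standard way to remove the detector’s acceptance shape before interpreting the peaks.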

In Pb+Pb collisions, we see similar features, depending on the momentum of the particles we look at, but the situation becomes more complicated as we throw new effects into the mix: jet quenching, hydrodynamics, and fluctuations. These are the things that hold keys to the physics we are interested in. In the next post we will explore further how the picture is changed when we correlate particles from Pb+Pb collisions.


Before QCD, there were fireballs

Wednesday, May 4th, 2011

How many different particles can you make from quarks? A lot. Every two years or so, the Particle Data Group puts out a catalog of the ones we know about. I always love getting mine in the mail. It’s as big as a phone book, with thin paper like a Bible. The compilation of all the particles and their properties represents a truly massive intellectual effort. Most of the hadrons are just labeled with Greek letters, but they’re festooned with all kinds of superscripts and asterisks, and their properties have names as colorful and idiosyncratic as their discoverers. For example, the neutral Ξ or “cascade” hyperon is a doubly-strange baryon with negative half-integer isospin. To my ear, most science fiction falls flat compared to real conversations between particle physicists.

By adding energy to hadrons, they can change their nature and go into excited states called resonances. The idea is loosely analogous to exciting atoms in a laser or fluorescent lamp, except more relativistic: their mass can change. The humble proton, for example, can be excited into something called a Δ resonance, which is around 30% more massive, because some of the absorbed energy converts to mass. They don’t hang around very long, but as you look at higher and higher masses, you see more and more of them. By the 1960s, the number of newly discovered particles and resonances had grown rapidly in step with the energy of the accelerators that produced them. This proliferation led to questions about how to explain such a large variety, and what limitations, if any, there are on the number of states. When the quantum-mechanical rules governing properties like spin, charge, angular momentum, etc. were taken into account, the number of hadronic states was found to rise exponentially with mass. This plot is a fairly recent example:


Up to a certain mass, the number of hadrons rises exponentially. The red curve includes particles that weren't plotted in earlier references, represented by the green curve.

When you see a straight line on a semi-log plot, it’s a dead giveaway for an exponential form. Why is that pattern followed? What’s even more interesting is that the number of particles rises with mass at the same rate as it falls with increasing (transverse) momentum, at least below a few GeV. Several creative ideas emerged as attempts to explain the hadron spectra, but a physicist named Rolf Hagedorn gets the credit for developing a theory using statistical mechanics. This was before the era of quarks, remember: he referred to hadrons as “fireballs”, and considered the heavy resonances to be composites of lighter ones, which were in turn composed of still lighter ones.

His mathematical line of reasoning, laid out in one of his lively papers, implied that if you were to collect a bunch of hadrons together and treat them as a gas of particles, their energy would become infinite as the temperature approached a limiting value. He seems to have been quite a character. In the same paper, he concluded:

It follows that T is the highest possible temperature—a kind of ‘boiling point of hadronic matter’ in whose vicinity particle creation becomes so vehement that the temperature cannot increase anymore, no matter how much energy is fed in.
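The reasoning behind that limiting temperature can be sketched in modern notation (a schematic reconstruction with the prefactor power \(a\) left general; this is not Hagedorn’s exact formulation):

```latex
% Hagedorn spectrum: the density of hadronic states grows exponentially with mass
\rho(m) \;\sim\; c\, m^{-a}\, e^{m/T_H}

% Energy of a Boltzmann gas of such states (schematic):
E(T) \;\sim\; V \int^{\infty}\! dm\; \rho(m)\, m \left(\frac{mT}{2\pi}\right)^{3/2} e^{-m/T}
     \;\sim\; V \int^{\infty}\! dm\; m^{5/2-a}\,
       \exp\!\left[\, m \left(\frac{1}{T_H} - \frac{1}{T}\right) \right]

% For T < T_H the integrand is exponentially damped and E is finite.
% As T -> T_H the damping vanishes and the integral diverges (for a <= 7/2),
% so T_H acts as a limiting temperature: extra energy goes into making
% heavier states, not into raising T.
```

The exponential growth of \(\rho(m)\) fighting the Boltzmann factor \(e^{-m/T}\) is the whole story: whichever exponent wins determines whether the gas can absorb more heat.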

And now we come to the point. Hagedorn’s argument implies a change in the number of fundamental degrees of freedom of the system. In other words, it has to break down to more fundamental building blocks. Instead of remaining as a gas of hadrons, a superheated system would melt into a phase with simpler constituents at a temperature near what is now known as the Hagedorn temperature. Using the best data available, he extrapolated from the known spectra to obtain a value of the critical temperature near 160 MeV, or in more familiar units, nearly two trillion kelvin.
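Converting that 160 MeV into everyday units is a one-line calculation: in natural units a temperature is quoted as an energy, and dividing by Boltzmann’s constant recovers kelvin (the constants below are the exact SI values):

```python
# Hagedorn temperature: from natural units (MeV) to kelvin.
K_B  = 1.380649e-23     # Boltzmann constant, J/K (exact SI value)
E_EV = 1.602176634e-19  # one electron-volt in joules (exact SI value)

t_hagedorn_mev = 160.0
t_kelvin = t_hagedorn_mev * 1e6 * E_EV / K_B
print(f"T_H ~ {t_kelvin:.2e} K")  # about 1.86e+12 K, roughly two trillion kelvin
```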

With a more sophisticated understanding thanks to Quantum Chromodynamics (QCD), more tools have become available to check this number. It’s a tough job, because this physics lies in the so-called “non-perturbative” regime, where pencil-and-paper solutions to the QCD equations don’t work well. But that’s what supercomputers are for. The founders of QCD devised a way to crunch out the answers by dividing space-time itself into a grid of points called the lattice, “playing” the equations forward numerically in steps. It takes a lot of CPU cycles, but the answer seems to corroborate Hagedorn’s estimate.

So nuclear matter melts if you get it hot enough. It was suggested over 40 years ago, and theoretical innovations only seem to confirm it. So what happens then? And is this temperature achievable in the lab? I’ll post again soon to follow up on these questions.



Hallo from…Utrecht!

Friday, April 8th, 2011

Greetings. My first post is coming to you from the Netherlands, where a small group of physicists has gathered this week for the 6th International Workshop on High-pT physics at the LHC. Utrecht is a university town, and bikes are by far the preferred way to get around. The town has a wonderful air of sophistication without feeling snobby. It is like a smaller rendition of Amsterdam, without the excessive tourism. Everyone’s pedaling along effortlessly through the crowds in wool jackets, scarves, and leather shoes, talking on the phone or chatting with another rider. It helps that their country is dead flat, but still, they make neon-wearing American bike commuters look kind of dorky in comparison.

Dutch girls on bikes

The conference this week was valuable because it was a workshop–an unintimidating environment to ask questions, with ample time for discussion–and you can pick up a lot during the coffee breaks and over dinner. On the other hand, there were not a lot of new results unveiled. The problem is the timing.

In many ways, the rhythm of our field is set by one major quasi-annual conference: the International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, better known as Quark Matter, which is coming up in May. QM 2011 is in Annecy this year, within an hour of Geneva by train. Last year, the conference didn’t happen, and this year, its timing was selected to fall about six months after the LHC Pb+Pb colliding period…just enough time for the collaborations to churn out exciting (but believable) results.

This year will be a major showcase of the LHC’s eagerly awaited heavy ion results. Thus, any conference falling a few weeks beforehand is bound to be a little drab in comparison. If any experiment is holding a good hand at this point, it is unlikely to choose a small workshop to lay it down on the table. Since credit and recognition are the currencies of our field, most will choose to let the pot get a little bigger.

With Quark Matter coming up so soon, we need to get you caught up on some of the hot topics in high-energy nuclear physics. In the next few installments, I will try to add some insight, from the perspective of an experimentalist in the trenches, on some of the hottest topics…in the universe. Check back soon!


The parking lot near my hotel.
