
### Us Too!

Hi there!

ATLAS just put out its first paper, much like CMS did a few weeks ago. Ours is called "Charged-particle multiplicities in pp interactions at sqrt(s) = 900 GeV measured with the ATLAS detector at the LHC." It's a light 19-page read with an extra page of acknowledgements, 3 pages of references, and a 17-page author list at the end! (Look for your favorite bloggers hidden in there somewhere!)

Here's a very quick rundown of what on earth that title means. First, the LHC collides protons, so these were proton-proton collisions (pp interactions). Each of the incoming protons had 450 GeV of energy (about half the current energy of the Tevatron, and about a third of the highest energy we've reached so far at the LHC). Instead of writing "450 GeV each," we write the square root of one of the Mandelstam variables, s, which describes the collision. It's a better measure because it combines both particles' energies in a "natural" way. For example, if you had a car accident, it matters whether you were going 30 and the other car was stopped, whether you were both going 30 and collided head-on, or whether you were both going 30 in the same direction and bumped each other. Each of those would have a different "Mandelstam s."
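If you like seeing the arithmetic, here's a minimal sketch of how Mandelstam s works for the three car-accident scenarios, translated into colliding protons. (The function and variable names are just illustrative, not from the paper; the proton mass is approximate.)

```python
import math

M_P = 0.938  # proton mass in GeV (approximate)

def sqrt_s(E1, pz1, E2, pz2):
    """Mandelstam sqrt(s) for two particles moving along the beam (z) axis.

    s = (E1 + E2)^2 - (pz1 + pz2)^2, with transverse momenta taken to be zero.
    """
    return math.sqrt((E1 + E2) ** 2 - (pz1 + pz2) ** 2)

def pz(E):
    """Longitudinal momentum of a proton with total energy E."""
    return math.sqrt(E ** 2 - M_P ** 2)

E = 450.0  # GeV per beam

# Head-on collision: the momenta cancel, so all the energy is available.
head_on = sqrt_s(E, pz(E), E, -pz(E))       # ~900 GeV

# One "car" stopped (fixed target): far less collision energy.
fixed_target = sqrt_s(E, pz(E), M_P, 0.0)   # ~29 GeV

# Both going the same direction and bumping: almost nothing left
# beyond the two rest masses.
same_direction = sqrt_s(E, pz(E), E, pz(E))  # ~1.9 GeV (= 2 proton masses)
```

The huge gap between ~900 GeV and ~29 GeV is exactly why the LHC collides two beams head-on rather than firing one beam at a stationary target.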

We use a detector called the "tracker" in ATLAS to measure charged particles bending in a magnetic field. Just by counting how many we see coming out of a collision, we can say some interesting things about the physics. We can count particles as a function of their momentum, or count how many there are per event (roughly equivalent to "how fast are the cars going on each road" and "how many cars are on each road").
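The two kinds of counting above can be sketched in a few lines. This is only a toy illustration with made-up numbers, not the actual analysis code:

```python
from collections import Counter

# Toy "events": each inner list holds the momenta (in GeV) of the charged
# particles the tracker reconstructed in one collision.
events = [
    [0.3, 0.7, 1.2],
    [0.4, 0.5],
    [0.2, 0.9, 1.5, 2.1],
]

# "How many cars are on each road": the multiplicity distribution,
# i.e. how often we see events with N charged particles.
multiplicity = Counter(len(event) for event in events)

# "How fast are the cars going": the momentum spectrum,
# counting particles in bins of width 0.5 GeV.
bin_width = 0.5
spectrum = Counter(int(p // bin_width) for event in events for p in event)
```

The real measurement then has to correct these raw counts for particles the tracker misses and fakes it reconstructs, which is where the hard work comes in.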

In my opinion, the hardest part of the measurement is putting good errors on everything. We have to be very quantitative – we can't just say "it's probably right." Each piece has to be quantified. The easiest analogy I know of is polling. When someone takes a poll, they usually report something like 45% ± 3%. That 3% is the "error" on the poll – though it's usually only statistical, telling you something about how many people they sampled. If they wanted to add systematic errors, they would have to include other effects that can be very hard to quantify, like: how did the sample they polled differ from the general population? How likely are certain populations to answer the phone or respond to a survey? How likely are people to give honest answers on a survey? Those can vary from hard to almost impossible to quantify, but we have to be honest about how they might affect our results before we can publish with any confidence!
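The statistical part of a polling error actually has a simple standard formula, which is why pollsters can always quote it; here's a minimal sketch (the function name is mine, and the 1.96 multiplier assumes the usual 95% confidence convention):

```python
import math

def statistical_error(p_hat, n, z=1.96):
    """Statistical (sampling) error on a polled proportion.

    p_hat: observed fraction, e.g. 0.45 for 45%
    n:     number of people sampled
    z:     confidence multiplier (1.96 corresponds to ~95% confidence)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A poll of about 1000 people reporting 45% support:
err = statistical_error(0.45, 1000)  # ~0.03, i.e. the familiar "+/- 3%"
```

The systematic effects in the paragraph above – a biased sample, who answers the phone, who tells the truth – have no formula like this, which is exactly why they're the hard part, in polling and in particle physics alike.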

–Zach