Vivek Jain | USLHC | USA


ATLAS Upgrade

Friday, November 6th, 2009

You might think it odd that work has already started to upgrade parts of the ATLAS detector, even before the LHC has started running!

As I wrote in a previous post, one of the key operating parameters for the LHC is luminosity, i.e., the beam intensity. The design calls for a luminosity of 10^34/cm^2/sec; this means that if you look at the beam head-on, it will contain 10^34 protons/sec spread over an area of 1 sq. cm. In reality, each beam consists of 2800 bunches, each containing ~10^11 protons and about 0.03 mm in radius. Two such bunches collide every 25 nanoseconds, i.e., 40 million times/second; this is known as a bunch crossing.
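
For readers who like to check the arithmetic, here is a small sketch of these beam numbers (a back-of-the-envelope calculation using the figures above, not official machine parameters):

```python
# Rough LHC beam numbers quoted above.
n_bunches = 2800            # bunches per beam
protons_per_bunch = 1e11    # protons in each bunch
bunch_spacing_s = 25e-9     # 25 nanoseconds between crossings

protons_per_beam = n_bunches * protons_per_bunch
crossing_rate_hz = 1.0 / bunch_spacing_s   # 40 million crossings/second

print(f"protons per beam: {protons_per_beam:.1e}")
print(f"crossing rate:    {crossing_rate_hz:.0e} Hz")
```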

The number of events that we collect is directly proportional to the luminosity. I had also written that the most common type of events occur at a very large rate. What this implies is that when two proton bunches collide, most of the time we produce these “ordinary” events. For instance, when the luminosity is 10^34, at each bunch crossing we get an average of 23 such events; they are easy to recognize, in the sense that they produce mainly low energy particles. So, when we produce an interesting event, say, a pair of top quarks or a Higgs boson, that event will be sitting atop this low-level “fuzz”. This background produces detectable signals in ATLAS, causing a higher load on the trigger system and readout electronics, and pattern recognition problems in the reconstruction software. For instance, the Transition Radiation Tracker (TRT) sub-detector could have an occupancy as high as 10-20%, i.e., a non-negligible fraction of all wires will register hits; pattern recognition will be tough, but manageable. The other tracking sub-detectors are not a problem since they have much finer granularity (see Seth’s post on tracking). In addition, as particles produced in these “ordinary” events travel through the various sub-systems that are made out of silicon, they cause a slow degradation; detectors and electronics are built to resist this deterioration, but they do eventually fail.
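
The “23 events per crossing” figure follows from the luminosity, the inelastic cross-section, and the crossing rate (cross-section and luminosity are explained in my “Mountains of data” post below). A rough sketch; my assumed ~80 millibarn cross-section is an illustrative value, not an official one:

```python
# Average "ordinary" events per bunch crossing:
#   luminosity x inelastic cross-section / crossing rate.
mb_to_cm2 = 1e-27                  # 1 millibarn = 1e-27 cm^2
luminosity = 1e34                  # cm^-2 s^-1 (design value)
sigma_inel = 80 * mb_to_cm2        # assumed inelastic cross-section
crossing_rate = 40e6               # 40 MHz

mu = luminosity * sigma_inel / crossing_rate
print(f"average pile-up events per crossing: {mu:.0f}")
```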

When the luminosity starts to increase, as the LHC program calls for, these problems become much more serious. For instance, at a luminosity of 10^35, each interesting event will be sitting atop a background of about 400 of these “junk” events; there is no way around it. As for the TRT, you can see that it will be useless.

The upgrade is planned in two phases. In the first phase, corresponding to a luminosity of 3×10^34, currently scheduled for 2015 (but this will depend on how well the LHC performs), we plan to insert an extra layer of silicon sensors between the beam-pipe and the first silicon layer; by then, the first layer’s performance will have deteriorated enough to be noticeable. We will also upgrade some other detectors and electronics that sit close to the beam.

In the second phase, corresponding to a luminosity of 10^35, all of the sub-systems that are used for tracking charged particles, i.e., the silicon layers and the TRT, will be replaced. All electronic readout for the detector will be replaced with more sophisticated versions; the trigger logic and associated electronics will undergo a big overhaul, etc. This is scheduled for 2020.

These upgrades take time and effort, hence the early start. Most groups in the US and UK are already steering resources, in the form of people and money, towards the Phase I upgrade, e.g., my colleague Hal Evans at Indiana University is looking at some trigger improvements, and other countries are also joining in. We are used to this mode of operation; at one of my previous experiments (at the accelerator at Cornell University), we were simultaneously analyzing data collected with version II of the experiment, starting to commission version II.5, and doing R&D on version III!

–Vivek Jain, Indiana University


What is a Grid?

Wednesday, October 28th, 2009

In a previous post, I mentioned that ATLAS will be collecting enormous amounts of data, approximately 6 Petabytes/year (i.e., 6,000,000 Gigabytes). How in the world are we going to handle it, and how do we make it available to all physicists on ATLAS? I spoke to one of my colleagues at Indiana University, Fred Luehring, who has major responsibilities for the US part of the Grid, to get some details.

First, let me mention some ATLAS jargon; if you wish, you can skip this paragraph for now, and come back to it when you run into strange acronyms.

The raw data that we collect is called ByteStream; basically, it is a stream of 1’s and 0’s, and is approximately 3-6 Megabytes/event. This gets “massaged” into Raw Data Objects (RDO); the only difference is that the data now has a “structure” that can be analyzed with software written in C++; it is now about 1 Megabyte/event. At this point, the ATLAS reconstruction software (written in C++) runs over these RDOs and produces tracks in the inner detector, electron and muon candidates, jets, etc., and outputs two other structured formats, ESD (Event Summary Data) and AOD (Analysis Object Data), which contain different levels of detail; they are approximately 500 and 100 Kilobytes/event, respectively. As you can tell by the name, most physicists will run on AODs; there are other smaller formats, but I will skip them.

In a “normal” year, we expect to collect about 2 billion events. How do we handle all this data? To do this, physicists and computer scientists have been working on the Grid. It is basically a whole lot of computers spread all over the world that are networked with very fast connections; typical data transfer rates are 1-10 Gigabits/sec (in contrast, broadband connections to your home are about a thousand times slower).
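
Putting the per-event sizes from the jargon paragraph together with the ~2 billion events per year, a rough sketch of the yearly volume in each format (real numbers will vary with the trigger menu and beam conditions):

```python
# Yearly data volume per format, using the approximate per-event sizes
# quoted above (RAW taken at the lower end of the 3-6 MB range).
events_per_year = 2e9
sizes_bytes = {
    "RAW/ByteStream": 3e6,
    "RDO": 1e6,
    "ESD": 500e3,
    "AOD": 100e3,
}
for fmt, size in sizes_bytes.items():
    petabytes = events_per_year * size / 1e15
    print(f"{fmt:15s} ~{petabytes:5.1f} PB/year")
```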

You may ask why we need the Grid. Why can’t all the collaborating institutes just buy computers and send them to CERN, and let them set up a gigantic processing center? That is one approach. However, funding agencies don’t like this mode of operation. They would much rather keep the hardware in their respective countries, and build upon existing infrastructure at universities and laboratories, which includes people, hardware, buildings, etc. Another advantage of the Grid approach is the built-in redundancy; if one site goes down, jobs can be steered to others. Also, if a Grid site is appropriately configured, then if ATLAS is not using the computers, other scientists can use them (in an opportunistic manner); although, we keep the system pretty busy. In the US, each LHC experiment has its own grid sites, whereas in Europe, they tend to share them.



A stroll down memory lane!

Tuesday, October 6th, 2009

I recently read an article by Nick and Lizzie that described the “weird” architecture at Fermilab. It mentioned the 15-foot bubble chamber, a detector of yore that has now been turned into the “world’s strangest lawn ornament”. I did my thesis work on an experiment that used this detector; I am probably one of the last students to have worked on a large bubble chamber experiment.

Before the advent of high-speed electronics and powerful computers, bubble chambers were the detectors of choice. Many crucial discoveries were made with them, e.g., the discovery of the Omega-minus particle at BNL, which set the foundation for theories based on quarks, and the discovery of neutral currents at CERN, which confirmed the validity of the Glashow-Weinberg-Salam model of electroweak unification, not to mention the plethora of particles found in the late 50’s and 60’s at Berkeley.

“A bubble chamber is a vessel filled with a superheated transparent liquid (most often liquid hydrogen) used to detect electrically charged particles moving through it.” You basically shot a beam of particles at the chamber, which would interact with the protons/neutrons in the target liquid. Just as the beam arrived, you would expand the liquid with a piston (see Kenneth’s comment below); this would cause the liquid to become superheated, and as charged particles moved through the liquid, they would cause local boiling that would show up as bubbles. We took photographs of these “bubbly” tracks, which were scanned (by humans) on specially designed tables that used precise instruments to measure the trajectory of various particles, thus obtaining their momentum. The data was then fed into computers. From this point on, analysis was similar to that on modern detectors, e.g., ATLAS. There was an “army” of scanners who measured the events.

Here are some pictures taken in bubble chambers. In the left panel of Figure 1, you can see the actual photograph of the “famous” Omega-minus event, and in the right panel you can see an annotated version (you can see it much better if you print out the photograph and look at it edge-wise); you should also read the description in the caption. This experiment used a kaon beam.

Figure 1. The discovery of the Omega-minus




Mountains of data

Friday, October 2nd, 2009

In a previous post, Regina gave an overview of triggers. Let me add to that and give some numbers.

When the LHC is operating at design parameters, we will have collisions every 25 ns, i.e., at a 40 MHz rate (40 million/second). Obviously, we can’t collect data at that rate, so we pick out the interesting events, which occur infrequently. A trigger is designed to reject the uninteresting events and keep the interesting ones; your proverbial “needle in the haystack”, as you will see below. The ATLAS trigger system is designed to collect about 200 events per second, where the amount of data collected for each event is expected to be around 1 Mbyte (for comparison, this post corresponds to about 4-5 kilobytes).
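
In numbers, the trigger’s job looks like this (a sketch using the figures just quoted):

```python
# The trigger's rejection job in round numbers: 40 million bunch
# crossings per second in, about 200 recorded events per second out.
input_rate_hz = 40e6          # bunch-crossing rate
output_rate_hz = 200          # events written to disk per second
event_size_mb = 1.0           # ~1 Mbyte per event

rejection = input_rate_hz / output_rate_hz
bandwidth_mb_s = output_rate_hz * event_size_mb

print(f"keep 1 crossing in {rejection:,.0f}; ~{bandwidth_mb_s:.0f} MB/s to disk")
```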

Before I get to the numbers of events that we will collect, let me first explain a couple of concepts; cross-section of a particular process and luminosity. Cross-section is jargon; basically, it gives you an estimate of the probability of a certain kind of event happening. Luminosity is a measure of the “intensity” of the beam. The number of events that we collect of a given type is given by the product of Luminosity and Cross-section.

One common kind of interaction is when two protons “glance” off each other, without really breaking up; these are called “Elastic Collisions”. Then you have protons colliding and breaking up, and producing “garden-variety” stuff, e.g., pions, kaons, protons, charm quarks, bottom quarks, etc.; these are labelled “Inelastic Collisions”. The sum of all these processes is the “total cross-section”, and is about 100 millibarns, i.e., 1/10th of a barn; the concept of a “barn” probably derives from the expression “something is as easy as hitting the side of a barn”! So, a cross-section of 100 millibarns implies a very, very large probability[1]; for 7 TeV collisions, this cross-section decreases by about 10-20%, i.e., not much.

In contrast, the cross-section for producing a Higgs boson (with mass = 150 GeV, i.e., about 150 times the mass of a proton) is 30 picobarns (30×10^-12 barns), i.e., approximately 3 billion times less than the “total cross-section” (at 7 TeV, the Higgs cross-section decreases by a factor of four). The cross-section for producing top quarks is about 800 picobarns (at 7 TeV, this is down by a factor of eight). So, you can see the need for a good trigger!

The design parameters imply a luminosity of 10^34, i.e., looking head-on at the beam there are 10^34 protons/square cm/second. So, taking the product of cross-section and luminosity, we estimate that we will get approximately 10^9 “junk” events/second and 0.3 Higgs events/second! Of course, there are other interesting events that we would like to collect, e.g., those containing top quarks will come at a rate of 8 Hz. We also record some of the “garden-variety” events, because they are very useful in understanding how the detector is working. So, this is what the trigger does: separate what we want from what we don’t want, and do it all in “real time”.
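
Here is the same product of luminosity and cross-section written out as a sketch (using 1 barn = 10^-24 cm^2, from the footnote below):

```python
# Event rates = luminosity x cross-section, reproducing the numbers above.
barn = 1e-24                      # cm^2
luminosity = 1e34                 # cm^-2 s^-1 (design value)

sigma_total = 100e-3 * barn       # 100 millibarns: "junk" events
sigma_higgs = 30e-12 * barn       # 30 picobarns: 150 GeV Higgs
sigma_top = 800e-12 * barn        # 800 picobarns: top quarks

rate_junk = luminosity * sigma_total     # events per second
rate_higgs = luminosity * sigma_higgs
rate_top = luminosity * sigma_top
print(f"junk: {rate_junk:.0e}/s, Higgs: {rate_higgs:.1f}/s, top: {rate_top:.0f}/s")
```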

As mentioned above, we plan to write to disk approximately 200 events per second. If we run the accelerator continuously for a year, we will collect 6×10^15 bytes of data, i.e., 6 petabytes; this would fill about 38,000 iPods (ones with 160 GB of storage)! Each event is then passed through the reconstruction software, which adds to its size. We have come up with ways to handle all this data; I can talk about that in a later post.
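
The petabyte (and iPod) arithmetic, spelled out; I am assuming a “year” of about 3×10^7 seconds of running, which reproduces the figures above:

```python
# Raw data written per year: 200 events/s x ~1 MB/event.
events_per_sec = 200
event_size_bytes = 1e6
seconds_per_year = 3e7            # assumed length of a running year

bytes_per_year = events_per_sec * event_size_bytes * seconds_per_year
ipods = bytes_per_year / 160e9    # iPods with 160 GB of storage
print(f"{bytes_per_year/1e15:.0f} PB/year, about {ipods:,.0f} iPods")
```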

–Vivek Jain, Indiana University

p.s. For fun facts about ATLAS, check out the new ATLAS pop-up book that is coming out soon! If you are on Facebook, go here. You can also see a video of this book.

[1] In standard units, 1 barn = 10^-24 cm^2


What does 7 TeV mean?

Sunday, September 20th, 2009

Inspired by Regina’s excellent post on the CERN accelerator complex, I thought I’d give you some fun facts about the LHC (in “human units”).

1) What does 7 TeV beam energy mean?

Please look at Wikipedia for a discussion of units. Briefly, 1 Joule = 1 Kilogram × (1 meter/second)^2; a 1 Kilogram mass moving at 1 meter/second carries a kinetic energy of half a Joule (KE = ½mv^2). In particle physics units, 1 Joule is about 6×10^18 electron volts, i.e., 6×10^6 TeV.

When operating at design parameters, the LHC will have two beams of protons, where each beam consists of ~2800 individual bunches, and each bunch contains ~10^11 protons. Each proton will have an energy of 7 TeV, so the energy of each bunch of protons is ~7×10^11 TeV, i.e., 110,000 Joules (or 110 kilojoules).

A bullet fired from a rifle typically weighs 4 grams, and can have speeds of up to 1000 m/s when it leaves the barrel. This corresponds to an energy of about 2000 Joules, i.e., roughly 1/55 the energy of one bunch of protons. Anti-tank shells (used in WW II) had energies anywhere from 150-800 kilo Joules.
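
You can reproduce the bunch-energy and bullet numbers with a few lines (using the rough bullet mass and speed quoted above):

```python
# Energy of one proton bunch vs. a rifle bullet, in everyday units.
eV_to_J = 1.6e-19                 # one electron volt in Joules
protons_per_bunch = 1e11
proton_energy_TeV = 7.0

bunch_energy_J = protons_per_bunch * proton_energy_TeV * 1e12 * eV_to_J
bullet_energy_J = 0.5 * 0.004 * 1000**2   # KE of a 4 g bullet at 1000 m/s

print(f"one bunch: {bunch_energy_J/1e3:.0f} kJ, "
      f"~{bunch_energy_J/bullet_energy_J:.0f}x a rifle bullet")
```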

So, it is crucial that the beam does not hit something that it is not intended to hit! (BTW, I have not included the energy stored in the magnets, which is a whole different story, and is many times larger).

2) How cold is the LHC?

The magnets in the LHC are superconducting. For this, the magnet mass and the wires carrying the electrical current (which generates the magnetic field) have to be cooled to about 2 K, i.e., -271° Celsius, or -455° Fahrenheit; the refrigeration plant uses 50,000 tons of liquid Helium.

By studying the Cosmic Microwave Background, which is a form of electromagnetic radiation filling the universe, astronomers have deduced that the current average temperature of the known universe is about 2.7 K.

This makes the LHC the coolest place in the Universe! (Well, not quite. Some atomic physics experiments attain much lower temperatures – thanks to Tim for pointing this out).

3) How about those magnets?

To keep the proton beam circulating in the accelerator ring at 7 TeV, we need very strong magnetic fields. For this purpose, the LHC has 1232 dipole magnets, each of which is 14 m long and weighs about 35 tons; the required magnetic field is generated by passing about 11,700 Amps of current through 5 km of superconducting wire.

Then there are about 7066 magnets that focus the beam, and otherwise correct the path of the proton beam. For instance, if nothing were done, a proton would “fall” down due to gravity and hit the beampipe after travelling a mere 850 times around the ring (in one second, it goes around the ring about 11000 times).
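
As a sanity check of the falling-proton picture, here is the free-fall arithmetic, using the ~11,000 revolutions/second quoted above; the drop comes out to a few centimetres, comparable to the beam-pipe aperture:

```python
# How far does a proton "fall" under gravity during 850 turns of the ring?
g = 9.8                      # m/s^2
turns_per_sec = 11000.0
t = 850 / turns_per_sec      # time to complete 850 turns, in seconds
drop_m = 0.5 * g * t**2      # free-fall distance in that time

print(f"time for 850 turns: {t*1000:.0f} ms, drop: {drop_m*100:.1f} cm")
```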

To learn more, please take a look at this web page and links therein.

— Vivek Jain, Indiana University


Back in the USA

Sunday, September 13th, 2009

I am back after a week’s trip to CERN. It was a productive week. I gave a talk on the work I have been doing and received valuable feedback in the sense that some of my results were met with a bit of skepticism; it took me a day of work to produce more plots, but I think I was able to address these concerns. I also attended the SUSY group meeting to see what people are up to; see Seth’s note (and search for Supersymmetry) or Wikipedia (although I don’t know what they are talking about when they say that “there is indirect evidence for SUSY”). I had a lengthy chat with one of my colleagues on how to understand Missing Energy in events containing energetic top quarks, which, if not understood, can contaminate the signal due to SUSY, thus leading to a false positive; we came up with some ideas on how to study this particular issue. Once I am closer to finishing my current project, I will start to look into this. I had spent part of the summer working with an undergraduate student (through the Research Experience for Undergraduates Program) learning about Missing Energy, so I am prepared.

It was not all work, however! A cousin recently moved from Singapore to Geneva (to work at the World Wildlife Fund), and I spent some time with him and his family; I hadn’t seen them in years. After the weeklong meetings, I took the train from Geneva to Frankfurt, where I spent a couple of days with my sister before flying back to the US. Trains in Europe are not exactly cheap (~ $180 for a one-way ticket), but they are very comfortable and punctual; we left within a minute of the listed time, and after six hours, which included a change of trains in Basel, arrived at the scheduled time (the airline industry could learn a few things from them).

Notwithstanding some of the perks of going to Europe, traveling is a pain. You have to deal with lines at security and immigration, layovers and delays at airports, being crammed into steerage (aka Economy), missed connections[1], jet lag, lack of sleep, and a paucity of vegetarian food, to name a few things; I don’t think I ever eat as much pizza or French fries as when I am at CERN! From my house in Bloomington, IN to the hostel at CERN takes anywhere from 14 to 20 hours; it is probably longer for people coming from the West Coast. It would be really nice if we had Mr. Scotty to “beam us up”!

— Vivek Jain, Indiana University


[1] Once on the way back from CERN, my connecting flight from Newark, NJ was cancelled due to weather-related problems. It took me another 30 hours to get back to Indiana; this included an overnight stay in a motel in the middle of NJ where the view was a dug-up parking lot, waiting unsuccessfully to go on standby, finding a seat on a flight from Kennedy airport and making a mad dash by taxi ($130) only to be delayed by a traffic jam on the Belt Parkway and missing this flight; luckily, I found another flight that left a few hours later.


Data is coming!

Friday, September 4th, 2009

I am at CERN this week for our regular monthly meetings; this time the focus is on ensuring the readiness of the reconstruction software and the various physics analysis groups for analyzing data. According to the current schedule we should see some collisions toward the end of the year, and we have to be ready to start analyzing data on Day 1!

On the software end, we are planning to release an updated version of our reconstruction software within the next week; the aim is to use this version for the first round of data taking. As I wrote in an earlier post, the ATLAS reconstruction software has over a million lines of code, so it is a non-trivial task to make sure that all aspects of the software are working as expected. This validation process can easily take a month or two, and if we find problems we will apply patches; hopefully, all the major bugs have already been found and fixed!

The Physics analysis groups are doing a walk-through of their analysis software and techniques. This is especially true for the Physics topics that will be studied in the early days; they are undergoing last-minute reviews to make sure that all bases are covered. Since the first running is scheduled to be at a lower collision energy than we were originally expecting, some studies need to be re-examined to see if the analysis techniques are sensitive to the change in energy. We are testing our software and producing Physics results with cosmic rays (see Figure 2 in a previous post), but data from collisions is the real thing.

Not only are there formal meetings, but you also see a large number of small groups of physicists holding impromptu meetings, in the coffee area or huddled around their laptops. I have been coming to CERN for about three years now, and I notice a definite buzz in the air; people appear more rushed; there is a certain sense of urgency.

In my career, I’ve been lucky enough to have worked on experiments (where I played an instrumental role on the hardware or the software end of things) when they were starting to take data. It is a very exciting time but also very chaotic and stressful; the experts are over-worked and sleep-deprived – almost like new parents!

— Vivek Jain, Indiana University

p.s. I forgot to mention that people have also been meeting to discuss how to upgrade the detectors for the time when the LHC beams become very intense (a few years down the road).


Another brick in the wall!

Wednesday, August 19th, 2009

I thought I’d give you a sense of what it takes to put together a detector like ATLAS, e.g., how much time, how many people, etc. For an overview of the ATLAS detector, please look at the ATLAS webpage and Monica’s post. Since ATLAS is huge, I will focus on just one sub-system, the Barrel Transition Radiation Tracker (TRT), which was built in the US. Its main purpose is to provide hits so that we can map the trajectory of charged particles and improve the measurement of their momentum (see Seth’s post on tracking). It can also discriminate between electrons and pions.

Figure 1: End view of the barrel TRT


You can see the barrel TRT in Fig. 1 (this is an end view where you can see the electronics and cables); more information is at the ATLAS website. This detector is divided into 96 modules, extends from about 50 cm to 108 cm in radius, and contains about 52,000 individual wires (about 2 m long), each of which is strung inside a specially built plastic straw. As the name suggests, the barrel TRT is in the central part of ATLAS. Two other parts of the inner detector, the Pixel and the Silicon tracker, reside inside the barrel TRT[1] (that unit was being inserted into the assembly at the time this picture was taken – you can see it at the other end of the barrel).

To set the scale, the barrel TRT occupies about half the volume of the inner detector in the barrel, which ends at a radius of about 1 m. The calorimeters, solenoid magnet and cryostat come next and go out to about 5 m in radius, and the muon system goes out to about 10 m in radius. The barrel TRT probably represents a few percent of the total cost of building the ATLAS detector, and is arguably the most sophisticated of a class of detectors called “drift chambers”; one of its selling points was that it was a low-cost way of tracking charged particles. It also has relatively few electronic channels to read out (each wire is read out at both ends); in comparison, the Pixel and the Silicon tracker detectors have about 80 million electronic channels to read out.

I spoke with my colleague at Indiana University, Harold Ogren, who was one of the lead physicists on this project, as well as being the manager of the construction effort in the US. Harold and one of his colleagues originally built a similar straw tube device for an experiment that ran at the Stanford Linear Accelerator in the 1980’s. When the Superconducting Super Collider was proposed in the US, a straw tube tracker design was accepted for one of the two main experiments; when it was cancelled in 1993, he and his colleagues moved onto ATLAS, where they joined forces with the groups already working on a straw tube design.

They started building a prototype for ATLAS around 1994. Some of the groups who had been on the SSC joined this effort, and they had a working chamber by about 1999 that was then put in a test beam at CERN. Actual construction of the 96 modules began after the successful beam test, and it took them another 3 years to finish; each of the 52,000 wires had to be individually strung. The construction effort involved about 6-7 physicists and about 40 technicians, engineers, and graduate students from Indiana, Duke and Hampton Universities and the University of Pennsylvania. The electronics to read out the detector was also designed by them.

Since it was a modular design, they could ship individual modules to CERN as they were being completed, where they were put through extensive tests, e.g., each wire was scanned along every inch of its length with X-rays to check for uniformity of performance. A few wires were bad and had to be disconnected; since the bad wires are randomly distributed they don’t affect performance. These tests took another 2-3 years. All in all, the detector was ready sometime around 2006. The picture you saw above was when it was being readied to be installed in ATLAS.

Fig 2: Cosmic shower in the TRT


The barrel TRT has been running successfully and collecting cosmic ray data. Fig 2 most likely shows a cosmic shower hitting the TRT, of the kind described in Regina’s post. You are looking at an end view of the hit wires. Each blue dot represents a single wire being hit; we can locate the position of a track within a straw with an accuracy of 0.15 mm (a human hair has a thickness of about 0.1 mm). You can see curved tracks; they are curving because the magnetic field was on. You will also see that one sector, at about 8 o’clock, was (temporarily) turned off. Isolated hits are due to electronic noise; operating parameters are set so that the wires register the presence of nearby charged particles with very high efficiency, but this also leads to 1-2% of the wires “firing” randomly; our reconstruction algorithms can easily ignore them. The tracks that you see here are then matched to hits in the Pixel and Silicon Tracker detectors to get a complete trajectory.
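
The 1-2% noise figure translates into a concrete number of spurious hits per event:

```python
# Expected random "firing" hits per event in the barrel TRT,
# given the 1-2% noise rate quoted above.
n_wires = 52000
noise_low = n_wires * 0.01    # 1% of all wires
noise_high = n_wires * 0.02   # 2% of all wires
print(f"expect roughly {noise_low:.0f}-{noise_high:.0f} noise hits per event")
```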

Now for real data!

— Vivek Jain, Indiana University



[1] There is also the endcap TRT, which is on the two ends of the ATLAS detector, but that was built by other groups; it uses the same design as the one in the barrel.


Back to work!

Thursday, August 13th, 2009

Last week I attended the 3rd ATLAS Physics Workshop of the Americas, hosted by NYU, which is located in the Greenwich Village area of Manhattan. This workshop series is jointly organized by Canada, the U.S., and Latin America, and provides a good venue for collaborators based in the Americas to come together, find out about the latest happenings on ATLAS, and talk to ATLAS management in a relaxed setting. This time there were about 140 people in attendance. There was a special poster session that gave graduate students and post-docs a great opportunity to discuss their work. We also had a town-hall-style meeting with the ATLAS spokesperson and the Physics Coordinator taking questions from the audience. Of course, there were some fun things too, e.g., one evening there was a reception, co-hosted by the New York Academy of Sciences; this was held on the 40th floor of a building in lower Manhattan. (Although the views were great, the location was also a grim reminder of the world we live in. This building was right next to where the Twin Towers used to be).

I also spent a day taking in the sights of New York. One afternoon I took the No. 7 subway to “Little India” in Queens; most people on the subway were of Chinese, Korean, Indian (both from East and West Indies), and Hispanic descent. It struck me then that New York was such an apt place to hold an international workshop.

Now I am back at work in Indiana, trying to remember what I was doing before I left for New York!

— Vivek Jain, Indiana University


From 0-60 in 10 million seconds!

Wednesday, August 5th, 2009

OK, so I’ll try to give you a flavour of how the data that we collect gets turned into a published result. As the title indicates, it takes a while! When an experiment first turns on, this process is longer than when it has been running for a while. It also depends on the complexity of the analysis one is doing. To be familiar with some of the terms I mention below, you should take the electronic tour of the ATLAS experiment; slides 7 and 8 will give you an overview of how different particle species are detected and what the various sub-systems look like. For more details you should take the whole tour; it is meant for non-scientists.

For each event, the data recorded by ATLAS is basically a stream of bytes indicating whether a particular sensor was hit in the tracking detectors, or the amount of energy deposited in the calorimeter, or the location of a hit in the muon system, etc. Each event is then processed through the reconstruction software. For instance, the part of the software that deals with the tracking detectors will find hits that could be due to a charged particle like a pion or a muon or an electron; in a typical event there may be as many as 100 such particles, mostly pions. By looking at the curvature of the trajectory of a particle as it bends in the magnetic field, we determine its momentum (see Seth’s post on tracking). Similarly, the software dealing with the calorimeter will look at the energy deposits and try to identify clusters that could be due to a single electron or to a spray of particles (referred to as a “jet”), and so on. I believe the ATLAS reconstruction software runs to more than 1 million lines of code!
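
The curvature-to-momentum step can be sketched with the standard tracking rule of thumb, pT [GeV] ≈ 0.3 × B [tesla] × R [metres]; the 2 T solenoid field is the real ATLAS value, but the example radius is just an illustrative number, not a real measurement:

```python
# Transverse momentum of a charged track from its radius of curvature
# in a solenoid field, using the rule of thumb pT ~ 0.3 * B * R.
def pt_from_curvature(radius_m, b_field_t=2.0):
    """Return the transverse momentum in GeV for the given curvature radius."""
    return 0.3 * b_field_t * radius_m

# e.g. a track curving with a 5 m radius in the 2 T field:
print(f"pT ~ {pt_from_curvature(5.0):.1f} GeV")
```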

However, before the reconstruction software can do its magic, a lot of other things need to be done. All the sub-detectors have to be calibrated. What this means is that we need to know how to convert, say, the size of the electronic signal left behind in the calorimeter into energy units such as MeV (million electron volts – the mass of the electron is 0.5 MeV). This work is being done now using data taken with test beams, simulation, and cosmic rays. Similarly, we have to know the location of the individual elements of the tracking detectors as precisely as possible. For instance, by looking at the path of an individual track we can figure out precisely where detector elements are relative to one another; this step is known as alignment. Remember, the Pixel Detector can measure distances of the order of 1/10th the thickness of a human hair, so knowing its position is critical. This work is going on as we speak, but we will need real data for the final calibrations and alignment.

At this point, let’s say, I decide to use the data to prove/disprove the latest version of string theory or extra dimensions or what have you. What steps do I need to take? Well, first I have to understand what prediction this theory is making; is it saying that there will be multiple muons in an event or there will be only one very energetic jet in the event, etc? If the signature is unique, then my life is considerably simpler; essentially, I will write some software to go through each event and pick out those that match the prediction (you can think of this as finding the proverbial (metal) needle in a haystack). If I find some candidate events, the excitement level starts to increase!

But before I contact my travel agent to buy a ticket to Stockholm (for the Nobel Prize ceremony), I need to do a lot more work. I have to check whether some garden variety physics effect (which usually occurs much more frequently) will produce a similar signature. This can happen because our reconstruction software could mis-identify a pion as a muon, or make a wrong measurement of an electron’s energy, or if we produce enough of these garden-variety events a few of them may look like new physics. So, I have to think of all the standard processes that can mimic what I am searching for. One way to do this is to run my analysis software on simulated events; since we know what this garden variety process looks like, we generate tons of fake data and see if some events look like the new effect that I am looking for. Of course, physicists being skeptics, we also have to check if our simulation is correct! So, that takes more time and effort.
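
The “generate tons of fake data and see how often background mimics the signal” idea can be sketched as a toy experiment; every number here is invented purely for illustration:

```python
import math
import random

# Toy background-mimic check: simulate many background-only "experiments"
# and count how often background alone fluctuates up to the number of
# candidate events actually observed.
random.seed(42)
expected_background = 3.0   # assumed mean number of background events
observed = 8                # invented number of observed candidates
n_toys = 100_000

def poisson_sample(mean):
    """Draw one Poisson-distributed count (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

n_mimic = sum(poisson_sample(expected_background) >= observed
              for _ in range(n_toys))
p_value = n_mimic / n_toys
print(f"fraction of background-only toys reaching {observed} events: {p_value:.4f}")
```

A small p-value here would mean background rarely fakes the signal; in a real analysis this is exactly where the extra work of validating the simulation itself comes in.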

If the signal I am searching for does not have a unique signature, then I have to be much cleverer. I have to figure out how to tease out the signal from a large background of garden-variety physics (you can think of this as finding a wooden needle in a haystack). Also, since there is no fixed recipe for doing an analysis, I can sometimes run into obstacles, or my results may look “strange”; I then have to step back and think about what is going on.

After I get some preliminary results I have to convince my colleagues that they are valid, which involves giving regular progress reports within my analysis group; these are usually phone meetings, since everyone is on a different continent. I then write an internal note, which is reviewed by experts within the group. If the experts are happy, the note is bumped up to the Physics coordinator. If I pass this hurdle, the note is released to the entire collaboration for further review. All along this process, people ask pointed questions, ask me to do all sorts of checks, or tell me that I am completely crazy, or whatever. Given that every physicist thinks that he/she is smarter than the next, this process can be a little cantankerous at times.

Then the note is sent to a peer-reviewed journal for publication, where the external referee(s) can make you jump through hoops, essentially challenging the validity of your work; sometimes their objections are valid, sometimes not. I know because I have been on both sides of this process.

Depending on the complexity of my analysis, the time from start to finish can be anywhere from a few months to a year or more (causing a few more grey hairs or, in my case, a few less hairs!).

— Vivek Jain, Indiana University
