
Jim Rohlf | USLHC | USA


CERN July 4 Seminar

Wednesday, July 4th, 2012

Summary from CMS by Joe Incandela:

We have observed a new boson with a mass of 125.3 ± 0.6 GeV at 4.9 σ significance!

Summary (excerpt) from ATLAS by Fabiola Gianotti:

We observe an excess of events at m_H ~ 126.5 GeV with local significance of 5.0 sigma

Remarks from the CERN Director General Rolf Heuer:

“As a layman I would now say I think we have it. Do you agree?”

Global effort → Global success

Results today only possible due to extraordinary performance of accelerators, experiments, and Grid computing

Observation of a new particle consistent with a Higgs boson (but which one… ?)

Historic Milestone but only the beginning

Global implications for the future

Michel Spiro (President of CERN Council):

“If I may say so it is another giant leap for mankind.”

Chris Llewellyn Smith (former DG):

“I really am amazed that we can use these luminosities.”

Francois Englert:

“I am extraordinarily impressed by what you have done.”

Peter Higgs:

“it is an incredible thing that it has happened in my lifetime.”

Gerald Guralnik:

“It is wonderful to be at a physics event where there is applause like there is at a football game.”

Rolf Heuer:

“Everybody that was involved in the project can be proud of this day. Enjoy it.”

 

 


Preparation for YETS another physics run

Friday, February 3rd, 2012

As a young student, I was taught that mathematics is the language of physics. While largely true, one also cannot communicate in CMS at the CERN LHC without learning a plethora of acronyms. When we wrote the CMS Trigger and Data Acquisition System (TriDAS) Technical Design Report (TDR) in 2000, we included an appendix containing a dictionary of 203 acronyms, from ADC to ZCD, quite necessary to digest the document. In the years since, the list of acronyms has grown exponentially. We even have nested acronyms: LPC, for example, stands for LHC Physics Center. In a talk many years ago, one of my distinguished collaborators flashed a clever new creation and quipped, “I believe this is the first use of a triply-nested acronym in CMS.” I do not know whether we have since reached quads or quints. Somehow it would not surprise me.

One of the latest creations is YETS: Year End Technical Stop, referring to the period between the end of the heavy ion run on 7 December 2011 and the restart of LHC operations due to begin next week, with hardware commissioning leading ultimately to pp collisions in April. So what do physicists do during YETS? A lot, as it turns out!

One of the major activities is working out how to cope with the projected instantaneous luminosity of 7e33 (per cm**2 per s). This luminosity will likely come with a 50 nanosecond beam structure (the time between collisions), as was used in 2011. This means that the average number of pp interactions per triggered readout will be about 35: the one you tried to select with the trigger, plus many more piled on top of it. This affects trigger rates and thresholds, background conditions, and the algorithms used in the physics analysis. In addition, we shall likely run at 8 TeV total energy (compared to 7 last year). These new expected conditions are being simulated, a process requiring a huge amount of physicist manpower and computing resources. The results are carefully scrutinized in collaboration-wide meetings. That is the “glory” activity.
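That pileup number can be checked on the back of an envelope. This is a rough sketch; the inelastic cross section of roughly 70 mb, the bunch count of 1380 for a 50 ns fill, and the revolution frequency are my assumed inputs, not numbers from the post:

```python
# Back-of-the-envelope pileup estimate: mean pp interactions per crossing.
# Assumed inputs (not from the post): sigma_inel ~ 70 mb, ~1380 colliding
# bunch pairs for a 50 ns fill, LHC revolution frequency 11245 Hz.
lumi = 7e33               # projected instantaneous luminosity, cm^-2 s^-1
sigma_inel = 70e-27       # inelastic pp cross section, cm^2 (1 mb = 1e-27 cm^2)
n_bunches = 1380          # colliding bunch pairs
f_rev = 11245.0           # revolutions per second

interaction_rate = lumi * sigma_inel      # total pp interactions per second
crossing_rate = n_bunches * f_rev         # bunch crossings per second
mu = interaction_rate / crossing_rate     # mean interactions per crossing
print(round(mu))          # roughly 30-35, consistent with "about 35"
```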

Besides the glory work, there is also a huge amount of technical service work, both hardware and software. At CMS in Point 5 (P5) we have observed beam-induced pressure spikes (rise and fall) in the vacuum. The pumping required for recovery is using up the supply of non-evaporable getter (NEG) needed to achieve ultrahigh vacuum (UHV). The UHV in turn is needed to ensure that the beams do not have to be aborted, which nearly happened last year. A huge effort was launched to radiograph the region in question to see if the same problem of drooping radio frequency (RF) fingers is present as has been observed in other sectors. An electrical discharge from the RF fingers can possibly cause the UHV spikes. Also at P5, work will be done on the zero degree calorimeter (ZDC), the Centauro And Strange Object Research (CASTOR) detector (not to be confused with the CERN Advanced Storage Manager), and the cathode strip chamber (CSC), resistive plate chamber (RPC), and drift tube (DT) muon detectors, which are accessible without opening the yoke of CMS. In addition, there is maintenance of the water cooling and rewiring of the magnet circuit breaker.

Each of the CMS subsystems has work to do, as was evident on a recent trip into the P5 pit. The detailed activities of the pixel (PX), silicon tracker (TK), electromagnetic calorimeter (ECAL), and muon (MU) subdetectors are beyond the scope of this blog. I can give you some idea of what is going on with the hadron calorimeter (HCAL), where the details are fresh in my mind.

The HCAL activities are quite intense. Detector channel-by-channel gains, the numbers needed to convert electrical signals into absolute energy units, can vary with time for a variety of reasons (e.g. radiation damage) and need periodic updating. This information goes into the look-up tables (LUTs) used by the electronics to generate the trigger primitives (TPGs), which are in turn used by the level-1 hardware trigger to select events. If the numbers in the LUTs are slightly off, then the energy threshold we think we are selecting is off target, which is very bad because trigger rates vary exponentially with energy.
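To make the LUT idea concrete, here is a toy sketch. The gain, pedestal, and bit width are invented for illustration; the real CMS firmware is considerably more involved:

```python
# Toy look-up table: map every possible ADC code straight to a trigger
# energy, with the channel gain and pedestal baked in. (Invented numbers,
# for illustration only -- not the real CMS HCAL scheme.)
def build_lut(gain_gev_per_count, pedestal_counts, adc_bits=7):
    """Precompute the energy for each of the 2**adc_bits possible ADC codes."""
    return [max(0.0, (code - pedestal_counts) * gain_gev_per_count)
            for code in range(2 ** adc_bits)]

lut = build_lut(gain_gev_per_count=0.5, pedestal_counts=3)
# At trigger time the hardware just indexes the table -- no arithmetic:
energy = lut[20]   # (20 - 3) counts * 0.5 GeV/count = 8.5 GeV
```

If the gain used to build the table drifts from the true gain by even a few percent, every threshold applied downstream is silently shifted, and because the rate falls steeply with energy, a small shift in threshold produces a large change in trigger rate.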

The HCAL uses 32 optical S-LINKs (where the S stands for simple, although I don’t remember anything simple about getting it to work) to send the data to DAQ computers. My group at Boston designed and built the front end driver (FED) electronics that collects and transmits the data on these links. The data transmission involves a complex buffering and feedback system so that the data flow can be throttled without crashing in case something goes wrong. The data flow reached its design value of 2 kBytes per link per event at the end of 2011, so we are going to reduce the payload by eliminating some redundant data bits that were useful for commissioning the detector but are no longer needed. This will allow us to comfortably handle the expected increase in event size due to increased pileup. Also, 4 of our boards developed dynamic random access memory (DRAM) problems after a sudden power failure, which took up two days of my time at CERN to inventory spares, isolate the affected DRAMs, and arrange for repairs.

The HCAL computers at P5 are running 32-bit Scientific Linux CERN (SLC4, another nested acronym). While we have enjoyed the stability of this release over a number of years, it will no longer be supported by CERN after February 2012. These computers are being upgraded (as I write this!) to 64-bit SLC5.

The HF calorimeters will have their photomultiplier tubes (PMTs) replaced in the first long shutdown (LS1). We would like to do measurements with a few new PMTs in order to study performance stability and aging in the colliding beam environment. This activity requires building and testing new high-voltage (HV) distribution printed circuit boards (PCBs). The HV PCBs require testing and installation in the current HF read out boxes (ROBOXs) while there is still access to the detector.

Our group at Boston is also involved with designing electronics needed for the HCAL upgrade, the first part of which will take place in LS1. The new electronics is based on micro telecommunications computing architecture (uTCA). In Boston we have built a uTCA advanced mezzanine card for the unique slot number 13 (AMC13). This card will distribute the LHC clock signals needed for trigger timing and control (TTC) as well as serve as the FED. We plan to test these cards during the 2012 run. To prepare for these tests we have installed an AMC13 card in the central DAQ (cDAQ) lab, which can transmit data on optical fibers to a multi optical link (MOL) card, which exists in the form of a personal computer interface (PCI) card that can be readily attached to a computer. In addition, to be able to perform the readout tests with the new electronics without interrupting the physics data flow, we have installed optical splitters on the HCAL front-end digital signals for a portion of the detector (parts of the HCAL barrel (HB), HCAL end cap (HE), and HCAL forward (HF)), so that one path can be used for physics data and the other path for uTCA tests.

I can assure you that the activities in parts of CMS are (almost) as intense as during physics runs. There has been a lot to do!

I once met a secretary in California, the land of innovative thinkers, who had been exposed to physics through typing exams and could not understand why students thought physics was so hard. She thought each letter always stood for the same thing, and once you learned them you were pretty much set. I am not sure she believed me when I told her there weren’t enough letters to go around. Same thing with acronyms. A quick search for CMS will turn up: Center for Medicare & Medicaid Services (a nested acronym), Content Management System, Chicago Manual of Style, Chronic Mountain Sickness, Central Middle School, City Montessori School, Charlotte Motor Speedway, Comparative Media Studies, Central Management Services, Convention on Migratory Species, Correctional Medical Services, College Music Society, Colorado Medical Society, Cytoplasmic Male Sterility, Certified Master Safecracker, Cryptographic Message Syntax, Code Morphing Software, Council for the Mathematical Sciences, Court of Master Sommeliers, and my own favorite, a neighborhood landscaper, Chris Mark & Sons, of which I am the proud owner of one of their shirts.

And for those against acronym abuse, you can buy an AAAAA T-shirt (maybe I will too).

Thanks to Kathryn Grim for suggesting a blog about what goes on at an LHC experiment during shutdown.

 


The CERN Higgs seminar

Tuesday, December 13th, 2011

from Fabiola’s conclusion

“We observe an excess of events around m_H ~ 126 GeV: local significance of 3.6 sigma, with contributions from H → gamma gamma (2.8 sigma), H → ZZ* → 4l (2.1 sigma), and H → WW* → lvlv (1.4 sigma); SM Higgs expectation: 2.4 sigma local → observed excess compatible with signal strength; the global significance (taking into account the Look-Elsewhere Effect) is ~2.3 sigma”

from Guido’s conclusion

“…we observe in our data a modest excess of events between 115 and 127 GeV that appears, quite consistently, in five independent channels. The excess is most compatible with a SM Higgs hypothesis in the vicinity of 124 GeV and below, but the statistical significance (2.6 sigma local and 1.9 sigma global after correcting for the LEE in the low mass region) is not large enough to say anything conclusive.”

final remarks from the DG

“Keep in mind these are preliminary results. Keep in mind these are small numbers. Keep in mind we are running next year… We have not found it yet. We have not excluded it yet.”

 

 


The Ponderator: a one-trick pony

Saturday, November 5th, 2011

The annual US LHC Users Organization (USLUO) meeting took place at Argonne National Lab yesterday and today. It is a time when colleagues from competing LHC experiments get together and discuss a variety of common issues. It is also a time to meet informally with representatives from the funding agencies. John Huth told me he was going to blog about the meeting, so I will leave all the wonderful physics that was discussed to him. It is also a time to meet new people. I met Eva Holzer early on Friday morning when, to her surprise, her car was iced over and I lent her my scraper, “breaking the ice” so to speak. A short time later, she delivered a delightfully informative talk on the status and future of the LHC.

Most of us work at the LHC because of the excitement of being on the energy frontier. This has also been the case for many physicists working at the famous accelerators of yesteryear. Let us examine a bit of history. One of the great accelerator builders of the last century was Milton Stanley Livingston, a student of E. O. Lawrence, who won the Nobel Prize in 1939 for the development of the cyclotron and the physics carried out with this marvelous device.

Livingston and Lawrence with the 27 inch cyclotron which accelerated protons to several MeV, about a million times less kinetic energy than the LHC (see EOL’s Nobel lecture).

Throughout his distinguished career, Livingston was at the center of just about every major development in accelerator technology. As a student he built the first working cyclotron at Berkeley under the direction of Lawrence, who was so impressed with his work that he rushed into the lab one day and informed Livingston that he had to stop all work and immediately write his Ph.D. thesis in 2 weeks so he could meet a university deadline, be hired, and begin work at once on a bigger cyclotron! His most lasting contribution was to strong, or “alternating gradient,” focusing, which greatly reduced the size and cost of magnets and is the operating principle of today’s synchrotrons, including the LHC.

Livingston was the first to point out that the maximum achievable energy had been growing exponentially over time since the beginning of particle accelerators. Here is the famous original “Livingston plot”, which has been copied and altered many times since:

 

The original plot from M. Stanley Livingston, “High Energy Accelerators” (1954). Livingston famously noted that accelerator energy was doubling every six years (dashed line). Life was good!

Now back to the LHC and Eva’s talk. Of prime concern to the experimenters is the plan for this coming year, in which there will be 20 weeks of proton-proton physics. The key parameters yet to be fixed are the bunch spacing (25 or 50 ns, see my previous post on this), the beam energy (3.5 or 4 TeV), and the value of β*. This last parameter is a measure of how focused (squeezed) the proton beams are when they collide. It may be thought of as the distance at which the beams are twice as spread out as at the collision point. Smaller is better in this case, corresponding to a better “squeeze”. In 2011 we ran at β* = 1.5 m, which may be compared to the LHC design value of 0.5 m. The most likely scenario for 2012 would seem to be 4 TeV per beam, 50 ns bunch spacing, and β* = 0.7 m.

In 2013-14 there will be a long shutdown to prepare the LHC for operation at 7 TeV per beam. This will be followed by 3 years of physics from 2015-17. Keep in mind, however, that this is just a plan, and the plan can change if something unforeseen happens. The year 2018 will be another shutdown year to prepare the LHC for its “ultimate parameters”: 2808 bunches (25 ns), β* = 0.5 m, 7 TeV per beam, and a whopping 2.3 x 10^34 /cm^2/s (or 23/nb/s). See my cross section/luminosity post for what this means and why we care.

Now comes the most interesting part of Eva’s talk. What can we do next? Historically, this question has been asked over and over again, and the answers are given in the Livingston plot. Needless to say, such scenarios are VERY PRELIMINARY, especially extrapolating out more than 20 years. Anyway, three scenarios were presented: 1) the high luminosity LHC (HL-LHC) over the period 2023-2032, an approved upgrade project, big “bang for the buck” as we like to say, taking the luminosity up to 70/nb/s; 2) a possible electron-proton collider (LHeC), which I dare say is not worth doing; and 3) a higher energy LHC (HE-LHC), which is very exciting because it advances the energy frontier! These last two scenarios are at the stage of feasibility studies.

The fundamental energy limit of the LHC comes from the size of its ring. Here large is good. Let’s examine the fundamental formula for the magnetic force that makes a proton travel in a circle

p = erB

where p is the momentum, e is the electric charge, r is the bending radius, and B is the magnetic field. Multiplying each side by the speed of light c and dividing by B, we can write

pc/B = erc = e (4200 m) (3e8 m/s) = 1.2 TeV/T

where I have used the units conversion that a m^2 tesla per s equals a volt. This is a very nice and practical way to specify the radius of the LHC tunnel: it is 1.2 TeV per tesla. That means a tesla of magnetic field is needed for each 1.2 TeV of proton energy to keep it in orbit. But wait. In practice, one cannot fill up the entire LHC tunnel with bending magnets. Space is needed for the experiments, focussing magnets, radio frequency acceleration, etc. The LHC is actually designed to give 7 TeV with 8.3 tesla, or 0.84 TeV/T. The feasibility study for the HE-LHC is looking at 20 tesla magnets. This, according to our formula, would give (20 T) (0.84 TeV/T) = 17 TeV. That would be a nice energy increase, a bit more than a factor of 2 over the original LHC design.
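The arithmetic above fits in a few lines, a sketch using the round numbers from the text (the exact product comes out nearer 1.26 TeV/T before rounding):

```python
# Check the bending-magnet arithmetic: pc = e*B*r*c, so each tesla of
# field holds roughly 1.2 TeV of proton energy in a 4200 m radius ring.
c = 3e8                               # speed of light, m/s
r = 4200.0                            # effective LHC radius from the text, m
tev_per_tesla_ideal = r * c / 1e12    # B*r*c in volts -> TeV per tesla
# The real ring is not all dipoles, so the achieved figure is lower:
tev_per_tesla_actual = 7.0 / 8.3      # 7 TeV at 8.3 T
he_lhc = 20.0 * tev_per_tesla_actual  # with 20 T magnets
print(round(tev_per_tesla_ideal, 2), round(he_lhc, 1))   # 1.26 16.9
```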
The HE-LHC feasibility study as presented by Eva Holzer. Besides stronger bending magnets, the HE-LHC upgrade would need development of high-gradient quadrupole magnets to focus the beams, and an upgrade of the SPS from 450 to 1300 GeV.
Let’s return to the Livingston plot to see where we are. Livingston retired just about the time the era of colliding proton beams was being ushered in, beginning with the CERN intersecting storage rings (ISR), followed by the CERN proton-antiproton collider, and finally the Fermilab Tevatron. Colliding beams seemed to offer a unique technique for exponential growth. This is a kinematic trick. The figure of merit for the energy frontier is center-of-mass energy, which for a beam hitting a stationary target grows only as the square root of the beam energy. But this is a ONE-TRICK PONY. There is no corresponding technique to make the next step. In 1990 I made my own version of the Livingston plot, which I published in my modern physics book. I reproduce it here with the LHC added:
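The kinematic trick is easy to quantify; here is a sketch using standard relativistic kinematics for two equal beams head-on versus one beam on a proton at rest:

```python
import math

# The colliding-beam "trick": center-of-mass energy for two 7 TeV proton
# beams head-on versus a single 7 TeV beam on a stationary proton.
M_P = 0.938  # proton mass in GeV

def sqrt_s_fixed_target(e_beam_gev):
    # s = 2*m^2 + 2*E*m for a beam on a proton at rest (energies in GeV)
    return math.sqrt(2 * M_P ** 2 + 2 * e_beam_gev * M_P)

def sqrt_s_collider(e_beam_gev):
    return 2 * e_beam_gev   # two equal beams, head-on

print(sqrt_s_collider(7000.0))             # 14000 GeV = 14 TeV
print(round(sqrt_s_fixed_target(7000.0)))  # only ~115 GeV
```

Doubling the beam energy of a fixed-target machine gains you only a factor of √2 in center-of-mass energy; colliding the beams gains you the whole factor at once, but you can only play that card one time.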
The blue point is where we are today with 3.5 TeV beams, the green point is the 7 TeV beams that are coming “soon”, and the red point is the future projection. You can see that after 60 years of exponential growth we are now falling short. Painfully short, actually. Right now we are either 20 years behind or 100 TeV short of the prosperity that was enjoyed from 1930 to 1990, take your pick. In the coming decades it is going to get worse. Why is the Livingston line important? It is important and interesting because it is a measure of the pace of experimental particle physics on the energy frontier over the last century. Life is now tough.

I just missed getting to know Livingston: he retired from his last project, on the Fermilab management team during the construction of the main synchrotron, just as I was beginning my graduate research at Fermilab. I did encounter him, however, in the form of a bronze plaque next to the elevator at 42 Oxford Street, the site of the Cambridge Electron Accelerator (CEA). I always thought this plaque was really cool, a steely-eyed Livingston checking you out as you entered the lab. Amusingly, Livingston referred to the CEA as a “ponderator” rather than an accelerator because it was imparting energy rather than speed to relativistic particles. I wonder what he would think of our current ponderator, our one-trick pony?


The 25 ns pumpkin teeth

Monday, October 24th, 2011

If you looked at LHC Page 1 in the last few days, you may have noticed something interesting. The machine people have been doing machine development (MD) for 25 nanosecond (ns) operation. In the lower left it says “25 ns MD Injecting 72 b trains”. This is an important development for the LHC and an important step toward operation at design. The “b” stands for “bunch”, as explained below. The word “trains” refers to several such groups of 72 bunches hooked together.

Let us examine what 25 ns with 72-bunch trains means. Here is the layout of the CERN accelerator complex. The three main components that I am going to discuss here are the synchrotrons: the “proton synchrotron” (PS), the “super proton synchrotron” (SPS), and the LHC. A synchrotron is an accelerator with a circular ring of magnets that can accelerate the particles (in this case protons) while increasing the magnetic field in “sync” with the acceleration, so that the bending radius stays fixed, i.e., the protons stay in the ring.

 

The relative sizes of the 3 synchrotrons are key to the injection scheme. The circumference of the SPS is 11 times that of the PS and the LHC is 27/7 that of the SPS. Now think of putting N proton bunches (with N an integer) into the PS with equal spacing. Then 11N bunches would fit into the SPS and (11N)(27/7) would fit into the LHC. If we want  things to come out even, then N must be divisible by 7. The value of N has been chosen to be 84 and the machine people refer to this as “harmonic 84”. By choosing harmonic 84, we have divided the LHC orbit into

(84) (11) (27/7) = 3564

parts. Since the orbit time for protons in the LHC is 88924 ns (26659 meters divided by the speed of the protons, very nearly the speed of light), and we have divided this orbit into 3564 pieces, each “bucket”, as it is referred to, corresponds to

(88924 ns) / 3564 = 24.95 ns .

Experimenters often speak of this number as 25 ns (after all, what’s 50 picoseconds among friends?), but its precise value is important for the operation of the electronics. So 25 ns is the time between collisions when the LHC is running at design. (Note: of late the LHC has been running at 50 ns.) The inverse of this number is the collision frequency:

1 / (24.95 ns) = 40.079 MHz .

This is the clock frequency for LHC electronics. Note that MHz means million times per second, so the proton beams hit each other 40 million times per second. The protons pass through each other in a couple of ns, and then nothing happens until 25 ns later when the next bunches come along. Thus collisions occur every 25 ns. But wait! The beam structure is much, much richer than that!
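The bucket arithmetic above fits in a few lines:

```python
# The harmonic-84 bucket arithmetic: 84 slots in the PS, the SPS 11 times
# larger, the LHC 27/7 times the SPS.
from fractions import Fraction

n_buckets = 84 * 11 * Fraction(27, 7)    # comes out to exactly 3564
bucket_ns = 88924 / int(n_buckets)       # orbit time / buckets ~ 24.95 ns
clock_mhz = 1e3 / bucket_ns              # ~ 40.079 MHz
print(int(n_buckets), round(bucket_ns, 2), round(clock_mhz, 3))
```

Using an exact fraction for the 27/7 ratio makes the point that the scheme only comes out even because 84 is divisible by 7.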

When protons are injected into each of the synchrotrons, there is an injection kicker rise time: the time needed to turn on the magnets that transport the protons between synchrotron rings. So of the 84 time slots in the PS, only the first 72 are filled and the last 12 are purposely left empty. Then fills of protons from the PS are injected into the SPS leaving an 8-bucket gap between them; this gap is needed to “kick” the protons into the SPS. We do this 3 times, after which we need to leave a larger gap of 38 buckets to kick the protons into the LHC. So far we have injected 3 groups of 72 bunches of protons from the PS into the SPS and into the LHC with the pattern:

72b  8e  72b  8e  72b 38e

where b stands for bunches of protons and e stands for buckets where there are no protons. Now let’s keep going with this to fill up the 3564 time slots. Notice that (11) (27/7) is about 42. We could fit 42 of these groups of protons into the LHC. But we want to leave a gap at the end, as explained later, so we will only put in 39, in the form of 3 groups of 10 plus 9. To achieve this, first we do the same thing twice, and then on the 3rd time we add a 4th train of 72 bunches from the PS plus one extra empty bucket at the end. So now we have

72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e .

This is 10 trains. Now we repeat this 3 times. So far we have 30 trains

72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e .

For the last step we add 3 more shots from the SPS, each containing 3 PS shots, and at the end leave 119 empty buckets, adding the last 9 trains (making 39 trains in all)

72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 8e  72b 39e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 38e 72b  8e  72b  8e  72b 119e .

Whew! We have filled up the 3564 time slots; 2808 (39 times 72) of them have protons and the rest are empty. But VERY important for the experimenters, the empty buckets occur in a known pattern. This pattern is used to synchronize the electronics. We can only get collisions in the 2808 buckets that contain protons and we know which buckets have the protons. The LHC bunch structure is like a “pumpkin tooth” pattern. For each part of the detector we line up these pumpkin teeth to adjust our clocks to the LHC machine.
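This bookkeeping is easy to get wrong, so here is a short sketch that builds the pattern described above and checks the totals:

```python
# Build the LHC filling scheme described above and verify the totals:
# 3564 buckets, 2808 of them filled, in 39 trains of 72 bunches.
def trains(n, end_gap):
    """n trains of 72 filled buckets, 8 empty between trains, end_gap after."""
    out = []
    for i in range(n):
        out.append((72, True))           # a PS train of 72 bunches
        if i < n - 1:
            out.append((8, False))       # SPS injection-kicker gap
    out.append((end_gap, False))         # LHC kicker gap (or abort gap)
    return out

ten_trains = trains(3, 38) + trains(3, 38) + trains(4, 39)
scheme = 3 * ten_trains + trains(3, 38) + trains(3, 38) + trains(3, 119)

total = sum(n for n, _ in scheme)
filled = sum(n for n, is_filled in scheme if is_filled)
n_trains = sum(1 for n, is_filled in scheme if is_filled)
print(total, filled, n_trains)   # 3564 2808 39
```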

Harmonic 84 injection scheme figure thanks to P. Collier.

The long string of 119 empty buckets adds up to 3 microseconds. This is referred to as the “abort gap” and it corresponds to the time needed to turn on the kicker magnets to dump the proton beams. This also turns out to be a blessing for the experiments because this time is used to do many things such as reset certain electronics and take calibration events.

 

 


CMS TV and the funny fish… or the Higgs is a tick

Saturday, October 15th, 2011

Well, it finally happened. My 13-year-old daughter has been pushing me for the last year to update my antiquated Nokia candy-bar cell phone. I think she was embarrassed to be seen with me and my phone. She, of course, has the latest fashionable iPhone with a hot pink skin. I resisted because I did not want the novelty of a smartphone to become a semi-infinite time sink. Then, in 20/20 hindsight perfectly explained by conservation of momentum, my Nokia and I fell into the ocean as I was hoisting an 8D battery from my dinghy into my lobster boat. If you don’t know, an 8D is the size used in big diesel trucks and weighs in the vicinity of 150 pounds (70 kg). About the same time, my digital camera, which had served me well for many years, started acting up by not recognizing the pictures stored on the smart card. To add insult to injury, my aging MacBook took a dump by deciding not to respond to any keys, including the power-on button. It seemed like a good time to get a smartphone while waiting for my replacement designed-by-Apple computer to arrive from China.

This turned out to be a good move. The UPS driver claimed he could not find Room 255 at 590 Commonwealth Avenue in Boston (go figure) and my new MacBook Air ended up in a warehouse in an obscure part of town, a not very nice part, at that. I was headed to CMS Physics week in Brussels without my computer which is rarely out of my reach. But thanks to a trip to Target with my daughter, I had my new iPhone!

Although the iPhone may be a fabulous toy in the right hands, it actually can be very useful. First of all, email is actually easier on the iPhone. Like most people, I use multiple accounts: Boston University for the most important items, but also a special gmail account for CMS hypernews where detector and physics analysis information is posted and can be accessed by subscription, another gmail account for personal use, a CERN account, and a Fermilab account. I can see all of these accounts at a glance on the iPhone and toggle between them, much easier than looking at multiple pages on my notebook. For sending longer messages or even typing this blog, I use the Apple wireless keyboard which works really well.

The first thing I did was access the CMS web pages, which have been recently upgraded. Thanks to Gilles Raymond and Lucas Taylor, we have a really nice set of public pages on CMS called “CMS TV”. Additional thanks to Tom McCauley who has made CMS TV especially useful for mobile devices.

Here is the LHC Page 1 broadcast on CMS TV. It shows fill number 2216 (the number of times to date that protons have been injected into the LHC) and that we have “Stable Beams”. The red and blue traces show the intensities of beam-1 and beam-2 (the number of protons in each ring) to be 1.8e+14. The graph on the right shows the instantaneous luminosity in units of 1 per square cm per second, about 3000e+30, which we may write in the more compact form of 3/nb/s. More on what this number means and why we follow it so closely later in this blog. The beam status flags at the bottom show at a glance that everything is stable, with only the beam setup boxes red.

Here is CMS Page 1. It shows we are taking data. Yeah! The plot shows that the data taking started a bit before 08:00 and that data has been accumulated nearly linearly with time. The units are 1/pb. Note that, using the instantaneous luminosity from LHC Page 1 (3/nb/s) and the conversion 1000/nb = 1/pb, we are accumulating data at a rate of 1/pb roughly every 330 s. This means something rare with a cross section of 1 pb will be produced every five minutes or so on the average, even quicker than an Andy Warhol 15 minutes of fame. The red line tells us how many proton collisions were delivered and the blue line tells us what fraction of those collisions were actually processed by CMS electronics. Note that these two quantities can never be exactly equal to each other, even if nothing breaks, because it takes time to read the data out even with our super-fast electronics; but as you can see, the blue and red lines are very close to each other. The flags at the bottom show at a glance that all components of CMS are being read out (DAQ) and that the detector control systems (DCS) are all on. The DCS protects the detector from all kinds of bad things that might go wrong underground. The numbers at the bottom right are very important to us, and that is why they are displayed on Page 1. First is the fill number, followed by the CMS run number. This run number is how we will reference these data in the future. The data are labeled by “lumi sections” within the run so that if something goes wrong, such as a voltage trip, we can go back and find the affected events quickly. “Physics Bit Set ON” means the data are tagged by the shift leader in the control room as certified for physics analysis. The magnet is seen to be in its normal on state with a field of 3.8 tesla. The L1 rate of 77929 tells us how many events per second are being sent from the CMS hardware into the online computers for refined selection. More than 97 million events have been sent so far in this run. The last two numbers tell us the proton collision rate in luminosity units, 2.4e+33 per square cm per second, and the total number of collisions that have occurred, 35.5/pb = 3.55e+37 per square cm. So that’s a lot of useful information attainable at a glance, updated continuously in real time.
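The rate for any process follows from rate = luminosity × cross section; a quick sketch with a 1 pb process at the LHC Page 1 luminosity:

```python
# How often a 1 pb process occurs at an instantaneous luminosity of 3 /nb/s.
lumi = 3e33                    # cm^-2 s^-1, i.e. 3 per nb per second
sigma = 1e-36                  # a 1 pb cross section, in cm^2
rate = lumi * sigma            # events per second
seconds_per_event = 1.0 / rate
print(round(seconds_per_event))   # ~333 s, about five and a half minutes
```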

 

Here is the CMS data acquisition (DAQ) status. The proton beams are colliding and the detector is in stable operation for physics data taking. The column to the left labeled “Data to Surface” shows the status of each detector component. The middle table labeled “SM streams” shows the status of the Storage Manager (SM) which is sorting the events into various categories for offline storage. The key rates from the SM are graphed below for quick visual inspection. The “Data Flow” column on the right contains a wealth of information about the health of CMS and the most important numbers are listed across the top of the page. The main data flow is through stream “A” which is seen to have a rate of 343 Hz (343 events per second are being sent to the offline computers). Other key numbers are the event size (478 kB), the dead time (2.3%) or percentage of time that the detector is insensitive to collisions, and the percentage load on the online computers (58.5%) which are selecting the events to be recorded. There is even a live event display on the upper left. This page is very busy but it can be zoomed on the iPhone to see all the numbers clearly.

Finally, we come to the event display which is fun because it gives you a quick visual of how the collisions are seen by CMS. The yellow lines are the trajectories of the charged particles that are created in the collision (only the electrically charged particles leave an ionization trail). All particles, however (except neutrinos and muons), are absorbed in the CMS calorimetry. There are two types of interactions: mostly electromagnetic in the crystals (ECAL), represented by the red boxes, and mostly nuclear in the brass (HCAL), represented by the blue boxes. It is the combination of these three sets of measurements, plus any muons (identified by their penetration of the massive CMS iron), that gives us the total picture of the collision. As expected, when the protons collide lots of particles fly forward along the original directions of the protons. Interesting things tend to pop out at large angles.

Now back to the luminosity. Particle physicists express the probability of occurrence as something called the event “cross section”. Mathematically, the cross section is defined as the transition rate (number of times per second that something you have defined happens) divided by the incident flux (number of times per area per second that the protons cross each other). The time cancels out and the area goes upstairs, so our cross section unit is that of area, or square meters. Intuitively, the cross section is the EFFECTIVE size that the target presents for the thing to happen. The famous textbook example is crows in a tree. You can’t see them because of the leaves, but you pick up your gun and start firing bullets randomly into the tree at one shot per second. The tree has an area of 100 square meters and all your bullets hit the tree. Once every 3 hours you hit a crow and it falls to the ground. The transition rate is 1 per 10000 seconds (I have rounded off). The incident flux is 1e-2 per square meter per second. Dividing these two numbers, we get 1e-2 square meters as the cross section for the crow. Now if you were shooting the gun and wanted to hit a wood tick it would be much harder because the tick has a much smaller size, say 1e-6 square meters. If you wanted to hit a tick, how many times would you need to shoot the gun? You fire 1 shot per second per 100 square meters (into the tree), which equals 1e-2 per square meter per second. This is the instantaneous luminosity. After 100 s, you will have accumulated an integrated luminosity of 1 per square meter. After 1e+8 seconds, you have an integrated luminosity of 1e+6 per square meter and you could have expected, on the average, to have hit one tick. The tick is hard to hit and it will take 3 years of shooting to hit one, on the average. The tick has a small cross section compared to the crow.
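The crow-and-tick arithmetic can be written out explicitly; here is a tiny Python sketch of my own (the numbers are those in the story):

```python
# Cross section = transition rate / incident flux.
shots_per_s = 1.0
tree_area_m2 = 100.0
flux = shots_per_s / tree_area_m2      # incident flux: 1e-2 shots per m^2 per s

crow_rate = 1.0 / 1e4                  # one crow every ~3 hours (rounded to 1e4 s)
crow_sigma = crow_rate / flux          # cross section = rate / flux
print(crow_sigma)                      # 0.01 square meters, as in the text

# Expected hits = cross section x integrated luminosity.
tick_sigma = 1e-6                      # the tick's much smaller cross section, m^2
lumi_int = flux * 1e8                  # integrated luminosity after 1e8 s of shooting
print(tick_sigma * lumi_int)           # 1.0 expected tick, after ~3 years
```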

In particle physics things are very small, indeed the proton itself is about 1e-15 meters in size. The cross section unit that we use is the “barn”, as in “big as a barn”, originally so named because of the observed relatively large nuclear cross sections. Uranium presents itself as a huge target to an incoming neutron. One barn (b) is 1e-28 square meters and on our scale this is indeed big. One nanobarn (nano means 1e-9) is equal to 1e-37 square meters, or 1e-33 square cm. One picobarn (pico means 1e-12) is 1e-36 square cm and one femtobarn (femto means 1e-15) is 1e-39 square cm. The total cross section for pp collisions, that is, for protons to interact by the strong force, is about 100 mb (milli means 1e-3) or 0.1 b. If we take the proton radius to be 1e-15 meters and estimate the cross section as the area of a circle containing two protons just touching each other, then we get the same order of magnitude, 0.1 b. Coincidence? No. This works because the proton, while “strong charge” neutral (“colorless” as the jargon goes), is made up of quarks, and the wavelengths of the quarks define its size, just as the wavelengths of electrons define the atomic size. The wavelength, by the way, is inversely proportional to momentum, and the momentum scale is set by the strength of the pull: stronger for quarks in a proton compared to electrons in an atom, resulting in bigger momenta for quarks in protons compared to electrons in atoms, resulting in smaller protons compared to atoms. Protons interact when they touch each other, just as atoms also do to form molecules. Atoms are not mostly empty space just because the nucleus is tiny, in spite of the fact that you can find this statement in many school books. An atom is a ball of electron and photon waves. The proton is a ball of quark and gluon waves.

So our geometric picture is that the protons interact if they touch each other. The cross section is 0.1 b. Let’s take our LHC instantaneous luminosity of earlier this morning of 2.4e+33 per square cm per second, which we may write as 2.4/nb/s. Remember cross section is interaction rate divided by luminosity, so interaction rate is cross section times luminosity, or (0.1 b)(2.4/nb/s) = 2.4e+8 per second. (Remember nano is 1e-9 and it is in the denominator of the luminosity.) So that is a lot of proton interactions, 240 million per second. We love this as long as our electronics can handle it, because we are getting a high rate of proton collisions. We need this to observe something rare. To put these rarer collisions in perspective, the W and Z have cross sections around 100 nb, the top around 1 nb, and the higgs, if it exists, around 30 pb or so, roughly speaking. So right now we can expect to be creating a couple hundred W/Z per second, 2 top events per second, and a higgs particle every 15 seconds. I got these numbers by multiplying 2.4/nb/s times the known particle cross sections. The proton is the leaf on the tree (easy to hit), the crow is our W and Z, a smaller birdie is the top, and the higgs is the wood tick.
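The multiplications above can be redone in one loop; working in nb and nb^-1/s makes the units cancel directly (a sketch of my own, using the rough cross sections quoted in the text):

```python
# Interaction rate = cross section x instantaneous luminosity.
lumi = 2.4                 # nb^-1 per second (i.e. 2.4e33 per cm^2 per s)

sigma_nb = {               # rough cross sections, in nanobarns
    "pp (total)": 1e8,     # 0.1 b = 1e8 nb
    "W/Z":        100.0,
    "top":        1.0,
    "higgs":      0.03,    # ~30 pb
}
for name, sigma in sigma_nb.items():
    rate = sigma * lumi    # events produced per second
    print(f"{name}: {rate:g} per second")
```

The higgs line comes out to about 0.07 per second, i.e. one every ~15 seconds, matching the estimate above.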

It was recently announced that CMS had recorded 5/fb. This is becoming an interesting number. Let’s estimate how many interesting events have been produced. (Remember femto is 1e-15 and it is in the denominator.) We get 5e+8 W/Z, 5e+6 top events, and about 1.5e+5 standard model higgs. This does not mean that we have recorded 150 thousand higgs particles (if it even exists), because we are only sensitive to certain branching fractions that distinguish themselves from the enormous number of ordinary collisions (remember the 240 million per second). The same goes for the W, Z and top. It may be that the big discovery to be made at the LHC is sitting at a cross section in the fb range. If so, we will need to observe a LOT of collisions to see it. The LHC design luminosity is 10/nb/s. This would give us an inverse femtobarn every 1e+5 seconds, or 28 hours. The super-LHC upgrade would push that number to 0.1/pb/s, or an inverse femtobarn every 3 hours!
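The same σ times luminosity rule works for the integrated case; converting each rough cross section to femtobarns (1 nb = 1e6 fb, 1 pb = 1e3 fb) and multiplying by the recorded 5/fb gives the expected production counts (again my own back-of-the-envelope sketch):

```python
# Expected events produced = cross section x integrated luminosity.
lumi_int_fb = 5.0                       # fb^-1 recorded by CMS
sigma_fb = {                            # rough cross sections, in femtobarns
    "W/Z":   100.0 * 1e6,               # 100 nb
    "top":     1.0 * 1e6,               # 1 nb
    "higgs":  30.0 * 1e3,               # 30 pb
}
for name, sigma in sigma_fb.items():
    print(f"{name}: {sigma * lumi_int_fb:g} produced")
```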

Now back to the iPhone. For the princely sum of $9.99 (CMS TV being free would seem to be a bargain by comparison), I purchased the iSSH app, allowing my iPhone to become a secure remote terminal. I can log into any machine (for which I have an authorized account, of course) from anywhere in the world (needing either a wireless or an AT&T signal)… Adding the Apple wireless keyboard makes the iPhone very useful for many physics-type geeky tasks.

I can even run ROOT from my phone (ROOT is the powerful object-oriented data analysis framework used in high energy physics). You can also follow ROOT on Twitter to get the latest development news!

Oh, I almost forgot, I wanted to report on my recent fishing trip. Fishing is great. It gets you outside in the saltwater air, relaxes you so you can think about physics (or not), and provides fantastic vistas and the rhythm of the sea. Also, I love seafood, and everybody knows that nothing tastes better than the fish you catch yourself. Each year in New England as the water reaches a certain temperature, the Atlantic bonito (Sarda sarda) and the false albacore (Euthynnus alletteratus, or “little tunny”), both members of the tuna family, peel off the jet stream and venture inshore. They are extremely fast swimmers and notoriously hard to catch, earning them the collective name “funny fish” amongst local anglers. A few years ago, I decided to become something of an expert at catching these fish. This past week, I ventured to the back side of Martha’s Vineyard and hooked this rather large bonito:

which became my favorite meal… sashimi!

Before departing for home, I searched for and downloaded a nifty free app called iSailor. To make proper use of it, you do have to purchase a set of nautical charts for $4.99 (always a catch, right?). Anyway, it works really well. After leaving Oak Bluffs harbor to cross Nantucket Sound, we happened across a large school of albies on a nearby shoal. The iPhone allowed us to criss-cross this school multiple times (it worked better than the more expensive boat GPS) and I hooked 5 of these beautiful fish in short order. I only kept one, but I will be flush with sashimi for a while.

So the iPhone is more than just a toy in the right hands. That’s my story anyway and I am sticking by it.


LHC: raison d’être

Tuesday, October 11th, 2011

This is my first blog ever. I am not going to make any apologies in advance about my writing. I am not going to attempt to impress you by chirping that I am writing this blog on such-and-such flight, or explain that I am reporting from a particularly nice part of the world only because I was forced to go there for work. I am going to try to enlighten you, the intelligent reader, about physics and stimulate interest in the LHC: what we do and how we do it and why. And then I am going to go fishing.

Here is my short take on why we have built the LHC, its true raison d’être. We are searching for the mechanism of electroweak symmetry breaking (ESB). A particle physicist puts this in the context of the so-called gauge bosons that mediate the forces: W, Z, and γ. Why are the W and Z massive (roughly 100 times the mass of the proton) while the photon (γ), the ordinary particle of light, is massless? To make it more intuitive, let’s look at the two fundamental forces involved, the “weak” nuclear force (bad name, but we are stuck with it) and electromagnetism. The weak force allows the sun to shine and you can’t get any more fundamental than that! Those weak interactions, which burn protons by allowing one of them to transform into a neutron that gets fused to another proton to form a deuteron, occur only at extremely short distances, even orders of magnitude smaller than the proton size. On the other hand, electricity has an infinite range, as easily demonstrated by looking at a distant star at night. An electron in the star emits a photon that travels many light years before it is captured by an electron in your eye. Add a sophisticated telescope and one may observe photons that have traveled billions of light years to reach us. So one fundamental force has an extremely TINY range and its close sibling has an INFINITE range!

How does an electron know that it should interact with another electron? Here is the conceptual picture of the interaction. (I was lucky enough to have Richard Feynman explain this to me in his own scruffy manner. Maybe I will write a Feynman blog in the future to relate my own stories.) An electron, or any other charge, is perpetually surrounded by a cloud of photons that it is continually emitting and absorbing. By doing this, the electron is checking if there is another charge around to push or pull. A free electron can’t really emit a photon and conserve energy and momentum. BUT the photon can “borrow” some energy from the electron for a short time, as long as the product of the borrowed energy times the time interval is smaller than Planck’s constant. This rule is called the uncertainty principle and lies at the heart of quantum mechanics. Although it retains a mysterious je ne sais quoi to this day, it is well tested experimentally. So our electron sends out its messenger photons on a mission to check if there are any other charges to push or pull. Since the photon is massless, the energy it must borrow can be arbitrarily small, so the messenger can travel arbitrarily large distances. It is this masslessness of the photon that gives the electric force an infinite range.

Now how about the weak force? Enter the W and Z. In the 1980s I was fortunate to work on the experiment that discovered these massive particles, which had been searched for for decades. This discovery experimentally established the quantum nature of the weak force: that quarks and leptons really interact by exchanging W and Z particles. That there are 2 particles has to do with the detailed properties of the weak force: the W changes a quark or lepton into a different type (flavor) while the Z cannot. A quark in the proton sends out a W messenger to see if there is another quark around that wants to play (analogous to the electron sending out its photon messenger). This W can only live for a time allowed by the uncertainty principle. Now comes the big difference between electricity and the weak force. The W not only has to borrow kinetic energy to move, it also has to borrow some energy for its mass, meaning that our W messenger cannot travel very far. The quark in one proton can only interact with the quark in the other proton (via the weak force) if the quarks are VERY close together. The large mass of the W gives the weak force its short range. There is our broken symmetry: the W and Z have large mass and the photon is massless. Approximate symmetry can be restored if we can study interactions at an energy scale so large, or equivalently a distance scale so small, that the W and Z mass energies and the short range of the weak force are irrelevant. The forces become unified, resulting in one happy electroweak force.
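The range argument can be made quantitative with the uncertainty principle: the W must borrow an energy of at least its mass energy M c^2, so it lives for a time t ~ ħ/(M c^2) and travels at most r ~ c t = ħc/(M c^2). A back-of-the-envelope sketch of my own (not from the post):

```python
# Rough range of a force carried by a massive messenger boson.
HBAR_C_MEV_FM = 197.327          # hbar*c in MeV * femtometers (1 fm = 1e-15 m)

def range_fm(mass_mev):
    """Rough range, in fm, of a force carried by a boson of this mass."""
    return HBAR_C_MEV_FM / mass_mev

w_range = range_fm(80_400)       # W mass ~ 80.4 GeV
print(w_range)                    # ~0.0025 fm: hundreds of times smaller than a proton
# As the boson mass goes to zero the range diverges: the massless photon's
# infinite reach, exactly the broken symmetry described above.
```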

Another way to look at the consequence of the W having mass is that the probability for the weak interaction grows with energy. One can see this on dimensional grounds. This interaction probability cannot grow forever. There is a mathematical bound, referred to as the unitarity limit, of about 1.7 TeV. This means that when Ws and Zs at this energy scale interact, we do not understand much of anything about what will happen. How’s that for exciting (!)? The reader will notice that 1.7 TeV is a VERY large energy for W and Z particles. The LHC will not reach this scale for Ws and Zs for a very, very long time. This is why once upon a time we wanted to build a 40 TeV machine (but don’t get me started on that…). However, all is not so gloomy, as the following lesson tells.

There is an elegant historical analogy to ESB. Before the age of modern physics, the classical radius of the electron (the distance at which the electrostatic potential energy equals the mass energy of the electron) posed a formidable barrier beyond which classical physics made no sense. This distance is 10^-15 m, which corresponds to probing the electron with momenta approaching the GeV/c scale. It turns out, however, that we did not have to get anywhere near this limit to discover revolutionary new physics: quantum mechanics was waiting to be discovered at the Bohr radius (10^-10 m) and relativistic quantum field theory at the Compton wavelength (10^-12 m). Okun called the classical electron radius the “paper tiger” and QM and QFT the “real tigers”. Here is a slide that Okun showed on my first trip to Moscow on Oct. 9, 1989:

The LHC was not yet a project and we were designing a detector for the 40 TeV machine. Of the zillion talks I have heard since then on supercollider physics, not one has been as clear, as informative, and as devoid of nonsense as the 5-slide talk by Okun. I gave a colloquium at ITEP on Dec. 3, 2003 at the invitation of Okun and Michael Danilov, the lab director, and I showed the 5 slides to the amusement of Okun. So the lesson is: when we collide Ws and Zs at a TeV or so, we WILL learn something exciting, BUT if we are lucky we may learn something exciting well before reaching the unitarity limit. Let us hope so, or it’s going to be a very long ride!
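The three “tiger” length scales can be recovered from textbook formulas; a quick check of my own (standard SI constants, not taken from the slides):

```python
# The three length scales in Okun's analogy, from textbook relations:
# reduced Compton wavelength = hbar/(m c); classical radius = alpha * that;
# Bohr radius = that / alpha.
import math

HBAR = 1.0545718e-34        # J*s
C = 2.99792458e8            # m/s
ME = 9.1093837e-31          # electron mass, kg
ALPHA = 1 / 137.035999      # fine structure constant

lambda_bar = HBAR / (ME * C)          # reduced Compton wavelength
compton = 2 * math.pi * lambda_bar    # Compton wavelength, ~2.4e-12 m
classical = ALPHA * lambda_bar        # classical electron radius, ~2.8e-15 m
bohr = lambda_bar / ALPHA             # Bohr radius, ~5.3e-11 m

for name, val in [("classical electron radius", classical),
                  ("Compton wavelength", compton),
                  ("Bohr radius", bohr)]:
    print(f"{name}: {val:.2e} m")
```

The three results land at the 10^-15, 10^-12, and 10^-10 m scales quoted above.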

For the up and coming experts, a superb technical explanation of the electroweak physics has been given in a series of lectures by Tini Veltman (Nobel Prize, 1999) that have been published in a CERN Yellow Report 1997.

Picture (courtesy Claudia-Elizabeth Wulz) of me with Tini and Carlo Rubbia on the occasion of the latter’s 75th birthday.

Having suffered through my explanation of why we have built the LHC, I now owe you something fun. Dark energy, which we shall NOT observe at the LHC, has become increasingly fashionable with the announcement of this year’s Nobel Prize. Dark energy is explained in a brilliant 1 m 39 s video (2011) by Sean Carroll.

Time for me to go fishing. More on that later…
