Archive for April, 2011

Science friendly browsing

Wednesday, April 27th, 2011

(Note: This post requires JavaScript to be enabled on your browser.)

For years the internet has been a wonderful tool for people all over the world, bringing distant communities together, changing the way we think about communication and information, and having a huge impact for the better on nearly every aspect of our lives. Unfortunately there are still major problems when it comes to sharing scientific knowledge. This is changing very quickly though, making this an exciting time for internet-savvy scientists.

LaTeX sets the high standards we have come to expect for mathematical markup.

How did this kind of situation arise? Science journals have had their own markup language, LaTeX, for decades, predating the internet by many years. LaTeX is available to anyone and makes it very easy to generate simple, attractive documents with excellent support for a wide variety of mathematical symbols. (Making complicated documents isn't quite so easy, but still possible!) Producing documents like this can be computationally intensive, as every margin and the space between every character is analyzed, subject to the restrictions imposed by paper sizes.
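For readers who have never seen it, a complete LaTeX document can be as short as this (the equation is the same one that appears rendered further down this post):

  \documentclass{article}
  \begin{document}
  Amp\`ere's law with Maxwell's correction:
  \[ \nabla \times \vec{H} = \vec{J} + \frac{\partial \vec{D}}{\partial t} \]
  \end{document}

Run it through latex or pdflatex and out comes a neatly typeset page, with all the spacing decisions made for you.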

On the other hand, the hypertext markup language (HTML) and cascading style sheets (CSS) are the standards which are widely used on the internet, and they are focused mainly on the aesthetics of more popular kinds of journalism. The HTML standards are intended to work on any operating system, and they should give a semantic description of the content of a webpage, without consideration for style. The CSS then takes over and decides how the information is displayed on the screen. (Check out the CSS Zen Garden to see the power of CSS.) In principle, writing a webpage that follows the HTML and CSS standards is quite easy, but in reality it can be a very problematic and tedious task. The internet is a dynamic medium, with different developers trying different tricks, different browsers supporting different features and no real control concerning the best practices. Groups such as the W3C have tried to standardize HTML and CSS, with quite a lot of success, but it's a slow process and it has taken years to get to where we are today.

CSS makes the internet an aesthetically compelling medium. (CSS Zen Garden)

Trying to get mathematical markup with these kinds of constraints is quite tricky! Math is inherently two dimensional, making good use of subscripts, superscripts, indices, fractions, square roots… HTML is much better at handling long passages of text which flow from one line to the next, without much scope for anything as exciting as a nested superscript. And so for a long time it was very awkward to include math on a webpage.

Over the years there have been many approaches to this problem, including LaTeX2HTML, MathML, using images, or expecting the poor user to interpret LaTeX markup! Eventually, the CSS standards settled down, browsers started to conform to the same behavior, and it became possible to display math without the use of any images, plugins or other suboptimal solutions.

With the exciting developments of Web 2.0, we have access to MathJAX. We can take LaTeX markup and put it directly into a webpage and MathJAX can turn this:

\[
  \nabla \times \vec{H} = \vec{J} + \frac{\partial \vec{D}}{\partial t}
\]

into this:

\[
\nabla \times \vec{H} = \vec{J} + \frac{\partial \vec{D}}{\partial t}
\]

Beautiful! It also works inline like this: \(E=mc^2\) becomes \(E=mc^2\). (None of this will work if JavaScript is disabled on your browser, which is a shame for you, because it looks very pretty on the page!) Using MathJAX is as simple as writing normal LaTeX between \[ and \] symbols for block-level equations, and between \( and \) symbols for inline math.

We finally have a way to show equations on any browser, with any operating system, that complies with all the standards laid out by the W3C. So much for math markup. What about technical drawings and graphs? Scientists have been using vector graphics in their work for decades, so it would also be nice to have a way to show these kinds of images.

This is the kind of image we can make with the canvas! Making graphs can be easy, and the output can be beautiful and interactive.

Some browsers have supported vector graphics for a few years, but once again, different browsers behave differently, and vector graphics support was developed rather late, so there are large performance issues. However, with the development of the next generation of HTML, browsers should support a brand new kind of image, the HTML5 canvas. It allows designers of websites to draw detailed images on the fly, even allowing the user to interact with the images! It will take some time before most of the users on the internet have access to the HTML5 canvas, so until then we can't rely on these new features to share information.

On the other hand it means that we are living in a very exciting time, where anyone can develop their own work using the canvas and help shape our experiences with the internet in the future! The standards used online have always lagged behind how the latest developers are using the tools at their disposal, and when the standards get updated the ingenuity of the developers is taken into account. Soon the canvas will support 3D graphics, making our online experiences even richer! Want to help shape how this is developed? Then get involved! Try out the canvas today and see what you can create! There are dozens of fascinating examples at Canvas Demos. Here are some of my favorites:

  • MolGrabber 3D – a great way to visualize molecules in three dimensions.
  • Flot – how to show graphs on a webpage.
  • Pacman – a clone of the classic arcade game!

The internet is going to get very cool in the near future, giving us the ability to share information like never before! When anyone can create animations and simulations, blogs like this will become even more interactive, even more compelling and even more useful. I can’t wait to see what MathJAX and the HTML5 canvas will deliver!


"Professor Hashimoto, please try some antimatter chocolate!" At that call, my daughter and I ran over, and there it was: a-a-antimatter chocolate!! And in all sorts of varieties, no less. I reached out ever so cautiously, quite forgetting that if it really were antimatter, my hand would be annihilated the moment I touched it...

This was the sixth-floor elevator hall of RIKEN's main research building. The Yamazaki Atomic Physics Laboratory is the group that recently made headlines for successfully producing antihydrogen in large quantities. Today was RIKEN's annual open house, and the Yamazaki lab had finally unveiled its ultimate weapon: handing out "antimatter chocolate"!

When matter comes into contact with antimatter, the two annihilate and produce light. Antihydrogen is the antiparticle of hydrogen, so strictly speaking "antimatter chocolate" should really be called anti-chocolate. Ah, my hand is about to annihilate! The instant I pulled my hand back, my daughter was already clutching four pieces of antimatter chocolate. Yes, these were Tirol chocolates in special wrapping, and printed clearly on the wrapper were an antiproton and a positron, in other words antihydrogen. Now that's a collector's item.

Grinning to myself, I wrestled one of the antimatter chocolates away from my daughter and popped it into my mouth. It melted away and vanished in an instant. Just what you would expect from antimatter. The sweetness was explosive. I trust no gamma rays were emitted. Surprisingly, the antimatter chocolate tasted like kinako mochi.

Such was RIKEN's open house. All across the sprawling Wako campus there were hands-on science events piled high, opening RIKEN up to everyone from kindergarteners to grandparents so that they might catch a glimpse of the scientific spirit. This year our own laboratory sat out official participation, since we had no idea how many members the lab would have at this time of year, but some volunteer postdocs stepped up to help the neighboring Enyo Radiation Laboratory, and we ended up in charge of the same magnet exhibit as last year. I muddled through as one of the guides. Before my afternoon shift began, I took my daughter around to see various places, including the RIBF building.

There was a spectroscope that even a child in the early years of elementary school could build. My daughter worked on hers in earnest for thirty minutes, and you should have seen her delighted face when she peered through the finished instrument and saw the rainbow of colors. Glancing to the side, I noticed a woman of about fifty also working away at one by herself, muttering "no, not like this, maybe like that" as she stuck on double-sided tape. What a wonderful sight: people who in everyday life probably never come anywhere near scientific research, building a piece of experimental apparatus with their own hands, doing an experiment with it, and experiencing science. There is nothing to call it but wonderful.

A supercomputer roaring away, a Fresnel lens that turns you into a squat cartoon figure, soap bubbles hanging motionless in mid-air, an air cannon standing in for nuclear collisions, and so on: far more fun encounters with science than anyone could take in within a single day. My daughter, apparently unable to forget the marble she received last year, lined up again for the marble experiment mimicking nuclear collisions, carried out the experiment splendidly, and won her marble.

When my own shift came around, I headed to my post a little nervous. All sorts of people started talking to me right away. As I scrambled to keep up, I found myself, before I knew it, talking with real passion: isn't the universe amazing, and what is symmetry, anyway? When you speak with enthusiasm, the people listening become enthusiastic too. What a mysterious world this is; how does it all work? I simply wanted to convey that feeling.

For the elementary school kids I posed quizzes. The delighted face of a child who managed to answer one! And then one boy said, "When I grow up, I want to be a scientist."

That made me truly happy. "We'll be waiting for you at RIKEN."


In experimental particle physics, the term "background" refers to events that can easily be mistaken for signal.  In my last post, I introduced the Mu2e experiment and pointed out that this experiment needs a huge number of muons (a million trillion, \(10^{18}\), or more) and hopes to be sensitive to even one muon decaying directly into an electron.  To achieve such single-event sensitivity, the sources of background must be minimized and/or understood extremely well.

So, what is so difficult about that?  Mu2e must have a striking experimental signature that is extremely hard to fake, right?  Not exactly!  The signal for the Mu2e experiment is just a single electron!  Hmmm… That sounds like it could be a problem, because every ordinary atom making up the experiment, the building housing the experiment, and the planet Earth it sits on contains electrons!

The figure shows the muon-electron conversion energy distribution in light green and the energy distribution for electrons from one of the backgrounds in red. The signal energy is spread out due to the limited resolution of the Mu2e detector (not all of the signal events are measured to have the exact energy produced in the decay). The source of the background shown in red is muons that decay in orbit (DIO) into an electron and neutrinos. This decay is allowed in the Standard Model. Because of the extra neutrinos produced in the final state, these electrons carry less energy than those from signal events, where the muon converts directly into an electron and no neutrinos take away any of the energy.

So, let's state the problem again:  the Mu2e experiment wants to stop \(10^{18}\) muons on a target nucleus, and then be sensitive to even one event in which the muon decays directly into an electron.  It isn't easy! In fact, the experiment is carefully designed to minimize all potential sources of background events.

Luckily, the electrons produced from the direct muon-to-electron conversion are special in that their energy will always have the same value 105 megaelectron volts, or 105 MeV.  This is an important point, because now, assuming that we can measure the energy of the electron well, our background has been reduced from “all electrons” to “electrons that have an energy close to 105 MeV” (see the figure at right).  In the case of the Mu2e experiment, this means that we can reduce our total background to less than one event expected over the total running time of the experiment!
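Where does the number 105 MeV come from? Roughly speaking, the conversion electron carries away essentially the muon's rest energy (about 105.7 MeV), reduced slightly by the binding energy of the muon in its atomic orbit and by the recoil of the nucleus:

\[
E_e \simeq m_\mu c^2 - E_{\text{binding}} - E_{\text{recoil}} \approx 105 \textrm{ MeV}
\]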

Taking this into account, it is clear that the amount of background will depend on how well the experiment can measure the energy of the 105 MeV electron.  In other words, the sensitivity of the experiment depends critically on its ability to resolve the energy of an electron.

Future posts will include a series of “tricks” used by the experiment to control each of the major background sources.

— Craig Group


Lawrence Lessig, Harvard Law professor and champion of the free culture movement, came to CERN last week to talk about the architecture of access to scientific knowledge. Lessig transformed the relatively mundane subject of copyright law into one of the most engaging and cogent presentations I have seen, while raising some truly valuable points of interest to the scientific community.

Lawrence Lessig

At issue is the inherent incompatibility of copyright law and open access to published research. Under the current system, many individuals – especially those not associated with a university – face a surprisingly high burden to access articles. Some journals make it impossible, and restrict access to U.S. and major world universities willing or able to cover the subscription costs; others will make articles available for typically $30 a pop, which can add up rather quickly. Given that researchers don’t benefit from paywalls (they don’t receive royalties) or restricted access (universal spread of ideas is a good thing for science), Lessig makes the case that publishing in open-access journals should be the preferable choice. One could further make the case that since the majority of scientific research is publicly funded, the results of such research should be easily available to the public.

So why hasn’t everyone already made the switch to open access journals? Two large reasons come to mind. First, there is incredible inertia in academic fields to maintain tradition (in Lessig’s words, academia is “fad-resistant”). To expect the academy to suddenly switch to a new set of journals on account of philosophy, especially if it is a switch away from the more prestigious journals, is unrealistic. It will take time for open-access journals to build prestige and prove themselves steadfast and stable. Second, academics are largely unaffected by the problems mentioned above – most often, they belong to universities with subscription agreements to journals and do not personally bear the costs or difficulties of access. As a result, there is little access-related or economic incentive for change.

It is likely that the response to the problem of open access will be driven forward at the highest institutional levels. As a Harvard grad student working at CERN, I find it particularly praiseworthy that both of these institutions have been pioneers in the open-access publishing movement. Since 2005, CERN's publication policy has required its researchers to deposit a copy of all their published articles in an open access repository, and has encouraged publication in open access journals. Similarly at Harvard, authors grant to the university a "non-exclusive, irrevocable, worldwide license to distribute their scholarly articles, provided it is for non-commercial uses." Of course, comparable practices have been adopted by many other universities, and will almost certainly percolate throughout all of academia in the next few years. I think this can only be a good thing for us as scientists, for science as a whole, and probably even for the general public.

And finally, in the spirit of true open-access, CERN has made Prof. Lessig’s talk freely available here.


The CERN Accelerator Complex

Sunday, April 24th, 2011

With all the buzz this past week regarding the breaking of the world instantaneous luminosity record, I thought it might be interesting for our readers to get an idea of how we as physicists achieved this goal.

Namely, how do we accelerate particles?

(This may be a review for some of our veteran readers due to this older post by Regina)

 

The Physics of Acceleration

Firstly, physicists rely on a principle many of us learn in our introductory physics courses, the Lorentz Force Law.  This result, from classical electromagnetism, states that a charged particle in the presence of external electric and/or magnetic fields will experience a force.  The direction and magnitude (how strong) of the force depend on the sign of the particle's electric charge and on its velocity (the direction it's moving, and with what speed).
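In symbols, a particle of charge q moving with velocity \(\vec{v}\) through fields \(\vec{E}\) and \(\vec{B}\) feels a force

\[
\vec{F} = q\left(\vec{E} + \vec{v} \times \vec{B}\right).
\]

The electric term does the accelerating and the magnetic term does the steering, which is exactly the division of labor described below.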

So how does this relate to accelerators?  Accelerators use radio frequency cavities to accelerate particles.  A cavity has several conductors that are hooked up to an alternating current source.  Between conductors there is empty space, but this space is spanned by a uniform electric field.  This field will accelerate a particle in a specific direction (again, depending on the sign of the particle's electric charge).  The trick is to flip this current source so that, as a charged particle goes through a succession of cavities, it continues to accelerate rather than being slowed down at various points.

A cool Java Applet that will help you visualize this acceleration process via radio frequency cavities can be found here, courtesy of CERN.

Now that's the electric field portion of the Lorentz Force Law; what about the magnetic?  Well, magnetic field lines form closed loops: as you get farther and farther away from their source, the radii of these loops continually increase.  Electric field lines, by contrast, are straight lines that extend out to infinity (and never intersect) in all directions from their source.  This makes the physics of magnetic fields very different from that of electric fields.  We can use magnetic fields to bend the track (or path) of charged particles.  A nice demonstration of this can be found here (or any of the other thousands of hits I got for Googling "Cathode Ray Tube + YouTube").

Imagine, if you will, a beam of light; you can focus the beam (make it smaller) by using a glass lens, and you can change the direction of the beam using a simple mirror.  Now, the LHC ring uses what are called dipole and quadrupole magnets to steer and focus the beam.  If you combine the effects of these magnets you can make what is called a magnetic lens, or more broadly termed "magnetic optics."  In fact, the LHC's magnetic optics currently focus the beam to a diameter of ~90 micro-meters (the diameter of a human hair is ~100 micro-meters, although it varies from person to person, and where on the body the hair is taken from).  However, the magnetic optics system was designed to focus the beam to a diameter of ~33 micro-meters.

In fact, the LHC uses 1232 dipole magnets and 506 quadrupole magnets.  These magnets have a peak magnetic field of 8.3 tesla, or roughly 100,000 times stronger than Earth's magnetic field.  An example of the typical magnetic field produced by the dipole magnets of the LHC ring is shown here [1]:

Image courtesy of CERN

 

The colored portions of the diagram indicate the magnetic flux, or the amount of magnetic field passing through a given area, while the arrows indicate the direction of the magnetic field.  The two circles (in blue) in the center of the diagram indicate the beam pipes for beams one and two.  Notice how the arrows (the direction of the magnetic field) point in opposite directions!  This allows CERN accelerator physicists to control two counter-rotating beams of protons within the same magnet (excellent question, John Wells!).

Thus, accelerator physicists at CERN use electric fields to accelerate the LHC proton/lead-ion beams and magnetic fields to steer and squeeze these beams (these "magnetic optics" systems are also responsible for the "lumi leveling" discussed by Anna Phan earlier this week).

However, this isn't the complete story: things like length contraction and synchrotron radiation also affect the acceleration process and the design of our accelerators.  But these are stories best left for another time.
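Before moving on, here is a rough cross-check of the numbers above that you can do yourself.  The momentum a ring of dipoles can hold in a circle is set by the field strength and the bending radius; the sketch below is my own back-of-the-envelope estimate, and the ~2.8 km bending radius is an assumed number that does not appear in this post:

  # Rough rule of thumb: p [GeV/c] ~ 0.3 * B [tesla] * rho [meters]
  # for a charged particle bent into a circle of radius rho by a field B.
  B_tesla = 8.3      # peak dipole field quoted above
  rho_m = 2800.0     # approximate LHC bending radius (assumption, not from the post)
  p_gev = 0.3 * B_tesla * rho_m
  print("Maximum proton momentum ~ %.1f TeV/c" % (p_gev / 1000.0))  # ~7 TeV/c

Reassuringly, this lands right at the LHC's design energy of 7 TeV per proton.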

 

The Accelerator Complex

But where does this process start?  Well, to answer this let’s start off with the schematic of this system:

Image courtesy of CERN

One of our readers (thanks GP!) has given us this helpful link that visualizes the acceleration process at the LHC (however, when this video was made, the LHC was going to be operating at design specifications…but more on that later).

A proton’s journey starts in a tank of research grade hydrogen gas (impurities are measured in parts per million, or parts per billion).  We first take molecular hydrogen (a diatomic molecule for those of you keeping track) and break it down into atomic hydrogen (individual atoms).  Next, we strip hydrogen’s lone electron from the atom (0:00 in the video linked above).  We are now left with a sample of pure protons.  These protons are then passed into the LINear ACcelerator 2 (LINAC2, 0:50 in the video linked above), which is the tiny purple line in the bottom middle of the above figure.

The LINAC 2 then accelerates these protons to an energy of 50 MeV, or to 31.4% of the speed of light [2].  The "M" stands for mega-, or times one million.  The "eV" stands for electron-volts, which is the conventional unit of high energy physics.  But what is an electron-volt, and how does it relate to everyday life?  Well, for that answer, Christine Nattrass has done such a good job comparing the electron-volt to a chocolate bar, that any description I could give pales in comparison to hers.

Moving right along, now thanks to special relativity, we know that as objects approach the speed of light, they “gain mass.”  This is because energy and mass are equivalent currencies in physics.  An object at rest has a specific mass, and a specific energy.  But when the object is in motion, it has a kinetic energy associated with it.  The faster the object is moving, the more kinetic energy, and thus the more mass it has.  At 31.4% the speed of light, a proton’s mass is ~1.05 times its rest mass (or the proton’s mass when it is not moving).
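Quantitatively, the factor by which a moving proton's energy (and hence its effective mass) exceeds its rest mass is the Lorentz factor

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\]

so a proton moving at 31.4% of the speed of light has \(\gamma \approx 1.05\), which is where the "~1.05 times its rest mass" above comes from.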

So this is a cruel fact of nature.  As objects increase in speed, it becomes increasingly more difficult to accelerate them further!  This is a direct result of Newton’s Second Law.  If a force is applied to a light object (one with little mass) it will accelerate very rapidly; however, the same force applied to a massive object will cause a very small acceleration.

Now at an energy of 50 MeV, travelling at 31.4% the speed of light, and with a mass of 1.05 times its rest mass, the protons are injected into the Proton Synchrotron (PS) Booster (1:07 in the video).  This is the ellipse, labeled BOOSTER, in the diagram above.  The PS Booster then accelerates the protons to an energy of 1.4 GeV (where  the “G” stands for giga- or a billion times!), and a velocity that is 91.6% the speed of light [2].  The proton’s mass is now ~2.49 times its rest mass.

The PS Booster then feeds into the Proton Synchrotron (labeled as PS above, see 2:03 in video), which was CERN’s first synchrotron (and was brought online in November of 1959).  The PS then further accelerates the protons to an energy of 25 GeV, and a velocity that is 99.93% the speed of light [2].  The proton’s mass is now ~26.73 times its rest mass!  Wait, WHAT!?

At 31.4% the speed of light, the proton's mass has barely changed from its rest mass.  Then at 91.6% the speed of light (roughly three times the previous speed), the proton's mass was only two and a half times its rest mass.  Now, we increased the speed by barely 8%, and the proton's mass increased by a factor of more than ten!?

This comes back to the statement earlier: objects become increasingly difficult to accelerate the faster they are moving.  But this is clearly a non-linear effect.  To get an idea of what this looks like mathematically, take a look at this link here [3].  In this plot, the y-axis is in multiples of rest mass (or energy), and the x-axis is velocity, in multiples of the speed of light, c.  The red line is this relativistic effect that we are seeing: as we go from ~91% to ~99% of the speed of light, the mass increases gigantically!
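You can also reproduce the mass factors quoted in this post with a few lines of code; here is a small sketch using the speeds given above:

  import math

  def gamma(beta):
      # Lorentz factor for a particle moving at a fraction beta of the speed of light.
      return 1.0 / math.sqrt(1.0 - beta ** 2)

  # Speeds quoted above for each stage of the accelerator chain.
  stages = [("LINAC2", 0.314), ("PS Booster", 0.916), ("PS", 0.9993),
            ("SPS", 0.999998), ("LHC, 3.5 TeV", 0.999999964)]
  for name, beta in stages:
      print("%-14s v = %.9f c  ->  mass ~ %7.2f times rest mass" % (name, beta, gamma(beta)))

Running it gives roughly 1.05, 2.49, 26.7, 500 and 3700 times the rest mass, matching the numbers in the text.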

But back to the proton's journey: the PS injects the protons into the Super Proton Synchrotron (names in high energy physics are either very generic and bland, or very outlandish, e.g. matter can be charming).  The Super Proton Synchrotron (SPS, also labeled as such in the above diagram, 3:10 in the video above) came online in 1976, and it was in 1983 that the W and Z bosons (mediators of the weak nuclear force) were discovered while the SPS was colliding protons with anti-protons.  In today's world, however, the SPS accelerates protons to an energy of 450 GeV, with a velocity of 99.9998% the speed of light [2].  The mass of the proton is now ~500 times its rest mass.

The SPS then injects the proton beams directly into the Large Hadron Collider.  This occurs at 3:35 in the video linked above; however, when this video was recorded the LHC was shown operating at design energy, with each proton having an energy of 7 TeV ("T" for tera-, a million million times).  Presently the LHC accelerates the protons to half of the design energy, 3.5 TeV, and a velocity of 99.9999964% the speed of light.  The protons are then made to collide in the heart of the detectors.  At this point the protons have a mass that is ~3730 times their rest mass!

 

 

So, the breaking of the world instantaneous luminosity record was not the result of any single instrument, but of the combined might of CERN's full accelerator complex, aided in no small part by the magnetic optics systems in these accelerators (I realize I haven't gone into much detail regarding this; my goal was simply to introduce you to the acceleration process that our beams undergo before collisions).

 

Until next time,

-Brian

 

 

 

References:

[1] CERN, “LHC Design Report,” https://ab-div.web.cern.ch/ab-div/Publications/LHC-DesignReport.html

[2] CERN, “CERN faq: The LHC Guide,” http://cdsweb.cern.ch/record/1165534/files/CERN-Brochure-2009-003-Eng.pdf

[3]  School of Physics, University of New South Wales, Sydney, Australia, http://www.phys.unsw.edu.au/einsteinlight/jw/module5_equations.htm


I’m interrupting my descriptions of LHCb to discuss something more relevant to the current status of the LHC. Namely this LHC status from just after midnight the other day:

Ken has already discussed the luminosity record in this post, and today I’ll be discussing luminosity leveling (LUMI LEVELING). You may be wondering what this has got to do with LHCb? Well, interaction point 8 (IP8) is where LHCb is located as can be seen in this image:

Aidan has discussed what luminosity is in this post, where he explained that since larger instantaneous luminosity means having more events, we want to do everything we can to increase instantaneous luminosity. However, if you've been looking at the LHC luminosity plots for 2011, like the one for peak instantaneous luminosity below, you might have noticed that the instantaneous luminosities of ALICE and LHCb are lower than those of ATLAS and CMS.

The reason for the difference between the experiments is that the design instantaneous luminosities for LHCb and ALICE are much lower than for ATLAS and CMS. The target instantaneous luminosity for LHCb is \(2 \times 10^{32} cm^{-2} s^{-1} \) to \(3 \times 10^{32} cm^{-2} s^{-1}\) and for ALICE is \(5 \times 10^{29} cm^{-2} s^{-1} \) to \(5 \times 10^{30} cm^{-2} s^{-1}\) while ATLAS and CMS are designed for an instantaneous luminosity of \(10^{34} cm^{-2} s^{-1}\).

This means that while the LHC operators are trying to maximise instantaneous luminosity at ATLAS and CMS, they are also trying to provide LHCb and ALICE with their appropriate luminosities.

As Aidan mentioned in his post, there are a couple of different ways to modify instantaneous luminosity: you can change the number of proton bunches in the beam or you can change the area of the proton bunches that collide.

Last year the LHC operators optimised the collision conditions and this year have been increasing instantaneous luminosity by increasing the number of proton bunches.

The varying instantaneous luminosity requirements of the experiments have so far been handled by having a different number of proton bunches colliding at each of the interaction points. For example, last week there were 228 proton bunches in the beam, 214 of which were colliding in ATLAS and CMS, 12 of which were colliding in ALICE and 180 of which were colliding in LHCb.

However as more and more proton bunches are injected into the beam, it is not possible to continue to limit the instantaneous luminosity at ALICE and LHCb by limiting the number of colliding bunches. Instead, the LHC operators need to modify the collision conditions. This is what luminosity leveling refers to.

Luminosity leveling is performed by moving the proton beams relative to each other to modify the area available for interactions as the bunches pass through each other. This concept is much easier to explain diagrammatically: if the centres of the beams are aligned like on the left, there are more interactions than if they are offset from each other like on the right.
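To put a rough formula on it: for round Gaussian bunches of transverse size \(\sigma\) (a simplifying assumption on my part, not a number from the LHC operators), separating the two beams by a transverse distance \(d\) at the interaction point reduces the luminosity by a factor

\[
\frac{L(d)}{L(0)} = e^{-d^{2}/4\sigma^{2}},
\]

so the operators can dial the luminosity delivered to LHCb up or down simply by adjusting the separation of the beams.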

This luminosity leveling process can be seen in action in the graph below, from the nice long LHC fill last night. You can see the ATLAS and CMS luminosities slowly decreasing due to collisions, while the LHCb luminosity stays roughly constant at \(1.3 \times 10^{32} cm^{-2} s^{-1} \); the vertical red lines indicate when the beam adjustments were made.


We've mentioned jets a few times here on the US LHC blog, so I'd like to go into a bit more detail about these funny unavoidable objects in hadron colliders. Fortunately, Cornell recently had a visit from David Krohn, a Simons Fellow at Harvard University who is an expert in jet substructure. With his blessing, I'd like to recap parts of his talk to highlight a few jet basics and mention some of the cutting edge work being done in the field.

Before jumping in, a public service announcement for physicists in this field: David is one of the co-organizers of the Boost 2011 workshop next month. It looks like it’ll be a great event for both theorists and experimentalists.

Hadronic Junk

Let’s review what we know about quantum chromodynamics (QCD). Protons and neutrons are composite objects built out of quarks which are bound together by gluons. Like electrons and photons in quantum electrodynamics (QED), quarks and gluons are assumed to be “fundamental” particles. Unlike electrons and photons, however, we do not observe individual quarks or gluons in isolation. You can pull an electron off of a Hydrogen atom without much ado, but you cannot pull a quark out of a proton without shattering the proton into a bunch of other very different looking things (things like pions).

The reason is that QCD is very nonperturbative at low energies. QCD hates to have color-charged particles floating around; it wants them to immediately bind into color-neutral composite objects, even if that means producing new particles out of the quantum vacuum to make everything neutral. These color-neutral composite objects are called hadrons. Unfortunately, the process of hadronizing a quark usually involves radiating off other quarks or gluons, which themselves hadronize. This process continues until you end up with a messy spray of particles in place of the original colored object. This spray is called a jet. (Every time I write about jets I feel like I have to reference West Side Story.)

 

Jets

Simulated event from ATLAS Experiment © 2011 CERN

As one can see in the image above, the problem is that the nice Feynman diagrams that we know how to calculate do not directly correspond to the actual mess of particles that form the jets which the LHC experiments measure. And it really is a mess. One cannot effectively measure every single particle within each jet, and even if one could, it is impractically difficult to calculate Feynman diagrams for very large numbers of particles.

Thus we’re stuck having to work with the jets themselves. High energy jets usually correspond to the production of a single high-energy colored particle, so it makes sense to talk about jets as “single objects” even though they’re really a spray of hadrons.

Update 4/24: David has corrected me and explained that while the process of jet formation is associated with strong coupling, it isn't really a consequence of non-perturbative physics. At the level of this blog, the distinction is perhaps too subtle to harp over. For experts, however, I should note for complete honesty that it is indeed true that a lot of jet physics is calculable using perturbative techniques while tiptoeing around soft and collinear singularities. David notes that a nice way to think about this is to imagine QED in the limit where the electromagnetic force were stronger, but not incalculably strong ("non-perturbative"). In this case we could still draw Feynman diagrams for the production of electrons, but as we dial up the strength of the electromagnetic force, the actual observation in our detectors won't be single electrons, but a "jet" formed from an electron and a spray of photons.

Identifying Jets

So we’ve accepted the following fact of life for QCD at a particle collider:

Even though our high energy collisions produce ‘fundamental’ particles like quarks and gluons, the only thing we get to observe are jets: messy sprays of hadrons.

Thus one very important task is trying to make the correspondence between the ‘fundamental’ particles in our Feynman diagrams and the hadronic slop that we actually measure. In fact, it’s already very hard to provide a technical definition of a jet. Our detectors can identify most of the “hadronic slop,” but how do we go from this to a measurement of some number of jets?

This process is called clustering and involves developing algorithms to divide hadrons into groups which are each likely to have come from a single high energy colored particle (quarks or gluons). For example, for the simple picture above, one could develop a set of rules that cluster hadrons together by drawing narrow cones around the most energetic directions and defining everything within the cone to be part of the jet:

Jet Clustering

Simulated event from ATLAS Experiment © 2011 CERN

One can then measure the energy contained within the cone and say that this must equal the energy of the initial particle which produced the jet, and hence we learn something about the fundamental object. I'll note that this kind of "cone algorithm" for jet clustering can be a little crude, and there are more sophisticated techniques on the market ("sequential recombination").
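To make the idea concrete, here is a toy sketch of cone-style clustering along the lines described above. It is my own greatly simplified illustration, not the algorithm any experiment actually runs (real algorithms are carefully constructed to be infrared-safe), and the particle list with pt/eta/phi fields is just an assumed input format:

  import math

  def delta_r(p1, p2):
      # Angular separation of two particles in the (eta, phi) plane.
      deta = p1["eta"] - p2["eta"]
      dphi = abs(p1["phi"] - p2["phi"])
      if dphi > math.pi:
          dphi = 2.0 * math.pi - dphi
      return math.hypot(deta, dphi)

  def cone_cluster(particles, radius=0.4, min_seed_pt=5.0):
      # Greedy fixed-cone clustering: seed on the hardest unused particle,
      # sweep up everything within `radius` of it, call that a jet, repeat.
      remaining = sorted(particles, key=lambda p: p["pt"], reverse=True)
      jets = []
      while remaining and remaining[0]["pt"] > min_seed_pt:
          seed = remaining[0]
          members = [p for p in remaining if delta_r(seed, p) < radius]
          jets.append({"pt": sum(p["pt"] for p in members), "constituents": members})
          remaining = [p for p in remaining if p not in members]
      return jets

Feed it a list of measured particles and it hands back a list of jets, each with a total pt and its list of constituents.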

Boosted Jets

Even though the above cartoon was very nice, you can imagine how things can become complicated. For example, what if the two cones started to approach each other? How would you know if there was one big jet or two narrow jets right next to each other? In fact, this is precisely what happens when you have a highly boosted object decaying into jets.

By “boosted” I mean that the decaying particle has a lot of kinetic energy. This means that even though the particle decays into two colored objects—i.e. two jets—the jets don’t have much time to separate from one another before hitting the detector. Thus instead of two well-separated jets as we saw in the example above, we end up with two jets that overlap:

Collimation of two jets into a single jet as the decaying particle is boosted. Image from D. Krohn.

Now things become very tricky. Here’s a concrete example. At the LHC we expect to produce a lot of top/anti-top pairs (tt-bar). Each of these tops immediately decays into a b-quark and a W. Thus we have

t, t-bar → b, b-bar, W, W

(As an exercise, you can draw a Feynman diagram for top pair production and the subsequent decay.) These Ws are also fairly massive particles and can each decay into either a charged lepton and a neutrino, or a pair of quarks. Leptons are not colored objects and so they do not form jets; thus the charged lepton (typically a muon) is a very nice signal. One promising channel to look for top pair production, then, is the case where one of the Ws decays into a lepton and neutrino and the other decays into two quarks:

t, t-bar → b, b-bar, W, W → b, b-bar, q, q-bar, lepton, ν

The neutrino is not detected, and all of the quarks (including the bottoms) turn into jets. We thus can search for top pair production by counting the number of four jet events with a high energy lepton. For this discussion we won’t worry about background events, but suffice it to say that one of the reasons why we require a lepton is to help discriminate against background.

Here’s what such an event might look like:

Simulated event from ATLAS Experiment © 2011 CERN

Here “pT” refers to the energy (momentum perpendicular to the beam) of the top quarks. In the above event the tops have a modest kinetic energy. On the other hand, it might be the case that the tops are highly boosted—for example, they might have come from the decay of a very heavy particle which thus gives them a lot of kinetic energy. In the following simulated event display, the tops have a pT that is ten times larger than the previous event:

Simulated event from ATLAS Experiment © 2011 CERN

Now things are tricky! Instead of four clean jets, it looks like two slightly fat jets. Even though this simulated event actually had the “b, b-bar, q, q-bar, lepton, ν” signal we were looking for, we probably wouldn’t have counted this event because the jets are collimated.

There are other ways that jets tend to be miscounted. For example, if a jet (or anything really) is pointed in the direction of the beam, then it is not detected. This is why it’s something of an art to identify the kinds of signals that one should look for at a hadron collider. One will often find searches where the event selection criteria requires “at least” some number of jets (rather than a fixed number) with some restriction on the minimum jet energy.

Jet substructure

One thing you might say is that even though the boosted top pair seemed to produce only two jets, shouldn't there be some tell-tale relic of the fact that each is actually two small jets rather than one big jet? There has been a lot of recent progress in this field.

Distinguishing jets from a boosted heavy particle (two collimated jets) from a "normal" QCD jet with no substructure. The plot is a cylindrical cross section of the detector---imagine wrapping it around a toilet paper roll aligned with the beam. Image from D. Krohn.

The main point is that one can hope to use the "internal radiation distribution" to determine whether a "spray of hadrons" contains a single jet or more than one jet. As you can see from the plots above, this is an art that is similar to reading tea leaves. (… and I only say that with the slightest hint of sarcasm!)

[For experts: the reason why the QCD jets look so different are the Altarelli-Parisi splitting functions: quarks and gluons really want to emit soft, collinear stuff.]

There’s now a bit of an industry for developing ways to quantify the likelihood that a jet is really a jet (rather than two jets). This process is called jet substructure. Typically one defines an algorithm that takes detector data and spits out a number called a jet shape variable that tells you something about the internal distribution of hadrons within the jet. The hope is that some of these variables will be reliable and efficient enough to help us squeeze as much useful information as we can out of each of our events. There also seems to be a rule in physics that the longer you let theorists play with an idea, the more likely it is that they’ll give it a silly name. One recent example is the “N-subjettiness” variable.
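To give a flavor of what such a variable might look like, here is my own toy example (not one of the variables from the talk): the jet "girth" or width, the pT-weighted average angular spread of the jet's constituents. A fat jet hiding two collimated subjets tends to populate this variable differently than a plain QCD jet:

  import math

  def girth(constituents):
      # Toy jet-shape variable: pT-weighted average angular distance of the
      # constituents from the jet axis.  `constituents` is an assumed input
      # format, a list of (pt, eta, phi) tuples; the axis is taken to be the
      # pT-weighted centroid (ignoring phi wrap-around for simplicity).
      total_pt = sum(pt for pt, eta, phi in constituents)
      axis_eta = sum(pt * eta for pt, eta, phi in constituents) / total_pt
      axis_phi = sum(pt * phi for pt, eta, phi in constituents) / total_pt
      width = 0.0
      for pt, eta, phi in constituents:
          dphi = abs(phi - axis_phi)
          if dphi > math.pi:
              dphi = 2.0 * math.pi - dphi
          width += pt * math.hypot(eta - axis_eta, dphi) / total_pt
      return width

A small girth means the jet's energy is concentrated near its axis; a larger girth hints at broader internal structure.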

Jet superstructure

In addition to substructure, there has also been recent progress in the field of jet superstructure, where one looks at correlations between two or more jets. The basic idea boils down to something very intuitive. We know that the Hydrogen atom is composed of a proton and an electron. As a whole, the Hydrogen atom is electrically neutral so it doesn’t emit an electric field. (Of course, this isn’t quite true; there is a dipole field which comes from the fact that the atom is actually composed of smaller things which are charged.) The point, however, is that far away from the atom, it looks like a neutral object so we wouldn’t expect it to emit an electric field.

We can say the same thing about color-charged particles. We already know that quarks and gluons want to recombine into color-neutral objects. Before this happens, however, we have high energy collisions with quarks flying all over the place trying to figure out how to become color neutral. Focusing on this time scale, we can imagine that certain intermediate configurations of quarks might already be color neutral and hence would be less likely to emit gluons (since gluons are the color-field). On the other hand, other intermediate configurations might be color-charged, and so would be more likely to emit gluons. This ends up changing the distribution of jet slop.

Here’s a nice example from one of the first papers in this line of work. Consider the production of a Higgs boson through “quark fusion,” i.e. a quark and an antiquark combining into a Higgs boson. We already started to discuss the Higgs in a recent post, where we made two important points: (1) once we produce a Higgs, it is important to figure out how it decays, and (2) once we identify a decay channel, we also have to account for the background (non-Higgs events that contribute to that signal).

One nice decay channel for the Higgs is b b-bar. The reason is that bottom quark jets have a distinct signature—you can often see that the b quark traveled a small distance in the detector before it started showering into more quarks and gluons. Thus the signal we’re looking for is two b-jets. There’s a background for this: instead of qq-bar → Higgs → b-jets, you could also have qq-bar → gluon → b-jets.

The gluon-mediated background is typically very large, so we would like to find a clever way to remove these background events from our data. It turns out that jet superstructure may be able to help out. The difference between the Higgs → b-jets decay versus the gluon → b-jets decay is that the gluon is color-charged. Thus when the gluon decays, the two b-quarks are also color-charged. On the other hand, the Higgs is color-neutral, so that the two b-quarks are also color neutral.

One can draw this heuristically as “color lines” which represent which quarks have the same color charge. In the image below, the first diagram represents the case where an intermediate Higgs is produced, while the second diagram represents an intermediate gluon.

Color lines for qq-bar → Higgs → b-jets and qq-bar → gluon → b-jets. Image from 1001.5027

For the intermediate Higgs, the two b-jets must have the same color (one is red, the other is anti-red) so that the combined object is color neutral. For the intermediate gluon, the color lines of the two b-jets are tied up to the remnants of the protons (the thick lines at the top and bottom). The result is that the hadronic spray that makes up the jets tend to be pulled together for the Higgs decays, while pushed apart for the gluon decays. This is shown heuristically below, where again we should understand the plot as being a cylindrical cross section of the detector:

Higgs decays into two b-jets (signal) versus gluon decays (background). Image from 1001.5027

One can thus define a jet superstructure variable (called "pull") to quantify how much two jets are pulled together or pushed apart. The hope is that this variable can be used to discriminate between signal and background and give us better statistics for our searches for new particles.
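For the curious, the pull of a jet is defined in the paper above (1001.5027) as a pT-weighted vector built from the jet's constituents, roughly

\[
\vec{t} = \sum_{i \in \text{jet}} \frac{p_T^{i}\, |\vec{r}_i|}{p_T^{\text{jet}}}\, \vec{r}_i,
\]

where \(\vec{r}_i\) is the position of constituent \(i\) relative to the jet axis in the rapidity-azimuth plane. The useful observable is then the direction of \(\vec{t}\): for the Higgs signal it tends to point toward the partner b-jet, while for the gluon background it tends to point toward the beam remnants.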

Anyway, that’s just a sample of the types of neat things that people have been working on to improve the amount of information we can get out of each event at hadron colliders like the LHC. I’d like to thank David Krohn, once again, for a great talk and very fun discussions. For experts, let me make one more plug for his workshop next month: Boost 2011.


Last week was yet another exciting moment for those of us who are researching the nature of dark matter. The long-awaited XENON100 results were released. XENON is the biggest rival to my own experiment, the Cryogenic Dark Matter Search, or CDMS.  In the world-wide race to discover dark matter, XENON and CDMS have been leading the pack over the past few years. These two experiments have been taking turns nudging ahead of each other, only to have the other pull ahead within about a year's time.  This time around, XENON has made a fairly big leap ahead. While the XENON collaboration did not report a discovery, their data do provide significant new constraints on the many theories that aim to explain dark matter.  Their new result tightens the limits on possible dark matter interactions by a factor of ~4 over previous world limits.

I'm certain two of the big questions many now have for CDMS are: what are our plans for the future, and will we be taking the next jump that puts us back in the lead? Though we've had a setback recently, I'm optimistic about CDMS.  We are in the process of starting a new phase of the experiment named SuperCDMS.  For SuperCDMS, we will implement a new detector design which will significantly increase our sensitivity to WIMPs. Last month, we were in the middle of testing these detectors when a fire broke out in the mine where the experiment resides. We are now waiting while the mine infrastructure is repaired. Once that is completed, we will begin our first physics run with the new detectors, which may be as soon as this summer. In the meantime, we are planning a much bigger version of the experiment at a much deeper underground site, SNOLAB in Canada. Both of these endeavors have planned sensitivities that exceed the current XENON limits.

In the meantime, of course, the XENON collaboration will be continuing to gather more data and working on their next-generation experiment. However, based on their reported results, it is clear that they cannot simply improve their sensitivity with more of the same data. To push their sensitivity further, they must reduce intrinsic radioactive contaminants in their detector. Though they claim to have started a new run with higher purity levels, it’s unclear how long they can sustain the current conditions of the detector.  

So the worldwide race has tipped toward XENON for the time being, but meanwhile, the future of CDMS is bright. We don’t yet know what nature has to reveal or what the future will bring. This makes the world of dark matter research fascinating. No matter what happens, we all look forward to learning what the final outcome will be. For me, these are great reasons to push forward in the race to understand dark matter.

— Lauren Hsu


What’s in a bunch?

Friday, April 22nd, 2011

CERN’s recent tweets have been cramming as much excitement as you can squeeze into 140 characters about the increasing number of bunches in the LHC beams, culminating with the record intensity for a hadron collider that was set with 480 bunches per beam last night. Time, then, to explain what that’s all about.

A beam in the LHC is not a continuous string of particles, but is divided into chunks a few centimetres long squeezed down to the size of a human hair at the collision point. Elsewhere in the ring the beam size varies but is normally less than a millimetre.

These chunks are what we call bunches. Each bunch contains about a hundred billion protons, and it's a measure of just how small protons are that if you were to scale each one up to the size of a marble, the bunch length would be something like the distance from Earth to Uranus and the width of the bunch would be something like the distance between the Earth and the Moon. Neighbouring marble-sized protons would be as far apart as Geneva and Hamburg. So it's not surprising that when bunches collide in the LHC, only a handful of proton-proton collisions happen.

Discovery in particle physics is a statistical process, so increasing the number of bunches is important. It increases the number of collisions, or the statistics, as physicists put it.  The LHC is designed to run with 2808 bunches per beam, separated by a gap of just 25 nanoseconds. Since this is still early days in LHC running, we're still at relatively low numbers and the bunch spacing is 50 nanoseconds. Nevertheless, building the number of bunches steadily this year towards last night's record-breaking 480 per beam and beyond means that the LHC experiments have already collected far more data this year than they collected in all of 2010.

Increasing the number of bunches in the beam is a stepwise process, since although each proton only has the energy of a mosquito in flight, by the time you multiply that by hundreds of billions, you have a large amount of energy stored in the beams. The operators need to be sure that the systems designed to protect the machine from damage are all ready before increasing the number of bunches. When the LHC reaches its full design potential, the beams will carry the energy of a 20,000 tonne aircraft carrier travelling at 12 knots. With 480 bunches per beam at half the LHC’s design energy, the energy stored in the beams equates roughly to the same aircraft carrier travelling at the rather more sedate pace of a little under 3 knots. Not quite so impressive, but a significant amount of energy nevertheless.
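If you enjoy this sort of arithmetic, you can redo it yourself with round numbers. The sketch below is my own rough estimate, using the hundred billion protons per bunch and the 20,000 tonne carrier quoted above (everything else is an approximation, so expect only rough agreement with the figures in the text):

  # Rough beam-energy arithmetic with round numbers.
  EV_TO_J = 1.602e-19
  PROTONS_PER_BUNCH = 1e11          # "about a hundred billion protons" per bunch

  def beam_energy_joules(n_bunches, proton_energy_tev):
      return n_bunches * PROTONS_PER_BUNCH * proton_energy_tev * 1e12 * EV_TO_J

  def carrier_speed_knots(energy_j, mass_tonnes=20000):
      # Speed at which a ship of this mass has the same kinetic energy as the beam.
      speed_ms = (2.0 * energy_j / (mass_tonnes * 1000.0)) ** 0.5
      return speed_ms / 0.514       # convert m/s to knots

  # Design: 2808 bunches per beam at 7 TeV -- compare with the "12 knots" above.
  print(carrier_speed_knots(beam_energy_joules(2808, 7.0)))
  # Last night: 480 bunches per beam at 3.5 TeV -- compare with "a little under 3 knots".
  print(carrier_speed_knots(beam_energy_joules(480, 3.5)))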

James Gillies and Mike Lamont


A spring clean for the LHC

Friday, April 22nd, 2011

Since restarting on the 13th of March and until last week, the LHC had produced very little collision data for the thousands of physicists like me eager to prepare new results for the summer conferences. Why? Mostly because the whole Large Hadron Collider (LHC) team was busy setting up the machine to recover the luminosity obtained at the end of last year and, more recently, scrubbing the beam pipe.

Since the LHC is a brand new machine, you may wonder why it needed such a giant spring clean. Being new is precisely the point. It is just like buying a new car: just as new cars smell of new vinyl and whatever materials are used nowadays, the LHC needed to get rid of that brand-new smell.

Anybody who has ever worked with an extremely high vacuum knows that pumping down on a new system is like getting the smell out of a new car. The new materials are "outgassing", that is, releasing molecules trapped in their surfaces. This is why, to achieve an ultra pure vacuum, you use non-porous materials like glass or stainless steel. Nevertheless, molecules of all sorts, and grease in particular, always seem to manage to permeate any surface, especially when you are pumping down to less than a millionth of a millionth of atmospheric pressure, as at the LHC.

Throughout the first full year of operation in 2010, as the LHC operators and accelerator physicists gained expertise, they kept raising the LHC beam intensity nearly every week, overcoming hitches as they found them.  Just think of a garden hose. If the water flow in the hose increases under more pressure, any loose material or dirt attached to the hose wall will eventually come loose. As you double the pressure, increase it tenfold or even a hundredfold, more junk will start coming out, to the point where your water could be contaminated. And that's precisely what we saw at the end of last year, when the LHC had reached "luminosities" (something akin to the pressure in a hose) one hundred thousand times higher than at the beginning.

For the LHC, loose molecules released from the beam pipe ended up getting in the beam’s way. Because we don’t deal with electrically neutral material like water but with a flow of charged protons, the outgassed molecules got ionized, i.e. lost electrons, and electron clouds formed around the beam, making it increasingly difficult to operate at higher luminosities. The vacuum in the beam pipe was no longer good enough. This prevented the injection of more and more protons to increase the luminosity, that is, the chances of having more collisions needed for discoveries.

What was done for ten days was to inject high intensity but low energy beams into the machine to get a squeaky-clean beam pipe, while vacuum pumps evacuated all the molecules released from the pipe in the process. In fact, the same electrons that were in the way were put to use to clean the beam pipe surface. As bunches of protons passed by, their electric field accelerated the electrons, sending them forcefully against the pipe wall. And that's how the scrubbing occurred. By doing this repeatedly, the vacuum improved, and the LHC team could inject higher intensity beams for the next step. As with the water hose analogy, they increased the pressure until the water coming out at the other end was crystal clear again.

And the payoffs were immediate: as soon as they resumed operation, they could reach higher luminosities than last year. Within a week, we have already collected more data than in all of 2010! So yes, that means the LHC is on a roll.

This means us experimentalists are getting what we wanted: lots of collisions, to get a chance to see extremely rare events. To get there, the strategy is to keep adding more protons per beam. But that's easier said than done! This is like asking a juggler to go from keeping three or four balls in the air to several hundred!  The protons in the beams are grouped into bunches, with as many as one hundred thousand million protons per bunch. As of now, the operators can manage 480 bunches in the machine at once, with 36 bunches clustered in trains, and each bunch kept 50 nanoseconds apart (yes, that's 50 billionths of a second!). And this occurs simultaneously for the two beams, with the two beams circulating in opposite directions. All this to maximize the number of protons in the two circulating beams and increase the number of collisions produced in each detector. The goal is to get as close as possible, sometime this year, to the theoretical limit of 1318 bunches per beam with 50 ns bunch spacing.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline
 
