
Archive for August, 2011

How do you align a telescope to make sure it’s pointing in the right direction? Piece of cake: you use any cheap sky-mapping software that gives you the position of a star in the sky at a given time for your location, point your telescope in that direction, and make sure that you see the right star. Done!

Now, how do you align a neutrino telescope like IceCube to make sure it’s pointing in the right direction? Well, as you may imagine, this is a little bit harder. With no high energy neutrino sources detected so far, we can’t really use the same procedure as with an optical telescope, but there’s a way: using cosmic rays.

Every second, IceCube detects about 2000 muons coming from the interaction between cosmic rays (protons, most of the time) and the Earth’s upper atmosphere. These protons have energies of 20 TeV on average, about 6 times the energy of the protons going around the LHC ring. Although protons of this energy can’t be used for galactic astronomy, since their trajectories are bent by the interstellar magnetic field over scales of tens or hundreds of parsecs, they can travel on pretty straight paths over shorter distances like those characteristic of the inner Solar System.

Just as no neutrino point sources have been observed so far, these magnetic deflections mean that there are no point sources of cosmic rays in the sky either. But in this case the idea is not to look for a source of cosmic rays, but for an antisource. The Moon, conveniently located only 360000 km away from our planet, is a very good absorber of cosmic rays. Because of that, the Moon should be casting a cosmic ray shadow, blocking all cosmic rays that come from its direction and thereby preventing us from detecting the muons associated with those ill-fated protons in IceCube. Also, since the distance is small, the shadow of TeV cosmic rays is not destroyed by magnetic field deflections, so the effect should really be observable by IceCube.

If we map the cosmic rays detected around the location of the Moon, we should see a deficit in the number of cosmic rays that are coming from that part of the sky. If we don’t see such a deficit, then that means that there’s something wrong with the directional reconstruction algorithms that we use for cosmic rays and neutrinos. This is actually an old idea, first proposed by G.W. Clark in 1957 and used on many occasions by different experiments to characterize the angular resolution of their detectors.

In IceCube, this work was performed by Laura Gladstone, David Boersma and two undergraduate students: Jan Blumenthal and Hugo Stiebel. Here’s how IceCube “sees” the Moon. The map shown below corresponds to the Moon shadow observed with IceCube in its 59-string configuration that was operated between May 2009 and May 2010. During this period, the detector recorded 22 million events in an angular window of 8 x 8 degrees centered around the position of the Moon.

The map below shows one result from the analysis. The color scale indicates the total number of events shadowed by the Moon at each position in the map. The deepest deficit corresponds to a total of 8192 events blocked by the Moon, and the location of this deepest point agrees with the expected position of the Moon to within 0.1 degrees. The statistical significance of the detection is around 14 sigma, where the usual rule in particle physics is that anything above 3 sigma indicates “evidence” for something being observed, and 5 sigma indicates a “discovery.” Probably it is too late to claim discovery of the Moon given that people have even walked on it, but those 14 sigma tell us that we are very confident that the shadow we see is not a fluke.
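
As a quick back-of-envelope check (not the collaboration’s actual statistical treatment, which is more careful than this), a simple Poisson counting estimate already reproduces the scale of that significance. The sketch below is in Python; the ~1 square degree search bin is my assumption, chosen to be comparable to the shadow size:

```python
# Rough significance estimate for the Moon shadow, assuming simple
# Poisson counting. Only n_total and n_deficit come from the text;
# the ~1 deg^2 search bin is an illustrative assumption.
import math

n_total = 22e6                 # events in the 8 x 8 degree window
window_area = 8 * 8            # square degrees
search_bin = 1.0               # assumed bin size, ~shadow width [deg^2]

n_expected = n_total * search_bin / window_area   # background in the bin
n_deficit = 8192               # events blocked by the Moon

significance = n_deficit / math.sqrt(n_expected)
print(f"expected: {n_expected:.0f} events, deficit: {n_deficit},"
      f" significance: {significance:.1f} sigma")   # ~14 sigma
```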

We’re right now writing a paper that will include all the details of the Moon shadow observation.

The shadow of the Moon as was observed with the 59-string configuration of IceCube (preliminary plot).

Not only is the fact that we see the Moon important for IceCube; the width of the shadow also tells us something about the precision of the reconstruction methods that we use to determine the arrival directions of the cosmic rays. The width of the shadow is of the order of 1 degree, which agrees with our simulations of the detector.

Besides its use as a sanity check for the pointing capabilities of the detector, the Moon shadow can also be useful for physics: it can provide a measurement of the antimatter content of cosmic rays, a value that has implications for dark matter, since this antimatter could be generated in WIMP annihilations.

Due to the magnetic field of the Earth, the proton shadow of the Moon should be slightly shifted to the left of the expected position of the Moon in the map above. This effect is stronger at lower energies, where the offset can be as large as ~1 degree for cosmic rays of ~1 TeV. For IceCube energies the effect may be too small to be measured, but other experiments working at lower energies actually see this deflection happening. What’s important about this deflection is that if there’s a small fraction of antiprotons in the cosmic ray flux, they would produce a shadow that’s deflected by the same amount as the normal proton shadow, but in the opposite direction from the Moon, due to their opposite electrical charge. At the moment I’m writing a cosmic ray propagation code to see how much and in which direction we should see the shadow shifting when observed from the South Pole (see an example of the propagation code output in the image below).

3D view of beams of GeV protons (green lines) being propagated towards the Earth (light blue sphere in the middle) which are scattered away by the geomagnetic field.
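
To give a flavor of what such a propagation code does, here is a minimal sketch in Python. It assumes a centered-dipole model of the geomagnetic field and advances the proton with a relativistic Boris push; the starting energy, position and step size are illustrative, and none of this is the actual code used for the study:

```python
# Minimal sketch of a cosmic-ray tracker: a proton in a centered-dipole
# geomagnetic field, advanced with a relativistic Boris push. All the
# numbers (energy, start point, step size) are illustrative.
import numpy as np

Q  = 1.602176634e-19    # proton charge [C]
M  = 1.67262192e-27     # proton mass [kg]
C  = 2.99792458e8       # speed of light [m/s]
RE = 6.371e6            # Earth radius [m]

def b_dipole(r, m_dip=8.0e22):
    """Centered-dipole geomagnetic field [T] at position r [m]."""
    m = np.array([0.0, 0.0, -m_dip])     # dipole moment [A m^2]
    rn = np.linalg.norm(r)
    return 1e-7 * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)  # mu0/4pi = 1e-7

def propagate(r, v, dt, n_steps):
    """Track one particle; returns the trajectory as an (n+1, 3) array."""
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
    u = gamma * v                        # u = gamma * v; |u| is conserved
    trajectory = [r.copy()]
    for _ in range(n_steps):
        t = (Q * dt / (2.0 * M * gamma)) * b_dipole(r)   # half-step rotation
        s = 2.0 * t / (1.0 + np.dot(t, t))
        u = u + np.cross(u + np.cross(u, t), s)          # Boris rotation
        r = r + (u / gamma) * dt
        trajectory.append(r.copy())
    return np.array(trajectory)

# Example: a 10 GeV proton launched at the Earth from 10 Earth radii.
e_kin = 10e9 * Q                         # 10 GeV in joules (1 eV = Q joules)
gamma0 = 1.0 + e_kin / (M * C**2)
speed = C * np.sqrt(1.0 - 1.0 / gamma0**2)
track = propagate(np.array([10 * RE, 0.0, 0.0]),
                  np.array([-speed, 0.0, 0.0]), dt=1e-4, n_steps=3000)
```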

 

If such an “antishadow” is observed, a direct measurement of the antiproton-to-proton ratio in cosmic rays can be made by directly comparing the strengths of both shadows. If such a shadow is not observed, then a limit can be set on this ratio. This is what several experiments have done, most recently ARGO-YBJ, a cosmic ray detector located in Tibet. The current status of the antiproton/proton ratio measurements is summarized in the plot below, which I took from a conference proceeding by ARGO.

Most of the direct antiproton measurements have been performed only up to 100 GeV, with a ratio indicating that there’s only one antiproton for every 10000 protons in the cosmic ray flux. At higher energies only limits have been set so far, with values of about 0.1 in the ratio, or equivalently that there’s less than 1 antiproton for every 10 protons.

Direct measurements and limits for the antiproton/proton ratio.

The recently launched AMS-2 cosmic ray detector that’s now taking data attached to the International Space Station should be able to extend the direct measurements up to 10^3 GeV.

I’ll keep you updated about any news on the IceCube moon shadow front. For the moment, I leave you with a song appropriately called ‘Moonshadow’ by Yusuf Islam (aka Cat Stevens). I’m sure he was thinking of muons and antiprotons as he was writing it!

 

 

 


Working with TR13

Wednesday, August 31st, 2011

– By Kiel Strang, TRIUMF High School Fellowship Student

For the past 6 weeks I’ve been working for Dr. Conny Hoehr and TRIUMF’s Nuclear Medicine group.  The main project I’ve been working on involves a new process for producing Technetium-94m (94mTc) with the TR13 cyclotron.

Kiel and the TR13 Cyclotron

94mTc is a radioisotope used in PET imaging and has some properties that make it an attractive replacement for 99mTc, a commonly used imaging isotope that is now in short supply.  When the positron emitted by 94mTc annihilates with an electron, it emits 2 gamma rays in opposite directions.

Detecting these in coincidence allows the position of the tracer molecule to be determined more precisely than is possible with the single gamma emitted by 99mTc, producing better image quality.

94mTc has been previously produced using solid molybdenum trioxide-94 (94MoO3) targets.  Dr. Hoehr and her team are developing an alternate method of producing 94mTc using a liquid target filled with a solution of 94MoO3, ammonium hydroxide (NH4OH), hydrogen peroxide (H2O2) and water.

My role in this project was developing the software to control and automate handling of the target solution.

Using NI Lookout (http://sine.ni.com/nips/cds/view/p/lang/en/nid/12511), I developed a control interface for the process and automated the expected sequence of operations.  I tried to make the interface easy to understand and operate, and flexible enough to allow for easy adjustment as the procedures are finalized.

As this is an experimental system, I tried to leave the operator lots of flexibility.  In addition to the automated stages, the interface allows manual control of all the valves.

One of the most interesting challenges in developing the interface was controlling the syringe pump used to push solutions into the system.  This pump has an integrated microcontroller that can be programmed with quite complex tasks, but the interface between the pump controller and the Lookout control software is very limited.  There are two programmable input pins available, plus one pin that starts or pauses the pump program.

The Lookout control program needs to be able to select any of 3 preset dispensing volumes (for filling the target, dispensing products, and purging the system).  I did this using timing on one of the input pins – when the pump program is started, it will select a volume based on the length of time the pin is powered.  The other programmable pin is used as an emergency stop signal.
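
NI Lookout logic doesn’t paste well into a blog post, so here is the pulse-length idea sketched in Python instead. The pin-reading helper, the thresholds and the preset volumes are all hypothetical placeholders; only the scheme itself (measure how long the start pin stays powered, then pick one of three volumes) follows the description above:

```python
# The pulse-length scheme sketched in Python (the real interface is
# built in NI Lookout). `read_pin` is a hypothetical callable returning
# True while the start pin is powered; thresholds and volumes are
# placeholder values, not the finalized procedure's.
import time

PRESETS = [              # (minimum pulse length [s], volume [mL])
    (3.0, 10.0),         # long pulse   -> purge the system
    (1.5, 2.5),          # medium pulse -> dispense products
    (0.0, 1.0),          # short pulse  -> fill the target
]

def measure_pulse(read_pin, poll=0.01):
    """Time how long the start pin stays powered."""
    while not read_pin():            # wait for the rising edge
        time.sleep(poll)
    start = time.monotonic()
    while read_pin():                # wait for the falling edge
        time.sleep(poll)
    return time.monotonic() - start

def select_volume(pulse_length):
    """Map the measured pulse length onto one of the three presets."""
    for threshold, volume in PRESETS:
        if pulse_length >= threshold:
            return volume
```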

Because this system is not expected to be assembled and run until the fall, I had to test each component individually.  For the Lookout software, I created simulated inputs and indicators for the state of the outputs.  I also tested the pump by manually connecting power to its input pins.  These methods allowed me to verify that each component works as intended before the entire system is assembled.

I’ve been debating between studying Physics or Engineering Physics for a couple of years, but working at TRIUMF has shown me that an engineering education could be very valuable even if I eventually decide to pursue a career in physics.


Update: Section added to include Lepton-Photon 2011 results on Higgs boson exclusion (01 Sept 2011)

Expect bold claims at this week’s SUSY 2011 (#SUSY11 on Twitter, maybe) Conference at Fermilab, in Batavia, Illinois. No, I do not have any secret information about some analysis that undoubtedly proves Supersymmetry‘s existence; though, it would be pretty cool if such an analysis does exist. I say this because I came back from a short summer school/pre-conference that gave a very thorough introduction to the mathematical framework behind a theory that supposes that there exists a new and very powerful relationship between particles that make up matter, like electrons & quarks (fermions), and particles that mediate the forces in our universe, like photons & gluons (bosons). This theory is called “Supersymmetry”, or “SUSY” for short, and might explain many of the shortcomings of our current description of how Nature works.

At this summer school, appropriately called PreSUSY 2011, we were additionally shown the amount of data that the Large Hadron Collider is expected to collect before the end of this year and at the end of 2012. This is where the game changer appeared. Back in June 2011, CERN announced that it had collected 1 fb-1 (1 inverse femtobarn) worth of data – the equivalent of 70,000 billion proton-proton collisions – a whole six months ahead of schedule. Yes, the Large Hadron Collider generated a year’s worth of data in half a year’s time. What is more impressive is that the ATLAS and CMS experiments may each end up collecting upwards of 5 fb-1 before the end of this year, a benchmark that a large number of people said would be a “highly optimistic goal” for 2012. I cannot emphasize enough how crazy & surreal it is to be seriously discussing the possibility of having 10 fb-1, or even 15 fb-1, by the end of 2012.

Figure 1: Up-to-date record of the total number of proton collisions delivered to each of the Large Hadron Collider Detector Experiments. (Image: CERN)

What this means is that by the end of this year, not next year, we will definitely know whether or not the higgs boson, as predicted by the Standard Model, exists. It also means that by next year, experimentalists will be able to rule out the most basic versions of Supersymmetry, which were already ruled out by previous, high-precision measurements of previously known (electroweak) physics. Were we to find Supersymmetry at the LHC now, and not when the LHC is at design specifications, which are expected to be reached in 2014, then many physicists would be at a loss trying to reconcile why one set of measurements rules out SUSY but another set of measurements supports its existence.

What we can expect this week, aside from the usual higgs boson and SUSY exclusion plots, is a set of updated predictions as to where we expect to be this time next year. Now that the LHC has given us more data than we had anticipated, we can truly explore the unknown, so trust me when I say that the death of SUSY has been greatly exaggerated.

More on Higgs Boson Exclusion (Added 01 Sept 2011)

This morning a new BBC article came out on the possibility of the higgs being found by Christmas. So why not add some plots, shown at August’s Lepton-Photon 2011 Conference, that demonstrate this? These plots were taken from Vivek Sharma’s Higgs Searches at CMS talk.

If there is no Standard Model higgs boson, then the Compact Muon Solenoid Detector, one of the two general purpose LHC detectors, should be able to exclude the boson, singlehandedly, with a 95% Confidence Level. ATLAS, the second of the two general purpose detectors, is similarly capable of such an exclusion.

Figure A: The CMS Collaboration projected sensitivity to excluding the higgs boson with 5 fb-1 at √s = 7 TeV; the black line gives combined (total) sensitivity.

Things get less clear if there is a higgs boson, because physical & statistical fluctuations add to our uncertainty. If CMS does collect 5 fb-1 before the winter shutdown, then it is capable of claiming at least a 3σ (three-sigma) discovery for a higgs boson with a mass anywhere between mH ≈ 120 GeV/c2 and mH ≈ 550 GeV/c2. For a number of (statistical/systematic) reasons, the range might shrink or expand with 5 fb-1 worth of data, but only by a few GeV/c2. In statistics, “σ” (sigma) is the Greek letter that represents a standard deviation; a “3σ result” implies that there is only a 0.3% chance of it being a fluke. The threshold for discovery is set at 5σ, or a 0.000 06% chance of being a random fluke.
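
Those percentages are just the two-sided tail probabilities of a Gaussian distribution, which you can verify in a couple of lines (using scipy here):

```python
# The fluke probabilities quoted above are two-sided Gaussian tail
# probabilities: the chance that pure noise fluctuates beyond n sigma.
from scipy.stats import norm

for n_sigma in (3, 5):
    p = 2 * norm.sf(n_sigma)      # sf(x) = 1 - CDF(x), one tail
    print(f"{n_sigma} sigma -> {100 * p:.5f}% chance of a fluke")
# 3 sigma -> ~0.27%; 5 sigma -> ~0.00006%
```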

Figure B: The CMS Collaboration projected sensitivity to discovering the higgs boson with 1 (black), 2 (brown?), 5 (blue), and 10 (pink)  fb-1 at √s = 7 TeV.

By itself, the CMS detector is no longer sensitive at the bottom of the allowed mass range. By combining their results, however, a joint ATLAS-CMS analysis can do the full 3σ discovery and a 5σ job down to 128 GeV/c2. The 114 GeV/c2 benchmark that physicists like to throw around is the lower bound on the higgs boson mass set by CERN’s LEP Collider, which shut down in 2000 to make room for the LHC.

Figure C: The projected sensitivity of a joint ATLAS-CMS analysis for SM higgs exclusion & discovery for various benchmark data sets.

However, there are two caveats in all of this. The smaller one is that these results depend on another 2.5 fb-1 being delivered before the upcoming winter shutdown; if there are any more major halts in data collection, then the mark will be missed. The second, and more serious, caveat is that this whole time I have been talking about the Standard Model higgs boson, which comes with a pretty rigid set of assumptions. If there is new physics, then all these discovery/exclusion bets are off. 🙂

Nature’s Little Secrets

On my way to PreSUSY, a good colleague of mine & I decided to stop by Fermilab to visit a friend and explore the little secret nooks that make Fermilab, in my opinion, one of the most beautiful places in the world (keep in mind, I really love the Musée d’Orsay). What makes Fermilab such a gorgeous place is that it doubles as a federally sanctioned nature preserve! From bison to butterflies, the lab protects endangered or near-endangered habitats while simultaneously reaching back to the dawn of the Universe. Here is a little photographic tour of some of Nature’s best kept secrets. All the photos can be enlarged by clicking on them. Enjoy!

Figure 2: The main entrance to the Enrico Fermi National Accelerator Laboratory, U.S. Dept. of Energy Laboratory Designation: FNAL, nicknamed Fermilab. The three-way arch that does not connect evenly at the top is called Broken Symmetry and appropriately represents a huge triumph of Theoretical (Solid State & High Energy) Physics: Spontaneous Symmetry Breaking. Wilson Hall, nicknamed “The High-Rise,” can be seen in the background. (Image: Mine)

Figure 3: Wilson Hall, named after FNAL’s first director and Manhattan Project Scientist Robert Wilson, is where half of Fermilab’s magic happens. Aside from housing all the theorists & being attached to the Tevatron Control Room, it also houses a second control room for the CMS Detector called the Remote Operations Center. Yes, the CMS Detector can be fully controlled from Fermilab. The photo was taken from the center of the Tevatron ring. (Image: Mine)

Figure 4: A wetlands preserve located at the center of the Tevatron accelerator ring. The preservation effort has been so successful at restoring local fish that people with an Illinois fishing license (See FAQ) are actually allowed to fish. From what I have been told, the fish are exceptionally delicious the closer you get to the Main Ring. I wonder if it has anything to do with all that background neutrino rad… never mind. 🙂
Disclaimer: The previous line was a joke; the radiation levels at Fermilab are well within safety limits! (Image: Mine)

Figure 5: The Feynman Computing Center (left) and BZero (right), a.k.a. The CDF Detector Collision Hall. The Computing Center, named after the late Prof. Richard Feynman, cannot be justly compared to any other data center, except maybe CERN‘s computing center. Really, there is so much experimental computer research, custom-built electronics, and such huge processing power that there are no benchmarks that allow it to be compared. Places like Fermilab and CERN set the benchmarks. The Collider Detector at Fermilab, or CDF for short, is one of two general-purpose detectors at Fermilab that collect and analyze the decay products of proton & anti-proton collisions. Magic really does happen in that collision hall. (Image: Mine)

Figure 6: The DZero Detector Collision Hall (blue building, back), Tevatron Cooling River (center), and Collision Hall Access Road (foreground). Like CDF (Figure 5), DZero is one of two general-purpose detectors at Fermilab that collect and analyze the decay products of proton & anti-proton collisions. There is no question that the Tevatron generates a lot of heat. It was determined long ago that, by taking advantage of the area’s annual rainfall and temperature, the operating costs of running the collider could be drastically cut by using a naturally replenishable source of water to cool the collider. If there were ever a reason to invest in a renewable energy source, this would be it. The access road doubles as a running/biking track for employees and site visitors. If you run, one question that is often asked by other scientists is whether you are a proton or an anti-proton. The anti-protons travel clockwise in the Main Ring, and hence you are called an anti-proton if you bike/run with the anti-protons; the protons travel counter-clockwise. FYI: I am an anti-proton. (Image: Mine)

Figure 7: The Barn (red barn, right) and American bison pen (fence, foreground). Fermilab was built on prairie land and so I find it every bit appropriate that the laboratory does all it can to preserve an important part of America’s history, i.e., forging the Great American Frontier. Such a legacy of expanding to the unknown drives Fermilab’s mantra of being an “Ongoing Pioneer of Exploring the Frontier of Discovery.” (Image: Mine)

Figure 8: American bison (Bison bison) in the far background (click to enlarge). At the time of the photo, a few calves had just recently been born. (Image: Mine)

 

Happy Colliding.

 

– richard (@bravelittlemuon)

 

 


Almost everyone has been there: You rush to the airport only to stand in line and watch your plane board at what feels like a snail’s pace. Yet few people take the time to come up with a better solution and even fewer see their idea tested.

Frustration and luck helped Fermilab astrophysicist Jason Steffen accomplish both.

In 2008, after a particularly blood-boiling wait in an airport, the frequent flier decided to put the skills he uses to develop algorithms for tracking potentially habitable planets and dark matter particles to work on a computer model that loads passengers virtually. After testing several loading patterns, Steffen determined that loading in groups spaced two or three rows apart makes the process much more efficient. The improvement in boarding time depends on the size of the airplane. Spacing out passengers was key to allowing simultaneous depositing of luggage in overhead bins.
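
For the curious, the ordering is easy to write down. Below is a sketch of a Steffen-style boarding sequence for a hypothetical 12-row, six-abreast cabin; the seat labels and cabin size are illustrative, and this is my reading of the method rather than code from the paper:

```python
def steffen_order(n_rows=12):
    """Boarding order: windows first, then middles, then aisles; within
    each group, one side of the aisle at a time, back to front in
    alternating rows, so consecutive boarders are two rows apart."""
    order = []
    for pair in (("A", "F"), ("B", "E"), ("C", "D")):  # window, middle, aisle
        for start in (n_rows, n_rows - 1):             # even rows, then odd
            for seat in pair:                          # one side, then the other
                order.extend(f"{row}{seat}" for row in range(start, 0, -2))
    return order

# The first boarders are 12A, 10A, 8A, ...: every passenger standing in
# the aisle is two rows from the next, so luggage goes up in parallel.
print(steffen_order()[:6])   # ['12A', '10A', '8A', '6A', '4A', '2A']
```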

Although Steffen was sure his formula worked, it was still just theory on paper that needed flesh and bone testing.

Enter This vs. That, a new science program that uses often quirky experiments to test the merits of everyday choices such as what burns hotter in your grill – natural gas or propane. The show is being distributed internationally by MIPCOM and under discussion for Internet release in the United States. Jon Hotchkiss, producer of the program, came across Steffen’s paper on the Internet shortly after his own molasses-like airplane boarding experience, and a new episode was born.

“As much as This vs. That is entertaining, I also wanted there to be a take away,”  Hotchkiss said. “I wanted to provide answers to questions that people have in their daily lives.”

The show flew Steffen to Los Angeles during his vacation earlier this year to test five models of airplane boarding, including Steffen’s preferred method, on a mock 757 airplane in a movie sound stage. Steffen and Hotchkiss measured the plane’s interior to make sure it conformed with the specifications of a real 757. Seventy-two people, and their luggage, were loaded with boarding order and seat number randomized. Steffen was a little taken aback at first by the producer’s call but said that, “When someone offers to take your theoretical work and test it, you should jump at the chance.”

During the filming, Steffen’s nerves were put to the test by the show’s co-host, comedian Mark DeCarlo, who also hosts the Travel Channel’s Taste of America and Windy City LIVE’s man-about-town segments. Steffen wasn’t allowed to know the time it took for each boarding method until the end of the episode, and DeCarlo kept implying that Steffen shouldn’t quit his day job.

“I was watching people load and this one girl was slow,” Steffen said. “I kept thinking, ‘Man this last girl is going to make me look bad. Put the bag in the overhead bin already’. But once the time ‘three minutes’ came out of DeCarlo’s mouth, I knew I had it.” Steffen’s method was nearly twice as fast as the nearest competing method.
You can read about Steffen’s method and the sound-stage test in his white paper posted Sunday on the preprint server arXiv.
The first 7 ½ minutes of the This vs. That episode testing Steffen’s method can be viewed here, along with a few clips of Steffen’s boarding method.

The program’s Twitter feed will announce the U.S. release date of the full one-hour episode, pending a distribution contract.

–Tona Kunz


I sort of embellished for dramatic effect in my last post when I concluded with the statement: only more data will tell us the answer to the million dollar question: Is it Standard Model or New Physics?

In the case of the \(B_s\) mixing phase, \(\phi_s\), this isn’t strictly true. More data will improve the measurement of this quantity using the \(B_s \rightarrow J/\psi + \phi\) decay. However, we can also measure \(\phi_s\) using other decay modes of the \(B_s\) meson. One such decay is \(B_s \rightarrow J/\psi + f_0\).
As you can see from the Feynman diagram, this \(J/\psi + f_0\) decay is very similar to the \(J/\psi + \phi\) decay. However, it occurs at a much lower rate. So low, in fact, that LHCb was the first experiment to observe it!

This time I’ll present the numerical results for \(\phi_s\) instead of the graphical ones:

The Standard Model prediction is: \(-0.036 \pm 0.002\) rad
The \(J/\psi + \phi\) result is: \(0.13 \pm 0.18 \pm 0.07\) rad
The \(J/\psi + f_0\) result is: \(−0.29 \pm 0.35 \pm 0.02\) rad

The first error value quoted for the experimental results is statistical, while the second is from systematic uncertainties.

Looking closely at the numbers, you can see that both experimental results are consistent with each other and with the Standard Model, and both experimental results have larger statistical errors compared to systematic errors.

Comparing the two experimental results now, the \(J/\psi + \phi\) result has a smaller statistical error compared to the \(J/\psi + f_0\) result due to the lower rate of \(J/\psi + f_0\) decays. The \(J/\psi + \phi\) analysis used \(8276 \pm 94\) signal events, while the \(J/\psi + f_0\) analysis only found \(1428 \pm 47\). On the other hand, the \(J/\psi + f_0\) analysis is simpler than the \(J/\psi + \phi\) one, resulting in the smaller systematic error.
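
As a rough illustration of what the two results say together, one can add the statistical and systematic errors in quadrature and take an inverse-variance weighted average (ignoring any correlations, which a real combination would have to treat properly):

```python
# Naive combination of the two phi_s results quoted above: statistical
# and systematic errors added in quadrature, inverse-variance weighted
# average, correlations ignored (a real combination would treat them).
import math

results = [(0.13, 0.18, 0.07),    # J/psi + phi: (value, stat, syst) [rad]
           (-0.29, 0.35, 0.02)]   # J/psi + f0
sm_prediction = -0.036            # Standard Model prediction [rad]

weights = [1.0 / math.hypot(stat, syst) ** 2 for _, stat, syst in results]
combined = sum(w * v for w, (v, _, _) in zip(weights, results)) / sum(weights)
combined_err = 1.0 / math.sqrt(sum(weights))
pull = (combined - sm_prediction) / combined_err
print(f"combined phi_s = {combined:.2f} +/- {combined_err:.2f} rad,"
      f" {pull:.1f} sigma from the SM")   # well within 1 sigma
```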

Okay, now I can say with a clear conscience: only more data will allow us to measure the \(B_s\) mixing phase, \(\phi_s\), to greater accuracy and determine whether this will be where nature deviates from prediction…


In Transit

Sunday, August 28th, 2011

I’m currently in the middle of two important transitions, one logistical, the other scientific.

The logistical transition is my move to CERN. I got stuck in Cambridge for an extra two weeks because of a delay in getting my long-stay visa for France. With help from people on both sides of the Atlantic, I learned that the problem was that the person in charge of these visas at the French Ministry of Foreign Affairs had quit without leaving instructions for her replacement. After a week of uncertainty, on Friday afternoon I received my visa and rebooked my flights. In the meanwhile, I’m staying with patient, generous friends and living out of the suitcases I packed for my move.

Higgs Branching Ratios

The other transition is that I am wrapping up one measurement, with a paper about to clear (I hope!) the last stages of the ATLAS internal approval process, and am about to begin work on a new one. I’m excited because I’m starting to work on the search for the Higgs boson in the H → WW → lνlν channel. The W is the massive gauge boson that mediates the weak force, the l represents an electron or muon, and the ν represents a neutrino.  This is a key search channel for exploring the still-allowed mass values for the Standard Model Higgs, since for all possible Higgs boson masses greater than 120 GeV, the Higgs decays to a WW in at least 10% of events, as shown in the image to the right, taken from the LHC Higgs Cross Section Working Group.  For a sense of scale, masses below 115 GeV are excluded by direct searches at the LEP collider (the electron-positron collider that was the original inhabitant of the LHC tunnel).  Each W can decay to either a quark-antiquark pair, or a charged lepton and a neutrino. Requiring each W to decay to either an eν or μν pair includes only about 5% of WW decays, but channels with leptons potentially yield a much clearer signal, because there is less background. That is, if you choose an event with the characteristic features of a WW → lνlν candidate, odds are pretty good that you actually have a WW event, and not an event from a different source with features that mimic a WW event.
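
Incidentally, the “about 5%” is simple arithmetic on the measured leptonic branching ratios of the W; the values below are approximate:

```python
# Where the "about 5%" comes from: each W decays to an e-nu or mu-nu
# pair roughly 11% of the time, and the two W decays are independent.
# Branching ratios below are approximate measured values.
br_enu, br_munu = 0.108, 0.106
br_dilepton = (br_enu + br_munu) ** 2
print(f"WW -> lnu lnu fraction: {br_dilepton:.1%}")   # ~4.6%, about 5%
```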

In the results shown at EPS, both the ATLAS and CMS Higgs searches reported limits that were not as good as expected for low masses (130 GeV < m(H) < 160 GeV or so). The excess of events driving the degradation in the limits was mainly in the WW → lνlν final state. In the update shown at the Lepton-Photon conference last week in Mumbai, India, the excess has become less significant (See the CERN post for a nice summary). But this will remain a hot topic for the next year, since whether the excess stays or goes away, we will have enough data to confirm or exclude the Standard Model Higgs Boson by the end of the year.

But it’s kind of a funny choice of projects for me, because I always swore I would never work on a Higgs search. My reasoning has been that even if a Standard-Model-like Higgs Boson does exist, it can’t possibly be the whole story, and the analysis has already got too many people working on it anyways.

I still firmly believe the former, since there are a number of questions, such as the nature of dark matter and the disparate strengths of the different fundamental forces, which the Higgs boson does nothing to answer. But those questions will likely still be around in a year, and there seems to be a puzzle in the WW dataset right now. We have the opportunity in the coming months to discover an elusive particle, or make a definitive statement about its absence. Also right at this moment, I have the chance to devote almost 100% of my attention to some measurement, uninterrupted, for the next several months. This sort of opportunity is likely to be increasingly rare as my career progresses, and the draw of the WW puzzle is powerful.

As for excuse number two, well, it’s just an excuse. It’s true that there are good people already working on this measurement, but that just means that progress can be fast. Working with lots of good people also means that I should learn a lot, one of my goals for any project. I worry a little about what I’ll be able to contribute, but I’ve worked with similar signatures (top-antitop → WbWb → lνb lνb and plain old W → lν) in the past, so I’m hoping that I can help in spite of being a bit late to the party.

Friday night I head to CERN. Higgs or no Higgs, it ought to be an interesting year.


Why B physics? Why not A Physics?

Sunday, August 28th, 2011

In my last post, I showed that LHCb is the best LHC detector for B physics, using the decay of the \(B_s\) meson into a \(J/\psi\) meson and \(\phi\) meson as an example. Today I’m going to try and explain why we want to study this particular decay and show you our latest result.

The reason we are interested in studying the decays of B mesons is that they may shed light on one of the major mysteries of the universe, namely the source of the observed matter-antimatter asymmetry. Matter and antimatter are assumed to have existed in equal amounts at the beginning of the universe, but as the universe expanded and cooled, an asymmetry developed between them, leaving a universe completely dominated by matter.

The Standard Model predicts an asymmetry between matter and antimatter, but at a level that is too small to explain the observed asymmetry in the Universe. Deviations from the predictions would indicate new physics.

As an aside, the difference between the properties of matter and antimatter is called CP violation. I bring up this factoid as it makes up part of the LHCb logo, which I thought was quite clever when I first saw it.

Anyway, one area in which the Standard Model predicts an asymmetry is the \(B_s\) meson system; that is, anti-\(B_s\) mesons are not exact mirror images[*] of \(B_s\) mesons. This difference is encapsulated in the \(B_s\) mixing[***] phase \(\phi_s\). This phase is what can be measured from the decay \(B_s \rightarrow J/\psi + \phi\), which we just presented at the Lepton Photon conference in Mumbai.

I’ll spare you all the technical details of the analysis (the details of which should be appearing here soon) and skip to the result…

Okay, I know there’s a lot of information on this graph, so let’s go through it piece by piece. Firstly, the x-axis represents the \(B_s\) mixing phase, \(\phi_s\), while the y-axis represents the \(B_s\) decay width[****] difference \(\Delta\Gamma_s\). Both of these properties are shown as it is not possible to measure them independently. The Standard Model prediction for both of these variables is shown as the black point, while the CDF, D0 and LHCb results are shown as coloured contours, with the solid line representing the 68% confidence limit and the dashed line showing the 95% confidence limit.

The results of the measurements favour two regions, one of which is located around the Standard Model prediction, though not centered on it, indicating the possibility of new physics. The LHCb result, however, is disappointingly much closer to the prediction than the CDF and D0 results.

Only more data will tell us the answer to the million dollar question: Is it Standard Model or New Physics?

—————————————-
[*] I’m assuming here that you all know what antimatter is. If not, a common analogy is that antimatter is the mirror image of matter. More technically, antimatter has all the same properties of matter, apart from opposite charge and parity[**]. For example, the antimatter particle of a negatively charged left-handed electron is a positively charged right-handed positron.

[**] Parity is another name for chirality, which Flip explains very well in this post.

[***] A very interesting property of neutral mesons, such as the \(B_s\), is that they can spontaneously transform themselves into their own antiparticles (and vice versa). This phenomenon is known as flavor oscillation, or mixing, and I’ll definitely be discussing it in a future post.

[****] It turns out that one of the possible differences between \(B_s\) mesons and anti-\(B_s\) mesons is a property called decay width.
—————————————-

Oh, if anybody was wondering, there is no such thing as A Physics in particle physics, which is why we don’t study it…


Last owl shift at DZero

Saturday, August 27th, 2011

As I write these lines, I have been on night shift for one hour. Hopefully this time we are in a store and we can take a physics run – that wasn’t the case over the three previous days because of some issues in the Tevatron. But we have to be prepared for a full night of shift, which is just the fourth of seven.

For me, 11:00 p.m. is time to wake up. You have to follow a strict schedule to avoid health problems when you are working nights, so you typically plan to sleep from 4:00 p.m. to 11:00 p.m. and get a great night/afternoon of sleep. For this, I have my own tricks to fall asleep pretty quickly.

First, I have my dinner during the lunch period, that is to say at noon, typically. It’s a good way, before going to sleep, to tell your body that you expect to sleep in the coming hours. Then I take a shower to cool my body down, brush my teeth like a good boy and go to sleep. If you sleep well, you are not too tired when you wake up. Then you have a breakfast of bread, cereal, milk and orange juice (again, to trick your body and take in some sugar to be efficient during the shift) – of course, at the beginning it is pretty weird to do that stuff at 11:00 p.m., but anyway, you have to be at work at midnight to begin your shift!

Fermilab DZero experiment logo.

A colleague/friend from my French lab is there, so I can avoid biking to work at midnight (which is very good; I would like to thank him with this post). You arrive in the DZero control room and speak with the previous shifter to learn about all the issues encountered and, if you are lucky, how to fix them. Then you wait for the other new shifters to arrive, i.e. the Track shifter (for the tracking systems of the detector), the CalMuo shifter (for the calorimeter and muon systems) and the Captain, who leads all of us. As the DAQ shifter, I am responsible for recording data to the computers so that DZero people worldwide can work on it. As you can easily understand, it is better not to make errors, especially now, one month before the Tevatron ends!

During the first hour of your shift, you need to do a walkthrough, that is to say, go and see how the servers of the computer farms are working and check that there are no errors in data taking and processing. Usually there is no problem, and if there is one, you already know about it from watching the nine computer screens you have in front of you.

Each screen has a special duty, necessary for you to know what has actually happened and, most of the time, how to solve the issues encountered. For my job, I have to be sure that the events are properly recorded after passing through a lot of triggers, which are responsible for keeping only the events that are interesting for our further analysis. There are actually three different levels: L1, L2 and L3. The L1 trigger has to be the fastest, making a decision in microseconds, then sending the data to L2 and finally to L3 (before it is recorded to tape). The recorded data will be used by the collaboration to make the plots you already know.
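
For readers who like to see ideas in code, here is a schematic of such a three-level cascade in Python. The cuts and event fields are invented placeholders, not the real DZero trigger menu; the point is just that each level sees only what the previous, faster level accepted:

```python
# Schematic three-level trigger cascade. The cuts and event fields are
# invented placeholders, not the real DZero trigger menu.
def run_triggers(events):
    """Filter an event stream through L1 -> L2 -> L3 before taping."""
    levels = [
        ("L1", lambda e: e["calo_energy"] > 5.0),     # fast hardware decision
        ("L2", lambda e: e["n_tracks"] >= 2),         # partial reconstruction
        ("L3", lambda e: e["vertex_quality"] > 0.9),  # full software filter
    ]
    for name, accept in levels:
        events = [e for e in events if accept(e)]
        print(f"{name}: {len(events)} events pass")
    return events                                     # survivors go to tape

sample = [{"calo_energy": 7.2, "n_tracks": 3, "vertex_quality": 0.95},
          {"calo_energy": 2.1, "n_tracks": 5, "vertex_quality": 0.99}]
run_triggers(sample)   # only the first event survives all three levels
```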

A panorama of the DZero Control Room with me at the DAQ station. (Copyright: Eduard Delacruz-Burelo).

Then you can expect to perform other interesting operations, such as starting a new store, which consists of reinitializing the computing framework to be able to record data to tape for several runs. These runs will then be used by other people to filter all of this information. Of course, there are sometimes some troubles you have to fix (especially when you have a huge responsibility).

I still remember my first day of shift at the end of June, when it was such a mess – I was frightened and could not do anything, waiting for the captain’s instructions. Most of the time, you have to keep cool, talk to the other shifters and see if the issue has already happened in the past.

This is now my last time on shift. I have not been working at DZero for very long, but I can say that it is quite sad to think that, in a month, all of this will be over. All of these discussions, the problems encountered, the occasional laughs and learning science were wonderful experiences for me. I would like to thank again my PhD advisor, who was responsible for this opportunity, and, of course, the Fermilab DZero team.

Thanks to Fermi National Accelerator Laboratory, thanks to the great DZero team and thanks to you, readers of this post.

Alexandre


– By Byron Jennings, Theorist and Project Coordinator

Pure logical thinking can give us no knowledge whatsoever of the world of experience; all knowledge about reality begins with experience and terminates in it. Before you accuse me of scientism[1], let me point out that the previous sentence is a direct quote from Albert Einstein. Poincaré agreed: Experiment is the sole source of truth. It alone can teach us something new; it alone can give certainty. These are two points that cannot be questioned. Wow, tell us what you really think, don’t hold anything back. I might note that both Einstein and Poincaré were theorists.

Cannot be questioned? It certainly has been questioned. The so-called continental rationalists, people like Descartes (1596 – 1650, I think therefore I am), Leibniz (1646 – 1716, Newton’s rival in inventing calculus), and Kant (1724 – 1804, of synthetic a priori knowledge), based their epistemology on pure thought. Take Descartes—he developed an extensive physics based on pure thought with planetary motion due to vortices. You never heard of it? Shows the folly of trusting pure thought; it sank without a trace under Newton’s empiricism. Kant developed the idea of synthetic a priori knowledge: knowledge that came from pure thought and not observation. Unfortunately his examples, Euclidean geometry and Newton’s laws, turned out not to be true. Oops. At the risk (or pleasure[2]) of offending some people, I add proofs of God’s existence or non-existence to the list of failed attempts to obtain knowledge by pure thought.

But the lure of obtaining knowledge by pure thought is tempting: certainty and a free lunch. No need to talk to those annoying experimentalists who keep shooting my theories down. One of the people who succumbed to the temptation was David Hume (1711 – 1776). This is all the more surprising since he was a phenomenologist to the core. But he did not like miracles and tried to eliminate them by arguments based on pure thought. Well, if Einstein and Poincaré are correct, Hume is wrong. And in my opinion, wrong he is. It all hangs on the question: What is a miracle?

But before addressing that, let us tackle a simpler question: What is the distinction between natural and supernatural? Consider thunder and lightning. The Vikings believed that thunder was the noise made by the wheels of Thor’s chariot being pulled across the heavens by goats. This view was reinforced by the sparks made from the chariot wheels hitting rocks; sparks otherwise known as lightning. Groves of trees—which are the prime target for lightning—became sacred. Today we have a more prosaic view of thunder and lightning—just electromagnetism. The phenomena have not changed but the meaning has. Observations are given meaning based on the model or paradigm (Kuhn’s nomenclature) used to describe them.

We have an apparent collision between Kuhn on the one hand and Einstein and Poincaré on the other: models giving meaning vs observation being paramount. But it is more in appearance than reality. Observations are used to help build and constrain models, while the models then give meaning to the observations. This is self-consistent, not circular. The wiggles in the data seen recently at the LHC are only meaningful within the context of a model for high-energy physics and the detector. Would finding the Higgs boson be a miracle? Probably not. But supersymmetry… that is another matter.

More seriously: What is a miracle? According to Hume, a miracle is something that violates the laws of nature. That would be fine if we had a definitive list of the laws of nature. We don’t. We have, at best, something that may approximate them, something obtained by observations. If miracles occur, they would be observed and therefore built into the observationally derived laws, rendering Hume’s definition meaningless. Rather, we define a miracle as something that is supernatural. By the argument above, it then depends on the model: the model that is constrained by observation. In the end, the existence of miracles, like all other questions of how the universe operates, has to be settled empirically by observations and the models built on them. The medium may be the message, but the meaning is in the (observationally-constrained) model.

[1] I will defend scientism in another blog.
[2] I have a firm policy of never accidentally offending anyone.


CDF (red) and DZero (yellow) recorded the Colorado earthquake. Image courtesy of Todd Johnson, AD

On Tuesday, Aug. 23, the Tevatron accelerator knew something none of the people operating it knew. It felt what employees didn’t, and it reported the news faster than the media could upload it to the Internet.

A 5.9-magnitude earthquake had struck the East Coast, and the super-sensitive Tevatron felt it as it happened about 600 miles away. It had also registered a similar quake in Colorado the night before.

The quakes were recorded by sensors on large underground focusing magnets that compress particle beams from the four-mile Tevatron ring into precision collisions at the CDF and DZero detectors. The sensors keep these areas, the most sensitive to misalignment, under constant surveillance. Quakes can jiggle small numbers of particles – less than one percent of the beam – out of alignment and force the shutdown of parts of the three-story detectors to avoid damage. Tevatron operators compare the sensor recordings with updates from the U.S. Geological Survey to rule out natural causes before having to spend time diagnosing machine trouble as the cause of the beam movement.
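
Conceptually, that cross-check is simple; a toy version might look like the following, with illustrative timestamps and an assumed tolerance window:

```python
# Toy version of the cross-check: does a catalogued quake line up in
# time with a beam-motion alarm? Timestamps and window are illustrative.
from datetime import datetime, timedelta

def is_seismic(alarm_time, quake_times, window_minutes=30):
    """True if any catalogued quake occurred near the alarm time."""
    window = timedelta(minutes=window_minutes)
    return any(abs(alarm_time - q) <= window for q in quake_times)

usgs_quakes = [datetime(2011, 8, 23, 17, 51)]   # East Coast quake (UTC)
alarm = datetime(2011, 8, 23, 17, 55)           # hypothetical sensor alarm
print(is_seismic(alarm, usgs_quakes))           # True -> natural cause
```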

Typically, two quakes occurring in this short a timeframe would cause headaches for those who run the Tevatron, but fortunately the machine didn’t have beam in the tunnels at the time.

CDF (red) and DZero (yellow) recorded the East Coast earthquake. Image courtesy of Todd Johnson, AD

The Tevatron has recorded more than 20 earthquakes from all over the globe, as well as the deadly tsunamis in Sumatra in 2005 and in Japan in March.

—Tona Kunz
