## USLHC | USA

Ken Bloom
Friday, June 14th, 2013

Now that summer is fully here, are you feeling that old wanderlust, the desire to hit the open road? Well then, there are a lot of interesting places to go on the physics conference circuit between now and Labor Day. There are many fabulous locations on the menu, and who knows, you might get to hear the first public presentation of an exciting new physics result. While it’s true that what many would consider the most glamorous stuff from the LHC has already been pushed out (at the highest priority), you can be assured that scientists are hard at work on new results, and of course there are many other particle-physics experiments that are doing important work. So, find your frequent-flyer card and make sure you’ve changed the oil, and let’s see where you might be headed this summer:

• 2013 Lepton Photon Conference, San Francisco, CA, June 24-29, hosted by SLAC. This is definitely the most prestigious conference this year; it is the international conference that is the odd-numbered year complement to the ICHEP meetings that are held in even-numbered years. Last year’s ICHEP saw the announcement of the observation of the Higgs boson, and if someone wants to make a big splash this year, they will do it at Lepton Photon. I have previously discussed how ICHEP works; the Lepton Photon series has a similarly storied history, but is slightly different in format, in that there are only plenary overview talks rather than a series of shorter, more focused presentations. San Francisco is always a great destination, and a fine place to consider the physics of the cable car and plate tectonics.
• 2013 European Physical Society Conference on High Energy Physics, Stockholm, Sweden, July 18-24. If results aren’t ready in time for Lepton Photon, they could be ready in time for EPS. This conference also appears in odd-numbered years, and with a format that has both parallel and plenary sessions, there are many opportunities for younger people to present their work. It is probably the premier particle-physics conference in Europe this year. Thanks to the tilted axis of the earth, and the position of Stockholm at 59 degrees north of the equator, you’ll be able to enjoy 17 hours and 40 minutes of daylight each day at this conference…starting at 4 AM each morning.
• Community Summer Study 2013, aka Snowmass on the Mississippi, Minneapolis, MN, July 29-August 6. This isn’t really a conference, but it is the culmination of the year-long effort of the US particle-physics community to define its long-range plan. With the discovery of the Higgs boson and important developments in neutrino physics, we have better clues on what we should be trying to study in the future. Now we have to understand what facilities are best for this science, and what the technical barriers are to building and exploiting them. But we have to realize that we’re working with a finite budget, and we’ll have to do some hard thinking to understand how to set priorities. You might think that Minneapolis doesn’t have much on San Francisco or Stockholm, but my wife is from there, so I have traveled there many times and I think it’s a great place to visit. You can contemplate the balancing forces and torques on the “Spoonbridge and Cherry” sculpture at the Walker Art Center, or the aerodynamics of Mary Tyler Moore’s hat on the Nicollet Mall.
• 2013 Meeting of the American Physical Society Division of Particles and Fields, Santa Cruz, CA, August 13-17. Like the EPS conference, DPF also meets in odd-numbered years and is a chance for the US particle physics community to gather. It’s one of my favorite conferences, with a broad program of particle physics, and neither too big nor too small. It is especially friendly to younger people presenting their own work. Measurements that weren’t ready for the earlier conferences could still get a good audience here. Yes, you might have gone to nearby San Francisco in June, but Santa Cruz has a totally different feel, and you can study the hydrodynamics that power the redwood trees that are all over the campus.

And you might ask, where am I going this summer? I’d love to get to all of these, but I have another destination this summer — I will be moving my family to Geneva for a sabbatical year at CERN in July. It’s a little disappointing to be missing some of the action in the US, but I’m looking forward to an exciting year. I will be returning to the US for the Snowmass workshop, where I’m co-leading a working group, but that’s about it for conferences for me this summer. That will still be plenty exciting, and I’ll do my best to report all the news about it here.

### Place your bets: 25 or 50?

Ken Bloom
Thursday, May 23rd, 2013

Note to readers: this is my best attempt to describe some issues in accelerator operations; I welcome comments from people more expert than me if you think I don’t have things quite right.

The operators of the Large Hadron Collider seek to collide as many protons as possible. The experimenters who study these collisions seek to observe as many proton collisions as possible. Everyone can agree on the goal of maximizing the number of collisions that can be used to make discoveries. But the accelerator physicists and the particle physicists might part ways over just how those collisions are best delivered.

Let’s remember that the proton beams that circulate in the LHC are not a continuous current like you might imagine running through your electric appliances. Instead, the beam is bunched — about $$10^{11}$$ protons are gathered in a formation that is about as long as a sewing needle, and each proton beam is made up of 1380 such bunches. As the bunches travel around the LHC ring, they are separated by 50 nanoseconds in time. This bunching is necessary for the operation of the experiments — it ensures that collisions occur only at certain spots along the ring (where the detectors are) and the experiments can know exactly when the collisions are occurring and synchronize the response of the detector to that time. Note that because there are so many protons in each beam, there can be multiple collisions each time two bunches pass by each other. At the end of the last LHC run, there were typically 30 collisions that occurred per bunch crossing.
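To get a feel for these numbers, here is a quick back-of-the-envelope sketch in Python. The ring circumference, and the revolution frequency that follows from it, are standard LHC parameters that I am adding myself; they are not quoted in the paragraph above.

```python
# Rough collision bookkeeping for the fill pattern described above.
C = 26_659.0                  # LHC circumference in meters (my added assumption)
c = 2.998e8                   # speed of light in m/s
f_rev = c / C                 # revolution frequency, roughly 11,245 turns/second

n_bunches = 1380              # bunches per beam (from the post)
mu = 30                       # typical collisions per bunch crossing (from the post)

crossings_per_second = n_bunches * f_rev
collisions_per_second = mu * crossings_per_second

print(f"revolution frequency:       {f_rev:.0f} Hz")
print(f"bunch crossings per second: {crossings_per_second:.2e}")  # ~1.6e7
print(f"pp collisions per second:   {collisions_per_second:.2e}")  # ~5e8
```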

There are several ways to maximize the number of collisions that occur. Increasing the number of protons in each bunch will certainly increase the number of collisions. Or, one could imagine increasing the total number of bunches per beam, and thus the number of bunch crossings. The collision rate increases like the square of the number of particles per bunch, but only linearly with the number of bunches. On the face of it, then, it would make more sense to add more particles to each bunch rather than to increase the number of bunches if one wanted to maximize the total number of collisions.
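This scaling follows from the textbook expression for the instantaneous luminosity of a collider with Gaussian beams colliding head-on (my addition for clarity; geometric corrections such as the crossing angle are ignored here):

$$\mathcal{L} = \frac{n_b N^2 f_{\rm rev}}{4\pi\,\sigma_x\sigma_y},$$

where $$n_b$$ is the number of bunches per beam, $$N$$ is the number of protons per bunch, $$f_{\rm rev}$$ is the revolution frequency, and $$\sigma_x$$, $$\sigma_y$$ are the transverse beam sizes at the interaction point. Doubling $$N$$ quadruples $$\mathcal{L}$$, while doubling $$n_b$$ only doubles it.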

But the issue is slightly more subtle than that. The more collisions that occur per beam crossing, the harder the collisions are to interpret. With 30 collisions happening at the same time, one must contend with hundreds, if not thousands, of charged particle tracks that cross each other and are harder to reconstruct, which means more computing time to process the event. With more stuff going on in each event, the most important parts of the event are increasingly obscured by everything else that is going on, degrading the energy and momentum resolution that are needed to help identify the decay products of particles like the Higgs boson. So from the perspective of an experimenter at the LHC, one wants to maximize the number of collisions while having as few collisions per bunch crossing as possible, to keep the interpretation of each bunch crossing simple. This argument favors increasing the number of bunches, even if this might ultimately mean having fewer total collisions than could be obtained by increasing the number of protons per bunch. It’s not very useful to record collisions that you can’t interpret because the events are just too busy.

This is the dilemma that the LHC and the experiments will face as we get ready to run in 2015. In the current jargon, the question is whether to run with 50 ns between collisions, as we did in 2010-12, or 25 ns between collisions. For the reasons given above, the experiments generally prefer to run with a 25 ns spacing. At peak collision rates, the number of collisions per crossing is expected to be about 25, a number that we know we can handle on the basis of previous experience. In contrast, the LHC operators generally prefer the 50 ns spacing, for a variety of operational reasons, including being able to focus the beams better. The total number of collisions delivered per year could be about twice as large with 50 ns spacing…but with many more collisions per bunch crossing, perhaps by a factor of three. This is possibly more than the experiments could handle, and it could well be necessary to limit the peak beam intensities, and thus the total number of collisions, to allow the experiments to operate.
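To make the trade-off concrete: the mean pile-up is just the collision rate divided by the bunch-crossing rate. Here is a minimal sketch; the inelastic cross-section, the illustrative luminosity, and the 2808-bunch figure for a nominal 25 ns fill are my own assumptions, not numbers from the post.

```python
# Mean pile-up <mu> = L * sigma_inel / (n_bunches * f_rev): the same
# luminosity spread over twice as many bunch crossings gives half the pile-up.
f_rev = 11_245.0        # LHC revolution frequency in Hz (assumption)
sigma_inel = 7.0e-26    # inelastic pp cross-section, ~70 mb in cm^2 (assumption)

def mean_pileup(lumi, n_bunches):
    """Average number of pp collisions per bunch crossing."""
    return lumi * sigma_inel / (n_bunches * f_rev)

lumi = 1.0e34           # illustrative peak luminosity in cm^-2 s^-1
for spacing, n_bunches in [("25 ns", 2808), ("50 ns", 1380)]:
    print(f"{spacing}: <mu> = {mean_pileup(lumi, n_bunches):.0f}")
# 25 ns -> <mu> ~ 22; 50 ns -> <mu> ~ 45 at the same luminosity.
```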

So how will the LHC operate in 2015 — at 25 ns or 50 ns spacing? One factor in this is that the machine has only done test runs at 25 ns spacing, to understand what issues might be faced. The LHC operators will re-commission the machine with 50 ns spacing, with the intention of switching to 25 ns spacing later, as soon as a couple of months later if all goes well. But then imagine that 50 ns running works very well from the outset. Would the collision pileup issues motivate the LHC to change the bunch spacing? Or would the machine operators just prefer to keep going with a machine that is operating well?

In ancient history I worked on the CDF experiment at the Tevatron, which was preparing to start running again in 2001 after some major reconfigurations. It was anticipated that the Tevatron was going to start out with a 396 ns bunch spacing and then eventually switch over to 132 ns, just like we’re imagining for the LHC in 2015. We designed all of the experiment’s electronics to be able to function in either mode. But in the end, 132 ns running never happened; increases in collision rates were achieved by increasing beam currents. This was less of an issue at the Tevatron, as the overall collision rate was much smaller, but the detectors still ended up operating with numbers of collisions per bunch crossing much larger than they were designed for.

In light of that, I find myself asking — will the LHC ever operate in 25 ns mode? What do you think? If anyone would like to make an informal wager (as much as is permitted by law) on the matter, let me know. We’ll pay out at the start of the next long shutdown at the end of 2017.

### Another Kind of Science

John Huth
Thursday, May 23rd, 2013

I’ve been away from blogging for quite some time – mainly to finish a book I was working on.   The book is unrelated to particle physics, but follows a course I teach at Harvard, called Primitive Navigation.   We explore navigational techniques used by cultures like the Polynesians and Norse, in addition to looking at environmental topics like the origins of ocean currents and global weather systems.   While doing research for the book and the course, I found that humans have always been exceedingly clever in making sense of their environments and harnessing this knowledge to journey long distances.   I found that the ability of humans to develop sophisticated constructs to bring order to their environment is not limited to the lineage of Western scientific thought but is a more universal trait.

We often think of the roots of science starting with the ancient Greeks, or even further back to the Babylonians.   The canonical history is a marriage of mathematics and logic coupled with empirical observation.  The story stretches through the Arab translations of works like Euclid’s Elements during the Dark and Middle Ages, through the emergence of the scientific revolution, and culminating in the dizzying heights of modern works like quantum field theory.   This is not to say that there weren’t hiccups.   Although most scientists would dismiss astrology as quackery, astronomy and astrology were once deeply intertwined from their Western birth in Babylon through the time of Kepler.

I invite you to take a big step back and ponder the following conjecture – that Homo sapiens has always been intrinsically disposed toward scientific thinking. This is perhaps not ‘science’ in the way we view Western science, but it still rests on a conceptual framework on which to hang and connect observations.

In the process of doing research for the book, I interacted with a number of anthropologists who are studying the navigational schemes of Pacific Islanders. Their work demonstrates an exceedingly sophisticated ‘toolkit’ of navigational schema that allowed islanders to travel huge distances across the ocean and find small target islands successfully. Three anthropologists in particular have uncovered some amazing findings: Cathy Pyrek, Rick Feinberg, and Joe Genz.

Most archaeological evidence points to the emergence of long-distance voyaging by a group called the Lapita people, circa 1600 BC from the Bismarck Archipelago, near New Guinea.   They built craft capable of sailing into the wind, making jumps of hundreds of miles eastward to locations like Fiji, Tonga, Tahiti and the Marquesas.   Even more astonishing was the rapid explosion of voyages of thousands of miles around AD 1000 to Hawaii and the north island of New Zealand.

In order to sail against the wind, one needs to create a sail capable of generating lift, like a wing, and use it in combination with a hull that ‘grabs’ the water as it slices through. The Lapita figured out how to harness the complex fluid dynamics involved in lift and used it to their advantage. In the 18th century, Captain James Cook marveled at the sophisticated design of the Polynesian voyaging canoes, which allowed them to travel at speeds far in excess of those of Western European vessels. It wasn’t until 1904 that physicist Ludwig Prandtl laid out the theoretical basis for lift in wings, and it wasn’t until the 1970s that this theory was applied to sails.

The clever design of voyaging canoes was only part of the innovations of the Pacific Islanders. In order to sail across vast stretches of ocean, they needed viable navigational schema. We don’t have written records from the height of the voyaging period for Polynesians (circa AD 1000), but we do have interviews with modern-day practitioners of indigenous navigational techniques that hint at the ways their ancestors crossed large stretches of ocean accurately.

Anthropologists Rick Feinberg and Cathy Pyrek from Kent State have shown how indigenous navigators in the eastern Solomon Islands use a ‘navigational tool-kit’ that consists of multiple signs. Stars that are rising or setting close to the horizon form a natural star compass. Their rising and setting positions allow navigators to find the ‘azimuth’, or compass heading, toward a destination island. This requires the navigator to memorize a large number of stars and become familiar with their paths across the sky at different times of the year.

While a star compass may be useful, what does a navigator do during the day or in overcast weather?   Another helpful construct is a wind-compass.  Winds blowing from different directions have different characteristics.    In the eastern Solomons, the trade winds blow from the southeast, and are marked by characteristic ‘trade wind cumulus’ clouds that only grow to heights of roughly 15,000 feet and are then truncated.   These winds mark the direction ‘tonga’, or the southeast, which corresponds to the direction of the island cluster of Tonga.   Winds from the north arrive during the winter months and are associated with variable, stormy weather.

Steady winds and storm systems can also create ocean swells that act as reliable direction indicators. Often, multiple swells can arise – for example, the Southern Ocean produces a long swell from the south, while trade winds can create shorter wavelength swells from the east. Even if the wind shifts, the swells retain some ‘memory’ of the winds that created them, allowing the navigator to maintain a steady heading.

The above tools are useful in maintaining direction under different conditions, but there’s an inherent uncertainty in the position of a vessel, and this uncertainty grows with time. A navigator completing a 200-mile journey may only be able to establish a position to within 20 or 30 miles. Another trick then comes into play: birds. Certain birds, like pelicans and frigate birds, will fly some distance out to sea to feed, and then return to their home islands in the evening. A sailor only has to get to within 30 miles of a target island and then observe land-based birds. The sail is dropped, and when the birds fly home in the evening, a course is set.
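How quickly does that uncertainty build up? As a toy model (entirely my own construction, not anything from the ethnographic record), suppose a whole leg is sailed on a constant heading that is off from the true course by a random error of a few degrees, perhaps from current, leeway, or an imperfect star or wind bearing:

```python
import math
import random

def miss_distance(course_miles=200.0, heading_error_sigma_deg=7.0):
    """One toy voyage: hold a constant heading that is off from the true
    course by a random Gaussian error; return the cross-track miss."""
    error = math.radians(random.gauss(0.0, heading_error_sigma_deg))
    return abs(course_miles * math.sin(error))

misses = sorted(miss_distance() for _ in range(10_000))
print(f"median miss after 200 miles: {misses[5_000]:.0f} miles")  # ~16 miles
print(f"80th-percentile miss:        {misses[8_000]:.0f} miles")  # ~31 miles
```

With a heading uncertainty of about seven degrees, the misses come out at the 20-30 mile scale quoted above, which is just inside the radius that land-based birds effectively draw around an island.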

The navigational toolkit allows for a kind of successive approximation, where the stars, wind, and swells form a rough guide, and the presence and behavior of birds provides the final precision.

A somewhat related but distinct tradition is that of wave piloting in the Marshall Islands. Most of us are familiar with refraction and reflection of waves, whether they’re light or sound waves. Waves on the ocean’s surface are similar, but have some notable differences. First, waves in deep water have a speed that is proportional to the square root of the wavelength. Second, waves in shallow water have a speed that’s proportional to the square root of the depth. This latter relation causes waves to refract in shallow water. When waves get into very shallow water, they’ll often break, losing much, if not all, of their energy. On the other hand, waves impinging on a steep cliff that extends underwater will reflect with very little energy lost. Depending on the bathymetry surrounding an island, one can get very different wave patterns produced by the interaction of an incident swell with the island.
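For the curious, here are those two relations in a short sketch. The formulas, a deep-water phase speed of $$\sqrt{g\lambda/2\pi}$$ and a shallow-water speed of $$\sqrt{gh}$$, are the standard textbook ones; nothing here is specific to Marshallese practice.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def deep_water_speed(wavelength_m):
    """Deep-water phase speed, proportional to sqrt(wavelength)."""
    return math.sqrt(g * wavelength_m / (2.0 * math.pi))

def shallow_water_speed(depth_m):
    """Shallow-water wave speed, proportional to sqrt(depth)."""
    return math.sqrt(g * depth_m)

# A long Southern Ocean swell versus a shorter trade-wind swell, offshore:
print(f"300 m swell: {deep_water_speed(300.0):.1f} m/s")  # ~21.6 m/s
print(f" 60 m swell: {deep_water_speed(60.0):.1f} m/s")   # ~9.7 m/s

# The same swell slows down as the bottom shoals; this speed gradient is
# what bends (refracts) wavefronts around an island:
for depth in (50.0, 10.0, 2.0):
    print(f"depth {depth:>4.0f} m: {shallow_water_speed(depth):.1f} m/s")
```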

Joe Genz from the University of Hawaii studied the tradition of Marshall Islands wave piloting for his doctoral thesis. Navigators in the Marshalls have their own language for describing characteristic wave patterns around islands. Nit in kōt is the name given to a crossing pattern of waves on the lee side of an island. If a uniform swell impinges on the eastern shore of an island, the waves passing the north shore will be refracted into the swell shadow toward the south, and the waves passing the south shore will be refracted into the swell shadow toward the north. The resulting pattern of crossing waves creates a disturbed region that’s easy to identify at distances beyond those at which the island itself is visible.

In principle, reflected waves should also give clues to the presence of an island. Joe made the acquaintance of one Captain Korent Joel, a native Marshall Islander who was trying to revive the tradition of wave piloting. Joe persuaded Captain Korent to demonstrate his wave piloting technique to a group of oceanographers who deployed a set of sensitive wave buoys. As Captain Korent left the atoll of Arno, he first pointed out the incoming swell from the east, and then the reflected swell off of Arno.

There was only one problem.   No one on the boat with Captain Korent could notice the reflections, although the dominant eastern swell was clearly visible.   Even the sensitive wave buoys couldn’t detect the presence of the reflected swell.    What was going on?    Joe wondered whether Captain Korent just thought he should be seeing a reflected swell and was making this up.

In order to put Captain Korent to a sterner test, Joe waited until he (Captain Korent) was taking a nap in the cabin. Joe instructed the crew to motor some 30 miles to the southwest of Arno to get to a new location. When Captain Korent woke up, Joe told him that he had taken the boat to an undisclosed location and asked him if he could identify the direction to Arno and the kind of wave patterns he was seeing. Captain Korent was quite certain that Arno was to the northeast, and he was also quite correct! So, he was reading the waves properly after all!

I met Joe in person at a conference of the Association for Social Anthropology in Oceania (ASAO) in Portland, Oregon, in February 2012. Joe had some videos of Captain Korent on his laptop and shared them with me, and I downloaded them to my computer. That evening, I watched the video where Captain Korent was pointing out the reflected swell to Joe on the boat. This was the reflected swell that Joe couldn’t see, and the oceanographers’ buoys couldn’t detect. Joe told me what Captain Korent was saying in Marshallese about the waves. I do some sea kayaking, so I’m often close to the water, and I am a bit of an amateur wave-watcher myself.

In my first viewing of the video, I could definitely see the incoming dominant swell from the east. But, by the third or fourth viewing, I could see a weaker reflected swell moving at a slight angle against the larger incoming swell. When I compared my observations to what Captain Korent was saying in Marshallese, they agreed completely! By the tenth viewing, I was 100% convinced that Captain Korent was pointing out the reflected swell correctly.

The next day, I called Joe over, along with Cathy Pyrek, who was also attending the ASAO conference. I pulled up the video on my laptop and showed what I saw as the reflected swell. Joe said, “Oh yeah, now I see it”. I turned to Cathy and asked if she really saw it, or whether I was just convincing them of it, but she said, “It’s definitely there, it’s strange that everyone missed it.”

We still have much to learn about how the human mind operates, but it struck me that Captain Korent’s talents show how we’re capable of picking up very weak signals in the presence of noise. Evidently there is more information on the surface of the ocean than the oceanographers’ buoys were capable of recording. This is perhaps not surprising, but it’s evidence that there are different frameworks of knowledge out there that are effective and are based on empiricism. It may not be Western, but it is a kind of science.

Joseph Genz, et al., “Wave Navigation in the Marshall Islands,” Oceanography, 22, June 2009, 234-245.

Joseph Genz, “Marshallese Navigation and Voyaging: Re-learning and Reviving Indigenous Knowledge of the Ocean,” (PhD diss., University of Hawaii, 2008)

John Huth, The Lost Art of Finding Our Way, (Belknap Press, Cambridge MA, 2013).

### Margaret Thatcher, politician, scientist

Aidan Randle-Conde
Monday, April 15th, 2013

Early last week Margaret Thatcher, former British Prime Minister, passed away, aged 87. She was a charismatic figure who was known internationally for being a strong and decisive leader. She had close political ties with President Ronald Reagan, she opposed the communist policies in Eastern Europe, and she was skeptical of increasing integration of the UK with Western Europe. Her actions and legacy are entwined with the global political stage at the time. However, in the UK she was very divisive and at times controversial, and even to this day there is a mixture of high praise and bitter resentment about her policies. Much has been said about her legacy over the past few days, and I think that, regardless of one’s own views, one of the best things we can say about Thatcher is that she knew what her vision was, and she pursued it with a great deal of energy and enthusiasm.

Thatcher, the politician (Mirror)

As an undergraduate, Thatcher studied chemistry at the University of Oxford. It was only later that she studied law and became a politician, so from her very early career she had an appreciation for science. She knew about the care and attention needed to make discoveries, the frustration of waiting for data, and the need for peer review and skepticism. Given her status as an international leader, she had the opportunity to visit CERN in the early 1980s, but as a scientist she took so much more away from the visit than we could have expected.

Thatcher, the chemist (popsci)

She’d asked to be treated like a fellow scientist, and her questions showed that she had taken her background reading about CERN seriously. She asked why the proposed accelerator, LEP, would be circular and not linear. This is not an easy question to ask unless the person asking knows something about how accelerators work. After a discussion with Herwig Schopper, then Director General, she came back to the UK as an ambassador for CERN, and LEP was approved in the UK shortly afterwards. Another of her observations was very astute: when told that the LEP tunnel would be the last at CERN, she knew from experience that scientists usually want to go further with their research, and in particle physics at the energy frontier, further usually means larger. It’s true that CERN has reused the LEP tunnel for the LHC, but there are also proposals for even larger projects that would probe even higher center-of-mass energies.

Thatcher must have made a very good impression on Schopper during her visit. A recent Scientific American article has revealed that she was told about the discovery of the W and Z bosons before the information was made public. This letter shows that Schopper kept his promise and trusted Thatcher to keep the tantalizing and preliminary evidence to herself:

Schopper writes to Thatcher (Scientific American)

When the news of the $$W$$ boson discovery was public she wrote to Peter Kalmus of Queen Mary College, London, to offer her congratulations. Naturally she made a point to mention that there was a significant British effort behind the discovery:

Thatcher's letter to Kalmus

On the one hand, Thatcher was genuinely excited about CERN and the research, but on the other she was a fiscally conservative politician with monetarist policies, and she had to defend the spending to her colleagues, and to herself. She had to make sure that the physicists at CERN were using the funding effectively and delivering high quality scientific results for the spending. During a visit to the Super Proton Synchrotron she spoke with John Ellis, who introduced himself as a theoretical physicist. The conversation continued:

Thatcher: “What do you do?”
Ellis: “Think of things for the experiments to look for, and hope they find something different.”
Thatcher: “Wouldn’t it be better if they found what you predicted?”
Ellis: “Then we would not learn how to go further!”

Once again Thatcher knew what question to ask, and Ellis knew what answer to give. Thatcher seemed convinced, and knew that the people at CERN had the right attitude when it comes to discovery and the use of public money. You can see some media coverage of her visit to the UA1 (Underground Area 1) site on the CERN Document Server.

In 1993, three years after Thatcher left office, David Miller from UCL came up with an analogy for the Higgs field in which Thatcher played the central role. Essentially we can think of the Higgs field like a room full of people milling around at a cocktail party. Someone famous and popular enters the room, and all of a sudden people crowd around, making this person’s journey through the room harder. They take longer to get up to a good walking speed, and once they are walking they become harder to stop. That’s essentially what mass is: a measure of how hard it is to change an object’s velocity. The analogy goes further, to include rumors being spread from the vicinity of this famous person. The rumors would travel in small clusters of people, and each cluster would have its own “mass”; that is what the Higgs boson is, an excitation of the field itself. Who was the famous person in this analogy? Margaret Thatcher, of course!

Thatcher and the Higgs field (Quantum Tangents)

So her legacy with CERN is one of a scientist and a politician. She was genuinely excited to see the discoveries take place, she met with the scientists personally and interacted with them as another scientist. She took the time to understand the questions and answers, and even challenged the physicists with more questions. At the same time she put the projects in context. She had to defend the experiments, so she had to challenge the physicists to give her the information she needed to get the support from the UK. In a sense she knew the need for public outreach, to open up CERN’s scientific program to scrutiny from the public so that when we want to push back the frontiers even further we can count on their support.

If we’re to keep pursuing scientific discoveries in the future, we need scientifically literate and inspired politicians. It would be tempting to say that they are becoming more and more rare, but in reality I think things are more favorable than they have been before. With the recent discoveries we’re in a golden age of physics that has made front page news. Multimedia outlets and the internet have helped spread the good word, so science is high in the public consciousness, and justifying further research is becoming easier. However, before the modern internet era and the journalistic juggernaut that descends on CERN each time there’s a big announcement, this task fell on the shoulders of a few people, and Thatcher was one of them.

(I would like to thank John Ellis for providing help with his quote, and for giving the best answer when asked the question!)

### April 2013 AMS Liveblog

Aidan Randle-Conde
Wednesday, April 3rd, 2013

## General information

Today, the Alpha Magnetic Spectrometer (AMS) experiment is going to announce its findings for the first time. The AMS experiment uses a space-based detector, mounted on the International Space Station (ISS), which was delivered by the shuttle Endeavour on NASA’s penultimate shuttle mission. To date AMS has observed 25 billion events over the course of the last 18 months. There has been a lot of news coverage and gossip about how this might change our understanding of the universe, and how it might impact the search for dark matter and dark energy. However, until today the results have been a closely guarded secret. Sam Ting, who leads the AMS experiment, will make the presentation in the CERN Main Auditorium at 17:00 CERN time.

AMS-02 on the ISS (Wikipedia)

I’ll be live blogging the event, so stay tuned for updates and commentary! This is slightly outside my comfort zone when it comes to the science, so I may not be able to deliver the same level of detail as I did for the Higgs liveblogs. All times are CERN times.

See the indico page of the Seminar for details, and for a live video feed check out the CERN Webcast.

18:25: Congratulations and applause. The seminar is over! Thanks for reading.

## Questions

Q (Pauline Gagnon): How many events above 350 GeV?
A: We should wait for more statistics and better understanding. Note we do not put “Preliminary” on any results.

Q: Is there a step in the spectrum?
A: Good question! Experiments in space are different to those on the ground. This was studied over Christmas, but it’s just fluctuations. “If you don’t have fluctuations something is wrong.”

Q (Bill Murray): What is the efficiency of the final layer of the Silicon tracker?
A: Close to 100%

Q: Some bins not included. Why not?
A: Less sensitive at low energy. We want a simple model for the spectrum.

Q: Are you going to provide absolute flux measurements?
A: Yes, we will provide those. We calibrated the detector very carefully for precise measurements.

Q (John Ellis): Dark matter interpretation is constrained by other experiments, e.g. ground-based experiments.
A: Good point, we have a large number of spectra to analyze very carefully.

Q: Why not use a superconducting magnet?
A: NASA could not deliver more Helium, so superconducting is not an option for a long lived experiment.

Q: You have high statistics in the final bin, so why not rebin?
A: That’s an important question! “I’ve been working at CERN for many years and never made a mistake… We will publish this when we are absolutely sure.” (To my mind this sounds like a fine-tuning problem: we should not pick whichever binning gives us the results we want.) “You will have to wait a little bit.”

Q (Pauline Gagnon): How can you tell the difference between the sources of positrons and models?
A: The fraction will fall off very sharply at high energy.
Q: How much more time do you need to explore that region?
A: It will happen slowly.

## The liveblog

18:11: Ting concludes, to applause. Time for questions.
18:10: The excess of positrons has been observed for about 20 years and has aroused much interest. AMS has probed this spectrum in detail. The source of the excess will be understood soon.
18:09: Conclusion time. More statistics are needed for the high energy region. No fine structure is observed. No anisotropy is observed (anisotropy of less than 0.036 at 95% confidence).
18:07: Diffuse spectrum fitted and consistent with a single power law source.
18:00: The positron fraction spectrum is shown (Twitpic). The results should be isotropic if it’s a physics effect. The most interesting part is at high energy. No significant anisotropy is observed.
17:57: Time for some very dense tables of numbers and tiny uncertainties. Is this homeopathic physics? Dilute the important numbers with lots of other numbers!
17:53: A detailed discussion of uncertainties. There seems to be no correlation between the number of positrons and the positron fraction. The energy resolution drives bin-to-bin migration as a function of energy. There are long but small tails in the TRD estimator spectra for electrons and positrons, which must be taken into account. For charge confusion the MC models are used to get the uncertainties, which are varied by 1 sigma.
17:51: Charge confusion must be taken into account. The rate is a few percent with a subpercent uncertainty. Sources of uncertainty come from large angle scattering and secondary tracks. Monte Carlo (MC) simulations are used to estimate these contributions and they seem to be well modeled.
17:48: A typical positron event, showing how the various components make the measurements. (Twitpic)
17:46: Ting shows the cover of the upcoming Physical Review Letters, a very prestigious journal, with an AMS event display. Expect a paper on April 5th!
17:45: The positron fraction. Measurements of the number of positrons compared to the number of positrons+electrons can be used to constrain physics beyond the Standard Model. In particular the fraction can be sensitive to neutralinos, particles which are present in supersymmetric (SUSY) extensions of the Standard Model. The positron fraction is sensitive to the mass of the neutralino, if it exists.
17:42: Onto the data! There have been 25 billion events, with 6.8 million electron or positron events in the past 18 months. Two independent groups (Group A and Group alpha for fairness) analyze the data. Each group has many subgroups.
17:41: AMS is constantly monitored and reports/meetings take place every day. NASA keep AMS updated with the latest technology. There’s even an AMS flight simulator, which NASA requires AMS to use.
17:40: A less obvious point: AMS has no control over the ISS orientation or position, so these must be monitored, tolerated, and taken into account.
17:38: “Operating a particle physics experiment on the ISS is fundamentally different from operating an experiment in the LHC”. Obvious Ting is obvious!
17:34: The tracking system must be kept at constant temperature, while the thermal conditions vary by tens of degrees. It has a dedicated cooling system.
17:30: Sophisticated data readout and trigger system with 2 or 4 times redundancy. (You can’t just take a screwdriver out to it if it goes wrong.)
17:27: In addition to all the other constraints, there are also extreme thermal conditions to contend with. The sun is a significant source of thermal radiation. ECAL temperatures vary from -10 to 30 degrees Celsius.
17:25: Data can be stored for up to two months in case of a communication problem. Working in space brings all kinds of constraints, especially for computing.
17:23: NASA was in close contact to make sure it all went to plan, with tests on the ground. NASA used 2008 t of launch mass to transport the 7.5 t of AMS (plus other deliveries) into space! AMS was installed on May 19th 2011. (I was lucky enough to hear the same story from the point of view of the NASA team, and it was an epic story they told. Apparently AMS was “plug and play”.)
17:21: Calibration is very important, because once AMS is up in space you can’t send a student to go and fix it. (Murmurs of laughter from the audience)
17:19: The detector was tested and calibrated at CERN. (I remember seeing it in the Test Beam Area long before it was launched.)
17:18: Ting shows a slide of the AMS detector, which is smaller than the LHC physicists are used to. “By CERN standards, it’s nothing”. (Twitpic)
17:16: Lots of challenges for electronics in space. Electronics must be radiation-tolerant, and AMS needs electronics that perform better than most commercial space electronics.
17:15: The TRD system measures energy loss (dE/dx) to separate electrons and positrons. A tried and true method in particle physics! The Silicon tracker has nine layers and 200,000 channels, all aligned to within 3 microns. Now that’s precision engineering. The RICH has over 10,000 photosensors to identify nuclei and measure their energy. This sounds like a state of the art particle detector, but In Space! The ECAL system, with its 50,000 fibers and 600 kg of lead, can measure up to 1 TeV of energy, comparable to the LHC scale.
17:11: Permanent magnet shows <1% deviation in the field since 1997. Impressive. Cosmic rays vetoed with efficiency of 0.99999.
17:10: Studies require a rejection factor of one million for protons versus positrons, a huge task! The TRD and TOF provide a factor of 10^2, while the RICH and ECAL provide the rest of the discrimination.
17:08: AMS consists of a transition radiation detector (TRD), nine layers of silicon tracker, two layers of time of flight (TOF) systems, a magnet (for measuring the charge of the particles), a ring imaging Cherenkov detector (RICH), and an electromagnetic calorimetry system (ECAL). Charges and momenta of particles are measured independently.
17:06: Ting summarizes the contributions from groups in Italy, Germany, Spain, China, Taiwan, Switzerland, France. Nice to see the groups get recognition for their long, hard work. The individual groups are often mentioned only in passing.
17:03: “AMS is the only particle physics experiment on the ISS” which is the size of a football field. The ISS cost “about 10 LHC” units of money! It’s a DOE sponsored international collaboration. Ting is doing a good job acknowledging the support of collaborators and the awesomeness of having a space based particle physics experiment.
17:00: “Take your seats please.” The crowd goes quiet, as the introduction starts. Sam Ting was awarded the 1976 Nobel Prize for Physics, for the discovery of the J/psi particle.
16:54: Rolf Heuer has arrived. The room is nearly full now!
16:47: Sam Ting is here. He arrived about 10 minutes ago, and spoke to Sau Lan Wu, an old colleague of his. (Twitpic)
16:31: There are a few early bird arrivals. (Twitpic)

### The Substandard Model of Particle Physics

Aidan Randle-Conde
Monday, April 1st, 2013

Now that we are on the verge of completing the Standard Model of Particle Physics, it’s time to look to the future of the field. Five physicists at CERN present their new state of the art* theory: The Substandard Model of Physics!

“It’s easy to understand but questionably accurate.” Mandy Baxter (Marine Biogeochemical Microbiologist, USCB)

Thanks to the actors.
Androula Alekou (Neutrino Expert)
Katie Malone (Higgs Expert)
Stephen Ogilvy (Flavor Expert)
Aidan Randle-Conde (QCD Expert)
Lee Tomlinson (QFT Expert)

Steve Marsden (Standard Model Expert)
Helen Lambert (Environmental Sanitization Team)

@sigsome @aidanatcern

Visit the US LHC Blogs at Quantum Diaries:

http://www.quantumdiaries.org/lab-81

Music: Off to Osaka, Kevin Macleod, http://www.incompetech.com

Images taken from CKMFitter (http://ckmfitter.in2p3.fr), UTFit (http://www.utfit.org), Wikimedia.

This video does not reflect the views of CERN. It does not even reflect the views of the actors. In fact I’d be surprised if it reflected the views of anyone at all.

Thanks to Adam Davidson for inspiring the name. It was an offhanded comment you made about 7 years ago that has stuck with me ever since. Finally it has become a reality!

Apologies for the slightly out of focus footage and extra frame. Some small technical glitches always get through.

(*We’re just not sure what kind of a state, and what kind of art it is.)

### Shutdown? What shutdown?

Ken Bloom
Sunday, March 24th, 2013

I must apologize for being a bad blogger; it has been too long since I have found the time to write. Sometimes it is hard to understand where the time goes, but I know that I have been busy with helping to get results out for the ski conferences, preparing for various reviews (of both my department and the US CMS operations program), and of course the usual day-to-day activities like teaching.

The LHC has been shut down for about two months now, but that really hasn’t made anyone less busy. It is true that we don’t have to run the detector now, but the CMS operations crew is now busy taking it apart for various refurbishing and maintenance tasks. There is a detailed schedule for what needs to be done in the next two years, and it has to be observed pretty carefully; there is a lot of coordination required to make sure that the necessary parts of the detector are accessible as needed, and of course to make sure that everyone is working in a safe environment (always our top priority).

A lot of my effort on CMS goes into computing, and over in that sector things in many ways aren’t all that different from how they were during the run. We still have to keep the computing facilities operating all the time. Data analysis continues, and we continue to set records for the level of activity from physicists who are preparing measurements and searches for new phenomena. We are also in the midst of a major reprocessing of all the data that we recorded during 2012, making use of our best knowledge of the detector and how it responds to particle collisions. This started shortly after the LHC run finished, and will probably take another couple of months.

There is also some data that we are processing for the very first time. Knowing that we had a two-year shutdown ahead of us, we recorded extra events last year that we didn’t have the computing capacity to process in real time, but could save for later analysis during the shutdown. This ended up essentially doubling the number of events we recorded during the last few months of 2012, which gives us a lot to do. Fortunately, we caught a break on this — our friends at the San Diego Supercomputer Center offered us some time on their facility. We had to scramble a bit to figure out how to include it into the CMS computing system, but now things are happily churning away with 5000 processors in use.

The shutdown also gives us a chance to make relatively invasive changes to how we organize the computing without potentially disrupting critical operations. Our big goal during this period is to make all of the computing facilities more flexible and generic. For the past few years, particular tasks have often been bound to particular facilities, in particular those that host large tape archives. But that can lead to inefficiencies; you don’t want to let computers remain idle at one site while another site is backed up because it has particular features that are in demand. For instance, since we are reprocessing all of the data events from 2012, we also need to reprocess all of the simulated events, so that they match the real data. This has typically been done at the Tier-1 centers, where the simulated events are archived on tape. But recently we have shifted this work to the Tier-2 centers; the input datasets are still at the Tier-1 sites, but we read them over the Internet using the “Any Data, Anytime, Anywhere” technology that I’ve discussed before. That lets us use the Tier-2 sites effectively when they might have been otherwise idle.

Indeed, we’re trying to figure out how to use any available computing resource out there effectively. Some of these resources may only be available to us on an opportunistic basis, and taken away from us quickly when they are needed by their owner, on the timescale of perhaps a few minutes. This is different from our usual paradigm, in which we assume that we will be able to compute for many hours at a time. Making use of short-lived resources requires figuring out how to break up our computing work into smaller chunks that can be easily cleaned up when we have to evacuate a site.

But computing resources include both processors and disks, and we’re trying to find ways to use our disk space more efficiently too. This problem is a bit harder — with a processor, when a computing job is done with it, the processor is freed up for someone else to use, but with disk space, someone needs to actively go and delete files that aren’t being used anymore. And people are paranoid about cleaning up their files, for fear of deleting something they might need at an arbitrary time in the future! We’re going to be trying to convince people that many files on disk aren’t getting accessed, and it’s in our interest to automatically clean them up to make room for data that is of greater interest, with the understanding that the deleted data can be restored if necessary.
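As a cartoon of what such an automated cleanup could look like (a minimal sketch of the idea, assuming a filesystem that tracks access times; this is not the actual CMS data-management machinery, and the path below is hypothetical):

```python
import os
import time

def stale_files(root, min_idle_days=180):
    """Yield (path, size_bytes, idle_days) for files untouched for a while.

    Uses st_atime as a crude proxy for 'last used'. A production system
    would track dataset popularity in a database and restore deleted
    copies from a tape archive on demand.
    """
    now = time.time()
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            idle_days = (now - stat.st_atime) / 86_400
            if idle_days >= min_idle_days:
                yield path, stat.st_size, idle_days

def plan_cleanup(root, bytes_needed):
    """Select the coldest files first until enough space would be freed."""
    candidates = sorted(stale_files(root), key=lambda f: f[2], reverse=True)
    plan, freed = [], 0
    for path, size, _idle in candidates:
        if freed >= bytes_needed:
            break
        plan.append(path)
        freed += size
    return plan, freed

# Example: look for ~1 TB of reclaimable space under a hypothetical area.
plan, freed = plan_cleanup("/store/user", bytes_needed=10**12)
print(f"would delete {len(plan)} files, freeing {freed / 1e12:.2f} TB")
```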

In short, there is a lot to do in computing before the LHC starts running again in 24 months, especially if you consider that we really want to have it done in 12 months, so that we have time to fully commission new systems and let people get used to them. Just like the detector, the computing has to be ready to make discoveries on the first day of the run!

### Back From Hibernation, and a Puzzling Asymmetry

Monday, March 4th, 2013

I know in my life at least, there are periods when all I want to do is talk to the public about physics, and then periods when all I would like to do is focus on my work and not talk to anyone. Unfortunately, the last 4 or so months fall into the latter category. Thank goodness, however, I am now able to take some time and write about some interesting physics which has been presented both this year and last. And while polar bears don’t really hibernate, I share the sentiments of this one.

Okay, I swear I'm up this time! Photo by Andy Rouse, 2011.

A little while ago, I posted on Dalitz plots, with the intention of following up with a result. Well, now is the time.

At the 7th International Workshop on the CKM Unitarity Triangle, LHCb presented preliminary results for CP asymmetry in the channels $$B\to hhh$$, where $$h$$ is either a $$K$$ or a $$\pi$$. Specifically, the presentation reported on searches for direct CP violation in the decays $$B^{\pm}\to \pi^{\pm} \pi^{+} \pi^{-}$$ and $$B^{\pm}\to\pi^{\pm}K^{+}K^{-}$$. If CP were conserved in these decays, we would expect decays from $$B^+$$ and $$B^-$$ to occur in equal amounts. If, however, CP is violated, then we expect a difference in the number of times the final state comes from a $$B^+$$ versus a $$B^-$$. Searches of this type are effectively “direct” probes of the matter-antimatter asymmetry in the universe.

Asymmetry of $$B^{\pm}\to\pi^{\pm}\pi^+\pi^-$$ as a function of position in the Dalitz plot. Asymmetry is mapped onto the z-axis. From LHCb-CONF-2012-028

Asymmetry of $$B^\pm\to\pi^\pm K^+ K^-$$ as a function of position in the Dalitz plot. Asymmetry is mapped onto the z-axis. From LHCb-CONF-2012-028

By performing a sophisticated counting of signal events, CP violation is found with a statistical significance of $$4.2\sigma$$ for $$B^\pm\to\pi^\pm\pi^+\pi^-$$ and $$3.0\sigma$$ for $$B^\pm\to\pi^\pm K^+K^-$$. This is indeed evidence for CP violation, which requires a statistical significance of >3$$\sigma$$. The puzzling part, however, comes when the Dalitz plot of the 3-body state is considered. It is possible to map the CP asymmetry as a function of position in the Dalitz plot, which is shown on the right. It’s important to note that these asymmetries are for both signal and background. Also, the binning looks funny in these plots because all bins have approximately equal populations. In particular, notice the red bins at the top left of the $$\pi\pi\pi$$ Dalitz plot and the dark blue and purple section on the left of the $$\pi K K$$ Dalitz plot. By zooming in on these regions, specifically $$m^2(\pi\pi_{high})>15$$ GeV$$^2/c^4$$ and $$m^2(K K)<3$$ GeV$$^2/c^4$$, and separating by $$B^+$$ and $$B^-$$, a clear and large asymmetry is shown (see plots below).
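For readers who want the mechanics: the raw counting asymmetry and its statistical significance come from very simple arithmetic, even though the real measurement involves efficiency corrections, background subtraction, and production-asymmetry effects. Here is a minimal sketch with invented event counts (not LHCb’s numbers):

```python
import math

def raw_asymmetry(n_minus, n_plus):
    """Raw counting asymmetry A = (N- - N+) / (N- + N+), with the
    binomial statistical uncertainty sigma_A = sqrt((1 - A^2) / N)."""
    n_total = n_minus + n_plus
    a = (n_minus - n_plus) / n_total
    sigma = math.sqrt((1.0 - a * a) / n_total)
    return a, sigma

# Invented counts for illustration: 1000 B- versus 800 B+ candidates in
# some Dalitz-plot region.
a, sigma = raw_asymmetry(1000, 800)
print(f"A_raw = {a:+.3f} +/- {sigma:.3f} ({abs(a) / sigma:.1f} sigma)")
# -> A_raw = +0.111 +/- 0.023 (4.7 sigma)
```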

Now, I’d like to put these asymmetries in a little bit of perspective. Integrated over the Dalitz Plot, the resulting asymmetries are

$$A_{CP}(B^\pm\to\pi^\pm\pi^+\pi^-) = +0.120\pm 0.020(stat)\pm 0.019(syst)\pm 0.007(J/\psi K^\pm)$$

and

$$A_{CP}(B^\pm\to\pi^\pm K^+K^-) = -0.153\pm 0.046(stat)\pm 0.019(syst)\pm 0.007(J/\psi K^\pm)$$.

Whereas, in the regions which stick out, we find:

$$A_{CP}(B^\pm\to\pi^\pm\pi^+\pi^-\text{region}) = +0.622\pm 0.075(stat)\pm 0.032(syst)\pm 0.007(J/\psi K^\pm)$$

and

$$A_{CP}(B^\pm\to\pi^\pm K^+K^-\text{region}) = -0.671\pm 0.067(stat)\pm 0.028(syst)\pm 0.007(J/\psi K^\pm)$$.

These latter regions correspond to statistical significances of >7$$\sigma$$ and >9$$\sigma$$, respectively. The interpretation of these results is a bit difficult: the asymmetries are four to five times the integrated asymmetries, and are not necessarily associated with a single resonance. We would expect the $$\rho^0$$ and $$f_0$$ resonances to appear in the lowest region of the $$\pi\pi\pi$$ Dalitz plot asymmetry. In the $$\pi K K$$ Dalitz plot, there are really no scalar particles which we would expect to give an asymmetry of the kind we see. One possible answer to both of these puzzles is that the quantum mechanical amplitudes are only partially interfering, giving the structure that we see. The only way to check this would be to do a more detailed analysis involving a fit to all of the possible resonances in these Dalitz plots. All I can say is that this result is certainly puzzling, and the explanation is not yet clear.

Zoom onto the $$m^2(\pi\pi)$$ lower axis (left) and the $$m^2(K K)$$ axis (right). Up triangles are $$B^+$$, down triangles are $$B^-$$.

### Hangout with CERN, anyone?

Seth Zenz
Tuesday, February 12th, 2013

I’m helping organize the ongoing Hangout with CERN series of events, and this Thursday I get to host. To make the event a success, I need your help! Interested? Read on…

Hangout with CERN happens each week at 17:00 CET, 11 AM EST, or whatever you want to call that time. It’s an informal Google+ hangout in which physicists, engineers, IT experts, and other folks from CERN connect to tell you about what we do here. In our latest format, we devote two weeks to each topic. The first week introduces the topic and lets you hear experts describe their work, along with a quiz and a few questions from the public. (We monitor comments on Twitter and YouTube the whole time.) The second week – which is the part I work on – is even more informal: we try to have a few guest members of the public, get to more questions, and so on.

Here’s last week’s video, entitled “LHC and the Grid – The world is our calculator,” which discusses the worldwide computing system we use to analyze all the data from the LHC:

Next week’s event on Google+ is here. We’ll be discussing the same topic, and we want to hear your questions about it. Do you have a question? Might you want to participate live in the hangout and ask your question directly? Let me know in the comments!

### A Change of Pace

Seth Zenz
Monday, February 4th, 2013

Some physicists and engineers from Purdue and DESY, and me, at the beamline we used to test new pixel designs

Every so often, a physicist needs a vacation from doing data analysis for the Higgs boson search. A working vacation, something that gets you a little closer to the actual detector you work on. So last week, I was at the DESY laboratory in Hamburg, Germany, helping a group of physicists and engineers study possible changes to the design of individual pixels in the CMS Pixel Detector. (I’ve written before about how a pixel detector works.) We were at DESY because they had an electron beam we could use, and we wanted to study how the new designs performed with actual particles passing through them. Of course, the new designs can’t be produced in large scale for a few years — but we do plan to run CMS for many, many years to come, and eventually we will need to upgrade and replace its pixel detector.

What do you actually do at a testbeam? You sit there as close to 24 hours a day as you can — in shifts, of course. You take data. You change which new design is in the beam, or you change the angle, or you change the conditions under which it’s running. Then you take more data. And you repeat for the entire week.

So do any of the new designs work better? We don’t know yet. It’s my job to install the software to analyze the data we took, and to help study the results, and I haven’t finished yet. And yes, even “working on the detector” involves analyzing data — so maybe it wasn’t so much of a vacation after all!