
Archive for March, 2012

At CERN, while we are about to shed light on the fundamental question of the creation of mass after the Big Bang, we are also close to solving another basic mass-related problem. The kilogram is the only base unit of the International System of Units (SI) whose official definition is still based on a material artefact rather than on invariant quantities. If you are now thinking that this concerns you less than the glamorous Higgs boson, think again: your scales could give you a different value when you use them tomorrow.

The international prototype of the kilogram is a cylinder of platinum-iridium alloy whose height (39 mm) is equal to its diameter. It was machined in 1878 and is kept at the Bureau International des Poids et Mesures (BIPM) in Sèvres, near Paris. To date, while all the other units in the SI system have been redefined to be based on fundamental constants or atomic properties, the kilogram continues to be defined according to this piece of matter.

A piece of matter that people, or at least one person, must clean, and there is a risk that atoms – that is, fractions of mass – might be lost in the process. “Over the years, several official copies have been produced and distributed to various national metrology offices,” says Ali Eichenberger, a physicist at the Swiss Metrology Office (METAS). “Although it is not yet possible to define the kilogram mass in an absolute way, modern technology makes it possible to compare different masses with very high precision, to within 1 microgram. Looking at the different official copies, there seems to be significant variation in their masses.” Moreover, not knowing the kilogram with the appropriate precision has an impact on other units, such as the ampere.

Over the past century, significant variation has been observed between the masses of the official kilogram copies. (Courtesy of METAS).

A metrology project launched by METAS in which CERN is participating should be able to fix the problem. The idea is to build an ultra-precise watt balance – an instrument that compares the mechanical and the electrical power (see box). Using the watt balance and its equations, it is possible to relate the unit of mass to the metre, the second and the Planck constant, i.e. all fundamental units and constants.

“One of the crucial elements of the watt balance is the magnetic circuit, which needs to be extremely stable during the measurement,” explains Davide Tommasini, a magnet expert from the Magnets and Superconductors group in CERN’s Technology Department, who is directly involved in the METAS watt balance project. “By using a correctly dimensioned ‘magnet shunt’ with a low Curie temperature, it is possible to drastically reduce the effects of temperature variation. The circuit must also provide a very homogeneous magnetic field in the whole volume involved in the measurement.” The magnet circuit will be assembled at CERN. “We are expecting the permanent magnet and the ‘shunting’ cylinder to arrive soon. We will then work on testing the performance of the circuit,” says Davide Tommasini.

The watt balance built by METAS to perform previous measurements of the Planck constant. A new balance is currently under development. (Courtesy of METAS).

“The requirements associated with the magnets are very strict and we are very happy that CERN agreed to take part in the project in the framework of its knowledge transfer activities,” says Henri Baumann, a physicist at METAS who launched the project together with Ali Eichenberger. “This measurement will also lead to a significant improvement in the determination of the Planck constant. The CERN theorists will be happy to know that!”

“This project is a clear indication of the impact that the skills and the expertise needed in particle physics have on other research domains and on society,” says Hartmut Hillemanns from the Knowledge Transfer (KT) group, who has fostered the project with the scientific team at CERN and led the negotiation with the other project partners.

The new definition of the mass unit should be available in a couple of years from now. Chances are that by then we will have also understood how mass is created at the most fundamental level… yes, we are talking about the Higgs this time!

The principle of the watt balance 

The watt balance is an electromechanical instrument that measures the weight of a test mass very precisely. In the watt balance a coil is suspended on one arm and is immersed in a horizontal magnetic flux. During a first measurement phase, the current in the coil exerts a vertical force on the conductor that is balanced against the weight of the test mass. In the second phase, the coil is moved at a constant velocity through the magnetic field, and the voltage induced across the coil is measured. By combining the equations and performing various subsequent calculations one arrives at the equation:
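A plausible form of this relation, assuming the standard watt-balance derivation (here \(g\) is the local gravitational acceleration and \(v\) is the velocity of the coil during the moving phase, two quantities not named explicitly in this summary), is

\( m = C \, \frac{f_J \, f'_J}{g \, v} \, h \)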


where \(C\) is a calibration constant, \(f_J\) and \(f'_J\) are the Josephson frequencies used during the static and the dynamic phases, and \(h\) is the Planck constant. The watt balance experiment therefore relates the unit of mass to the metre, the second and the Planck constant.

Several watt balances are currently in operation around the world and are being used for metrology purposes.

 

Another way of fixing the problem 

The most important alternative for defining the kilogram, known as the “X-ray crystal density” method or the Avogadro project, consists in accurately measuring the density of a very pure sphere of crystalline silicon.

From the CERN Bulletin


The Agrostic Principle

Friday, March 30th, 2012

In honour of the season.

As I drive to and from work in Vancouver, I notice that even in winter, the grass is green. In the spring, people are out fertilizing their lawns and in summer watering them (even when they are not allowed to) – mollycoddled grass! They are now even putting grass on the tops of buildings. You would almost think that Vancouver exists for the benefit of grass. But it is not just Vancouver; we have wide areas of the world devoted to grass, from bamboo to grain. You would think the world was created for the benefit of grass. After all, the earth is just the right distance from the sun to allow grass to flourish. Farther from the sun, it would be cold and arid like Mars. Closer to the sun, it would be hot and sterile like Venus. Thus, we have what is known in the trade as the agrostic[1] principle: the philosophical argument that observations of the physical universe must be compatible with the preferred status of grass.

As just mentioned, the earth is just at the right distance from the sun for grass to flourish. But it goes beyond that. Carbon is a major component of grass. However, the creation of carbon in stars depends critically on the existence of an excited state in carbon, known as the Hoyle state, with exactly the right energy. If that state were not there, there would be no carbon and hence no grass. The horror of it! Just think, no grass. And it all depends on having the nuclear state at just the right energy.

The Hoyle state is not the only coincidence necessary for the existence of grass. If the fundamental constants of nature, things like the fine structure constant or the gravitational constant (big G), were slightly different, the universe would not support the existence of grass. There are two solutions to this problem. One is to assume that there is an intelligent designer with an inordinate fondness for grass who fine-tuned the universe so grass could exist. Now, there is a minority opinion that it is not grass that he is fond of, but rather beetles (coleoptera), and that he only created grass as a source of feed for beetles. After all, there are on the order of a million species of beetles. But as I just said, the coleopteric principle is distinctly a minority position, though we should be open-minded.

The other explanation of the fine tuning of the universe is based on the idea of the multiverse. This is the idea that many different universes exist with all possible values of the physical constants and that we are in the one in which grass is possible. Again, note the preferred role of grass. The evidence for this scenario, at the present time, is no stronger than that for the existence of the coleopterophilic intelligent designer.

Now one might ask what role consciousness and intelligence have in all this. The answer to that is fairly self-evident. The main role of consciousness and intelligence is the development of civilization, and the main role of civilization is the development of agriculture. It should be obvious to even the most obtuse reader that the main purpose of agriculture is to permit grass to compete more effectively with trees. Just think of the extent to which farmers have replaced forests with grassland. The bringing of European “civilization” to North America had as its main effect the replacement of forest with grassland. It had some unfortunate side effects, like the creation of the United States of America, but what is more important – people or grass?

As further evidence of the agrostic principle, I note that it provides the only possible explanation for the existence of golf courses and cricket pitches. The very idea of grown men or women hitting a ball with a club to prove their virility is silly. Now, artificial turf may be considered evidence against the agrostic principle, but artificial turf seems to be a passing fad. In just 13 years, between 1992 and 2005, baseball’s National League went from having half of its teams (6 of 12) using artificial turf to all of them – now up to 16 – playing on natural grass. As for football (soccer), artificial turf is widely banned. Enough said.

The agrostic principle also highlights flaws in ancient Greek philosophy. Plato believed that the “good” was contemplating his ideals or ideas. That is incorrect; the greatest good is cultivating and contemplating grass. Like Euclid’s postulates, that should be self-evident. That the smoking of grass is the greatest good is a corruption of Epicurus’s teaching. Rather, he was the first of the new atheists. The Sophists, on the other hand, were the first post-modernists and believed that it was impossible to decide if contemplating or smoking grass was the greatest good. After smoking a few joints, the latter is probably true. Socrates believed that nothing could be learned from nature. Perhaps if he had spent more time cultivating and contemplating grass, he would not have been compelled to drink hemlock. However, Aristotle may have been onto something with his final cause or teleology. Evolution shows its barrenness by failing to recognize that consciousness and intelligence arose due to the teleological purpose (final cause) of helping grass compete with trees. This is probably the best example of the need for Aristotle’s final cause that can be found in nature. Unfortunately, Aristotle started worrying about essences rather than cultivating and contemplating grass. Thus, Greek civilization decayed. And my wife wants me to replace the lawn with a garden. The end of western civilization is in sight.

The agrostic principle has some naysayers. Douglas Adams gives the example in his Hitch-hiker’s Guide to the Galaxy of the puddle which observed how well it fitted the hole it was in and concluded that the hole and the universe were created expressly for its benefit. It was consequently quite surprised and distressed when it evaporated. Imagine the gall of Adams using satire to attack the agrostic principle. Now, of course, the properties of the hole can be deduced from the properties of the puddle, but this should not be used to infer that the universe was not created for the sole benefit of the puddle. Some people have followed the example of Adams’s puddle and claimed that since humans nicely fit a hole in the universe, the universe was created for their benefit (this is sometimes called the anthropic principle). These people will probably be surprised when humans go extinct. The superiority of the agrostic principle to the anthropic principle is shown by the observation that while Homo sapiens has existed for about 200,000 years, grass tickled the feet of dinosaurs over sixty million years ago. And grass will probably still exist after humans have, through sheer stupidity, destroyed themselves and have been replaced by a group with less intelligence and more wisdom, perhaps the coleoptera.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.


[1] From the Greek word ἄγρωστις for grass.


8 TeV Collisions

Friday, March 30th, 2012

Ladies & Gentlemen, Protons & Neutrons,

The Large Hadron Collider’s Accelerator Division has successfully collided, for the first time, two 4 TeV proton beams! Congratulations to all who made this possible. I can promise that everyone is looking forward to what may be discovered!

Now enjoy some images courtesy of @lhcstatus and @ATLASExperiment.

 

@ATLASExperiment: First 8 TeV collisions from the #LHC in the #ATLAS detector! Thank you LHC operators. Next week: #Physics! @CERN http://pic.twitter.com/ItAbbl64 (Note: Yes, this is actually an 8 TeV collision.)

 

@lhcstatus: #LHC 17:13:43: preparing collisions (shortly) pic.twitter.com/bDrbjVt9

@lhcstatus: #LHC 17:33:43: preparing collisions http://pic.twitter.com/V3GWWsDW

 

@ATLASExperiment: Monitoring the position of the beams in #ATLAS as #LHC prepares for 8 TeV collisions. #CERN #Physics http://pic.twitter.com/m1WAitei

 

Happy Colliding!

– richard (@bravelittlemuon)

PS: Data-taking for physics starts next week. Happy Friday.


The great vacuum in the sky

Thursday, March 29th, 2012

This is the zone rockets traverse in Thomas Pynchon’s novel Gravity’s Rainbow. I got e-mail from a reader who didn’t understand the concept of the vacuum. The writer didn’t think it possible, and is in good company. Neither Plato, nor Aristotle, nor even Descartes believed that a pure vacuum could exist.

A ‘vacuum’ in the most common sense is simply the absence of matter in some volume. Early experiments by the physicists Torricelli and Boyle with vacuum pumps demonstrated that at least a partial vacuum was possible and could be created on earth. The purity of a vacuum is often expressed in the unit of pressure called the “Torr”, after Torricelli. The pressure at the surface of the earth is 760 Torr. Creating vacuums of ever greater rarefaction has become possible with more and more powerful pumps. First, there is a mechanical pump, much like a piston engine in a car, which can achieve a pressure of about 10E-5 Torr. Then, there is a turbomolecular pump that uses a high-speed turbine to rid a chamber of gas. Beyond this, there are ion pumps, which trap atoms in a chamber by bombarding them with ionized atoms. At very low temperatures, physicists can take advantage of cryopumping, where molecules can be made to stick to cold surfaces.

Why are vacuums important to the LHC? As you might be aware, we have to cool the magnets to a degree or so above absolute zero. In order to do this, we effectively have to create a giant thermos bottle to help keep the magnets cold. This uses a vacuum as the first stage of insulation from the outside world, which prevents the transmission of heat across the barrier of the vacuum.

The beam pipes of the LHC must have a very clean vacuum in order to keep protons circulating in the accelerator tubes without colliding with errant gas molecules. To do this, the pipes the protons travel through are typically maintained to a vacuum of 10E-9 Torr. At the interaction points, where the collisions take place in the middle of the detectors, extra care has to be taken to reduce the number of gas molecules even further, so more cryopumping is used to get the vacuum down to a level of 10E-11 Torr.

To give you some idea of what 10E-11 Torr is like, it’s akin to the pressure in interplanetary space. Present estimates of the pressure in the space between galaxies are more than 1000 times lower than that, with about 6 hydrogen atoms per cubic meter.
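To put these numbers in perspective, a pressure can be converted into a number density of gas molecules with the ideal gas law, \(n = P / (k_B T)\). The little sketch below (in Python) uses the pressures quoted above; the room-temperature value is my own assumption, purely for illustration.

```python
# Rough conversion from pressure to molecular number density via the ideal gas law,
# n = P / (k_B * T). Pressures are from the text; room temperature is an assumed value.

K_B = 1.380649e-23      # Boltzmann constant, J/K
TORR_TO_PA = 133.322    # pascals per Torr

def molecules_per_cm3(pressure_torr, temperature_k=293.0):
    """Approximate number of gas molecules per cubic centimetre at a given pressure."""
    pressure_pa = pressure_torr * TORR_TO_PA
    n_per_m3 = pressure_pa / (K_B * temperature_k)
    return n_per_m3 / 1e6   # convert from per m^3 to per cm^3

for p in (760.0, 1e-9, 1e-11):   # atmosphere, beam pipe, interaction point
    print(f"{p:9.1e} Torr  ->  {molecules_per_cm3(p):.2e} molecules/cm^3")
```

Even at 10E-11 Torr this gives a few hundred thousand molecules in every cubic centimetre, which is why ‘partial vacuum’ is the right phrase.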

In a sense, these are all ‘partial vacuums’ – meaning that there are still atoms floating around. But, if we were able to make a perfect vacuum pump, would this mean that there’s absolutely nothing but space in such a creation?

The answer is ‘no’ and somewhat bizarre. In quantum field theory, there is a concept of ‘virtual’ particles, which are always being created and destroyed in empty space. For example, an electron and an anti-electron (called a positron) can be created momentarily in free space and can then fall back together again. If we introduced a free charge to this perfect vacuum, these electron-positron pairs would polarize and tend to screen the charge of the particle.

Beyond these virtual pairs of particles, there is something even stranger, which we sometimes associate with the Higgs boson, called a ‘vacuum expectation value’. This is to say, in a perfect vacuum we expect that there is some non-zero amount of the Higgs field floating around. Now, one may be quick to dismiss this as just some figment of a theorist’s imagination that has no consequence. Measurements of the rate of expansion of the universe, however, indicate a strange ‘dark energy’ that permeates free space and is forcing the universe to accelerate its expansion. This dark energy appears to inhabit even space devoid of any matter whatsoever and is akin to the ‘vacuum expectation value’ in many ways. No one knows why this dark energy exists, but it is permitted by Einstein’s equations describing the large-scale structure of the universe. We just didn’t expect to see it, and it seems to lurk everywhere.

So, perhaps the ancient philosophers were right: there may not be a pure vacuum in nature after all.


Leon Lederman

The National Science Board announced Monday that it chose Leon Lederman as the 2012 recipient of the Vannevar Bush Award.

The award is given to people who are lifelong leaders in science and technology and who have made substantial contributions to the welfare of the nation.

While the general public might know him best for his book “The God Particle” about the search for the Higgs boson, Lederman has improved the lives of millions through his efforts in science, education and cultural outreach.

His early award-winning research in high-energy physics brought him into national science policy circles and in 1963 he proposed the idea that became the National Accelerator Laboratory, which was later renamed Fermilab. In 1977 Lederman led the team that discovered the bottom quark at Fermilab. The following year he was named director of the laboratory and his administration brought Fermilab into its position of scientific prominence by 1983 with the achievement of the then world’s most powerful superconducting accelerator, the Tevatron. In 1988 Lederman was awarded the Nobel Prize in Physics.

During his term as director, Lederman emphasized the importance of math and science education as outreach to the neighboring communities. He initiated the Saturday Morning Physics lectures, which have drawn thousands of students to the laboratory to meet and question physicists. He subsequently founded the Friends of Fermilab, which raises money for science education; the Illinois Mathematics and Science Academy; and the Teacher’s Academy for Mathematics and Science, which provides in-service training and professional development for K-8 math and science teachers. Lederman is also one of the main proponents of the Physics First initiative to introduce physics earlier in the high school curriculum. His contributions to education have been memorialized at Fermilab with the naming of a hands-on K-12 science education center after him. The Leon Lederman Science Center hosts hundreds of field trips by schools and scout troops each year and supports Science Adventure classes during school breaks.

In about 1980, Lederman also made it a mission to include Mexican and Latin American researchers in high-energy physics experiments. Prior to that, the involvement of those countries was limited to theoretical research, not hands-on experimentation. Lederman subscribed to the philosophy of the more minds the better. He helped Hispanic scientists find a foothold in experimental programs and encouraged internships at Fermilab for Hispanic youth. The outreach has been successful, and Fermilab now counts many Latin American countries as collaborators on science experiments. One example is Mexico, which is Fermilab’s ninth most prolific partnering country in terms of collaboration results.

— Tona Kunz



At the end of my last post, I left you all with the above plot (from this ATLAS conference note) without any real explanations. It’s actually quite a nice result, so I thought I might go through it in a little more detail today.

So what does the plot show? Reading the axes, it shows the lepton charge asymmetry as a function of lepton pseudorapidity of leptonic W events.

What does this actually mean? To answer this, let’s go back to what a W boson is. On the right here, it’s a cute little plush toy, which you can buy from Particle Zoo. In real life, it’s a massive charged elementary particle. This means there is a positive W boson and a negative W boson, \(W^+\) and \(W^-\) respectively. When a W boson decays into a charged lepton and the corresponding neutrino, charge conservation requires that the charge of the lepton match the charge of the W boson. So the above plot of lepton charge asymmetry is actually a plot of W charge asymmetry, which can be interpreted as a W production asymmetry, \(A_W = \frac{\sigma_{W^+} - \sigma_{W^-}}{\sigma_{W^+} + \sigma_{W^-}}\).
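As a concrete (and entirely hypothetical) illustration of this definition, here is a minimal sketch that computes the asymmetry and a simple Poisson-propagated statistical uncertainty from made-up event counts in one pseudorapidity bin; the counts are not ATLAS numbers.

```python
import math

def charge_asymmetry(n_plus, n_minus):
    """Asymmetry A = (N+ - N-) / (N+ + N-) with a simple Poisson-propagated uncertainty."""
    total = n_plus + n_minus
    a = (n_plus - n_minus) / total
    sigma = 2.0 * math.sqrt(n_plus * n_minus / total**3)  # error propagation on independent counts
    return a, sigma

a, err = charge_asymmetry(12500, 9800)   # invented counts, one pseudorapidity bin
print(f"A_W = {a:.3f} +/- {err:.3f}")
```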

So why is there a W production asymmetry? Let’s look at how a W boson is produced in a proton-proton collision. On the left here, we have a Feynman diagram of this process, where you can see that to make a positive W boson, you need a certain combination of quark and antiquark, most often an up quark and an antidown quark. To make a negative W boson, you need the opposite combination, a down quark and an antiup quark.


The production asymmetry occurs because, as illustrated in the diagram on the right, the proton contains two valence up quarks and one valence down quark in a sea of quark-antiquark pairs and gluons. So in a proton-proton collision, there is a higher probability of an up quark and an antidown quark interacting than an antiup quark and a down quark, and hence more positive W bosons are produced than negative W bosons.

So that’s why there’s a W production asymmetry, but why does it depend on pseudorapidity? And what is pseudorapidity anyway?

Well, pseudorapidity is a measure of the angle at which the W boson was produced, which depends on the momenta of the two quarks from which the W boson was produced. The quarks and gluons within a proton carry a fraction, \(x\), of the total proton momentum, which is described by a parton density function \(f(x)\). The plot on the left shows the proton parton distribution functions for various types of quarks and antiquarks, as well as gluons, for a particular proton collision energy scale \(Q\).
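For the curious, pseudorapidity is just a function of the polar angle \(\theta\) measured from the beam axis, \(\eta = -\ln\tan(\theta/2)\). A tiny sketch, with angles chosen only for illustration:

```python
import math

def pseudorapidity(theta_rad):
    """Pseudorapidity eta = -ln(tan(theta/2)) for a polar angle theta from the beam axis."""
    return -math.log(math.tan(theta_rad / 2.0))

for theta_deg in (90, 45, 10, 1):
    print(f"theta = {theta_deg:3d} deg  ->  eta = {pseudorapidity(math.radians(theta_deg)):5.2f}")
```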

So the momenta of the quarks which produce the W boson vary from collision to collision, depending on their parton density functions, and as a result the production asymmetry driven by the quark content of the proton varies with pseudorapidity. Which is what the plot shows!


Not just B physics!

Tuesday, March 27th, 2012

Today, I’m going to be talking about some lesser known LHCb results. In fact, I’m going to discuss physics that some people thought LHCb couldn’t do, given the detector and software design.

What am I going to be talking about? Electroweak physics. Yes, you read that right, not the heavy quark physics which LHCb was designed and built for, but electroweak physics. In particular, I’m going to discuss some of our new results on Z and W boson cross sections, which will be presented at the DIS workshop in Bonn this week.

But before I go into the results and why they are interesting, let me quickly introduce the Z and W boson, as found at The Particle Zoo. Theorised in the 60s and discovered in the 80s, they are massive elementary particles that mediate the weak force. Z bosons are neutral and decay into a pair of leptons or quarks. W bosons are charged and decay into either a charged lepton and neutrino or two quarks.

At the LHC, Z and W bosons are usually identified by their leptonic decays. The signatures that electrons, muons and tauons leave in the detectors are much easier to find and measure than those left by quarks. In LHCb, we are able to detect Z decays to a pair of electrons, muons or tauons, and W decays into a muon and the corresponding muon neutrino. Unfortunately, we aren’t able to cleanly identify W decays to electrons or tauons and their corresponding neutrinos.

Above I present a summary of all the Z and W cross sections we have measured so far using data from 2010. On the left are the Z cross sections, given separately for each decay mode, while on the right are the Z and W to muon cross sections and various ratios of them.

If you are used to seeing LHC results, these may look a little strange. Usually the data is shown as black solid points while the theory is shown as coloured bands. Here the data is shown as the coloured bands, while the predictions of various theoretical models are shown as black open points.

Why this confusing presentation you ask? Well, that has to do with why we are trying to measure the Z and W production cross sections in LHCb.

As I’ve mentioned before, LHCb has a unique geometry compared to the other LHC experiments. In particular, with our cone geometry, we cover the forward region of 1.9 < y < 4.9, while ATLAS and CMS cover |y| < 2.5 with their cylindrical geometries. In terms of proton-proton collisions and the production of Z and W bosons, this means we are able to probe a complementary region of phase space. The plot on the right illustrates this, where you can see that LHCb is able to explore the low-\(x\), high \(Q^2\) region inaccessible by other experiments (past and present). This is important as this is the region where there is the highest uncertainty in the theoretical predictions in the Z and W production cross sections. So ideally, we would like to use experiment to constrain the theoretical predictions.
I say ideally because, if you look at our current results, we don’t yet have the experimental precision to do this. But we will in the future, so be on the lookout!
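As a rough illustration of the complementary coverage mentioned above, here is a toy acceptance check using the rapidity ranges quoted in the text; treating them as sharp cuts is, of course, a simplification of the real detector acceptances.

```python
# Toy acceptance check based on the coverage quoted above:
# LHCb roughly 1.9 < y < 4.9, ATLAS/CMS roughly |y| < 2.5 (sharp cuts are a simplification).

def in_lhcb(y):
    return 1.9 < y < 4.9

def in_atlas_cms(y):
    return abs(y) < 2.5

for y in (0.5, 2.2, 3.5, 5.2):
    print(f"y = {y:3.1f}: LHCb acceptance = {in_lhcb(y)}, ATLAS/CMS acceptance = {in_atlas_cms(y)}")
```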

Of course we aren’t the only experiment looking at Z and W production cross sections, ATLAS and CMS are as well, so I feel obliged to show you this plot on the left, which is of the W lepton charge asymmetry as a function of lepton pseudorapidity from ATLAS, CMS and LHCb…


Ramping up

Tuesday, March 27th, 2012

At the moment the LHC is making the transition from no beams to stable beams. It’s a complicated process that needs many crosschecks and calibrations, so it takes a long time (they have already been working on the transition since mid-February). The energy is increasing from 7 TeV to 8 TeV, and the beams are being squeezed tighter, and this means more luminosity, more data, and better performance. As the LHC prepares for stable beams, so do the experiments. I can only see what is happening within ATLAS, but the story will be the same for CMS and LHCb.

As the LHC moves through its checks and changes its beam parameters the experiments have an opportunity to request special beam setup. We can ask that the LHC “splashes” the detector with beam in order to calibrate our hardware. This is similar to the famous first beam plots that we saw in 2008. In addition to splashes we can also request very low pileup runs to test our simulation. “Pileup” refers to the average number of events we expect to get every time the beams collide in the detectors, and by increasing the pileup we cram as many events as we can into the limited periods of time available to us. For 2011 our pileup was about 15, and this is going to increase in 2012 to about 20-30. This meant I was surprised to find out that we can use pileup of 0.01 for some of our simulation calibrations!
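To get a feel for what these pileup numbers mean: the number of interactions per bunch crossing is, to a good approximation, Poisson-distributed around the average. The sketch below uses the values quoted above; the real distribution in the machine is more complicated, so treat this as illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Average pileup values quoted above: calibration runs, 2011, and the 2012 expectation.
for mu in (0.01, 15, 25):
    crossings = rng.poisson(lam=mu, size=1_000_000)   # interactions per bunch crossing
    frac_empty = np.mean(crossings == 0)
    print(f"mu = {mu:5.2f}: mean = {crossings.mean():5.2f}, fraction of empty crossings = {frac_empty:.3f}")
```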

First ATLAS splash from 2008 (ATLAS Collaboration)

The timetable for the ramping up the LHC is announced as far in advance as possible, but it’s subject to small changes and delays as new problems arise. In general, the LHC outperforms its expectations, delivering higher luminosities than promised and stable beams for longer than expected, so when we factor in unexpected problems and unexpected higher performance we have to take the timetable with a pinch of salt. We expect to get stable beams around Easter weekend. You can see the timetable in the pdf document provided by the LHC team.

In the meantime the ATLAS hardware has been checked and maintenance performed to get it in good working order for the data taking. The thresholds are fine tuned to suit the new beam conditions and the trigger menu is updated to make the best use of the data available. There are plenty of decisions that need to be made and discussions that need to take place to make sure that the hardware is ready for stable beams. Today I got a glimpse at the checks that are performed for the electromagnetic calorimetry system, the trigger system and some changes to the muon systems. It’s easy to lose sight of how much work goes into maintaining the machine!

The LHC team preparing for beams.

As the hardware improves, so does the software. Software is often a cause of frustration for analysts, because they develop their own software as a collaboration and the software is sometimes “bleeding edge”. As we learn more about the data and the differences between data and simulation we can improve our software, and that means that we constantly get new recommendations, especially as the conferences approach. There is a detailed version tracking system in place to manage these changes, and it can be difficult to keep up to date with it all. Unfortunately, updated software usually means analyzing the data or simulation again, which is time consuming and headache-inducing in itself. That is how things worked in 2011. This year it looks like we’ve already learned a lot about how the data look, so we can start with much better simulation and we can start with an improved release for all the software. This should make progress much easier for analyses and simpler for everyone (which is a very important consideration, given that we have a large range of experience with software, and a large range of knowledge of physics processes.)

The banks of super computers are ready and waiting...

Putting all this together we can conclude the following: we will have higher energy beams giving us more data, we’ll have a better functioning detector based on previous experience, we’ll have improved simulation, and we’ll have more stable and simpler software. This is very exciting on the one hand, but a bit intimidating on the other, because it means that the weak link in the chain could be the physicists performing the analyses! There are plenty of analyses which are limited by statistics of the dataset, or by resolution of the detector, or stymied by last minute changes in the software or bugs in the simulation. If we hit the ground running for 2012 we could find ourselves with analyses limited by how often the physicists are willing to stay at work until 3am to get the job done.

I’ve already explained why 2012 is going to be exciting in terms of results in another blog post. Now it looks like it will bring a whole new set of challenges for us. Bring it on, 2012, bring it on.


Startup 2012

Sunday, March 25th, 2012

The LHC will start colliding beams again in a few weeks after the traditional winter shutdown. 2012 could be THE year. This is not just idle speculation. Hints of the elusive Higgs boson may have been seen in both multipurpose experiments at the LHC (ATLAS and CMS), as well as in the full-luminosity Tevatron analyses at D0 and CDF. To be clear from the start, these hints are – as of this writing – not significant enough to make a claim to have seen anything yet. The level of statistical significance has been described as ‘interesting and tantalizing’, though not conclusive. What makes people so excited about this result is that, unlike many other episodes of temporary excitement, there is a reasonable level of agreement BETWEEN experiments.

Let me explain. For a so-called “low mass” Higgs boson (110-130 GeV), the most prominent decay channel is to the heaviest fermions into which the boson can decay while conserving energy and momentum: a b and anti-b quark pair. Unfortunately, at both the Tevatron and the LHC, the backgrounds to this channel are tremendously large. So it’s not just the decay branching ratios that are important, but also how large the backgrounds are, the difficulty of reconstructing the final state, and the experimental resolution on the observed objects (among other things).

As it turns out, the channels with the largest expected statistical power at the LHC in this mass range are decays into two photons, two W bosons, or two Z bosons. In fact, in December of 2011 both the ATLAS and CMS collaborations saw small bumps on an otherwise steeply falling spectrum in the diphoton resonance search. In that channel one looks for pairs of photons and reconstructs their invariant mass. If the pair of photons came from the production of a heavy particle, you would expect to see a clustering of events around a particular mass. An excess of events was also seen in both experiments in the gauge boson decays (pairs of W and Z particles) around the same mass region.
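For readers who like to see the arithmetic, the invariant mass of two (massless) photons can be built directly from their transverse momenta, pseudorapidities and azimuthal angles. A minimal sketch, with photon kinematics invented purely for illustration:

```python
import math

def diphoton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless photons:
    m^2 = 2 * pT1 * pT2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# Made-up photon candidates (pT in GeV, angles in radians)
m = diphoton_mass(60.0, 0.4, 1.0, 55.0, -0.6, -1.8)
print(f"m_gamma_gamma = {m:.1f} GeV")
```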

All of this has been known since last December, when both collaborations made their results public with the full 2011 datasets for the first time. Then, at the annual Moriond conference in 2012, the Tevatron experiments released their results with their full datasets. Because of the different center-of-mass energy, the cross-sections for both signal and background processes are different. It turns out that they too see an excess of events in the region consistent with the LHC experiments, but in so-called “associated Higgs production”, where the Higgs is produced along with a W or Z boson. This allows them to make use of the large branching ratio into b and anti-b quark pairs while reducing the background, thanks to the presence of the gauge boson.

So what does this all mean? Well, for now I am optimistically cautious. The exciting thing for me, as a part of the ATLAS collaboration, is that we will start taking data in a few weeks at an even higher center-of-mass energy, 8 TeV. This year we could collect even more data than we did in 2011, which will give us the opportunity to confirm our preliminary results and make a discovery, or to exclude the existence of the particle, which might have an even more significant impact on our field.

One of the roles I currently play in the ATLAS experiment is the coordination of the muon high-level trigger. Since we collide protons at the LHC at a rate that is much, much higher than we can record to tape for analysis, we have to choose which events we keep. To accomplish this we need enormous rejection power, on the order of accepting 1 event in many million. But on the other hand, we have to be very careful which events we choose to save! We must maintain very high efficiency for triggering on the events we care about the most. This is becoming an ever more exciting and difficult game as the event rate gets larger and larger and the stakes get higher and higher.
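To make ‘1 event in many million’ concrete, here is a back-of-the-envelope sketch. The bunch-crossing rate, average pileup and recording rate below are illustrative placeholders of my own, not official ATLAS numbers, so only the order of magnitude matters.

```python
# Back-of-the-envelope trigger rejection estimate. All rates are assumed values
# for illustration only, not official ATLAS figures.

crossing_rate_hz = 20e6      # assumed rate of colliding bunch crossings
average_pileup = 25          # assumed interactions per crossing (see the earlier post)
recording_rate_hz = 400      # assumed rate of events written out for analysis

interactions_per_second = crossing_rate_hz * average_pileup
rejection = interactions_per_second / recording_rate_hz
print(f"Roughly 1 interaction kept in {rejection:,.0f}")
```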

Looking forward to an exciting year!
