## Archive for May, 2012

### It’s conference season again!

Thursday, May 31st, 2012

Greetings from Florida! The summer conference season has just started, and on both sides of the Atlantic, in Florida and France, physicists are meeting to share the latest news from the LHC and the Tevatron. I’m at the Eleventh Conference on the Intersections of Particle and Nuclear Physics (CIPANP 2012), and with 70 parallel sessions, 10 plenary sessions, and 64 posters there’s a lot to explore here! While the Higgs boson is a hot topic, it’s not the main focus of the conference; topics include neutrino physics, cosmology, nuclear physics, dark matter, and hadronic structure. Physicists are chatting over coffee, catching up on gossip and rumors, and trying to find the time to fit in the most interesting talks.

I delivered my talk yesterday (a whirlwind tour of Higgs bosons decaying to final states with tau leptons) so I can now relax and enjoy the rest of the conference. Given the diverse nature of CIPANP this is a great opportunity to find out about the other areas of physics. In the very low mass region there are extremely stringent tests of the Standard Model which keep getting better. It’s easy to forget that the most precise tests are not found at the high energy frontier, so hearing from colleagues who work with muons and neutrinos is vital.

Presenting my talk

So far I’ve mostly limited myself to the Higgs sessions and the plenary talks. We’ve seen ATLAS, CMS, CDF, and D0 squeeze as much as they can out of their datasets, looking in much more detail at the decay channels, splitting analyses into ever finer categories in order to improve the techniques. Even so, we’re going to have to wait for ICHEP in July to see some substantially improved exclusion limits.

Perhaps the best part of traveling to conferences is the change of scenery and break from the usual habits. I don’t want to give the impression that it’s like a vacation; nearly everyone is still working very hard while they’re here. Instead, the travel breathes new life into our approach to physics, giving us a chance to think a bit differently about what we do.

A popular plenary session.

As I sit in talks I find my mind wandering to the public understanding of physics, because I struggle to follow many of the presentations from theorists. We tend to skip over a lot of information when we present our work, so it would be useful to take things more slowly when explaining the more important areas. Unfortunately we need permission to present plots that use data, so for now we are stuck with the plots that have already been approved. They are often busy and pragmatic, condensing as much information as possible into as little space as possible. Putting in a few more intermediate steps could make the ideas much more accessible to the wider public, so if I get time in the next few months I want to explore making it easier to get more suitable plots approved for the public.

A physicist takes a break between sessions

I’ll focus more on the physics results in a different blog post. For now I just want to say that it’s great to be back in the USA again and (tedious border control aside) it’s been a very pleasant experience to be on this side of the Atlantic for a week. At these conferences there are always social events and receptions, so imagine how happy I was to see that there was a dolphin watching cruise on the schedule!

Dolphins!

### An experiment: Feynman Diagrams for Undergrads

Thursday, May 31st, 2012

The past couple of weeks I’ve been busy juggling research with an opportunity I couldn’t pass up: the chance to give lectures about the Standard Model to Cornell’s undergraduate summer students working on CMS.

The local group here has a fantastic program which draws motivated undergrads from the freshman honors physics sequence. The students take a one credit “research in particle physics course” and spend the summer learning programming and analysis tools to eventually do CMS projects. Since the students are all local, some subset of them stay on and continue to work with CMS during their entire undergraduate careers. Needless to say, those students end up with fantastic training in physics and are on a trajectory to be superstar graduate students.

Anyway, I spent some time adapting my Feynman diagram blog posts into a series of lectures. In case anyone is interested, I’m posting them publicly here, along with some really nice references at the appropriate level.

There are no formal prerequisites except for familiarity with particle physics at the popular science/Wikipedia level, though they’re geared towards enthusiastic students who have been doing a lot of outside [pop-sci level] reading and have some sophistication with freshman level math and physics ideas.

The whole thing is an experiment for me, but the first lecture earlier today seems to have gone well.

### Computing for particle physics in perspective

Sunday, May 27th, 2012

1985: First Computing in High Energy Physics (CHEP) conference is held in Amsterdam.

1991 or 1992: I encounter the World Wide Web for the first time. There is no graphical browser for it yet, so I am underwhelmed and not sure what it would ever be good for.

1998: CHEP to be held in Chicago. First time I had heard of the conference, and the thought that popped into my head was, “shoot me if I ever go to that.”

2005: I start to work on computing for the CMS experiment at the LHC.

2007: I attend CHEP in Victoria, Canada. No one shot me.

Last week: 19th CHEP held in New York City, and I was there. There were five hundred people registered, all eager to talk about the latest advances and future directions in software and computing for particle and nuclear physics, and also to explore one of the world’s great cities. (As a native of the New York area, I was happy to play tour guide, although I didn’t expect that I’d end up escorting 17 people to Katz’s over the course of four days.) It was a good opportunity to think about the impact that advances in computing have made on physics.

It’s worth looking at the keynote talk by Glen Crawford of the Department of Energy, who described the role of computing as a key enabling technology for our field. Here is a slide of his that I particularly liked:

On the right is what has become the meme (I guess) that we have been using in the US to illustrate how we need the interplay of scientific explorations in three scientific frontiers — energy, intensity, and cosmic — to understand critical problems in particle physics. But I hadn’t previously seen the diagram in the lower left, which shows the required interplay of advanced technologies to achieve these goals. (It certainly hadn’t occurred to me to put computing on the same footing as, say, the LHC accelerator itself.) Glen goes on to describe how particle physics has long been an early adopter of computing technologies, from networks to grids to the World Wide Web (yes, invented by particle physicists). And, in turn, these technologies have been absolutely necessary to handle the huge amounts of data produced by particle-physics experiments that need to be shared among thousands of researchers all over the world.

Other items that caught my attention:

• In many ways our data management and distribution problems are similar to those of Netflix streaming movies, except that their total data volume is 12 TB and ours is 20000 TB.
• Long-term data access and preservation is becoming a growing concern. Particle physics experiments are often unique; it’s hard to imagine anyone will do anything like the electron-proton collisions of the now-defunct HERA collider anytime soon, nor the proton-antiproton collisions of the Tevatron. Perhaps some new finding at the LHC will inspire us to go back and look at old data from other accelerators…but will we be able to?
• Videoconferencing is required for particle physics experiments to get their work done, given that collaborators are spread all over the world. Making sure the systems for this are robust is an important task.
• Software “engineering” for particle physics has long deserved to be in quotation marks, given our often haphazard policies for designing and releasing software for experiments. But perhaps our model does really serve our purposes well, and is being used elsewhere in commercial computing development.
• While experiments often start with widely divergent solutions to software and computing problems, they often start converging after a while, suggesting that there are efficiencies that can be found through cooperation.
• Computers are evolving in such a way that we will see more and more processing cores on a chip that are supported by less and less memory per processor. We’ll need to make greater use of parallelization, and perhaps figure out how to make use of graphical processor units.
• Rene Brun is about to retire from CERN. It is hard to imagine that anyone has been more influential in the development of software for particle physics in the past forty years. Ultimately experiments communicate their physics ideas through their software, and Rene’s packages have been the lingua franca of the field. (A sometimes idiosyncratic lingua franca, in my opinion, but the basis of everything all the same.) Rene received a standing ovation at the end of the conference.

2013: Next CHEP to be held in Amsterdam. Having survived this one without undue violence, maybe I’ll go to that one too. It will be interesting to see which predictions of this CHEP will have come true by then!

### Science: The Art of the Appropriate Approximation

Friday, May 25th, 2012

There is this myth that science is exact. It is captured nicely in this quote from an old detective story:

In the sciences we must be exact—not approximately so, but absolutely so. We must know. It isn’t like carpentry. A carpenter may make a trivial mistake in a joint, and it will not weaken his house; but if the scientist makes one mistake the whole structure tumbles down. We must know. Knowledge is progress. We gain knowledge through observation and logic–inevitable logic. And logic tells us that while two and two make four, it is not only sometimes but all the time. – Jacques Futrelle, The Silver Box, 1907

Unless, of course, it is two litres of water and two litres of alcohol, then we get less than four litres. Note also the almost quaint idea that science is certain, not only exact, but certain. We must know. The view expressed in this quote is unfortunately not confined to century-old detective stories, but is part of the modern mythology of science. But in reality, science is much more like carpentry. A trivial mistake does not cause the whole to collapse, but I would not like to live in a house built by that man.

To the best of my knowledge, there has never been an exact calculation in all of physics. In principle, everything in the universe is connected. The earth and everything in it is connected by the gravitational field to the distant quasars. But you say, surely that is negligible, which is precisely the point. It is certainly not exactly zero, but with equal certainty, it is not large enough to be usefully included in any calculation. I know of no terrestrial calculation that includes it. Even closer objects like Jupiter have negligible effect. In the grand scheme, the planets are too far from the earth to have any earthly effect. Actually, it is not the gravitational field itself which is important but the tidal forces which are down an additional factor of the ratio of the radius of the earth to the distance to the planet in question. Hence, one does not expect astrology to be valid. The art of the appropriate approximation tells us so.
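A back-of-the-envelope check makes the point concrete. The sketch below (rounded masses and distances, not precise ephemeris values; Jupiter is taken near closest approach) compares the leading-order tidal acceleration that the Moon and Jupiter exert across the Earth:

```python
# Rough comparison of tidal accelerations on Earth from the Moon and Jupiter.
# Tidal acceleration ~ 2*G*M*R_earth / d**3, the leading term of the
# gravitational-field gradient across the Earth.  All values are rounded.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6    # radius of the Earth, m

def tidal_acceleration(mass_kg, distance_m):
    """Leading-order tidal acceleration across the Earth from a distant body."""
    return 2 * G * mass_kg * R_EARTH / distance_m**3

moon = tidal_acceleration(7.35e22, 3.84e8)      # Moon: ~1.1e-6 m/s^2
jupiter = tidal_acceleration(1.90e27, 6.3e11)   # Jupiter: ~6e-12 m/s^2

print(f"Moon:    {moon:.2e} m/s^2")
print(f"Jupiter: {jupiter:.2e} m/s^2")
print(f"ratio:   {moon / jupiter:.0f}")
```

Despite Jupiter's enormous mass, the 1/d³ scaling leaves its tide about five orders of magnitude below the Moon's, which is the quantitative sense in which the art of the appropriate approximation rules out astrology.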

Everywhere we turn in science we see the need to make the appropriate approximations. Consider numerical calculations. Unless you are calculating the hypotenuse of a triangle with sides of 3 and 4 units, almost any numerical calculation will involve approximations. Irrational numbers are replaced with rational approximations, derivatives are replaced with finite differences, integrals with sums, and infinite sums with finite sums. Every one of these is an approximation—usually a valid approximation—but nevertheless an approximation. Mathematical constants are replaced by approximate values. Someone once asked me for assistance in debugging a computer program. I noticed that he had pi approximated to only about six digits. I suggested he extend it to fifteen digits (single precision on a CDC computer). That, amazingly enough, fixed the problem. Approximations, even seemingly harmless ones, can bite you.
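That bug is easy to reproduce today. The toy example below (a reconstruction of the idea, not the original CDC program) shows how a six-digit pi, harmless on its own, is amplified when the angle gets multiplied by a large factor before a trigonometric function ever sees it:

```python
import math

PI_6 = 3.14159        # pi truncated to six digits
PI_FULL = math.pi     # double precision, ~16 digits

# sin(1000*pi) should be zero.  The truncation error in PI_6 (~2.65e-6)
# is multiplied a thousandfold, so the "harmless" approximation produces
# an answer wrong at the few-per-mille level instead of ~1e-13.
print(math.sin(1000 * PI_FULL))   # numerically zero
print(math.sin(1000 * PI_6))      # ~ -2.7e-3
```

The fix in the anecdote, carrying pi to the full machine precision, simply pushes the conversion error back down to the rounding floor.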

Even before we start programming and deciding on numerical techniques, it is necessary to make approximations. What effects are important and which can be neglected? Is the four-body force necessary in your nuclear many-body calculation? What about the five-body force? Can we approximate the problem using classical mechanics, or is a full quantum treatment necessary? Thomas Kuhn (1922 – 1996) claimed that classical mechanics is not a valid approximation to relativity because the concept of mass is different. Fortunately, computers do not worry about such details, and computationally classical mechanics is frequently a good approximation to relativity. The calculation of the precession of the perihelion of Mercury does not require the full machinery of general relativity, but only the much simpler post-Newtonian limit. And on and on it goes, seeking the appropriate approximation.

Sometimes the whole problem is in finding the appropriate approximation. If we assume nuclear physics can be derived from quantum chromodynamics (QCD), then nuclear physics is reduced to finding the appropriate approximation to the full QCD calculation, which is by no means a simple task. Do we use an approximation to the nuclear force based on power counting, or the old-fashioned unitarity and crossing symmetry? (Don’t worry if you do not know what the words mean; they are just jargon, and the only important thing is that the approximations lead to very different-looking potentials.) Do the results depend on which approach is used, or only the amount of work required to get the answer?

Similarly, in materials science, all the work is in identifying the appropriate approximation. The underlying forces are known: electricity and magnetism. The masses and charges of the particles (electrons and atomic nuclei) are known. It only remains to work out the consequences. Only, he says, only. Even in string theory, the current proposed theory of everything, the big question is how to find useful approximations to calculate observables. If that could be done, string theory would be in good shape. Most of science is the art of finding the appropriate approximation. Science may be precise, but it is not exact, and it is in finding the appropriate approximation that we take delight.

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

### CERN’s prodigal neutralino comes back from outer space

Friday, May 25th, 2012

Christer Fuglesang, a former physicist who worked at CERN and is now a European Space Agency (ESA) astronaut, has brought back to CERN a neutralino he had taken along on his mission to the International Space Station in 2009.

Yesterday, Christer Fuglesang (right), former CERN physicist and now an astronaut with the European Space Agency, returned to Sergio Bertolucci (left), CERN’s research director, the neutralino bearing the ESA and CERN colours (bottom right inset) that he took with him on board the space shuttle in 2009.

The said neutralino is in fact a stuffed toy created by particle zookeeper Julie Peasley, creator of the Particle Zoo. It represents a hypothetical fundamental particle proposed within a theory called supersymmetry. This theory builds on the Standard Model, the current theory of particle physics, and would unify the particles of matter with the particles associated with the fundamental forces.

Most importantly, many hope this neutralino could be a new form of matter that would explain what dark matter is made of.

Dark matter is a completely unknown type of matter that makes up 23% of the total content of the universe, while only 4% of the universe corresponds to the type of matter that makes up humans as well as all stars and galaxies. Physicists still don’t know what dark matter and dark energy (the remaining 73% of the universe’s content) are made of, but we know dark matter is there through its gravitational effects.

The universe contains 23% dark matter and 73% dark energy, two forms of matter and energy completely different from the regular matter found on Earth and in all stars and galaxies, which accounts for only 4% of the content of the universe.

Dark matter does not radiate any light (hence its name) but still generates a gravitational field, making its presence detectable. On the other hand, it seems to interact very minimally with ordinary matter, making it very difficult to detect it and study its nature.

One hope is that the Large Hadron Collider (LHC) might be able to produce dark matter particles and physicists would at last get a chance to study them. The neutralino is only one of many proposed candidates to explain dark matter but a very plausible one.

So when Christer Fuglesang was told he could take a few mementos with him on his trip to the International Space Station, he chose to bring something special from CERN. “The neutralino offers a nice connection between space and particle physics”, Christer said, making it the perfect choice.

The little softy is now reunited with all its friends, the other particles from the Particle Zoo. Let’s see which of them will pop out of the box as the one that explains such a huge amount of matter still unaccounted for. Let’s hope the LHC will manage to shed light on this dark side of the universe.

(Interview with Christer Fuglesang)

Pauline Gagnon

### The prodigal neutralino returns to CERN after a trip to space

Friday, May 25th, 2012

Christer Fuglesang, a physicist who worked at CERN before becoming an astronaut for the European Space Agency (ESA), yesterday brought back to CERN a neutralino he had taken with him on his mission to the International Space Station (ISS).

Christer Fuglesang (right), European Space Agency (ESA) astronaut, handing back to Sergio Bertolucci (left), CERN’s director for research, the neutralino (bottom right) bearing the CERN and ESA colours that he had taken on board the space shuttle in 2009.

It is in fact a small stuffed toy created by the keeper and founder of the Particle Zoo. The neutralino represents a hypothetical fundamental particle proposed within a new theory called supersymmetry. This theory, built on the foundations of the Standard Model, the current model describing particle physics, would unify the particles of matter with the particles associated with the fundamental forces.

The most interesting outcome would be if this neutralino turned out to be the same type of matter as dark matter.

Dark matter makes up 23% of the total content of the universe. It is a form of matter of a completely unknown kind, while ordinary matter, the kind we are made of along with all the stars and galaxies, accounts for only 4% of the total. Although physicists still do not know what 96% of the universe is made of, we detect the presence of this mysterious matter through its gravitational effects.

The universe comprises 23% dark matter and 73% dark energy, two forms of matter and energy that have nothing to do with the 4% of ordinary matter making up everything found on Earth, in the stars, and in the galaxies.

Dark matter emits no light (hence its name) but still generates a gravitational field, which makes it detectable. On the other hand, it seems to interact only minimally with ordinary matter, which makes it very difficult to detect or to study its nature.

The hope is that the Large Hadron Collider (LHC) will be able to produce some, so that we can finally study its properties. Neutralinos are only one of the new fundamental particles proposed to solve the mystery of dark matter, but one of the most plausible.

So when Christer Fuglesang learned that he could take a few items of his choice aboard the International Space Station, he wanted to bring something from CERN. “The neutralino offers a link between particle physics and space,” Christer explained, making it the ideal choice.

The little plush toy has now been reunited with its companions from the Particle Zoo. Which of them will turn out to be the one that reveals the nature of this huge amount of still-unknown matter? Let’s hope the LHC will shed some light on this dark side of the universe.

(Interview with Christer Fuglesang) (in English only)

Pauline Gagnon

To be notified when new posts appear, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this mailing list

### The long road

Friday, May 18th, 2012

Finally, it’s summer time! As I’ve said from the beginning, summer is a very nice time to be a professor, as we don’t have to do half of our job for these few months. But already this summer is filling up with things to do, and a lot of it involves travel. I have trips to five different destinations, two international, in the 13-plus weeks until the fall semester starts. It is a long road to be on. So you, dear reader, will be subject to my travelogues for a few months.

Today, I’m at the University of Colorado for the annual US CMS collaboration meeting. This is my first visit to Boulder, and it seems pretty nice, although it’s one of those campuses where all the buildings are of a similar style and exterior, so it’s easy to get lost. The US CMS meeting is a chance for all of the US-based collaborators to get together and talk about what we’re doing on CMS and where we are going. Obviously, there is a lot to talk about right now. The LHC is running, there is a lot of data analysis in progress, and many public results are having an impact on how we think about particle physics.

But what have we been devoting the most time to at this meeting? Detector upgrades! Yes, we’re talking about stuff that isn’t going to get installed until 2016, even while we might discover a Higgs boson in 2012. Why? First, it takes a long time to build detectors for particle physics. The technology tends to be pretty leading edge, often you have to build a large number of parts by hand, and you need extensive quality control. A real plan for construction, testing, and installation needs to be in place well before the detector needs to be operational. Also, we’re especially concerned that these improved detector components will be ready in time, if not early. The instantaneous luminosity of the LHC, a measure of the collision rate, is rising quickly, and within a few years we expect that it will be above the level for which the CMS detector was designed. If we want to be able to analyze future LHC collisions, we need a detector that meets the needed specifications. And finally, the finances for the construction of these detectors are still very much in the air. We might not have enough money to do everything we want on the timescale that we want to do it. So it’s important to give these projects a lot of scrutiny up front. It’s the start of a long road there, too.

Not that there isn’t any fun physics going on here. Today we had a series of talks by younger people (well, at least younger than me) on a variety of data-analysis topics. The quality of the work being done is really impressive, and there are a lot of creative and sophisticated ideas being put to use. One running theme is our ability to rely on real detector data, rather than simulations, to model the old-physics backgrounds to potential new-physics signals. And it’s worth keeping in mind that this is only possible because of the excellent detector that we’ve built. Good detector upgrades will allow us to keep doing this excellent data analysis in the future.

### Measurement and the New SI Units

Friday, May 18th, 2012

The SI units will be changing again in the next few years. You would think that choosing the units of measurement would be an unemotional topic, but as I recall from Canada’s only partially successful attempt to convert to the metric system, that is far from the case. I remember one rather irrational editorial on the topic where the writer went on about how the changing definition of the metre was an indication that the people behind the metric system did not know what they were doing. Since this was in an English Canadian paper, he blamed the problem on the French for having botched the original definition. Ignorance profound. The writer would probably have been surprised to learn that the inch is defined as 2.54 centimetres, except, of course, in the US, where there is a second inch (the surveyor’s inch) defined by 39.37 inches equalling one metre. Ah, the joy of traditional measurements. There are at least three different gallons in use, and as for barrels, there are more than you can shake a stick at. However, the petroleum barrel is defined as exactly 158.987294928 litres. I am sure you wanted to know that, and don’t forget the last digit—the 8 is very important. As far as I can see, the only reason for using the traditional units is familiarity. Yes, I still use the inch and foot, but also the kilometre; and I believe it is safe to say that the generation born after the country officially switched does the same. That is the joy of living in a country that has only half converted to metric.
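Those eleven decimal places are not arbitrary: the petroleum barrel is 42 US gallons, the US gallon is 231 cubic inches, and the inch is exactly 2.54 cm, so the conversions chain exactly. A quick sketch (using those standard definitions) verifies it, and shows how the surveyor’s inch differs from the international one:

```python
# The definitions chain exactly:
# 1 inch = 2.54 cm, 1 US gallon = 231 cubic inches, 1 barrel = 42 gallons.
INCH_CM = 2.54
GALLON_L = 231 * INCH_CM**3 / 1000   # litres per US gallon (3.785411784)
BARREL_L = 42 * GALLON_L             # litres per petroleum barrel

# The surveyor's inch is instead defined by 39.37 inches = 1 metre,
# which differs from the international inch in the sixth digit.
SURVEY_INCH_CM = 100 / 39.37

print(round(BARREL_L, 9))            # all eleven decimals of 158.987294928
print(SURVEY_INCH_CM - INCH_CM)      # a few millionths of a centimetre
```

So the very important trailing 8 is simply the tail end of 42 × 231 × 2.54³.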

Measurements tend to be of two types. One is pure numbers, like the number of ducks in a row (or in a pond). The other type is the measurement of a number with a dimension. Here we need a standard to compare against; a length of six feet only makes sense if we know what a foot is. In other words, we have a standard for it. Hence the need to define units, so that different people can compare their results, and so that when we buy a hogshead of beer, we know how much we are getting.

Editorial writers will have another chance to rant in a few years, as the General Conference on Weights and Measures is set to change the definitions of the basic metric, or International System (SI), units again—this time not the metre but the kilogram and other units. The history of how the definitions of the units have changed over time is quite interesting, involving not just changing technology but also changing tastes. The original metre was defined in terms of the distance from the equator to the North Pole. But this could not be determined sufficiently accurately, so the standard was shifted to a physical artifact: a rod kept in Paris with two marks on it. This was then shifted to the wavelength of light from a certain atomic transition and finally to fixing the speed of light. Similarly, for time, the second went from being defined in terms of the length of the day to being defined in terms of the frequency of an atomic transition. There is a trend from defining the units in terms of macroscopic quantities—the size of the earth, the length of the day, the length of a bar—to microscopic quantities, or more specifically, atomic properties. There is a simple reason for this, namely that it is in atomic systems that the most accurate measurements can be made. Unfortunately, it also makes the unit definitions esoteric and detached from everyday experience. Everyone can identify with the length of a foot, but it is not immediately clear what the speed of light has to do with distance. Telling my daughter it takes five nanoseconds for light to travel from her head to her foot doesn’t do much for her. There is also a trend, partly aesthetic, towards defining the base units by fixing the fundamental constants of nature.

A fundamental constant of nature, like the speed of light, starts its life as something that relates two apparently unrelated quantities. In the case of the speed of light, it is time and distance. But over time it comes to be seen as just a way of relating different units for measuring the same thing. Indeed, time units are sometimes used for distances and vice versa. This even happens in everyday life, such as when the distance from Vancouver to Seattle is given as three hours, meaning, of course, an average travel time. But in science the relation is more definite, and defining the metre in terms of the speed of light makes it explicit that the fundamental constant, the speed of light, is just a conversion factor from one set of units to another, from seconds to metres (1 metre = 3.3 nanoseconds).
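Treating c as a pure conversion factor is a one-line computation. The sketch below uses the exact defined value of c; the 230 km Vancouver–Seattle figure is just an illustrative round number:

```python
# The speed of light as a conversion factor between seconds and metres.
C = 299_792_458           # m/s, exact by definition since 1983

ns_per_metre = 1e9 / C    # ~3.336 ns per metre: "1 metre = 3.3 nanoseconds"

# Light travel time over a child's height, and over a ~230 km road trip:
print(f"1.5 m of height = {1.5 * ns_per_metre:.1f} ns of light travel")
print(f"230 km of road  = {230e3 / C * 1e3:.2f} ms of light travel")
```

The five-nanosecond head-to-foot figure from the previous post falls straight out of the same conversion.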

The new proposal for the base SI units continues this trend of defining units by fixing fundamental constants. The kelvin (and through it the degree Celsius) is currently defined in terms of a property of water: the so-called triple point. In the proposed new system, it will be defined by fixing a fundamental constant, the Boltzmann constant. The Boltzmann constant relates degrees to energy. At the microscopic level, i.e. in statistical mechanics, temperature is just a measure of energy, and the new definition of the degree makes this explicit. Again, a fundamental constant is turned into a conversion factor between different units—degrees and joules. The case of the kilogram is more subtle. It is currently defined by a physical artifact—the standard kilogram stored in Paris. The new proposal is to define the kilogram by fixing another fundamental constant: Planck’s constant. This is another example of a fundamental constant becoming just a conversion factor between different units, in this case between time and energy units, or equivalently distance and momentum units.
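To make the conversion-factor view concrete, here is a sketch using the exact values ultimately adopted for these constants (the numbers fixed in the eventual redefinition, quoted here for illustration):

```python
# Fundamental constants as conversion factors between units.
K_B = 1.380649e-23        # J/K  (Boltzmann constant, fixed exactly)
H = 6.62607015e-34        # J*s  (Planck constant, fixed exactly)
EV = 1.602176634e-19      # J per electron-volt (fixed exactly)

# Boltzmann: temperature -> energy.  Room temperature in electron-volts:
kT_room = K_B * 300 / EV          # ~0.026 eV

# Planck: frequency -> energy.  A 500 THz (green-light) photon:
E_photon = H * 500e12 / EV        # ~2.07 eV

print(f"300 K   ~ {kT_room:.3f} eV")
print(f"500 THz ~ {E_photon:.2f} eV")
```

In both lines the constant does nothing but translate one unit (kelvins, hertz) into another (electron-volts), exactly as the text describes.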

As a theorist, I find this new set of units convenient, as I like to use what are called natural units in my calculations. These are given by setting the speed of light (c), Planck’s constant (ħ), Boltzmann’s constant (k), and π all equal to 1 (OK, usually not π, but I did see that legitimately done once). An interesting side effect of the new units is that they all have exact conversions from these natural units. There is another set of natural units, called Planck units, which are defined in terms of the gravitational strength and the strength of the electromagnetic force. (In the proposed change, the charge of the electron is used to define the electromagnetic units.) Ultimately, those may be the most elegant units, but we are nowhere close to having the technology to make them the basis of the SI units.
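What "exact conversion" means in practice: once c and ħ are fixed, the familiar bridge ħc ≈ 197 MeV·fm between energy and length units follows to as many digits as you like. A sketch, using the rounded CODATA value of ħ:

```python
# Converting between natural units (hbar = c = 1) and SI.
HBAR = 1.054571817e-34    # J*s (rounded value of h / 2*pi)
C = 299_792_458           # m/s, exact
EV = 1.602176634e-19      # J per electron-volt

# hbar*c in GeV*fm: divide out joules -> GeV and metres -> femtometres.
hbar_c = HBAR * C / EV / 1e9 / 1e-15   # ~0.1973 GeV*fm

# In natural units a mass of 1 GeV corresponds to a length of 1/GeV:
print(f"hbar*c = {hbar_c:.4f} GeV*fm, so 1/GeV ~ {hbar_c:.3f} fm")
```

This is the conversion particle physicists use daily to turn inverse energies back into lengths and lifetimes.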

Naturally, any change of units has the naysayers coming out of the woodwork. One criticism of the new units is that, since the fundamental constants are fixed by definition, we can no longer study their time dependence. To some extent, this is true. For example, with the current definition of the kilogram, Planck’s constant changes every time atoms are lost or gained by the standard kilogram. This change will be lost with the new units. But this illustrates the absurdity of asking whether a fundamental constant changes in isolation. All that is meaningful is whether the constant has changed with respect to some other quantity with the same dimensions. The new choice of units makes this explicit, which is a good thing.

There is much more to the new choice of units than I can cover here and the interested reader is referred to the relevant web pages: http://www.bipm.org/en/si/new_si/ , http://royalsociety.org/events/2011/new-si/ , or http://en.wikipedia.org/wiki/New_SI_definitions .

Additional posts in this series will appear most Friday afternoons at 3:30 pm Vancouver time. To receive a reminder follow me on Twitter: @musquod.

### CHARM of Hawaii

Monday, May 14th, 2012

I’m blogging from the site of the CHARM-2012 conference, which has just started in Honolulu, Hawaii. This is a fantastic conference at a fantastic place! The conference will have four jam-packed days covering many aspects of the physics of the charm quark. As I reported earlier, many exciting recent results involve the charm quark.

Why is the conference taking place in Hawaii? Besides being a nice place in general, it is almost exactly halfway between Japan and the US. This meeting alternates between Asian, US, and European locations, and the last meeting, in 2009, was in Beijing — so it is the US’s turn. There will be many talks from KEK‘s Belle collaboration (of which the University of Hawaii is a member), the LHC experiments, as well as the Tevatron experiments. Besides, the world’s only dedicated operating charm experiment (BES III) is located in Beijing, China. There will be many theory talks as well. It is shaping up to be a very nice conference — and I’ll be reporting on the exciting results discussed here.

### Happy birthday, Richard Feynman!

Friday, May 11th, 2012

Richard Feynman was one of the most influential physicists of the twentieth century. Not only did he revolutionize quantum theory with his development of quantum electrodynamics, but he also revolutionized the way we think about physics and physicists. He spoke to people from all kinds of backgrounds about physics, from lecturing students destined to change the field themselves, to appearing on television to discuss physics and the philosophy of science, to meeting with the greatest minds of the time.

Feynman in the middle of a lecture. (www.richard-feynman.net)

For me, Feynman’s great contribution was the way he thought about physics. His Lectures on Physics are world famous, and rightly so. (In fact, one of the first things I did after landing in San Francisco to work at SLAC was to buy a copy of his lectures from the Stanford bookstore. Shortly afterwards my bank froze my card, suspecting fraud. It was worth the inconvenience!)

As a jaded undergraduate, I found them a source of inspiration. A faint glimmer of hope turned into a roaring inferno after reading his lectures on electromagnetism, and I’ve never looked back since. Finally, here was someone who wanted to discuss the beauty of the subject, as well as the truth. He had no time for obscuring the underlying symmetry of a concept, nor for lying to students in order to make things easier. Inevitably, having to unlearn and relearn ideas leaves people confused, disillusioned, and unable to trust their tutors. In that spirit, this is how he started his course on electromagnetism:

“We begin now our detailed study of the theory of electromagnetism. All of electromagnetism is contained in the Maxwell equations.

Maxwell’s equations:

$\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}$
$\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}$
$c^2\nabla \times \vec{B} = \frac{\partial \vec{E}}{\partial t} + \frac{\vec{j}}{\varepsilon_0}$
$\nabla \cdot \vec{B} = 0$

Don’t worry about trying to understand these equations. The important thing here is that Feynman has given the students the complete truth about electromagnetism. With these four equations he can solve any problem about the shape and nature of electromagnetic fields for any configuration of charges and currents. The equations he provides are not some approximation of the theory, nor equations that only work some of the time; these are the equations that all physicists and engineers use, and they are, as far as we know, complete and state of the art. Feynman has shown a level of honesty and respect for his students and readers that was not present when I sat through lectures. My lecturers taught me backwards; Feynman taught me forwards.

(Experts might notice that the Lorentz force law is missing here, but Feynman already mentioned it a few pages before Maxwell’s equations. With the Lorentz force law physicists can relate the electromagnetic fields to the forces on charged particles.)
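For completeness, the Lorentz force law that Feynman introduces a few pages earlier reads:

$\vec{F} = q\left(\vec{E} + \vec{v} \times \vec{B}\right)$

Together with Maxwell’s equations, it closes the loop: the fields determine the forces on charges, and the charges and currents determine the fields.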

Feynman continues:

The situations that are described by these equations can be very complicated. We will consider first relatively simple situations, and learn how to handle them before we take up more complicated ones. The easiest circumstance to treat is one in which nothing depends on time, called the static case. All charges are permanently fixed in space, or if they do move, they move as a steady flow in a circuit (so $$\rho$$ and $$\vec{j}$$ are constant in time). In these circumstances, all of the terms in the Maxwell equations which are time derivatives of the field are zero. In this case Maxwell’s equations become:

Electrostatics:
$\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}$
$\nabla \times \vec{E} = \vec{0}$

Magnetostatics:
$c^2\nabla \times \vec{B} = \frac{\vec{j}}{\varepsilon_0}$
$\nabla \cdot \vec{B} = 0$

You will notice an interesting thing about this set of four equations. It can be separated into two pairs. The electric field $$\vec{E}$$ appears only in the first two, and the magnetic field $$\vec{B}$$ appears only in the second two. The two fields are not interconnected. This means that electricity and magnetism are distinct phenomena so long as charges and currents are static.

And he goes on. Immediately at the start of the course he’s pointed out one of the most important and beautiful symmetries in electromagnetism. He also lets us know how the course is going to proceed, with static cases first and the full treatment later. This leaves the student with a wonderful surprise later in the course, when the two fields finally get united again. When this happens Feynman goes on to show us how electromagnetism comes about as a result of special relativity, and if done properly that is one of the most breathtaking moments in physics! This is the way physics should be taught, and I wish I could have been in that lecture hall to see it happen!

The rest of the lectures are a fascinating journey, full of neat little asides, teasers, and paradoxes, and it’s all handled with refreshing clarity. He even pokes fun at physics itself from time to time, showing how our mathematical notation is just a trick to make complicated things look simple, and how different problems appear to have similar solutions only because we choose to use the same kinds of methods to solve them. Towards the end of his electromagnetism course he even goes out of his way to show how electromagnetism fails in an epic way. The problem of the infinite energy of the field, and the intractable problem of the mass of the electron, are two major failings of the classical theory, and he dedicates a lecture to showing us just how many questions were left unanswered by the subject.

Feynman with bongos, because some physicists are cool (www.richard-feynman.net)

Feynman gave us a lot to digest, from Nobel prize worthy discoveries, to a view of scientists that was anything but a crusty old professor, and for me what I value most is the lectures he gave, packed with inspiration and clarity. If you have a chance, go read some of the lectures and find out what made this man get out of bed in the morning. You won’t be disappointed. His other books are also excellent (Six Easy Pieces, Six Not So Easy Pieces, QED and Surely You’re Joking, Mr Feynman!) and well worth a read. Put them on your Christmas wish list!

Feynman’s birthday should be a national day of celebration, not just for physics, but for getting people hooked on physics! (I’m just sorry I’m a bit late to the party here, have a great weekend.)

If you want to find out a bit more about Richard Feynman check out this lecture about Feynman from Lawrence Krauss, one of today’s most eloquent speakers and best advocates for physics.

(Quotes taken from “The Feynman Lectures on Physics, The Definitive Edition Volume II”, Feynman, Leighton, and Sands, ISBN 0-8053-9047-2)