Phil Richerme | CERN | Switzerland


Open Access to Scientific Knowledge

Monday, April 25th, 2011

Lawrence Lessig, Harvard Law professor and champion of the free culture movement, came to CERN last week to talk about the architecture of access to scientific knowledge. Lessig transformed the relatively mundane subject of copyright law into one of the most engaging and cogent presentations I have seen, while raising some truly valuable points of interest to the scientific community.

Lawrence Lessig

At issue is the inherent incompatibility of copyright law and open access to published research. Under the current system, many individuals – especially those not associated with a university – face a surprisingly high burden to access articles. Some journals make it effectively impossible, restricting access to the U.S. and major world universities willing or able to cover the subscription costs; others make articles available for typically $30 a pop, which can add up rather quickly. Given that researchers don’t benefit from paywalls (they receive no royalties) or from restricted access (the universal spread of ideas is a good thing for science), Lessig makes the case that publishing in open-access journals should be the preferred choice. One could go further: since the majority of scientific research is publicly funded, the results of that research should be easily available to the public.

So why hasn’t everyone already made the switch to open-access journals? Two large reasons come to mind. First, there is incredible inertia in academic fields to maintain tradition (in Lessig’s words, academia is “fad-resistant”). To expect the academy to suddenly switch to a new set of journals on account of philosophy, especially if it means a switch away from the more prestigious journals, is unrealistic. It will take time for open-access journals to build prestige and prove themselves steadfast and stable. Second, academics are largely unaffected by the problems mentioned above – most often, they belong to universities with subscription agreements with journals and do not personally bear the costs or difficulties of access. As a result, there is little access-related or economic incentive for change.

It is likely that the response to the problem of open access will be driven forward at the highest institutional levels. As a Harvard grad student working at CERN, I find it particularly praiseworthy that both of these institutions have been pioneers in the open-access publishing movement. Since 2005, CERN’s publication policy has required its researchers to deposit a copy of all their published articles in an open-access repository, and has encouraged publication in open-access journals. Similarly, at Harvard, authors grant the university a “non-exclusive, irrevocable, worldwide license to distribute their scholarly articles, provided it is for non-commercial uses.” Of course, comparable practices have been adopted by many other universities, and will almost certainly percolate throughout all of academia in the next few years. I think this can only be a good thing for us as scientists, for science as a whole, and probably even for the general public.

And finally, in the spirit of true open access, CERN has made Prof. Lessig’s talk freely available here.


The Emotional Rollercoaster of Experimental Physics

Sunday, April 3rd, 2011

Emotions around an experimental physics lab are very tightly coupled to the status of the experiment. It is a singularly wonderful and proud feeling when an experiment is running smoothly, and a startlingly sinking feeling when the experiment breaks for no discernible reason.

Rime ice forming on the cryogen lines during our cooldown

At the ATRAP experiment, our official data-taking run starts in early May, when CERN begins to deliver low energy antiprotons. However, we like to have our apparatus cold and tested well before this time.

Our cooldown process takes roughly a week, start to finish. This is short compared with the several-weeks timescale to cool down the LHC magnets, though long compared with the human patience timescale. We start by cooling with liquid nitrogen (temperature = 77 Kelvin), and the experiment reaches 77 K after a few days. We then switch to liquid helium (temperature = 4 Kelvin), which is less efficient, but can cool us the rest of the way. Having the experiment at 4 K helps us in two major ways. First, particles in our trap will come into equilibrium with the trap temperature; having the coldest possible particles is important for making trappable antihydrogen (I have discussed this in more detail previously). Second, the low temperature acts as a “cryopump” – background gas molecules will eventually collide with a cold wall and freeze, effectively decreasing the pressure. This is particularly important for antimatter research, since collisions of antimatter with any background gas will lead to annihilations. In the past, we’ve used these annihilations as a measurement of the background gas pressure, and have shown it to be less than 5e-17 Torr – one of the best vacuums in the world.
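To get a feel for just how empty 5e-17 Torr is, here is a back-of-the-envelope estimate (my own illustration, not part of the ATRAP analysis) using the ideal gas law, n = P / (k_B T):

```python
# Number density of residual gas at ATRAP's measured pressure bound.
# Standard constants; the 4 K and 5e-17 Torr figures come from the text above.
K_B = 1.380649e-23    # Boltzmann constant, J/K
TORR_TO_PA = 133.322  # pascals per torr

def number_density(pressure_torr: float, temperature_k: float) -> float:
    """Molecules per cubic meter, assuming an ideal gas."""
    return pressure_torr * TORR_TO_PA / (K_B * temperature_k)

n_trap = number_density(5e-17, 4.0)   # the trap's upper bound
n_air = number_density(760.0, 293.0)  # room-temperature air, for comparison

print(f"trap: {n_trap:.2e} /m^3  (~{n_trap * 1e-6:.0f} molecules per cm^3)")
print(f"air:  {n_air:.2e} /m^3")
print(f"ratio: {n_air / n_trap:.1e}")
```

That works out to only about a hundred molecules per cubic centimeter, some seventeen orders of magnitude more dilute than the air in the room.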

Looking down the center of the positron entry tube. The black, funny-shaped piece in the center is blocking the path.


A highly sophisticated piece of scientific apparatus

Two weeks ago, our cooldown was complete with no problems, and the emotion was one of cautious optimism. We still needed to test the trap wiring, since any poor soldering job, improperly strain-relieved wires, etc. can all lead to broken or shorted connections when the experiment cools. Thankfully everything passed, and spirits were pretty high.

Then, the unexpected happened: positrons, which we load through the top of our apparatus, were not making their way into the trap. Looking down from above revealed why: a thin piece of insulator had fallen down the positron entry tube and was blocking passage into the trap. Just like that, our high spirits hurtled back down towards the earth; we would likely need to warm up the experiment, remove the positron tube, remove the blockage, put the experiment back together, and cool down once more.

However, improvisation – a necessary tool in the kit of an experimental physicist – was put to good use here. We were able to remove the blockage without warming up the apparatus by using – and I swear I’m not making this up – a stick with some tape on the end. OK, I’m being a bit simplistic: we used only cryogenic-compatible materials, with a thin-walled fiberglass tube to reduce heat transfer from room temperature to 4 K, and we set up an airlock system to avoid ruining the vacuum – but the main idea is the same. We were able to grab the blockage with the tape and lift it straight out the top. It’s gratifying to think that some simple ingenuity saved us over two weeks of work and thousands of dollars worth of cryogens.

Needless to say, our emotions once again swung upwards. After this episode, we’re back to running smoothly, and are beginning to perform some diagnostic experiments with electrons and positrons to prepare for our antiproton beam run. Undoubtedly, we’ll all remain in good humor until the next mini-crisis appears.


Artificial Intelligence and what it means to be Human

Thursday, March 24th, 2011

A good friend of mine, Brian Christian, has recently written a book and a teaser article in The Atlantic about Artificial Intelligence and what it can teach us about our humanity. As a physicist, I enjoy learning about AI and other broad technological advancements, and as a human, I enjoy learning and thinking about humankind; I find the intersection particularly intriguing.

Ken Jennings, 74-time Jeopardy champion, competing against IBM's Watson

At issue are the defining characteristics that make us uniquely human. Historically, we would compare and contrast ourselves with the rest of the animal kingdom. We would make statements like “Humans are the only animals that use tools” (a fine theory until primates were observed doing the same thing), or “Humans are the only animals to use language” (until the discovery of communication in dolphins, whales, and other species). With the advent of computing and advances in AI and machine learning, the list of uniquely human attributes is dwindling rapidly. To start, computers possess memory and arithmetic skills that easily outclass humankind’s. Perhaps more interestingly, computers have demonstrated superiority in specialized fields like chess (Deep Blue) and Jeopardy (Watson), emerging victorious against the top human contenders – impressive feats given the non-deterministic trajectory of the contests.

So what about us is unique? Is it possible to complete the sentence “Humans are the only animal to __________”, in a way that is accurate now and reasonably accurate in the future? As I ponder this, I can identify two broad areas in which the quintessential essence of humanity shines through (and I’m sure there are many others that I haven’t considered).

The first is one of subtext – an ability to “read between the lines.” We can tell when someone is bored without asking directly. When a friend says “everything’s fine,” we can tell immediately if everything’s fine, or if there is some concealed crisis. Bribery, seduction, and threats (to use the words of Steven Pinker) can all have their desired effects without being spelled out explicitly. As humans, this comes so naturally to us that most people take it for granted.

Subtle body language, wordplay, and innuendo all stem from the root of shared experience. A human from a different culture may easily misinterpret, or miss altogether, these forms of under-the-radar communication. Having a computer attain fluency in such a medium seems, to me, safely out of reach for now. It’s worth noting that Watson performed most poorly when faced with questions involving puns, jokes, or words used in unusual contexts. Deep Blue played chess most weakly not during the opening (memorizable) or the endgame (calculable), but during the middle game, where strategy, subtle positioning, possible gambits, and intent all need to be analyzed.

A second broad area that seems exclusively human is that of imagination and the will to create. The obvious application of imagination and creativity is to the arts; humans hold the monopoly on art for art’s sake, and computers have a long way to go to catch up. This may not be altogether unexpected. The arts are a subset of the humanities – those subjects which seek to inform us about the human condition. It is an enormous challenge for a computer, without any shared human experience, to teach us something about humanity. People certainly try; I like the example of a program that composes in the style of Bach. It can do pretty well for a short while, but then gets stuck in a rut – it fails to surprise us with new keys, transformations, and motives as would be expected from Bach. In a sense, these masters of high art – Bach, Shakespeare, Michelangelo – are guardians of our humanity, in that their creations stand alone as something only humanity can accomplish.

Of course, imagination and creativity are not limited to the arts. I would argue quite strongly that these concepts are of supreme value in the sciences, physics included. We may have ceded the grunt work of equation solving and data analysis to computers, but ideas are still born of humans. Einstein’s quote – that “imagination is more important than knowledge” – rings prescient here. The capacity of the human mind to see reality as it is, and to imagine the unknown laws that govern it, seems safe from the encroachment of computers, which necessarily rely upon pre-programmed, well-known governing laws.

I believe that identifying unique aspects of our humanity can have far-reaching implications, but I want to restrict myself to a quick discussion of education for now. In the US, modern schooling revolves around standardized testing, in which creativity is too often subordinate to fact memorization. This is troublesome in an era where we are losing (have already lost?) our knowledge supremacy to computers. It seems as though we should grab hold of creativity or whatever else makes us uniquely human, and build a culture around encouraging these pursuits. Otherwise, we may simply be leading future generations down a path of obsolescence.

Whatever the implications, thinking about the relationship of AI and our own humanity can teach us something new about ourselves, which in my mind makes it a fascinating subject.


Antiproton cooling and 100-year-old physics

Monday, March 14th, 2011

One of the great things about physics is its universality: theory developed to describe a certain phenomenon can often be widely applied in a multitude of situations. A century ago, the electron was a recent discovery, and the “plum-pudding” model of the atom had just been felled. Protons (and indeed, antiprotons), ion traps, and the rest of our modern toolkit remained unknown. Yet the methods used at ATRAP to cool antiprotons in an ion trap by some six orders of magnitude can be traced back to ideas formulated around the turn of the 20th century.

Why cool antiprotons at all? Well, when we make antihydrogen, its temperature is dominated by the temperature of the incoming antiproton, since the mass of the positron is so comparatively small. The fraction of trappable antihydrogen atoms decreases dramatically as the temperature goes up, so it’s important to start with the coldest possible antiprotons.
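To make “decreases dramatically” concrete, here is a sketch (my own illustration, not ATRAP’s analysis): assume the antihydrogen atoms have a Maxwell-Boltzmann energy distribution, and call an atom trappable if its kinetic energy is below the trap depth. The ~0.5 K depth below is an assumed, representative value for magnetic trapping of ground-state antihydrogen; the post does not quote a number.

```python
# Fraction of a 3D thermal gas with kinetic energy below the trap depth.
# The kinetic energy of a 3D Maxwell-Boltzmann gas is gamma-distributed with
# shape 3/2, so the CDF is the regularized lower incomplete gamma function.
from scipy.special import gammainc

TRAP_DEPTH_K = 0.5  # assumed trap depth, expressed in kelvin

def trappable_fraction(temperature_k: float) -> float:
    return gammainc(1.5, TRAP_DEPTH_K / temperature_k)

for T in (4, 20, 100):
    print(f"T = {T:3d} K  ->  trappable fraction ~ {trappable_fraction(T):.1e}")
```

In this toy picture, cooling the antiprotons from 100 K down to 4 K buys roughly two orders of magnitude in the trappable fraction.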

ATRAP Magnetic Field

Proof of our large magnetic field (yes, small Euro coins are slightly magnetic)

We start by noting that in our ion trap, we have a large, uniform background magnetic field. A charged particle in a magnetic field is confined to move in circles, constantly changing direction and therefore accelerating. If we reach back to 1897, we come across Larmor’s derivation that accelerating charges radiate away energy. Exactly how quickly depends on the magnetic field and the mass of the particle; in the ATRAP experiment, the antiproton radiates its energy away with a time constant of 36 years.
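For cyclotron motion in a uniform field B, Larmor’s formula gives an exponential energy decay with time constant tau = 3*pi*eps0*m^3*c^3 / (e^4*B^2). Here is a short check of the numbers quoted in this post (the 3.7 T field is my assumption; the post does not state the field, but this value reproduces both quoted time constants):

```python
# Cyclotron radiation damping time: tau = 3*pi*eps0 * m^3 * c^3 / (e^4 * B^2)
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
C = 2.99792458e8               # speed of light, m/s
E = 1.602176634e-19            # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # kg
M_PROTON = 1.67262192369e-27   # kg (the antiproton has the same mass)

def damping_time(mass_kg: float, b_tesla: float) -> float:
    """Seconds for a charge's cyclotron energy to decay by a factor of e."""
    return 3 * math.pi * EPS0 * mass_kg**3 * C**3 / (E**4 * b_tesla**2)

B = 3.7  # tesla (assumed)
print(f"electron:   {damping_time(M_ELECTRON, B):.2f} s")           # ~0.19 s
print(f"antiproton: {damping_time(M_PROTON, B) / 3.156e7:.0f} yr")  # ~37 yr
```

Because tau scales as the cube of the mass, the antiproton’s damping time is (1836)^3, or about 6 billion times, longer than the electron’s – which is the whole story behind the trick described next.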

But, there’s good news. For the same magnetic field, the much lighter electron radiates much more quickly – 2/10 of a second. Even better, the electron and antiproton have the same sign of charge (negative). They can be trapped simultaneously in the same voltage well, and allowed to interact (there are no annihilations, since the antiproton and electron do not form an antimatter-matter pair). So, we exploit the quick cooling of electrons by letting them collide with antiprotons in our trap. After only a minute or so, the electrons and antiprotons have come into thermal equilibrium with each other, and with their 4 Kelvin surroundings. The final antiproton temperature is actually closer to 20 Kelvin, though, because unwanted electrical noise makes its way down into our trap and acts as a heat source. Nonetheless, electron cooling successfully reduces the antiproton energy by a factor of 100000.
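The ~20 K end point can be pictured as a balance between cooling toward the 4 K bath and a steady noise-driven heating rate. Here is a toy steady-state estimate (my own illustration; the time constant and heating rate are assumptions, tuned only to make the balance land near 20 K):

```python
# Steady state of dT/dt = -(T - T_BATH)/TAU + HEAT_RATE
# gives T_eq = T_BATH + HEAT_RATE * TAU.
T_BATH = 4.0      # K, trap temperature (from the post)
TAU = 60.0        # s, assumed cooling time constant ("a minute or so")
HEAT_RATE = 0.27  # K/s, assumed heating from electrical noise

T_eq = T_BATH + HEAT_RATE * TAU
print(f"equilibrium temperature ~ {T_eq:.0f} K")  # ~20 K
```

The point is not the specific numbers but the structure: any constant heat source pins the equilibrium at a fixed offset above the bath temperature.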

(Side note: the same sort of physics explains why the LHC has to be so Large. The tighter the loop, the larger the energy loss due to radiation; at some point, the energy losses make the whole process wildly inefficient).

We’ve recently published a paper describing how we can further cool antiprotons to 3.5 Kelvin. We use the technique of adiabatic cooling, which is a fancy way to say that an expanding gas gets colder (provided nothing external puts in or takes away energy). Examples can be found in surprising places – it’s the reason why compressed air sprayed out of a can feels cold, why water vapor condenses into clouds as it rises and expands, and why a refrigerator can keep food cold. And, in keeping with the theme of this post, it’s all well described by thermodynamics worked out in the late 19th century. (Incidentally, the related process of adiabatic heating – compressing a gas makes it hotter – forms the heart of a diesel engine).
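The textbook version of this statement is the adiabatic relation for an ideal gas, T * V^(gamma-1) = constant. A quick sketch of the general principle (an analogy only; a trapped antiproton cloud is not a simple ideal gas, and these numbers are not ATRAP’s):

```python
# Reversible adiabatic expansion of an ideal gas: T * V**(gamma - 1) = const.
GAMMA = 5.0 / 3.0  # monatomic ideal gas

def adiabatic_temperature(t_init_k: float, expansion_factor: float) -> float:
    """Temperature after the volume grows by the given factor, with no heat exchange."""
    return t_init_k / expansion_factor ** (GAMMA - 1.0)

print(f"{adiabatic_temperature(20.0, 10.0):.1f} K")  # a 10x expansion takes 20 K -> ~4.3 K
```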

Larmor, Carnot, Boltzmann

Larmor, Carnot, and Boltzmann - 3 guys who never heard of an antiproton

At ATRAP, our “gas” is a cloud of antiprotons, which we can let expand in a controlled way by reducing the trapping electric field. We demonstrate that the measured temperature decreases as the volume increases – the hallmark of adiabatic cooling. It’s worth mentioning that we measure the final temperature of our antiprotons by observing the number that escape our trap as a function of trap depth; this traces out the tail of a Boltzmann distribution, from which we can determine the temperature – another idea more than 100 years old.
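Here is a sketch of that measurement idea with made-up numbers (my own illustration of the method, not ATRAP’s analysis code): the number of antiprotons energetic enough to escape a well of depth U falls off as exp(-U/kT), so the slope of log(counts) versus depth gives the temperature directly.

```python
# Recovering a temperature from the Boltzmann tail of escape counts.
import numpy as np

T_TRUE = 3.5  # K, pretend this is the unknown cloud temperature

depths_K = np.linspace(10, 30, 15)           # well depths, in kelvin units
expected = 1e6 * np.exp(-depths_K / T_TRUE)  # ideal Boltzmann tail
counts = np.random.poisson(expected)         # add counting noise

# Fit a line to log(counts) vs depth; the slope is -1/T.
mask = counts > 0
slope, _ = np.polyfit(depths_K[mask], np.log(counts[mask]), 1)
print(f"fitted temperature: {-1.0 / slope:.2f} K")  # ~3.5 K
```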

It amazes me that here we are doing cutting edge research, and concepts from the 19th century are still being put to good use. As experimentalists, we should consider ourselves lucky that the well from which we draw our ideas runs so deep.


Introductions

Monday, March 7th, 2011

Hello! I’m Phil, a grad student working at CERN, and I’m new to Quantum Diaries. In truth, I’m new to blogging as well; however, I love talking about physics, sharing my thoughts, and hopefully giving people something new to think about.

Now that I’ve introduced myself, I’d like to introduce the ATRAP experiment. ATRAP has been at CERN for over 20 years now, working on experiments with antihydrogen – the simplest atom made completely from antimatter (an antiproton nucleus orbited by a positron). The long-term goal has remained the same: trap large numbers of antihydrogen atoms, measure their energy levels, and compare them to those of (matter) hydrogen. This would allow a very precise test of the CPT theorem, which predicts that the two spectra are identical. Any difference in the energy levels of hydrogen and antihydrogen would violate CPT (a fairly central theorem in physics) and could only be explained by new physics beyond the Standard Model.

We’re not the typical CERN experiment. For one, our collaboration has orders of magnitude fewer people than the big CERN groups (there were 17 co-authors on our last paper). Also, unlike most CERN experiments, we’re trying hard to lower the energy of our particles, so that they may be more easily trapped. However, no special treatment here – we get the “standard-issue” CERN building, just like everyone else:

CERN Building 193

The entrance to the Antiproton Decelerator

AD hall

Inside the Antiproton Decelerator

We live at the Antiproton Decelerator (AD), which takes an injected antiproton beam and reduces the energy by a factor of ~1000, to 5 MeV. The AD is unique; other places in the world produce antiprotons, but CERN is the only place that slows them down to low enough energy for trapping. If we want to make trappable antihydrogen, we had better start with trappable antiprotons!

AD Blackboard

Chalkboard in the AD. Might as well get the Dan Brown references out of the way now...

In the days and weeks ahead I’ll go much more into detail about the experiment – for now, I’ll leave off with a picture of what our experiment looks like, complete with high-precision tape, zip-ties, and aluminum foil:

BTRAP

Experimental apparatus, surrounded by a large magnet and detector systems.
