Archive for April, 2009

I’m back. Some family matters came up unexpectedly. There was no way to keep all of the balls in the air at the same time so I had to decide which ones to drop. This was one of them.

For me, the hardest part about science is knowing when to stop, particularly when working on a paper that will be the last in series. One of the experiments I work on is the FOCUS experiment, which finished data taking about a decade ago and is just now finishing its last few papers.

For a little over a year now I have been chairing an internal review committee for a paper being written by a group of my colleagues on FOCUS. This is how high energy physics collaborations ensure quality control: a group of people who were not actively involved in a particular piece of work, but who are experts in that sort of work in general or in related fields, are asked to review what has been done, to offer suggestions for improvements, and ultimately to say that the work meets the standards the collaboration has set for itself. Only after passing this internal review will the paper be reviewed by the full collaboration, and only after a vote of the full collaboration will the paper be sent to a journal.

During the course of an internal review it is normal that new ideas on how to improve the work are suggested. Some of these will be a small amount of work and of obvious benefit; these will be done. Others will be a large amount of work and of uncertain benefit; these are usually not done. But many ideas will be in the middle, the toughest sort being an idea that has a clear benefit but which will require a large amount of work or, perhaps, an unknown amount of work. Some ideas look simple at first but eventually lead to redoing a major part of the work.

So this is the hardest part. There are some good ideas that, in the end, just won’t be done. How do you know when to stop? Last week we decided to stop.

If all goes well, FOCUS will submit our 57th physics paper. Look for it on the arXiv.org server in 2 or 3 weeks.


President Obama addressed the National Academy this morning.  I missed the telecast but I’ve been reading the transcript.

Federal funding in the physical sciences as a portion of our gross domestic product has fallen by nearly half over the past quarter century…We double the budget of key agencies, including the National Science Foundation, a primary source of funding for academic research, and the National Institute of Standards and Technology, which supports a wide range of pursuits – from improving health information technology to measuring carbon pollution, from testing “smart grid” designs to developing advanced manufacturing processes. And my budget doubles funding for the Department of Energy’s Office of Science which builds and operates accelerators, colliders, supercomputers, high-energy light sources, and facilities for making nano-materials. Because we know that a nation’s potential for scientific discovery is defined by the tools it makes available to its researchers.

He also finally officially announced ARPA-E, the Advanced Research Projects Agency for Energy, and a new commitment to improving science education. Exciting times we are in. And, not one to miss an opportunity, I bet he left a lot of the audience misty-eyed with this (it worked on me):

At root, science forces us to reckon with the truth as best as we can ascertain it. Some truths fill us with awe. Others force us to question long held views. Science cannot answer every question; indeed, it seems at times the more we plumb the mysteries of the physical world, the more humble we must be. Science cannot supplant our ethics, our values, our principles, or our faith, but science can inform those things, and help put these values, these moral sentiments, that faith, to work – to feed a child, to heal the sick, to be good stewards of this earth.

We are reminded that with each new discovery and the new power it brings, comes new responsibility; that the fragility and the sheer specialness of life requires us to move past our differences, to address our common problems, to endure and continue humanity’s strivings for a better world.


So finally it is time to write a bit about my work on calorimetry for the ILC. Not surprisingly, I did not get around to writing while still in Beijing.

Going home: Can you spot Munich on the board?

The first few days I still had to prepare my lecture, so I used all the time I could get to finish that. Then, once my lecture was given, I started working on my lecture for tomorrow at the Technical University of Munich (more about this in some later post). Add another afternoon of sightseeing and a night out with my friend in Beijing, and now, on the flight back to Munich, with my preparations for tomorrow finally done, 3 hours left to fly, and my battery still at almost 20% capacity, I can finally start writing.

To start off, what is calorimetry at the ILC about? The main task of the calorimeters in an ILC detector is to measure jets. A jet is a spray of particles flying into the detectors, which all come from the same original particle: A highly energetic quark or gluon created in the interaction. To understand what happened in the particle collision, we need as much information as possible about these original particles, most notably their momenta or their energy and direction. The problem is that free quarks and gluons do not occur in nature, so we can only observe particles that consist of several quarks. The reason for this is called confinement, and it is a weird feature of the strong interaction: Unlike other forces we know (electromagnetism is just one example here), the strong interaction is weak when strongly interacting particles are close together, and gets stronger the farther they are separated. Once a certain separation is reached, there is so much energy stored in the strong field that new quarks are created out of this energy. In this way, a highly energetic strongly interacting particle leaving the reaction zone will create many quarks, which in the end form so-called color-neutral (no charge of the strong interaction) particles. These particles together form the jet. Now, if you can measure the jet precisely, you also get precise information about the original quark.
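Quantitatively (a textbook one-loop result, quoted here for illustration rather than taken from this post), the strength of the strong interaction runs with the momentum transfer Q:

\[ \alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda_{\mathrm{QCD}}^2\right)} \]

where n_f is the number of quark flavors and \Lambda_{QCD} is of order 200 MeV. At large Q^2 (short distances) the coupling shrinks and quarks behave almost freely; as Q^2 approaches \Lambda_{QCD}^2 (large separations) the coupling blows up, and this is the regime in which the energy stored in the field converts into new quark pairs and, ultimately, the jet.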

A simulated event at the ILC. Multiple jets in the detector and many particle showers in the calorimeters.

The figure on the right shows you what events at the ILC will look like in the detector. In this picture, the calorimeters are the two outermost shells of the detector (see Anadi’s nice description of how a high energy physics detector works). I am focusing on the hadronic calorimeter, the thick outermost layer. Just by looking at the picture, you can clearly see the jets flying out radially in different directions in the detector. Measuring jets is not easy: You have many different particles, which you see in different parts of your detector. In the end, you have to combine the right particles to form your jet, otherwise you’ll never get the right energy for your jet, and with that for your primary quark or gluon.

Most particles in the jet, at least the more energetic ones, you can see in the calorimeters. This is why a common way of measuring jets is by adding up the energy seen in the electromagnetic and in the hadronic calorimeters. However, for charged particles (which make up the majority of all particles in a jet), much more precise information can be obtained from the tracking detectors, which measure the particle momentum via its curvature in a magnetic field. So, by adding this information, you can do a better job of getting the right energy for the jet. So, where is the catch? Well, all particles you see in the tracker also give you a signal in the calorimeter. Plus, there are particles (neutral ones) you cannot see in your tracker. So it is not possible to reconstruct your jets without using the calorimeters, and when using them, you have to be really careful that you do not count the same particle twice. You have to identify which signals in the calorimeters come from particles you already know from the tracker, and which are new ones. That is by no means easy, and very sophisticated software is currently being developed to push this idea as far as possible. These algorithms also have a nice-sounding name: Particle Flow Algorithms, or PFA for short.
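As a rough illustration of the particle-flow bookkeeping, here is a toy sketch in Python. All the names are invented for this example; the genuinely hard part, deciding which calorimeter clusters belong to which tracks (the matched_to_track set below), is simply assumed to be given here, and is exactly where the sophistication of real PFAs goes.

```python
# Toy illustration of particle flow: take charged-particle energies from
# the tracker, neutral-particle energies from the calorimeters, and skip
# calorimeter clusters already accounted for by a track.

def jet_energy(track_momenta, calo_clusters, matched_to_track):
    """track_momenta: charged-particle momenta from the tracker (GeV).
    calo_clusters: list of (cluster_id, energy_GeV) calorimeter deposits.
    matched_to_track: cluster_ids identified as showers of tracked particles."""
    energy = sum(track_momenta)      # charged part: the tracker is most precise
    for cluster_id, e in calo_clusters:
        if cluster_id not in matched_to_track:
            energy += e              # neutral part: calorimeter only
    return energy

# Two charged pions seen in the tracker; their showers (c1, c2) plus one
# photon cluster (c3) in the calorimeters. Adding c1 and c2 again would
# double-count the pions, so only c3 contributes from the calorimeter.
print(jet_energy([12.3, 7.8],
                 [("c1", 11.9), ("c2", 8.2), ("c3", 4.5)],
                 {"c1", "c2"}))      # -> 24.6
```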

The CALICE logo.

So, to enable the ILC to deliver the results we particle physicists long for, we need calorimeters that are optimized to deliver the best possible jet energy resolution using PFA. The goals are ambitious: An ILC detector should be twice as good at measuring jets as ATLAS, the detector system at the LHC with the best jet energy resolution. The development of the technologies needed for such calorimeters is the goal of the CALICE collaboration, an international team of close to 300 scientists from America, Asia and Europe. Within that team, I am working on the hadron calorimeter, and on the analysis of data we have taken in a still-continuing series of test experiments at DESY, CERN and Fermilab.
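For reference, “jet energy resolution” is usually quantified with the standard parametrization below; the target numbers are the commonly quoted ones, not figures from this post.

\[ \frac{\sigma_E}{E} = \frac{a}{\sqrt{E/\mathrm{GeV}}} \oplus b \]

Here a is the stochastic term, b the constant term, and \oplus denotes addition in quadrature. For particle-flow calorimetry at the ILC the goal usually quoted is around 30%/\sqrt{E}, i.e. a jet energy resolution of roughly 3–4% at typical jet energies.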

More about our ideas and studies in a later post; my laptop is about to call it a day, and I’m getting closer to home: already in EU airspace…


Duplicates

Saturday, April 25th, 2009

In high school I did competitive robotics through US FIRST Robotics on Team RUSH. If you aren’t familiar with the program, teams of high school students are mentored by professional engineers to build and compete with a robot. A new game is presented every year in January and the teams have less than 2 months to design and build the robot. This was not ‘battle bots’ – there was a focus on learning and competition.

One of the many lessons I learned on the team was to build duplicates. We had to build and ship the robot by a certain day, but it wasn’t yet time to compete. We would have time at the competition to make mechanical and software improvements to the robot, but we had to have a way of developing them. At the time the Detroit auto industry hadn’t collapsed (the numerous Michigan teams are primarily sponsored by auto engineering firms), so we had enough resources to build our own competition field and an additional robot. This meant that we could practice our driving and strategy while our competition robot was on its way to the first competition of the season.

Can this lesson apply to physics? The amount of time and money needed to build the robot seemed immense at the time, but it is orders of magnitude less than what it takes to build a particle physics experiment. We certainly can’t build two cryostats out of clean copper and send only one to WIPP. But we did build a mock-up of our time projection chamber to begin the build process. Our parts are so delicate that trying the assembly outside of a clean room first meant we had practice before we assembled the real thing. This didn’t come close to doubling the cost of the TPC – we had ordered parts with plenty of extras and built the mock-up out of flawed parts that would not have been suitable for the real TPC. Some parts weren’t made of pure materials and others weren’t plated in expensive conducting metals, as the real parts are.

Now we are trying to create a duplicate electronics setup. The detector is built and attached to the readout electronics, so we are debugging the electronics and the data acquisition software. While some collaborators are switching board positions and adjusting voltages, I’m trying to fix the software. It is quite the tango – it is hard to tell whether some problems are in hardware or software, and sometimes one group makes changes that mess up the other group’s debugging. Having a test electronics system, with the minimum hardware and computers needed to exercise the software, means that the two efforts will be able to proceed independently. Once the real electronics system is installed at WIPP (and the bugs are out of the software!) we can use the test system to work on upgrades to the electronics without interfering with the data runs at WIPP. We can even use it to develop prototype readout systems for ton-scale EXO.


Not a day at the beach

Saturday, April 25th, 2009

Only two weeks left until the end of the academic year! This is always a very busy period, which is my excuse for not writing anything recently. Very little academic business gets done around the university during the summer, so all sorts of things need to get wrapped up before the end of the term, and there are always so many year-end events for our students too. And of course I still have my class to teach; this is going fairly smoothly, but I will probably need every last minute in the next two weeks (or at least until I have prepared the final exam) to bring it to a happy ending.
As it happens, I also have a cluster of research-related travel right now — not helpful for getting my teaching done, but it gives me something to write about. I spent some of this week in San Diego, where those of us working on CMS software and computing gathered to discuss the state of the world. These meetings are more typically at CERN, but someone (I’m not even sure who, actually) came up with the brilliant idea of doing them next to an ocean this time instead. That’s great for me — not the ocean part, so much, but it’s always a challenge for me to get to CERN, what with the long distance and the fact that it’s hard to go for less than a week. For these meetings, I was able to teach on Tuesday morning and catch a flight here that night, and still attend most of the workshop.
As has been true for some time, the question we have been struggling with is whether we are ready for the start of the LHC, and if not, what we have to do to get there. I think that the greatest value of this meeting (heck, of any meeting, I suppose) was to bring together groups of people who don’t usually talk. It turns out that there were cases of people working on different aspects of particular problems who had very different understandings of some of the issues. For instance, there was a dispute over whether “24 hours” actually meant 24 hours, or something more like 48 hours. And in some cases, one group of people didn’t know about work that another group was doing that could in fact be very useful to the first group. In short, there’s nothing like actually getting people in the same room to explain themselves to each other.
But once again, I was struck by just how complicated this experiment will be. The challenge from the computing perspective is how interconnected everything is. We want to make sure that a user can’t do anything that could essentially knock over a site (or possibly the whole distributed computing system) by accident. Certainly there were times in the meetings when someone would ask, “why do we have to make it so hard?” but honestly, sometimes it just is that hard.
Anyhow, next week I’ll be in Denver for the April general meeting of the American Physical Society. I’ll write about it then…much more physics content, I promise!

Title page of “Discorsi e Dimostrazioni Mathematiche” in 1638

Before closing this series on “Quantification”, I have to say that quantification alone is of course not enough to do physics. A good example is Einstein’s theory of relativity. As you often read in textbooks, there are Galileo’s principle of relativity and Einstein’s principle of relativity; the latter is the finite-light-velocity version of the former.

In his 1638 book, “Discorsi e Dimostrazioni Mathematiche”, Galileo even proposed to measure the speed of light using the distance between two mountains, but light was too fast for a value to be obtained. In 1676 Ole Christensen Romer, a Danish astronomer, made the first quantitative measurement of the speed of light using a satellite of Jupiter, obtaining 214,300 km/s. Romer’s result that the velocity of light was finite was not fully accepted until James Bradley measured the aberration of light in 1727, obtaining 299,042 km/s. In 1849 Armand Hippolyte Louis Fizeau, a French physicist, obtained 315,300±500 km/s on the ground using a special apparatus with gears. In 1862 Jean Bernard Leon Foucault, a French physicist, obtained 298,000±500 km/s with a system of mirrors.
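Romer’s method can be compressed into one line (a reconstruction for illustration; the 22-minute delay across the diameter of the Earth’s orbit is the historically quoted figure). Eclipses of Jupiter’s satellite appear later when the Earth is farther from Jupiter, because the light has farther to travel, so

\[ c \approx \frac{2\,\mathrm{AU}}{22\ \mathrm{min}} \approx \frac{3\times 10^{8}\ \mathrm{km}}{1320\ \mathrm{s}} \approx 2.3\times 10^{5}\ \mathrm{km/s}, \]

in rough agreement with the value quoted above, especially given how imprecisely the astronomical unit was known at the time.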

We can see that the quantification of the speed of light was already well done by the middle of the 19th century. Light was also identified as an electromagnetic wave from the value of its speed. Nobody, however, realized the special meaning of the fact that light has a finite speed. Maxwell’s equations of electromagnetism obey the version of the relativity principle with a finite speed of light, but Newton’s equation of motion does not! This observation led Einstein to postulate that the speed of light in free space is the same for all observers. It was the leap to modern physics. So quantification is indispensable, but not by itself enough, to reach the physics.
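The identification rested on a striking numerical agreement: Maxwell’s equations predict electromagnetic waves travelling at

\[ c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \approx 3.0\times 10^{8}\ \mathrm{m/s}, \]

where \mu_0 and \varepsilon_0 come from tabletop measurements of magnetism and electricity; this matched the optically measured speed of light far too well to be a coincidence.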

The first part of the paper on the theory of special relativity by Einstein, Annalen der Physik, 17(1905)

Before closing this series on the importance of “quantification” in physics, it goes without saying that quantification alone is not sufficient, so I would like to give an example. That example is precisely Einstein’s theory of relativity. Books often speak of Galileo’s principle of relativity and Einstein’s principle of relativity; the difference between them is whether light is taken to be infinitely fast or to have a finite velocity.

In his “Dialogues Concerning Two New Sciences” of 1638, Galileo proposed an experiment to measure the speed of light: on one of two distant mountaintops a lamp would be flashed on and off, and the person on the other would determine the speed of light from the time lag. Naturally, light was too fast and no value could be obtained. Later, in 1676, the Danish astronomer Romer measured the speed of light for the first time in human history, from the apparent behavior of Jupiter’s moon Io, obtaining 214,300 km/s. This result, that the speed of light is finite, was not fully accepted until 1727, when Bradley used the aberration of light to measure the speed as 299,042 km/s. In 1849 the French physicist Fizeau devised an apparatus using a rotating toothed wheel and, for the first time, measured on the ground the speed of light, which until then had been measured only by astronomical observation, obtaining 315,300±500 km/s; likewise the French physicist Foucault, in an experiment with mirrors, obtained 298,000±500 km/s.

Thus by the middle of the 19th century the quantification of the speed of light had been well accomplished, and from that value it had also become clear that light is in fact an electromagnetic wave. Yet nobody understood the true meaning of light having a finite velocity. Maxwell’s equations, which describe electromagnetism, satisfied the principle of relativity with a finite speed of light, but Newton’s equation of motion remained an equation satisfying only Galileo’s principle of relativity. Einstein, noticing this discrepancy, built the “special theory of relativity” on the principle that the speed of light is the same in any inertial frame, whatever its velocity. And this was the leap to the modern physics of the twentieth century. It was a good example that quantification is necessary, but not by itself sufficient, to reach the truth.


At KITP, Santa Barbara.

Friday, April 24th, 2009

At last, I have come to Santa Barbara, California, again. I am sitting at a desk in a room with a great ocean view. Yes, this is Santa Barbara.

This place is very special for me. This is my third visit to SB: the first was 10 years ago, as a postdoc here at KITP (the Kavli Institute for Theoretical Physics; at that time it was called just ITP, and I have a cap with the printed logo “itp.ucsb” which is now a rare item…). The second was 4 years ago, when I stayed here for two months, participating in a long-term KITP program, “mathematical aspects of string theory”. Now the third time has come, but only for three weeks. Given that the first visit was for a year and the second for two months, it seems my visiting periods are getting shorter, unfortunately. But I really love this place.

10 years ago, when I came here for the first time, I decided to write a blog, for myself and for my former supervisor at Kyoto University, to tell him that I was surviving in a foreign country, the US. That blog has now continued for almost ten years, which is surprising, at least to me. I had never thought about how long I could keep it going.

All of the material in my Japanese blog is written in Japanese. I know that I, myself, change personality when I change language. That should be true for anyone, I guess, except for native bilinguals. When I speak in English, the Japanese part of my nature recedes slightly. By the Japanese part I mean, for example, an awareness of slight differences in age when I speak to my colleagues, a different way of asking someone for something, and so on. Summed up, this amounts to a drastic change in communication, completely different from the English way of communicating. So I wonder whether my blog, now written in English, may have a very different flavor from the Japanese one I have grown used to over the past ten years.

Anyway, I am now in Santa Barbara. I have too many stories to tell about Santa Barbara, partly because, 10 years ago, it was my first experience of living outside Japan and I went through a lot of “culture shocks.” When I came here last time, 4 years ago, the very feelings I had felt 10 years earlier came back to me really vividly. When I saw a tree, a shop, the ocean, a road, a flower, a park, anything — literally anything reminded me of every feeling that had come to me 10 years before. So, 4 years ago, the visit was truly exciting. I was so excited that, in fact, I was quite exhausted.

But this time, something is different. Why? Probably I expected the same feeling I had 4 years ago. But the truth is, I haven’t felt that excitement this time. The talks here are very good, and I have enjoyed them very much, as before. Then what is different this time? I suddenly understood that I myself have changed a lot, probably over these years. My way of thinking must have changed, and that should be the reason why visiting Santa Barbara felt so different this time.

I will say more about the difference in coming posts. But in any case, I love Santa Barbara.


Bad Science

Friday, April 24th, 2009

It’s a lovely sunny Friday afternoon in Hamburg, and the weekend looks equally good weather-wise. I should be in a great mood and have a jolly physics tale to tell. I find, however, that I’ve been disturbed all week by the release of the CIA torture memos and the subsequent pronouncement that the torturers had no need to fear, as there was no intention to prosecute them – “it was a time for reflection”, not justice. This reassurance had to be quickly followed up by a visit to CIA headquarters with the message that they had done a great job.

Now, I’m not an ethicist or a lawyer. I will not attempt to understand the apparent contradiction between something which is morally repugnant and illegal (as well as ineffective) on one hand, and a “good job, well done” on the other. Likewise, I’m sure I don’t understand the legal subtleties separating the position that torture under direction from the previous administration was legal from, say, the Nuremberg defence of the Nazis – that they were only following orders.

No, what my concern properly should be is the “science of torture”. For make no mistake, a lot of research time and effort has been spent on trying to understand how to break a person. Much of the modern research was based on CIA-funded studies from the 1950s, “experimented” on in places like Vietnam and Central America. One conclusion was that medieval-style tortures (of which waterboarding is one) consistently fail to produce reliable information. As a case in point, we learn from the recent memos that one victim was water-tortured 288 times. Presumably there were still one or two items he forgot to mention after the first couple of dozen sessions. Or perhaps he successfully managed to endure 287 successive boardings and then suddenly gave in on the 288th out of sheer boredom.

What was discovered, in the ever-refining art of torture, was that other methods may be preferable:

“Guantanamo Bay turned into a de-facto behavioral science laboratory,” McCoy told LiveScience, where sensory deprivation and self-inflicted pain—allowing a detainee who had stood for hours to sit if he would only “cooperate”—regularly took place. […] Though captives are less resentful when tortured psychologically, it doesn’t make their statements any more trustworthy

Now that sounds like my sort of torture. Anyone who has stood in a bank queue in the UK knows that they could put up with that almost indefinitely if pressed. This point was made in complaint by one Washington bureaucrat privy to the memos, who himself “had to stand for 8 hours behind his desk all day” (why?) – couldn’t they find something tougher? It’s not all fun and games, however:

You simply make somebody stand for a day or two. And as they stand – okay, you’re not beating them, they have no resentment – you tell them, “You’re doing this to yourself. Cooperate with us, and you can sit down.” And so, as they stand, what happens is the fluids flow down to the legs, the legs swell, lesions form, they erupt, they separate, hallucinations start, the kidneys shutdown.”

I don’t want to go on at length about how torture – psychological as well as physical – hasn’t helped. How almost all prisoners released from Guantanamo have had no charges filed against them or have been found not guilty. How, after 8 years of war, we are further away from an end to terrorism than ever.

Presumably, if a whole swathe of people have a grievance against you because they perceive that they have been mistreated in the past, then torturing some of their innocent number won’t help soothe them. You can’t torture or kill everyone with a grievance. Even the most hardened hawk must get sick of infinite war sooner or later.

My real concern, as a scientist, is this: what sort of people carry out ‘research’ like this? I like to think of myself as a person engaged in science because I see the value in knowledge for its own sake. What can we say of ‘scientists’ with stopwatches and notebooks who try to find the quickest method for dehumanising a person? Could we excuse Josef Mengele as being primarily concerned with advancing medical science?

Of course not. The conclusion I come to is that you can’t separate Science from Society, or indeed from Politics, Law or Morality. This must be because, as a human endeavour, Science is related to all other fields of human activity. Knowing a physicist who went for an interview at a well-known multinational arms manufacturer and was asked whether he minded doing research on things that kill people, I can state that there is such a thing as Bad Science.


Collisions in the Universe

Thursday, April 23rd, 2009

Last week astronomers observed the most crowded collision in the Universe! Four clusters of galaxies poured into a crowded stream of galaxies 13 million light-years long. (Eventually our Milky Way will merge with the neighboring Andromeda galaxy as well!) These galactic events can probe the existence of so-called “Dark Matter”. In fact, particle physicists have developed a superb model, with predictions confirmed at the per-mille level. But how much of the Universe does the Standard Model explain? Just 4%! The rest is out there to be discovered.

I am not an expert in astronomical measurements, but these events do grab my attention. Let’s start from a simple definition of Dark Matter: matter that cannot be detected through its emitted radiation; it is not visible. As of today, astronomers measure its contribution to the total mass of the Universe to be ~25%.

How did we infer the existence of Dark Matter in the first place? Just as the Earth orbits the Sun due to gravitational attraction, stars in galaxies orbit the center of the galaxy. However, the amount of visible mass is not enough to explain the rotational velocities: a large component of non-visible mass must exist. This is one of the proofs, along with the orbital velocities of galaxies in clusters of galaxies, and gravitational lensing.
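The rotation-curve argument can be made quantitative in one line (a standard derivation, added here for illustration). Equating gravitational attraction with the centripetal acceleration of a star at radius r from the galactic center gives

\[ v(r) = \sqrt{\frac{G\,M(r)}{r}}, \]

where M(r) is the mass enclosed within r. If essentially all the mass sat in the visible central region, v would fall off as 1/\sqrt{r} in the outskirts; the measured rotation curves instead stay flat, which requires M(r) to keep growing with r, i.e. a halo of unseen mass.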

Gravitational Lensing

The presence of mass can in fact be revealed by the fascinating phenomenon of “gravitational lensing”. Imagine a star far from the Earth. If the light from the star travels to the Earth without encountering obstacles, we see a spot of light. However, if there is a large amount of mass (say, a galaxy) between us and the star, the light from the star changes its path (see the picture on the left). The gravity of the intervening galaxy acts like a lens, redirecting the light rays: it bends the light. The gravitational lens does not create one single image of the star, but multiple ones. It can also distort the star’s disk-like shape into an ellipse. If the intervening galaxy were perfectly symmetric with respect to the line between the star and the Earth, we would see the star’s light as a ring!
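For a compact lens, the geometry sets a characteristic angular scale, the Einstein radius (a standard formula, quoted here for orientation):

\[ \theta_E = \sqrt{\frac{4\,G\,M}{c^2}\,\frac{D_{ls}}{D_l\,D_s}}, \]

where M is the lens mass and D_l, D_s and D_{ls} are the distances to the lens, to the source, and from lens to source. A perfectly aligned source appears as a ring of this angular radius, and since \theta_E grows as \sqrt{M}, the observed distortions effectively weigh the lens, dark component included.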

Image of gravitational lensing

What happens when two clusters of galaxies collide? By now we know that a cluster of galaxies is a gravitationally bound object, and among the densest parts of the Universe. Stars constitute ~2% of its total mass, while the so-called “intergalactic gas” contributes ~15%. The remaining mass is still in the Dark. The clusters collide at speeds of millions of miles per hour, and several observatories are taking pictures of these titanic events.

The Hubble Space Telescope, the Magellan Telescope and the Very Large Telescope provide a map of the total mass (dark and ordinary) using visible light. The gravitational lensing indicates the location of the Dark Matter component (blue). As an example you can see the pictures from the well-known “Bullet Cluster”, observed in 2006. The Chandra data enabled the astronomers to accurately map the position of the ordinary matter by itself, mostly in the form of hot gas, which glows brightly in X-rays (shown in pink).

Image of Dark Matter (above); Image of visible mass (below)

As the clusters travel in opposite directions, they eventually collide. The picture below shows the mass distribution after the collision. The ordinary matter slowed down compared to the Dark Matter, and the two components separated. This is due to the different forces exerted on the dark and visible mass: Dark Matter particles interact with each other only very weakly or not at all, apart from the pull of gravity, while ordinary matter experiences a larger “friction” and therefore slows down during the collision.

The separation provides observational evidence for dark matter.

The "Bullet cluster" collision

The "Bullet cluster" collision

What’s the Nature of Dark Matter?

A variety of cosmological data suggests that Dark Matter may be made of relics of particles present in the early universe. Currently the best theory to explain the origin of dark matter is Supersymmetry (SUSY), which predicts the existence of a “superpartner” for each Standard Model particle. The lightest superpartner of the neutral bosons (the Z and the Higgs bosons), called the “neutralino,” is an excellent candidate for this elusive form of matter. Being able to observe the SUSY particles would be crucial for a deep understanding of the universe. Superparticles could be produced in proton-antiproton collisions at the Tevatron and in proton-proton collisions at the LHC.
The experiments at the Tevatron accelerator, CDF and D0, are desperately seeking a sign of SUSY in the collisions stored on tape; however, these particles – if they exist – might be heavier than 100 times the proton mass. ATLAS and CMS are tuning their tools to be ready for the upcoming LHC collisions!


Moving Physics Bits

Wednesday, April 22nd, 2009

I work on a very hard experiment. Most experiments are hard, even outside of particle physics – it has been a very long time since anyone could simply drop two objects and learn something new. In my experiment we have to worry about the number of Uranium atoms in a few tons of copper. We had to find ways to make electrical contacts and carry signals without the normal mechanisms of solder or wires. We have a cleanroom that we shipped from Stanford, CA to Carlsbad, NM and then put 2000 ft underground in a salt mine. My collaborators are researching ways to grab a single barium ion and identify it with lasers. My experiment has overcome a lot of obstacles, but in some ways the most recent problem is the most bizarre: how do you get the data out of the salt mine?

Particle physics experiments usually produce huge amounts of data, which always pose a challenge to process and store. The LHC experiments see Terabytes (1000 GB) of raw data a second, which is filtered and recorded at around 1.8 GB a second. Our maximum rate is around 80 MB/s (about 4% of the LHC rate), which we will sustain for a few hours during our calibration periods. The big collider experiments have far more data, but they also sit inside state-of-the-art computing facilities. My home network can’t handle our data rate, but almost any academic network could. If we were trying to get this data from one side of SLAC to the other, it wouldn’t be much of a problem. However, we are trying to get it out of a government salt mine.

WIPP Entrance (note the fence)

The WIPP site has incredible security, which I’m sure everyone appreciates, since it is a nuclear waste storage site. However, their security protocols mean we can’t use their network. We have our own 1.5 MB/s line going into the mine. At least the last time I was there (back in October), the network wasn’t always working. It will never be good enough for transferring our data; it is barely enough to transmit commands to control the experiment remotely, such as when the mine is closed. We looked into trying to get a better connection, but that would mean getting a new line installed between WIPP and Carlsbad, which is something like 35 miles. It might be cheaper to get our data to the moon!
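A back-of-the-envelope comparison makes the problem concrete (a sketch; the two rates are the ones quoted above, while the disk size and courier time are my own illustrative assumptions):

```python
# Rough numbers: how badly does the mine's network lose to a courier?
# The 1.5 MB/s line and the 80 MB/s calibration rate are from the post;
# the 1 TB disk and the 3-day shipping time are illustrative assumptions.

LINK_RATE = 1.5e6        # bytes/s, the dedicated line into the mine
CALIB_RATE = 80e6        # bytes/s during calibration running
CALIB_HOURS = 3          # "a few hours" of calibration

calib_volume = CALIB_RATE * CALIB_HOURS * 3600       # bytes per calibration
days_over_link = calib_volume / LINK_RATE / 86400    # days to trickle it out

DISK_SIZE = 1e12         # assume a 1 TB transfer disk
SHIP_DAYS = 3            # assume ~3 days WIPP -> SLAC by courier
ship_rate = DISK_SIZE / (SHIP_DAYS * 86400)          # effective bytes/s

print(f"one calibration run: {calib_volume / 1e9:.0f} GB")          # ~864 GB
print(f"over the 1.5 MB/s line: {days_over_link:.1f} days")         # ~6.7 days
print(f"shipped 1 TB disk: {ship_rate / 1e6:.1f} MB/s effective")   # ~3.9 MB/s
```

A single calibration run would take about a week to trickle out over the line, while a shipped disk behaves like a fat, very-high-latency pipe.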

Since the data can’t get out via wires, we are left with a choice between tapes and hard drives. Tape drives might seem old-fashioned, but they are still used in science: they are less fragile than hard drives, and giant tape-loading robots simplify the data transfer process. Fragility matters because we will be shipping the drives back and forth between WIPP and SLAC. Our collaboration has found only one experiment so far (SAUND, in the Bahamas) that ships hard drives, and their data volume is less than ours – we’ll practically be juggling disks going back and forth. We’ll have the delay of transferring the data from the WIPP computers to the drive to be shipped (or at least backing it up), the transit time, and then the time to get it off the transfer disk.

So forget determining the neutrino mass – that’s easy.  How do we get our data from WIPP to SLAC with the minimum delay and cost?  Moving those bits, with all of our physics in them – that’s hard.
