Posts Tagged ‘physics’

–by T.I. Meyer, Head of Strategic Planning & Communication

I was at a seminar recently, and they posed the following question: Suppose you are 2 metres away from a solid wooden fence with a small hole cut out in it. As you watch the hole, you see the head of a dog go by, and then you see the tail of a dog go by. You see this happen, say, three times in a row. What do you conclude?

The conclusions are less interesting, I think, than the space of all possible conclusions. Intuitively, as human beings, we would think there is a RELATIONSHIP between the head and the tail of a dog. What are the possible types of relationships?

  • Causation. We might think that the head of a dog CAUSES the tail of a dog. This is perhaps the most powerful and most natural pattern of our human brain. We are always looking for cause and effect. But, depending on how much quantum mechanics you shoot into your veins, is causation really real or is it just a human construct? Consider how sure you are, as an individual, about all the causes and effects in your life and your surroundings. Are you sure about cause and effect?
  • Coincidence. It could be that the two events (the sighting of a dog head and the sighting of a dog tail) occurred together simply by random chance. If we watched longer, we might see something else. How often do we mistake coincidence for cause and effect?
  • Correlation. It could be that the head of a dog is correlated with the tail of a dog, in the sense that they “arise together” on a common but not causal basis. Correlation is a powerful concept in statistics, where it suggests that two events often happen together but not because one necessarily causes the other. (A short simulation after this list makes the point concrete.)
  • Parts of a Whole. This is the “true” answer for the dog sighting; a dog head and a dog tail are parts of a whole that we see through the fence. Thus, there is no real cause and no correlation and no coincidence; we are simply observing two instances of some common underlying connection – that a living dog’s body has both a head and a tail.
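
To make the correlation idea concrete, here is a minimal simulation, a sketch in Python with invented numbers: a hidden common cause (a dog passing the hole) drives both the head sightings and the tail sightings, so the two are strongly correlated even though neither causes the other.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    # Hidden common cause: does a dog pass the hole in a given minute?
    dog_passes = rng.random(n) < 0.1

    # Head and tail sightings are both driven by the hidden dog,
    # each observed with a little inefficiency.
    head_seen = dog_passes & (rng.random(n) < 0.95)
    tail_seen = dog_passes & (rng.random(n) < 0.95)

    # The sightings are strongly correlated (about 0.9 here)...
    r = np.corrcoef(head_seen, tail_seen)[0, 1]
    print(f"correlation(head, tail) = {r:.2f}")

    # ...yet neither causes the other; both are parts of a whole.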

In physics, we rely on this set of approaches. We worry about whether we have established causality, correlation, coincidence, or parts of a whole. When we measure a frequently occurring set of “particle debris” after a collision of two particles, we wonder if the collision “caused” the debris or if the debris actually reflects “part of a whole.” We apply rigorous statistical cross-checks and tests to assure ourselves that we have “watched long enough” to be confident (in a quantitative fashion) about our interpretation.

It is in this same realm that we often run into the confusion of pseudo-science, which tries to pin everything on cause and effect or something else entirely. Pseudo-science almost always boils down to someone claiming cause and effect when what they might really be observing is simply an unexamined or unexplained relationship between two events or two occurrences. Part of the job of science is to provide a systematic methodology to tease out what these relationships are. In fact, science is aimed at mastering these observed relationships so that we can make “predictions.”

But why do humans love cause and effect so much? It certainly seems “easy to understand.”

I propose a somewhat silly response, perhaps based on Dawkins or Gould or Pinker. Cause & effect is the most precautionary approach for human beings wandering in the wild trying to survive predators, hunger, and other hazards. For instance, if you see the paw prints of a roaming tiger, the best survival strategy is to assume that a tiger caused those prints and you should get going in the other direction. A scientist might want to stop and consider whether the prints were fresh, whether they fit the characteristics of the tiger you saw yesterday, and so forth. But a human brain focused on survival is optimized for making quick calculations using the cause & effect principle to save its own skin.

So, take a look around you and your world. In how many ways and in how many places do you see that we rely on cause & effect as an explanation because it is convenient?

Moreover, what other categories of relationship do you see? And what experiments would you conduct to help separate out these types of relationships?


This post was written by Brookhaven Lab scientists Shigeki Misawa and Ofer Rind.

Run 13 at the Relativistic Heavy Ion Collider (RHIC) began one month ago today, and the first particles collided in the STAR and PHENIX detectors nearly two weeks ago. As of late this past Saturday evening, preparations are complete and polarized protons are colliding, with the machine and detectors operating in “physics mode,” which means gigabytes of data are pouring into the RHIC & ATLAS Computing Facility (RACF) every few seconds.

Today, we store data and provide the computing power for about 2,500 RHIC scientists here at Brookhaven Lab and at institutions around the world. Approximately 30 people work at the RACF, which is located about one mile south of RHIC and connected to both the Physics and Information Technology Division buildings on site. There are four main parts to the RACF: computers that crunch the data, online storage containing data ready for further analysis, tape storage containing archived data from collisions past, and the network glue that holds it all together. Computing resources at the RACF are split about equally between the RHIC collaborations and the ATLAS experiment running at the Large Hadron Collider in Europe.

Shigeki Misawa (left) and Ofer Rind at the RHIC & ATLAS Computing Facility (RACF) at Brookhaven Lab

Where Does the Data Come From?

For RHIC, the data comes from heavy ions or polarized protons that smash into each other inside PHENIX and STAR. These detectors catch the subatomic particles that emerge from the collisions to capture information—particle species, trajectories, momenta, etc.—in the form of electrical signals. Most signals aren’t relevant to what physicists are looking for, so only the signals that trip predetermined triggers are recorded. For example, with the main focus for Run 13 being the proton’s “missing” spin, physicists are particularly interested in finding decay electrons from particles called W bosons, because these can be used as probes to quantify spin contributions from a proton’s antiquarks and different “flavors” of quarks.
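
As a rough sketch of the trigger idea (not the actual PHENIX or STAR trigger logic, which runs in dedicated hardware at enormous rates; the thresholds and event fields below are invented for illustration):

    # Toy trigger: keep an event only if it fires a predefined condition,
    # e.g. a high-energy electron candidate such as a W boson decay would
    # produce. All numbers here are made up for illustration.

    def passes_trigger(event):
        if event["electron_energy_gev"] > 30.0:   # W -> e nu candidate
            return True
        if event["total_energy_gev"] > 200.0:     # unusually large deposit
            return True
        return False

    events = [
        {"electron_energy_gev": 41.2, "total_energy_gev": 95.0},   # kept
        {"electron_energy_gev": 2.3,  "total_energy_gev": 88.0},   # dropped
        {"electron_energy_gev": 5.1,  "total_energy_gev": 310.0},  # kept
    ]

    recorded = [e for e in events if passes_trigger(e)]
    print(f"recorded {len(recorded)} of {len(events)} events")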

Computers in the “counting houses” at STAR and PHENIX package the raw data collected from selected electrical signals and send it all to the RACF via dedicated fiber-optic cables. The RACF then archives the data and makes it available to experimenters running analysis jobs on any of our 20,000 computing cores.

Recent Upgrades at the RACF

Polarized protons are far smaller than heavy ions, so they produce considerably less data when they collide, but even so, when we talk about data at the RACF, we’re talking about a lot of data. During Run 12 last year, we began using a new tape library to increase storage capacity by 25 percent for a total of 40 petabytes—the equivalent of 655,360 of the largest iPhones available today. We also more than doubled our ability to archive data for STAR last year (in order to meet the needs of a data acquisition upgrade), so we can now sustain 700 megabytes of incoming data every second for both PHENIX and STAR. Part of this is due to new fiber-optic cables connecting the counting houses to the RACF, which provide both increased data rates and redundancy.
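
The iPhone comparison is easy to check yourself, assuming the largest iPhone of the day held 64 GB:

    # 40 petabytes expressed in 64 GB iPhones (binary prefixes assumed).
    petabytes = 40
    gb_per_petabyte = 1024 * 1024   # 1 PB = 1024 TB = 1024 * 1024 GB
    iphone_gb = 64

    print(petabytes * gb_per_petabyte // iphone_gb)   # 655360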

With all this in place, along with those 20,000 processing cores (most computers today have two or four cores), certain operations that used to require six months of computer time can now often be completed in less than one week.

Looking Ahead

If pending budgets allow for the full 15-week run planned, we expect to collect approximately four petabytes of data from this run alone. During the run, we meet formally with liaisons from the PHENIX and STAR collaborations each week to discuss the amount of data expected in the coming weeks and to assess their operational needs. Beyond these meetings, we are in continual communication with our users, as we monitor and improve system functionality, troubleshoot, and provide first-line user support.

We’ll also continue to work with experimenters to evaluate computing trends, plan for future upgrades, and test the latest equipment—all in an effort to minimize bottlenecks that slow the data from getting to users and to get the most bang for the buck.

— Shigeki Misawa – Group Leader, RACF Mass Storage and General Services

— Ofer Rind – Technology Architect, RACF Storage Management


–by T.I. Meyer, TRIUMF’s Head of Strategic Planning & Communication

“So, did the 8 pieces of artwork actually generate any new insights for the physicists about neutrino oscillations,” asked the gentleman in the fifth row of the auditorium. I was on stage with my colleague Professor Ingrid Koenig from Emily Carr University of Art & Design. We were leading a 75 minute session at the Innovations: Intersection of Science & Art conference, curated by Liz Lerman and organized by Wesleyan University in central Connecticut.

The gentleman, chair of Wesleyan’s department of environmental science, repeated his question, “So you said this project was about seeing if you could have art influence physics rather than just the other way around. Well, did it work?”

Damn good question. I looked at Ingrid for a moment and then responded: “Nope.” But then I continued. No, we did not achieve success in using physics-inspired artwork to change the course of particle physics. But yes, in addition to learning that we posed the wrong hypothesis, we did achieve three other outcomes: (1) We constructed and executed one of the first research experiments at the intersection of art and science; (2) We documented a carefully controlled interaction of artists and particle physicists; and (3) We launched an inquiry that now has a national laboratory (TRIUMF) musing about how to exercise its influence in local and national culture for the advancement of society.

What was all this about? We were invited to lead a session at this conference because of the “RAW DATA” project for which TRIUMF and Emily Carr collaborated. For the full story on our “experimental research project,” please see this handsome website. One thing we discussed in the Q&A period (of course!) was the next step in the research. Perhaps rather than focusing on an experiment where the “work” of scientists was transferred to artists (whose “work” in turn was transferred to other artists and then back to scientists), we should construct an experiment where a “practice” or “process” of science (and art) was transferred. For instance, one thing scientists and artists both deal with is uncertainty and ambiguity. It was suggested that there might be something valuable uncovered if we had scientists and artists sharing their approaches to dealing with and communicating uncertainty.

The purpose of the conference was to pull together scientists, artists, and teachers from across North America to compare emerging trends and look for common opportunities for teaching at the intersection of art and science as well as for performing research at the intersection of art and science. In many regards, universities are starting to respond to the teaching opportunity but are less organized in exploiting the research opportunity. For instance, a key thread at the conference was the distinction between “art working for science” and “science working for art” when the real question might be, “What can science and art do together?” Lofty goals, of course, especially when sometimes the first step of bringing the fields together might actually be some “service” for the other side.

Better yet, I was not the only particle physicist there! Sarah M. Demers, an ATLAS physicist from Yale of some fame, participated as well, based on her experience co-teaching a “Physics of Dance” course with famed choreographer Emily Coates. The duo gave a fascinating presentation that started out with an inquiry “How do I move?” or rather “Why can I move?” Starting from the observation that atoms are mostly empty space and gravity ultimately attracts everything, they discussed why we can stand up at all (electrostatic repulsion between the electrons orbiting the atoms of the floor and those orbiting the atoms in my shoe on my foot in my sock). Then the question became, “How can I actually move my body at all if everything is repulsive and forces are balanced?” The answer came next, articulated by the dancer/choreographer who talked about how we use friction to generate a net force on our center of mass and can then use electrical impulses to stimulate chemical reactions in our muscles to push against ourselves and the floor. And then the talk moved to how to present and experience the Higgs field and the Higgs boson…in the form of a dance. WOW.

Throughout the 36 hours of this intensive, multi-dimensional conference (yes, we did “dance movement” exercises between sessions to help reflect and internalize the key points of discussions), I took copious notes and expanded my brain ten-fold.

A few other comments from my notebook.

There are really only two things that humans do: experience or share. We are either experiencing reality or we are sharing some aspect of it via communication (and yes, one can argue that communication does occur within reality!). Doing something is an experience, making a discovery is an experience, listening to music is an experience. And teaching, publishing a scientific paper, or making art for someone else are more in the sharing category. So, there are aspects of science and art that are both in “experience” and the “share” category.

Furthermore, science and art do not actually exist as stand-alone constructs. They only exist in our minds as modalities for thinking. They are tools, or perhaps practices, that assist human beings in “dealing with” or “responding to” the world. From this perspective, they are just some of the several modalities for organizing our thinking about the world, just like mathematics or engineering are also modalities.

During some of the breakout discussions, we sometimes got excited and used the terms art, creativity, and self-expression interchangeably. Unpacking these terms, I think, sheds considerable light on the path forward. Self-expression is just that…the process of expressing one’s self. Creativity is about being generative and often includes powerful threads of synthesis and analysis. Art, however, transcends and includes both of these. Art is meant to be “seen” by others, if I can simplify to just one verb. An artist, when creating a piece of art, is considering some audience, some community, or maybe just one person, and taking into account how they might react to or interact with the artwork. It’s like the distinction between having an insight (smoking is why I have poor health) and a breakthrough (I have stopped smoking and haven’t had a cigarette for 6 months). In a strange way, this is parallel to what we do in science. An experiment or theory is just a nice idea, but until I write it up and send it out and have it approved for publication, it is just in my head and doesn’t actually advance science. Granted, scientific publications are perhaps more targeted at scientific peers, while art’s discussion and acceptance might be determined by other audiences beyond just artistic peers. But in a way, art is meant to be out there and wrestled with by people. And so is science.

So, what random musings do YOU have about science & art? Are they different?  Are they the same expression of a similar human yearning or inquiry?  Can they be combined?


Heat: Adventures in the World's Fiery Places (Little, Brown, 2013). If you haven't already fallen in love with the groundbreaking science that's taking place at RHIC, this book about all things hot is sure to ignite your passion.

Bill Streever, a biologist and best-selling author of Cold: Adventures in the World’s Frozen Places, has just published his second scientific survey, which takes place at the opposite end of the temperature spectrum. Heat: Adventures in the World’s Fiery Places features flames, firewalking, and notably, a journey into the heart of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory.

I accompanied Streever for a full-day visit in July 2011 with physicist Barbara Jacak of Stony Brook University, then spokesperson of the PHENIX Collaboration at RHIC. The intrepid reporter (who’d already tagged along with woodland firefighters and walked across newly formed, still-hot volcanic lava—among other adventures described in the book) met with RHIC physicists at STAR and PHENIX, descended into the accelerator tunnel, and toured the refrigeration system that keeps RHIC’s magnets supercold. He also interviewed staff at the RHIC/ATLAS Computing Facility—who face the challenge of dissipating unwanted heat while accumulating and processing reams of RHIC data—as well as theorists and even climate scientists, all in a quest for understanding the ultrawarm.

The result is an enormously engaging, entertaining, and informative portrayal of heat in a wide range of settings, including the 7-trillion-degree “perfect” liquid quark-gluon plasma created at RHIC, and physicists’ pursuit of new knowledge about the fundamental forces and interactions of matter. But Streever’s book does more: It presents the compelling story of creating and measuring the world’s hottest temperature within the broader context of the Lab’s history, including its role as an induction center during both World Wars, and the breadth and depth of our current research—from atoms to energy and climate research, and even the Long Island Solar Farm.

“Brookhaven has become an IQ magnet, where smart people congregate to work on things that excite geniuses,” he writes.

Streever’s own passion for science comes across clearly throughout the book. But being at “the top of the thermometer” (the title of his final chapter, dedicated in part to describing RHIC) has its privileges. RHIC’s innermost beam pipes—at the hearts of its detectors, inside which head-on ion collisions create the highest temperature ever measured in a laboratory—have clearly left an impression:

“… I am forever enthralled by Brookhaven’s pipes. At the top of the thermometer, beyond any temperature that I could possibly imagine, those pipes explore conditions near the beginning of the universe … In my day-to-day life, bundled in a thick coat or standing before my woodstove or moving along a snow-covered trail, I find myself thinking of those pipes. And when I think of them, I remember that at the top of the thermometer lies matter with the audacity to behave as though it were absolutely cold, flowing like a perfect liquid…”

There’s more, a wonderful bit more that conveys the pure essence of science. But I don’t want to spoil it. Please read and share this book. The final word is awe.

The book is available for purchase through major online retailers and in stores.

-Karen McNulty Walsh, BNL Media & Communications Office


Theoretical physicist Raju Venugopalan

We sat down with Brookhaven theoretical physicist Raju Venugopalan for a conversation about “color glass condensate” and the structure of visible matter in the universe.

Q. We’ve heard a lot recently about a “new form of matter” possibly seen at the Large Hadron Collider (LHC) in Europe — a state of saturated gluons called “color glass condensate.” Brookhaven Lab, and you in particular, have a long history with this idea. Can you tell me a bit about that history?

A. The idea for the color glass condensate arose to help us understand heavy ion collisions at our own collider here at Brookhaven, the Relativistic Heavy Ion Collider (RHIC)—even before RHIC turned on in 2000, and long before the LHC was built. These machines are designed to look at the most fundamental constituents of matter and the forces through which they interact—the same kinds of studies that a century ago led to huge advances in our understanding of electrons and magnetism. Only now instead of studying the behavior of the electrons that surround atomic nuclei, we are probing the subatomic particles that make up the nuclei themselves, and studying how they interact via nature’s strongest force to “give shape” to the universe today.

We do that by colliding nuclei at very high energies to recreate the conditions of the early universe so we can study these particles and their interactions under the most extreme conditions. But when you collide two nuclei and produce matter at RHIC, and also at the LHC, you have to think about the matter that makes up the nuclei you are colliding. What is the structure of nuclei before they collide?

We all know the nuclei are made of protons and neutrons, and those are each made of quarks and gluons. There were hints in data from the HERA collider in Germany and other experiments that the number of gluons increases dramatically as you accelerate particles to high energy. Nuclear physics theorists predicted that the ions accelerated to near the speed of light at RHIC (and later at LHC) would reach an upper limit of gluon concentration—a state of gluon saturation we call color glass condensate. The collision of these super-dense gluon force fields is what produces the matter at RHIC, so learning more about this state would help us understand how the matter is created in the collisions. The theory we developed to describe the color glass condensate also allowed us to make calculations and predictions we could test with experiments.


Higgs Seminar 2012

Saturday, June 30th, 2012

This is the link to the liveblog

This year sees the International Conference on High Energy Physics, or ICHEP. Hundreds of physicists will flock to Melbourne, Australia, to get the latest news on physics results from around the world. This includes the latest searches for the Higgs boson, the final piece of the Standard Model. CERN will hold a seminar where ATLAS and CMS will present their results. I’ll be liveblogging the event, so join me on the day!

Information about the webcast

The webcast for the CERN seminar is available at http://cern.ch/webcast. If you have a CERN login you can also use http://cern.ch/webcast/cern_users/

Wednesday 4th July 2012 09:00.
(Other timezones: 00:00 PDT / 03:00 EDT / 07:00 GMT / 08:00 BST / 09:00 CET / 17:00 VIC)

Meeting link: https://indico.cern.ch/conferenceDisplay.py?confId=197461
Webcast link: http://webcast.cern.ch/
Follow on twitter: @aidanatcern @sethzenz


Art and Science: Both or Neither

Wednesday, June 13th, 2012


“I don’t get it. I guess we just have different brains than them.” – two young science students, regarding a piece of art.

It’s a funny feeling, being an individual with a predominantly artistic mind working in a place dominated by science. I’m not saying I don’t have love for the sciences, but if we’re talking in terms of how my thought process lazily unfurls itself when faced with a problem, I’m definitely more of an artist than a scientist. The very fact that I have used the terms “scientist” and “artist” in a way that does nothing but reinforce the eternal dichotomy that exists between the two groups indicates that the problem is so widespread, indeed, that even the person trying to formulate an argument calling for a cessation of the “war” that exists between the two groups cannot avoid thinking of the two as incontrovertibly disparate.

A page from Leonardo da Vinci's famous notebooks. He remains one of the finest examples of an individual expanding his mind to take in both science and art.

The quote at the top is a real thing I heard. Aside from the disquieting use of “we” and “them,” the most troubling thing about the above assertion is the outright dismissal of the piece of art in question. The finality and hopelessness of the “Different Brain” argument does not seem ridiculous outright because it has been propagated by you (yes, you), me, and everyone else ever in the history of time when we don’t want to take the time to learn something new. Artists and scientists are two particular groups that use the Different Brain argument on one another all too often. In order to see the truly farcical nature that underlies the argument, picture two groups of early humans. One group has fire. The other group does not. One person from the fireless group is tasked with inventing fire for the group. The person in charge of making fire claps his hands; no fire is produced. He gives up, citing that he and his counterpart in the other group must have different brains. His group dies out because of their lack of fire.

I hope you followed the cautionary tale of our dismissive early human closely, for he is the rock I will build this post on. The reason one group died and the other thrived is quite obvious. It is not because they simply lacked fire; it is that they lacked the ability to extend their minds beyond their current knowledge in order to solve a problem. Moreover, they not only lacked the ability, they lacked the drive—a troubling trend that is becoming more pronounced as the misguided “war” between artists and scientists rages on, insofar as an intellectual war can rage.

If you were to ask a scientist what he or she would do when posed with a problem, the answer will invariably be something along the lines of, “I would wrestle it to the ground with my considerable intellect until it yields its secrets.” During my time at TRIUMF, I have noticed a deep, well-deserved pride in every scientist in their ability to solve problems. Therefore, it is truly a sad state of affairs when our scientists look at something that puzzles them and then look away. To me, that’s no scientist. That is someone who has grown so complacent, so comfortable in the vastness of their knowledge, that they have begun to shy away from things that challenge them in a way they aren’t used to. What’s more, no one (artists or scientists) sees this as a defeat. As soon as you’ve said, “Oh well, different brain,” you’ve lost.

Any person familiar with rhetoric will tell you that in order to build a strong argument and persuade people, you have to be honest. Be sneaky and fail to address something potentially damning and your credibility is shot and the argument is void. Since it works so well in politics (snark), I figure I should give it a shot here. The problem of the Different Brain argument does not just lie with the scientists; if I’ve excoriated them, it’s out of fear that soon, a generation of scientists will stop growing and thinking. The artists are guilty of invoking the Different Brain argument as well whenever faced with math, science, or anything, really, that they didn’t want to do. The only difference between the two is that I heard a scientist use the Different Brain argument in a place of science, in a place where knowledge is the point.

Different Brain is a spurious concept, which is obvious to anyone with more grey matter than pride, but it’s not just wrong because I say it is. It’s wrong because look around you.

I was standing in the middle of Whistler Village with my fiancée, when we spied a poster for a band called Art vs. Science (you’re doing it wrong, guys!). She immediately said, “Science would win.” No question. No pondering. No soul-searching. Gut reaction, like flinching from a feigned punch. She’s a statistics major and biology minor, so she has a “science” brain and her response didn’t necessarily surprise me. I was a little sad, though, because she wasn’t seeing the world like I was seeing it. We debated the problem for a few minutes until I told her to look around.

The shape of the buildings: Architecture

The pleasant configuration of the shrubbery: Horticulture

The signage on the buildings and lampposts: Design

The food in the bag in my hand: Cooking

The phone in her hand: Technology

I asked her to picture a world where science had “won”. What’s architecture without art? A shape. What’s horticulture without art? A forest. Design? A grid. Cooking? Paste. Technology? Sufficient. It’s a tough world to imagine. Look at the next thing you see and try to separate the science and art of it and imagine what it would look like, whether it would function at all. It’s absolutely dystopian.

It was then that my argument became clear: science and art are inextricable. There can be no dismissing, no deigning, no sighing in the face of it. There can only be, and has only ever been, unity between the two. The problem is that the two warring sides are too preoccupied with the connotations of the words “art” and “science” to realize it’s not a question of either/or, but both/neither.

I was worried about whether this war of the different brains would always rage between the two sides, but three things lent me hope and I hope they will lend you hope, too.

1.)  These two quotes from Bertolt Brecht (20th century German playwright and poet, whose work I don’t much care for):

“Art and science work in quite different ways: agreed. But, bad as it may sound, I have to admit that I cannot get along as an artist without the use of one or two sciences. … In my view, the great and complicated things that go on in the world cannot be adequately recognized by people who do not use every possible aid to understanding.”

and

“Art and science coincide insofar as both aim to improve the lives of men and women.”

2.) I was feeling discouraged about my argument for this post and had taken to turning it over in my mind even when I was otherwise occupied, but when I heard Rolf Heuer, the Director-General of CERN, say, only a handful of feet from my face, “Science and Art belong together,” I felt a renewed sense of vigor course through my brain, spurring me on. If one of the foremost scientific experts of our age can see it, I wonder why many of us turn away from it, when it is clearly there.

3.) In case one thinks that I’ve gone too soft on the artists, imagine a world without science. Think of our society as a book of fiction or a painting. Unequivocal works of art. Yet, what holds the book together? How were the pages manufactured? How was the chemical composition of the paints devised? Science.

Keeping these points in mind, I am calling for the abolition of the concepts underpinning the Different Brain argument. The war between art and science is one of mutually assured destruction and will turn us into a lopsided simulacrum of a culture if we are not careful.

–Written by Jordan Pitcher (Communications Assistant)


The biggest news at CIPANP 2012 for particle physicists seems to be coming from the “low” energy frontier, at energies in the ballpark of 10 GeV and lower. This may come as a surprise to some people; after all, we’ve had experiments working at these energies for a few decades now, and there’s a tendency to think that higher energies mean more potential for discovery. But the lower energy experiments have one great advantage over the giants at the LHC and the Tevatron: a richer collection of analyses.

There’s a big difference between discovering a new phenomenon and discovering new physics, which is something that most people (including physicists!) don’t appreciate enough. Whenever a claim of new physics is made we need to look at the wider implications of the idea. For example, let’s say that we see the decay of a \(\tau\) lepton to a proton and a \(\pi^0\) meson. The Feynman diagram would look something like this:

tau lepton decay to a proton and a neutral pion, mediated by a leptoquark

The “X” particle is a leptoquark, and it turns leptons into quarks and vice versa. Now for this decay to happen at an observable rate we need something like this leptoquark to exist. There is no Standard Model process for \(\tau\to p\pi^0\) since it violates baryon number (a process which is only allowed under very special conditions). So suppose someone claims to see this decay, does this mean that they’ve discovered new physics? The answer is a resounding “No”, because if they make a claim of new physics they need to look elsewhere for similar effects. For example, if the leptoquark existed the proton could decay with this process:

proton decay to an electron and neutral pion, mediated by a leptoquark

We have very stringent tests on the lifetime of the proton, and the lower limits are currently about 20 orders of magnitude longer than the age of the universe. Just take a second to appreciate the size of that limit. The proton lasts for at least 20 orders of magnitude longer than the age of the universe itself. So if someone is going to claim that they have proven the leptoquark exists, we need to check that what they have seen is consistent with the proton lifetime measurements. A claim of new physics is stronger than a claim of a new phenomenon, because it must be consistent with all the current data, not just the part we’re working on.
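
To put rough numbers on that (ballpark figures, not the precise experimental limits): the universe is about \(1.4\times 10^{10}\) years old, so a lifetime 20 orders of magnitude longer is

\[
\tau_p \gtrsim 10^{20} \times 1.4\times 10^{10}\ \mathrm{yr} \approx 10^{30}\ \mathrm{yr},
\]

and the strongest published limits on specific channels such as \(p\to e^+\pi^0\) are longer still. Any proposed leptoquark has to be consistent with numbers like these.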

How does all this relate to CIPANP 2012 and the low energy experiments? Well, it turns out that there are a handful of large disagreements in this regime that all tend to involve the same particles. The \(B\) meson can decay to several lighter particles, and the BaBar experiment has seen that the decay rates to final states with a \(\tau\) lepton are higher than they should be. The disagreement with the Standard Model predictions for \(B\to D^{(*)}\tau\nu\) is more than \(3\sigma\), which is interesting because it involves the heaviest quarks in bound states, and the heaviest lepton. It suggests that if there is a new particle or process, it favors coupling to heavy particles.

Standard model decays of the B mesons to τν, Dτν, and D*τν final states

In another area of \(B\) physics we find that the branching fraction \(\mathcal{B}(B\to\tau\nu)\) is about twice as large as we expect from the Standard Model. You can see the disagreement in the following plot, which compares two measurements (\(\mathcal{B}(B\to\tau\nu)\) and \(\sin 2\beta\)) to what we expect given everything else. The distance between the data point and the most favored region (center of the colored region) is very large, about \(3\sigma\) in total!

The disagreement between B→τν, sin2β and the rest of the unitary triangle measurements (CKMFitter)
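
For readers curious where a statement like “more than \(3\sigma\)” comes from, here is the usual back-of-the-envelope recipe, sketched in Python with placeholder numbers rather than the real BaBar and Standard Model values:

    import math

    # Hypothetical branching fractions, for illustration only.
    measured, sigma_meas = 1.8e-4, 0.4e-4    # pretend measurement
    predicted, sigma_pred = 0.8e-4, 0.1e-4   # pretend SM prediction

    # Assuming independent Gaussian uncertainties, the significance is
    # the difference divided by the combined uncertainty.
    z = (measured - predicted) / math.sqrt(sigma_meas**2 + sigma_pred**2)
    print(f"discrepancy: {z:.1f} sigma")     # about 2.4 sigma here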

Theorists love to combine these measurements using colorful diagrams, and the best known example is the unitary triangle. If the CKM mechanism describes all the quark mixing processes then all of the measurements should agree, and they should converge on a single apex of the triangle (at the angle labeled \(\alpha\)). Each colored band corresponds to a different kind of process, and if you look closely you can see some small disagreements between the various measurements:

The unitary triangle after Moriond 2012 (CKMFitter)

The blue \(\sin 2\beta\) measurement is pulling the apex down slightly, and the green \(|V_{ub}|\) measurement is pulling it in the other direction. This tension shows some interesting properties when we try to investigate it further. If we remove the \(\sin 2\beta\) measurement and then work out what we expect based on the other measurements, we find that the new “derived” value of \(\sin 2\beta\) is far from what is actually measured. The channel used for the analysis of \(\sin 2\beta\) is often called the golden channel, and it has been the main focus of both the BaBar and Belle experiments since their creation. The results for \(\sin 2\beta\) are some of the best in the world and they have been checked and rechecked, so maybe the problem is not associated with \(\sin 2\beta\).

Moving our attention to \(|V_{ub}|\), the theorists at CKMFitter decided to split up the contributions based on the semileptonic inclusive and exclusive decays, and from \(\mathcal{B}(B\to\tau\nu)\). When this happens we find that the biggest disagreement comes from \(\mathcal{B}(B\to\tau\nu)\) compared to the rest. The uncertainties get smaller when \(\mathcal{B}(B\to\tau\nu)\) is combined with the \(B\) mixing parameter, \(\Delta m_d\), which is well understood in terms of top quark interactions, but these results still disagree with everything else:

Disagreement between B→τν, Δmd and the rest of the unitary triangle measurements (CKMFitter)
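
As an aside, the “remove one measurement and re-derive it” exercise is easy to sketch with a toy example (invented numbers, not real CKM fit inputs): combine the remaining measurements by inverse-variance weighting, then compute the pull of the excluded measurement against that combination.

    import math

    # Invented stand-ins for independent determinations of one quantity.
    others = [(0.71, 0.05), (0.76, 0.06), (0.74, 0.08)]  # (value, sigma)
    excluded = (0.67, 0.02)                              # the one removed

    # Inverse-variance weighted combination of the remaining inputs.
    weights = [1.0 / s**2 for _, s in others]
    combo = sum(w * v for (v, _), w in zip(others, weights)) / sum(weights)
    combo_sigma = math.sqrt(1.0 / sum(weights))

    # Pull of the excluded measurement against the derived value.
    pull = (excluded[0] - combo) / math.sqrt(combo_sigma**2 + excluded[1]**2)
    print(f"derived = {combo:.3f} +- {combo_sigma:.3f}, pull = {pull:.1f} sigma")

A real global fit is far more sophisticated, but the logic is the same: derive the value from everything else, then ask how far away the direct measurement sits.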

What this seems to be telling us is that there could be a new process affecting \(B\) meson interactions, enhancing decays with \(\tau\) leptons in the final state. If this is the case then we need to look for other processes that could be affected in the same way. The most obvious signal to look for at the LHC is something like the production of \(b\) quarks and \(\tau\) leptons. Third generation leptoquarks would be a good candidate, as long as they cannot mediate proton decay in any way. Searching for a new particle or a new effect is the job of the experimentalist, but creating a model that accommodates the discoveries we make is the job of a theorist.

That, in a nutshell, is the difference between discovering a new phenomenon and discovering new physics. Anyone can find a bump in a spectrum, or even discover a new particle, but forming a consistent model of new physics takes a long time and a lot of input from many different kinds of experiments. The latest news from BaBar, Belle, CLEO and LHCb is giving us hints that there is something new lurking in the data. I can’t wait to see what our theorist colleagues do with these measurements. If they can create a model which explains the anomalously high branching fractions \(\mathcal{B}(B\to\tau\nu)\), \(\mathcal{B}(B\to D\tau\nu)\), and \(\mathcal{B}(B\to D^*\tau\nu)\), and which tells us where else to look, then we’re in for an exciting year at the LHC. We could see something more exciting than the Higgs in our data!

(CKMFitter images kindly provided by the CKMfitter Group (J. Charles et al.), Eur. Phys. J. C41, 1-131 (2005) [hep-ph/0406184], updated results and plots available at: http://ckmfitter.in2p3.fr)


Richard Feynman was one of the most influential physicists of the twentieth century. Not only did he revolutionize quantum theory with his development of quantum electrodynamics, but he also revolutionized the way we think about physics and physicists. He spoke to people from all kinds of backgrounds about physics, from lecturing students destined to change the field themselves, to appearing on television to discuss physics and the philosophy of science, to meeting with the greatest minds of the time.

Feynman in the middle of a lecture. (www.richard-feynman.net)

For me, Feynman’s great contribution was the way he thought about physics. His Lectures on Physics are world famous, and rightly so. (In fact, one of the first things I did after landing in San Francisco to work at SLAC was to buy a copy of his lectures from the Stanford bookstore. Shortly afterwards my bank froze my card, suspecting fraud. It was worth the inconvenience!)

As a jaded undergraduate, I found them a source of inspiration. A faint glimmer of hope turned into a roaring inferno after I read his lectures on electromagnetism, and I’ve never looked back since. Finally, here was someone who wanted to discuss the beauty of the subject as well as the truth. He had no time for obscuring the underlying symmetry of a concept, nor for lying to students in order to make things easier. Inevitably having to unlearn and relearn ideas leaves people confused, disillusioned, and unable to trust their tutors. In that spirit, this is how he started his course on electromagnetism:

“We begin now our detailed study of the theory of electromagnetism. All of electromagnetism is contained in the Maxwell equations.”

Maxwell’s equations:

\[
\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}
\]
\[
\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}
\]
\[
c^2\nabla \times \vec{B} = \frac{\partial \vec{E}}{\partial t} + \frac{\vec{j}}{\varepsilon_0}
\]
\[
\nabla \cdot \vec{B} = 0
\]

Don’t worry about trying to understand these equations. The important thing here is that Feynman has given the students the complete truth about electromagnetism. With these four equations he can solve any problem about the shape and nature of electromagnetic fields for any configuration of charges and currents. The equations he provides are not some approximation of the theory, or some equations that only work some of the time; they are the equations that all physicists and engineers use, and they are, as far as we know, complete and state of the art. Feynman showed a level of honesty and respect for his students and readers that was not present when I sat through lectures. My lecturers taught me backwards; Feynman taught me forwards.

(Experts might notice that the Lorentz force law is missing here, but Feynman already mentioned it a few pages before Maxwell’s equations. With the Lorentz force law physicists can relate the electromagnetic fields to the forces on charged particles.)

Feynman continues:

The situations that are described by these equations can be very complicated. We will consider first relatively simple situations, and learn how to handle them before we take up more complicated ones. The easiest circumstance to treat is one in which nothing depends on time, called the static case. All charges are permanently fixed in space, or if they do move, they move as a steady flow in a circuit (so \(\rho\) and \(\vec{j}\) are constant in time). In these circumstances, all of the terms in the Maxwell equations which are time derivatives of the field are zero. In this case Maxwell’s equations become:

Electrostatics:
\[
\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}
\]
\[
\nabla \times \vec{E} = \vec{0}
\]

Magnetostatics:
\[
c^2\nabla \times \vec{B} = \frac{\vec{j}}{\varepsilon_0}
\]
\[
\nabla \cdot \vec{B} = 0
\]

You will notice an interesting thing about this set of four equations. It can be separated into two pairs. The electric field \(\vec{E}\) appears only in the first two, and the magnetic field \(\vec{B}\) appears only in the second two. The two fields are not interconnected. This means that electricity and magnetism are distinct phenomena so long as charges and currents are static.

And he goes on. Immediately at the start of the course he’s pointed out one of the most important and beautiful symmetries in electromagnetism. He also lets us know how the course is going to proceed, with static cases first and the full treatment later. This leaves the student with a wonderful surprise later in the course, when the two fields finally get united again. When this happens Feynman goes on to show us how electromagnetism comes about as a result of special relativity, and if done properly that is one of the most breathtaking moments in physics! This is the way physics should be taught, and I wish I could have been in that lecture hall to see it happen!

The rest of the lectures are a fascinating journey, full of neat little asides, teasers, and paradoxes, all handled with refreshing clarity. He even pokes fun at physics itself from time to time, showing how our mathematical notation is just a trick to make complicated things look simple and how different problems appear to have similar solutions only because we choose to use the same kinds of methods to solve them. Towards the end of his electromagnetism course he even goes out of his way to show how electromagnetism fails in an epic way. The problem of the infinite energy of the field and the intractable problem of the mass of the electron are two major failings of the classical theory, and he dedicates a lecture to showing us just how many questions were left unanswered by the subject.

Feynman with bongos, because some physicists are cool (www.richard-feynman.net)

Feynman gave us a lot to digest, from Nobel Prize-worthy discoveries to a view of scientists that was anything but that of a crusty old professor, and what I value most is the lectures he gave, packed with inspiration and clarity. If you have a chance, go read some of the lectures and find out what made this man get out of bed in the morning. You won’t be disappointed. His other books are also excellent (Six Easy Pieces, Six Not So Easy Pieces, QED and Surely You’re Joking, Mr Feynman!) and well worth a read. Put them on your Christmas wish list!

Feynman’s birthday should be a national day of celebration, not just for physics, but for getting people hooked on physics! (I’m just sorry I’m a bit late to the party here, have a great weekend.)

If you want to find out a bit more about Richard Feynman check out this lecture about Feynman from Lawrence Krauss, one of today’s most eloquent speakers and best advocates for physics.

(Quotes taken from “The Feynman Lectures on Physics, The Definitive Edition Volume II”, Feynman, Leighton, and Sands, ISBN 0-8053-9047-2)


Ramping up

Tuesday, March 27th, 2012

At the moment the LHC is making the transition from no beams to stable beams. It’s a complicated process that needs many crosschecks and calibrations, so it takes a long time (they have been working on the transition since mid-February). The energy is increasing from 7 TeV to 8 TeV, and the beams are being squeezed tighter, and this means more luminosity, more data, and better performance. As the LHC prepares for stable beams, so do the experiments. I can only see what is happening within ATLAS, but the story will be the same for CMS and LHCb.

As the LHC moves through its checks and changes its beam parameters, the experiments have an opportunity to request special beam setups. We can ask that the LHC “splashes” the detector with beam in order to calibrate our hardware. This is similar to the famous first beam plots that we saw in 2008. In addition to splashes we can also request very low pileup runs to test our simulation. “Pileup” refers to the average number of events we expect to get every time the beams collide in the detectors, and by increasing the pileup we cram as many events as we can into the limited periods of time available to us. For 2011 our pileup was about 15, and this is going to increase in 2012 to about 20-30. That made me all the more surprised to find out that we can use a pileup of 0.01 for some of our simulation calibrations!
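
To see what pileup numbers like 0.01 and 20-30 mean in practice, assume (as is standard) that the number of interactions per bunch crossing follows a Poisson distribution with mean \(\mu\). A quick sketch:

    import math

    def p_overlap_given_event(mu):
        """Probability that a crossing with at least one interaction
        contains two or more, i.e. the event is polluted by pileup."""
        p0 = math.exp(-mu)         # P(0 interactions)
        p1 = mu * math.exp(-mu)    # P(exactly 1 interaction)
        return (1.0 - p0 - p1) / (1.0 - p0)

    for mu in (0.01, 15, 25):
        print(f"mu = {mu:>5}: P(pileup | event) = {p_overlap_given_event(mu):.4f}")

At \(\mu = 0.01\) only about half a percent of recorded crossings contain a second interaction, which is why such runs are so clean for calibrating the simulation; at \(\mu\) of 15 and above essentially every event sits on top of several others.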

First ATLAS splash from 2008 (ATLAS Collaboration)

The timetable for ramping up the LHC is announced as far in advance as possible, but it’s subject to small changes and delays as new problems arise. In general, the LHC outperforms its expectations, delivering higher luminosities than promised and stable beams for longer than expected, so when we factor in unexpected problems and unexpectedly high performance we have to take the timetable with a pinch of salt. We expect to get stable beams around Easter weekend. You can see the timetable in the pdf document provided by the LHC team.

In the meantime the ATLAS hardware has been checked and maintenance performed to get it in good working order for the data taking. The thresholds are fine tuned to suit the new beam conditions and the trigger menu is updated to make the best use of the data available. There are plenty of decisions that need to be made and discussions that need to take place to make sure that the hardware is ready for stable beams. Today I got a glimpse at the checks that are performed for the electromagnetic calorimetry system, the trigger system and some changes to the muon systems. It’s easy to lose sight of how much work goes into maintaining the machine!

The LHC team preparing for beams.

As the hardware improves, so does the software. Software is often a cause of frustration for analysts, because they develop their own software as a collaboration and the software is sometimes “bleeding edge”. As we learn more about the data and the differences between data and simulation we can improve our software, and that means that we constantly get new recommendations, especially as the conferences approach. There is a detailed version tracking system in place to manage these changes, and it can be difficult to keep up to date with it all. Unfortunately, updated software usually means analyzing the data or simulation again, which is time consuming and headache-inducing in itself. That is how things worked in 2011. This year it looks like we’ve already learned a lot about how the data look, so we can start with much better simulation and we can start with an improved release for all the software. This should make progress much easier for analyses and simpler for everyone (which is a very important consideration, given that we have a large range of experience with software, and a large range of knowledge of physics processes.)

The banks of super computers are ready and waiting...

Putting all this together we can conclude the following: we will have higher energy beams giving us more data, we’ll have a better functioning detector based on previous experience, we’ll have improved simulation, and we’ll have more stable and simpler software. This is very exciting on the one hand, but a bit intimidating on the other, because it means that the weak link in the chain could be the physicists performing the analyses! There are plenty of analyses which are limited by statistics of the dataset, or by resolution of the detector, or stymied by last minute changes in the software or bugs in the simulation. If we hit the ground running for 2012 we could find ourselves with analyses limited by how often the physicists are willing to stay at work until 3am to get the job done.

I’ve already explained why 2012 is going to be exciting in terms of results in another blog post. Now it looks like it will bring a whole new set of challenges for us. Bring it on, 2012, bring it on.
