Posts Tagged ‘@bravelittlemuon’

Snowmass Came and Passed. What have we learned from it?


Skyline of Minneapolis, home of the University of Minnesota and host city of the Community Summer Study 2013: Snowmass on the Mississippi.

Hi All,

Science is big. It is the systematic study of nature, so it has to be big. In another way, science is about asking questions, questions that expand our knowledge of nature just a bit more. Innocuous questions like, “Why do apples fall to the ground?”, “How do magnets work?”, or “How does an electron get its mass?” have led to understanding much more about the universe than expected. Our jobs as scientists come down to three duties: inventing questions, proposing answers (called hypotheses), and testing these proposals.

As particle physicists, we ask “What is the universe made of?” and “What holds the universe together?”  Finding out that planets and stars only make up 5% of the universe really makes one pause and wonder, well, what about everything else?

From neutrino masses, to the Higgs boson, to the cosmic microwave background, we have learned much about the origin of mass in the Universe, as well as the origin of the Universe itself, in the past 10 years. Building on recent discoveries, particle physicists from around the world have been working together for over a year to push our questions further. Progress in science is incremental, and after 10 days at the Community Summer Study 2013: Snowmass on the Mississippi Conference, hosted by the University of Minnesota, we have a collection of questions that will drive and define particle physics for the next 20 years. Each question is an incremental step, but each answer will allow us to expand our knowledge of nature.

I had a chance to speak with SLAC’s Michael Peskin, a convener for the Snowmass Energy Frontier study group and author of the definitive textbook on Quantum Field Theory, on how he sees the high energy physics community proceeding after Snowmass. “The community did a lot of listening at Snowmass. High energy physics is pursuing a very broad array of questions. I think that we now appreciate better how important all of these questions are, and that there are real strategies for answering them.” An important theme of Snowmass, Peskin said, was “the need for long-term, global planning”. He pointed to the continuing success of the Large Hadron Collider, which is the result of the efforts of thousands of scientists around the world. This success would not have happened without such a large-scale, global effort. “This is how high energy physics will have to be, in all of its subfields, to answer our big questions.”

Summary presentations of all the work done for Snowmass are linked below in pdf form and are divided into two categories: how to approach questions (Frontiers) and what will enable us to answer these questions. These two categories represent the mission of the US Department of Energy’s Office of Science. A summary of the summaries is at the bottom.

What is the absolute neutrino mass scale? What is the neutrino mass ordering? Is CP violated in the neutrino sector? What new knowledge will neutrinos from astrophysical sources bring?

What is dark matter? What is dark energy? Why more matter than anti-matter? What is the physics of the Universe at the highest energies?

Where are the new particles that modify the Higgs, t, W couplings? What particles comprise the dark matter? Why is the Higgs boson so light?

The growth in data drives the need for continued R&D investment in data management, data access methods, and networking. Challenging resource needs require efficient and flexible use of all resources. HEP needs both distributed high-throughput computing (for the experimental program) and high-performance computing (mostly for theory, simulation, and modeling).

Encourage and enable physicists to be involved in and support local, national, and worldwide efforts that offer long-term professional development and training opportunities for educators (including pre-service educators), using best practices and approaches supported by physics education research; and create learning opportunities for students of all ages, including classroom, out-of-school, and online activities that allow students to explore particle physics.

Our vision is for the US to have an instrumentation program for particle physics that enables the US to maintain a scientific leadership position in a broad, global experimental program, and that develops new detection capabilities providing cutting-edge contributions to a world program.

Is dark energy a cosmological constant? Is it a vacuum energy? From where do ultra high energy cosmic rays originate? From where do ultra high energy neutrinos originate?

How would one build a 100 TeV scale hadron collider? How would one build a lepton collider at >1 TeV? Can multi-MW targets survive? If so, for how long?

To provide a conduit for untenured (young) particle physicists to participate in the Community Summer Study; to facilitate and encourage young people to get involved; and to become a long-term asset to the field and a place where young people’s voices can be heard.

Several great posts from QD (Family, Young, Frontierland), Symmetry Magazine (Push, Q&A, IceSlam, Decade), and even real-time updates from QD’s Ken Bloom (@kenbloomunl) and myself (@bravelittlemuon) via #Snowmass are available. All presentations can be found at the Snowmass Indico page.

Until next time, happy colliding.

– Richard (@bravelittlemuon)

Community Summer Study: Snowmass 2013 Poster



A Little Bit of the Higgs Boson for Everyone

Hi All,

This post is long overdue but nonetheless I am thrilled to finally write it. We have discovered the/a/some(?) Higgs boson, and it is precisely my trouble writing this very sentence that inspires a new post. CERN’s press office has keenly presented a new question in particle physics known as the Definite Article Problem:

Have we discovered “a” Higgs boson or “the” Higgs boson?

We can express the Article problem in another way:

Are there more Higgs bosons?

Before I touch upon that problem, I want to explain why the Higgs boson is important. In particular, I want to talk about the Sun! Yes, the Sun.


The Higgs Boson and Electroweak Symmetry Breaking is Important because the Sun Shines.

Okay, there is no way to avoid this: I really like the sun.

Slide Credit: Mine. Image Credit: GOES Collaboration

It shines. It keeps the planet warm. There is liquid water on Earth, and some very tasty plants too.

Slide Credit: Mine. Image Credit: NobelPrize.org

At the heart of the Sun is a raging nuclear furnace driven by two types of processes: (1) those that involve the Strong nuclear force and (2) those that involve the Weak nuclear force (look for the neutrinos!). The two types of processes work together in a solar relay race to complete a circuit, only to do it over and over again for billions of years. And just like a real relay race, the speed at which the circuit is finished is set by the slowest member. In this case, the Weak force is the limiting factor and considerably slows down the rate at which the sun could theoretically operate. If we make the Weak force stronger, then the Sun would shine more brightly. Conversely, if we make the Weak force even weaker, the Sun would be dimmer.

Slide Credit: Mine. Image Credit: NobelPrize.org

From studying the decays of radioactive substances, we have learned that the rate of Weak nuclear processes is set by a physical constant called Fermi’s Constant. Fermi’s Constant is represented by the symbol GF. From studying the Higgs boson and the Higgs Mechanism, we have learned that Fermi’s Constant is literally just another constant, v, in disguise. This second physical constant (v) is called the Higgs “vacuum expectation value”, or “vev” for short, and is the amount of energy the Higgs field has at all times relative to the vacuum.

The point I want to make is this: If we increase the Higgs vev, Fermi’s Constant gets smaller, which reduces the rate of Weak nuclear interactions. In other words, a larger Higgs vev would make the sun shine less brightly. Going the other way, a smaller Higgs vev would make the sun shine more brightly. (This is really cool!)
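This relationship is simple enough to check numerically. The tree-level relation in natural units is GF = 1/(√2 v²), and plugging in the measured vev of about 246 GeV reproduces the measured Fermi Constant; the same few lines show the dimming scaling directly:

```python
import math

# Tree-level relation in natural units: G_F = 1 / (sqrt(2) * v^2)
v = 246.22  # GeV, the measured Higgs vacuum expectation value

G_F = 1.0 / (math.sqrt(2) * v**2)
print(f"G_F = {G_F:.4e} GeV^-2")  # ~1.1664e-5 GeV^-2, the measured value

# A larger vev means a smaller G_F, i.e. weaker Weak interactions
# (and, as argued above, a dimmer Sun):
G_F_doubled_vev = 1.0 / (math.sqrt(2) * (2 * v)**2)
print(G_F / G_F_doubled_vev)  # 4.0: doubling v cuts G_F fourfold
```

This is only the tree-level bookkeeping, but it is exactly the "GF is v in disguise" statement above.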

Slide Credit: Mine. Image Credit: Jacky-Boi

The Higgs vev is responsible for some other things, too. It is a source of energy from which all elementary particles can draw. Through the Higgs Mechanism, the Higgs field provides mass to all massive elementary particles, including the W and Z bosons. One would think that for such an important particle we would have a firm theoretical understanding of it, but we do not.

Credit: Mine

We have a very poor theoretical understanding of the Higgs boson. Among other things, according to our current understanding of the Higgs boson, the particle should be much heavier than what we have measured.

Credit: Mine

The Definite Article Problem

There are lots of possible solutions to the problems and theoretical inconsistencies we have discovered relating to the Standard Model Higgs boson. Many of these ideas hypothesize the existence of other Higgs bosons or particles that would interact like the Higgs boson. There are also scenarios where Higgses have identity crises: the Higgs boson we have observed could be a quantum mechanical combination (superposition) of several Higgs bosons.

I do not know if there are additional Higgses. Truthfully, there are many attractive proposals that require upping the number of Higgs bosons. What I do know is that our Higgs boson is interesting and merits much further study.

...

Credit: Mine

Happy Colliding

– richard (@bravelittlemuon)

PS In case anyone is wondering, yes, I did take screenshots from previous talks and turn them into a QD post.


Ready-Set-Go: The LHC 2012 Schedule

Thursday, September 20th, 2012

From Now Until Mid-December, Expect One Thing from the LHC: More Collisions.

Figure 1: Integrated luminosity for LHC Experiments versus time. 8 TeV proton-proton collisions began in April 2012. Credit: CERN

 

Hi All,

Quick post today. That plot above represents the amount of 8 TeV data collected by the LHC experiments. As of this month, the ATLAS and CMS detector experiments have each collected 15 fb-1 of data. A single fb-1 (pronounced: inverse femtobarn) is equivalent to about 70 trillion proton-proton collisions. In other words, ATLAS and CMS have each observed 1,050,000,000,000,000 proton-proton collisions. That is 1.05 thousand-trillion, or 1.05×10^15.

To understand how gargantuan a number this is, consider that it took the LHC’s predecessor, the Tevatron, 24 years to deliver 12 fb-1 of proton-antiproton collisions*. The LHC has collected this much data in five months. Furthermore, proton-proton collisions will officially continue until at least December 16th, at which time CERN will shut off the collider for the holiday season. Near the beginning of the calendar year, we can expect the LHC to collide lead ions for a while before the long, two-year shutdown. During this time, the LHC magnets will be upgraded in order to allow protons to run at 13 or 14 TeV, and the detector experiments will get some much-needed tender-loving-care maintenance and upgrades.

To estimate how much more data we might get before the New Year, let’s assume that the LHC will deliver 0.150 fb-1 per day from now until December 16th. I consider this to be a conservative estimate, but I refer you to the LHC’s Performance and Statistics page. I also assume that the experiments operate at 100% efficiency (not so conservative but good enough). Running 7 days a week puts us at a little over 1 fb-1 per week. According to the LHC schedule, there are about 10 more weeks of running (12 weeks until Dec. 16 minus 2 weeks for “machine development”).

By this estimation, both ATLAS and CMS will have at least 25 fb-1 of data each before shut down!

25 fb-1 translates to 1.75 thousand-trillion proton-proton collisions, more than four times the amount of 8 TeV data used to discover the Higgs boson in July**.
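These conversions are simple enough to check in a few lines (the 70-trillion-collisions-per-fb-1 figure is the one quoted above):

```python
# Back-of-the-envelope check of the numbers above.
COLLISIONS_PER_INV_FB = 70e12  # ~70 trillion pp collisions per fb^-1

recorded = 15.0                          # fb^-1 per experiment so far
print(recorded * COLLISIONS_PER_INV_FB)  # 1.05e15 collisions

# Projection: 0.150 fb^-1/day, 7 days/week, ~10 more weeks of running
projected = recorded + 0.150 * 7 * 10    # 25.5 fb^-1, i.e. "at least 25"
print(projected)

print(25.0 * COLLISIONS_PER_INV_FB)      # 1.75e15 collisions at 25 fb^-1
```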

Fellow QDer Ken Bloom has a terrific breakdown of what all this extra data means for studying physics. Up-to-the-minute updates about the LHC’s performance are available via the LHC Programme Coordination page, @LHCstatus, and @LHCmode. There are no ongoing collisions at the moment because the LHC is currently in a technical stop/beam recommissioning/machine development/scrubbing period, but things will be back to normal next week.

 

Happy Colliding

– richard (@bravelittlemuon)

 

* 10 fb-1 were recorded each by CDF and DZero, but to be fair, it also took Fermilab about 100 million protons to make 20 or so antiprotons.

** The Higgs boson discovery used 5 fb-1 of 7 TeV data and 5.5 fb-1 of 8 TeV data


Theoretically, the Higgs boson solves a lot of problems. Theoretically, this Higgs boson is a problem.

Greetings from the good ol’ U.S. of A.

Now that Fall is here, classes are going, holidays are wrapping up, and research programs are in full steam. Unfortunately, all is not well in the Wonderful World of Physics. To refresh, back on the 4th of July, the LHC experiments announced the outstanding and historic discovery of a new particle with properties consistent with the Standard Model Higgs boson. No doubt, this is a fantastic feat by the experiments, a triumph and the culmination of a decades-long endeavor. However, there is deep concern about the existence of a 125 GeV Higgs boson. At roughly 130 times the proton’s mass, this Higgs boson is too light. A full and formal calculation of the Higgs boson’s mass, according to the theory that predicts it, places the Higgs mass pretty close to infinity. Obviously, the Higgs boson’s mass is less than infinite. So let’s talk mass and why this is still a very good thing for particle physics.

For an introduction to the Higgs boson, click here, here, or here (This last one is pretty good).

The Standard Higgs

The Standard Model of Particle Physics (SM) is the theory that describes, well, everything with the exception of gravity (Yes, this is admittedly a pretty big exception).  It may sound pompous and arrogant, but the SM really does a good job at explaining how things work: things like the lights in your kitchen, or smoke detectors, or the sun.

Though if this “theory of almost-everything” can do all this, then when written out explicitly, it must be pretty big, right? Yes. The answer is yes. Undeniably, yes. When written out fully and explicitly, the “Lagrangian of the Standard Model” looks like this (click to enlarge):

Figure 1: The Standard Model Lagrangian in the Feynman Gauge. Credit: T.D. Gutierrez

This rather infamous and impressive piece of work is by Prof. Thomas Gutierrez of Cal Poly SLO. Today, however, we only care about two terms (look for the red circles):

Figure 2: The Standard Model Lagrangian in the Feynman Gauge with the Higgs boson tree-level mass and 4-Higgs vertex interactions terms circles. Original Credit: T.D. Gutierrez

The first term is pretty straightforward. It expresses the fact that the Higgs boson has a mass, and this can be represented by the Feynman diagram in Fig. 3 (below). As simple and uneventful as this line may appear, its existence has a profound impact on the properties of the Higgs boson. For example, because of its mass, the Higgs boson can never travel at the speed of light; this is the complete opposite of the massless photon, which can only travel at the speed of light. The existence of the diagram in Fig. 3 also tells us exactly how a Higgs boson (denoted by h) travels from one place in the Universe, let’s call it x, to another place in the Universe, let’s call it y. Armed with this information, and a few other details, we can calculate the probability that a Higgs boson will travel from point x to point y, or decay at some point in between.

Figure 3: The tree-level Feynman diagram that represents a SM Higgs boson (h) propagating from a point x in the Universe to a point y somewhere else in the Universe. Credit: Mine

The second term is an interesting little fella. It expresses the way the Higgs boson can interact with other Higgs bosons, or even itself. The Feynman diagram associated with this second term is in Fig. 4. It implies that there is a probability that a Higgs boson (at position w) and a second Higgs boson (at position x) can collide into each other at some point in the Universe, annihilate, and then produce two Higgs bosons (at points z and y). To recap: two Higgses go in, two Higgses go out.

Figure 4: The tree-level Feynman diagram that represents two SM Higgs bosons (h) at points w and x in the Universe annihilating and producing two new SM Higgs bosons at points z and y somewhere else in the Universe. Credit: Mine

This next step may seem a little out-of-the-blue and unmotivated, but let’s suppose that one of the incoming Higgs bosons was also one of the outgoing Higgs bosons. This is equivalent to supposing that w was equal to z. The Feynman diagram would look like Fig. 5 (below).

Figure 5: By making an incoming Higgs boson (h) the same as an outgoing Higgs boson in the 4-Higgs interaction term, we can transform the tree-level 4-Higgs interaction term into the 1-loop level correction to Fig. 3, the diagram that represents the propagation of a Higgs boson in the Universe. Credit: Mine

In words, this “new” diagram states that as a Higgs boson (h) at position x travels to position y, it will emit and absorb a second Higgs boson somewhere in between x and y. Yes, the Higgs boson can and will emit and absorb a second Higgs boson.

If you look carefully, this new diagram has the same starting point and ending point as our first diagram in Fig. 3, the one that described a Higgs boson traveling from position x to position y. According to the well-tested rules of quantum mechanics, if two diagrams have the same starting and ending conditions, then both diagrams contribute to all the same processes, and both must be included in any calculation that has the same starting and ending points. In terms of Feynman diagrams, if we want to talk about a Higgs boson traveling from point x to point y, then we need to look no further than Fig. 6.

 

Figure 6: The tree-level (L) and 1-loop level (R) contributions to a Higgs boson (h) traveling from point x to point y. Credit: Mine

What Does This All Mean?

Now that I am done building things up, let me quickly get to the point. The second diagram can be considered a “correction” to the first diagram. The first diagram is present because the Higgs boson is allowed to have mass (mH). In a very real sense, the second diagram is a correction to the Higgs boson’s mass. In a single equation, the two diagrams in Fig. 6 imply

Equation 1: The theoretical prediction for the SM Higgs boson's observed mass, which includes the "tree-level" contribution ("free parameter"), and 1-loop level contribution ("cutoff"). Credit: Mine

In Eq. (1), the term on the far left is the Higgs boson’s mass that has been experimentally measured, i.e., 125 GeV. Hence the label, “what we measure.” The term just right of that (the “free parameter”) is the mass of the Higgs boson associated with the first term in the SM Lagrangian (Figs. 2 and 3). When physicists talk about the Standard Model not predicting the mass of the Higgs boson, it is this term (the free parameter) that we talk about. The SM makes no mention as to what it should be. We have to get down and dirty and actually conduct an experiment to measure it. The term on the far right can be ignored. The term “Λ” (the “cutoff scale“), on the other hand, terrifies and mystifies particle physicists.
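In symbols, Eq. (1) has the schematic one-loop structure below. This is a sketch of the structure only, with the exact numerical coefficient absorbed into a generic constant c, not the full one-loop computation:

```latex
\underbrace{m_{H,\mathrm{obs}}^2}_{\text{what we measure}}
\;=\;
\underbrace{m_{H,0}^2}_{\text{free parameter}}
\;+\;
\underbrace{c\,\frac{\Lambda^2}{16\pi^2}}_{\text{cutoff term}}
\;+\;\text{(terms we can ignore)}
```

The key feature is that the cutoff term grows like Λ², which is what drives the discussion of large Λ below.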

Λ is called the “cutoff scale” of the SM. Physically, it represents the energy at which the SM stops working. I mean it: we stop calculating things when we get to energies equal to Λ. Experimentally, Λ is at least a few hundred times the mass of the proton. If Λ is very LARGE, like several times larger than the LHC’s energy range, then the observed Higgs mass gets an equally LARGE bump. For example, if the SM were 100% correct for all energies, then Λ would be infinity. If this were true, then

(the Higgs boson’s mass) = (something not infinity) + (something infinity) ,

which comes out inevitably to be infinity. In other words, if the Standard Model of Physics were 100% correct, then the Higgs boson’s mass is predicted to be infinite. The Higgs boson’s mass is obviously not infinite, and therefore the Standard Model is not 100% correct. Therefore, the existence of the Higgs boson is proof that there must be new physics somewhere. “Where, and at what energy?” is a whole different question and rightfully deserves its own post.

 

Happy Colliding

– Richard (@bravelittlemuon)


What Comes Next?

Tuesday, July 3rd, 2012

Suppose for a moment the LHC experiments announce the discovery of a new object Wednesday. What comes next?

Figure 1: The list of all known elementary particles in Standard Model. The existence of the Higgs boson has yet to be confirmed. Credit: AAAS

Hi All,

In fewer than 20 hours, on Wednesday, July 4th, now dubbed Higgsdependence Day, something very important will happen. In a physics laboratory just outside of Geneva, Switzerland, in a pretty spacious auditorium, the spokespeople for two rival experiments will unveil their independent searches for a microscopic object predicted to exist over 40 years ago. Not impressed?

Well, I will put it another way. In fewer than 20 hours, the world will learn just how a near hundred-billion dollar industry, the same industry that invented both the World Wide Web and new cancer treatments, will spend the next 10 to 20 years after finally learning if the Higgs boson really is responsible for the origin of mass in the visible Universe!

Figure 2: Diagram depicting the process known as WW Scattering, where two quarks from two protons each radiate a W boson that then elastically interact with one another. Credit: Me.

The who, what, where, why, and whom regarding the Higgs boson has been covered quite extensively, so I will not dwell on it. What I am talking about today is the BIG question on every physicist’s mind. At this point, we physicists have stopped asking whether or not the LHC experiments will announce the definitive discovery of a new particle. We are now more focused on answering,

What do we need to do after Wednesday?

Quite frankly, there just has to be at least one new particle lurking in the data. It does not have to be THE Higgs boson as predicted by the Standard Model by any means. So long as this object fulfills the role of the Higgs boson, physics works. How am I so sure of this? Well, taking the Higgs Boson out of the Standard Model causes a rather ludicrous prediction:

The probability of WW scattering (Fig. 2) at the LHC becomes infinite.

That result, my dear friends, I promise you is total rubbish. This is the famous “Unitarity Problem” and is heroically solved by assuming that the Higgs boson is a real particle and has a mass less than 1400 times the mass of the proton.

Figure 3: The combined limits on the expected number of SM Higgs bosons decaying into bottom quark-anti-bottom quark pairs from the Tevatron experiments (CDF+DZero), July 2012, using almost 10 fb-1 of data. Data indicates an excess of events compared to the no-Higgs hypothesis, and is thus consistent with the existence of a Higgs-like object. Credit: FNAL.

Furthermore, if the LHC experiments confirm a Higgsless Standard Model, then we have to explain why, as of Monday, the Tevatron has seen an excess in the number of bottom-anti-bottom quark pairs (Fig. 3) and two-photon events, but a deficit in the number of expected W+W- pairs.

Announcing the discovery of a Higgs-like object on Wednesday will literally dictate the (non-neutrino) high energy physics programme until well after the end of the decade. It will take a lot of data, and hence time and effort, to accurately tabulate all the quantum numbers of a new particle.

For starters, we need to immediately confirm that what we have is a particle without any intrinsic angular momentum! In physics talk, this is called “spin.” One way to determine the spin of a particle capable of decaying into a bottom-anti-bottom quark pair is to look at the angle between the two quarks as the object decays. The distribution of this angle has a unique shape depending on whether the new particle has no intrinsic angular momentum (spin-0), a single unit of angular momentum (spin-1), two units of angular momentum (spin-2), and so on. They should look something like the three plots below (L is spin-0; C is spin-1; R is spin-2). For more information, see the original QD post. The point I am trying to make is that it is very straightforward to confirm the “spin” of any new Higgs-like particle.

Figure 4 (a): The angular distribution of a spin-0 object decaying to a bottom and anti-bottom quark pair. Credit: Me.
Figure 4 (b): The angular distribution of a spin-1 object decaying to a bottom and anti-bottom quark pair. Credit: Me.
Figure 4 (c): The angular distribution of a spin-2 object decaying to a bottom and anti-bottom quark pair. Credit: Me.
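To make the shape comparison concrete, here is a toy numerical sketch. The spin-0 case really is flat in cos θ in the decay rest frame; the spin-1 and spin-2 curves below are purely hypothetical stand-ins, chosen only to illustrate how properly normalized distributions with different shapes are told apart, and are not the actual matrix elements:

```python
# Toy angular distributions d(Gamma)/d(cos theta), normalized on [-1, 1].
# spin0 is genuinely flat; spin1/spin2 are ILLUSTRATIVE stand-in shapes.
def spin0(c):
    return 0.5

def spin1(c):
    return (3.0 / 8.0) * (1.0 + c**2)       # hypothetical shape

def spin2(c):
    return (15.0 / 16.0) * (1.0 - c**2)**2  # hypothetical shape

def integral(f, n=100000):
    """Midpoint Riemann sum of f over cos(theta) in [-1, 1]."""
    h = 2.0 / n
    return sum(f(-1.0 + (i + 0.5) * h) for i in range(n)) * h

for name, f in [("spin-0", spin0), ("spin-1", spin1), ("spin-2", spin2)]:
    # Each integrates to 1, but the shapes differ sharply at
    # cos(theta) = 0 versus cos(theta) = +/-1.
    print(name, round(integral(f), 4), f(0.0), f(1.0))
```

In a real analysis one fits the measured distribution against the predicted templates for each spin hypothesis; the toy above just shows that equal-area curves can still be unmistakably different point by point.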

Another quintessential piece of information: determining into what particles this new particle can decay. If this mystery object is our beloved Standard Model Higgs boson, then the probability it will decay into quarks and leptons is set by how heavy the individual quarks and leptons are (the rate grows as the mass squared). Therefore, the rate at which this potential Higgs-like object decays into lighter particles must be carefully measured to confirm that it decays into bottom quarks (mass = 4 GeV) more often than it does into muons (mass = 0.1 GeV). New theories, like Supersymmetry (SUSY), can alter such rates slightly. Consequently, precisely measuring the decay rates of any Higgs-like object is automatically a test of SUSY.
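As a rough numerical illustration of why mass matters so much here: at tree level, the SM Higgs partial width to a fermion pair scales like (color factor) × (fermion mass)², ignoring phase-space and QCD corrections. Using the round-number masses quoted in the text:

```python
# Tree-level scaling of the SM Higgs partial width to a fermion pair:
# Gamma ~ (number of colors) * (fermion mass)^2, ignoring phase space
# and QCD corrections. Masses are the rough values from the text.
def relative_width(mass_gev, n_colors):
    return n_colors * mass_gev**2

b_quarks = relative_width(4.0, 3)   # bottom quark: ~4 GeV, 3 colors
muons    = relative_width(0.1, 1)   # muon: ~0.1 GeV, colorless
print(b_quarks / muons)             # ~4800: H -> b bbar dwarfs H -> mu mu
```

The mass-squared scaling is why the b-quark channel dominates over the muon channel by orders of magnitude, even before detector effects enter.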

Figure 5: In SUSY, the correction to Higgs mass by the top quark (L) is inherently cancelled by the contribution from the top quark's supersymmetric partner, the stop (R). Credit: Chuan-Ren Chen.

Science is all about explaining how nature works by carefully and methodically testing hypotheses. The Higgs boson may be the final piece of the Standard Model puzzle but our work hardly stops on Wednesday. If we have truly found the Higgs boson at roughly 125 times the mass of the proton, then there is a very troublesome issue:

The Higgs boson mass is too heavy!

Recovering a 125 GeV Higgs boson requires a few contrived cancellations that are pretty unsatisfactory. It is significantly more rigorous if terms cancel based on physical principles. Remember, since this is real life and not a chalkboard, there are hard, concrete principles behind why nature works the way it does. To suggest otherwise is silly. Oddly enough, such cancellations do occur inherently in Supersymmetry (Fig. 5). Understanding the precise value of the Higgs mass is another item on our ever-growing Higgs Boson Properties checklist.

At the end of the day, discovering a Higgs boson means, experimentally and theoretically, pushing the bounds of our knowledge of the Universe that much further. Yes, it is likely that after tomorrow many physics textbooks will be outdated. This is a very good thing. However, confirming ALL the spin, decay, mass, mixing, etc. properties of this new particle, if there is indeed a new particle, will require many years, and you can count on hearing all about it from us!

 

Happy Colliding

– richard (@bravelittlemuon)

 


What has no thumbs and travels at the speed of light, to within experimental uncertainty?

Hi All,

I will just say this right away: the Borexino, ICARUS, LVD, OPERA, and MINOS Experiments have all independently found, within experimental uncertainty, that neutrinos travel at the speed of light. To recap, last September the OPERA Experiment at the Gran Sasso Laboratory in Gran Sasso, Italy, reported what appeared to indicate that neutrinos travel faster than the speed of light. (More information is available from veteran QDers Aidan and Seth.)

The reported quantity is the time it took neutrinos to travel from CERN to Gran Sasso minus the time it would have taken light. I should also mention that the statistical (stat.) and systematic (sys.) uncertainties are incredibly important.

δt = (Time it took neutrinos to reach GS from CERN) – (Distance between GS and CERN)/(Speed of Light)

Figure 1: Results from four Gran Sasso Laboratory experiments indicating neutrinos travel at the speed of light, to within experimental uncertainty. Reported quantity is the time it took neutrinos to travel from CERN to Gran Sasso minus the time it would have taken light. Credit: BERTOLUCCI, Sergio

Figure 2: Results from the MINOS Experiment indicating neutrinos travel at the speed of light, to within experimental uncertainty. Reported quantity is the time it took neutrinos to travel from Fermilab to MINOS minus the time it would have taken light. Credit: ADAMSON, Phil

To clarify the situation, this result was not a typical “Hey! We discovered new physics!” result. Had OPERA correctly observed a massive particle traveling faster than light, then we would truly be in the midst of a physics revolution. That is not hyperbole either. As a result, everyone, theorists and experimentalists alike, put on their scientist hats and scrutinized the result to no end. Much drama ensued, and at long last the problem has been resolved. The issue turned out to be two very subtle effects that worked against each other. The first was that a 5.2 mi (8.3 km) cable was accidentally stretched back in 2008 and systematically introduced a 74 nanosecond delay in the system that recorded the time the neutrinos arrived at the detector. The second issue involved the highly precise master clock system for the entire experiment; it was slow by about 15 nanoseconds. 74 − 15 = 59 nanoseconds was exactly how much sooner the neutrinos were arriving than expected.
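The timing budget is worth seeing in numbers. The sketch below uses the two offsets quoted above and the publicly quoted CERN-to-Gran Sasso baseline of roughly 730 km (an approximate value, used here only to show the scale of the effect):

```python
# The OPERA timing budget described above.
cable_delay_ns  = 74    # delay introduced by the stretched fiber
clock_offset_ns = 15    # master clock running slow
early_ns = cable_delay_ns - clock_offset_ns
print(early_ns)         # 59 ns apparent early arrival

# Scale of the effect over the ~730 km CERN -> Gran Sasso baseline:
c_m_per_s = 299_792_458.0
flight_time_s = 730e3 / c_m_per_s
print(flight_time_s * 1e3)              # ~2.44 ms light travel time
print(early_ns * 1e-9 / flight_time_s)  # ~2.4e-5 fractional shift
```

A 59 ns shift on a 2.4 ms flight is a part-in-40,000 effect, which is why two subtle hardware offsets were enough to mimic faster-than-light travel.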

 

Figure 3: Two previously unaccounted issues regarding the OPERA Experiment. Credit: DRACOS, Marcos

In conclusion, neutrinos may still travel faster than the speed of light. It is unlikely, but still possible. Officially as of today, though, we know that all measurements of the neutrinos’ speed are consistent with the speed of light.


Neutrino 2012: Day 3+4

Friday, June 8th, 2012

How many types of neutrinos are there? That was Day 3+4’s Big Question.

Hi All,

Your Day 3+4 breakdown is finally here. A lack of internet access is always an issue. At any rate, the talks were a great mix of experimental results and theoretical discussions that all pointed to one question: How many types of neutrinos are there in the Universe? According to the Standard Model, which itself is founded on very rich experimental results, there are 3 flavors: electron-neutrino, muon-neutrino, and tau-neutrino. However, it is very possible there are more neutrinos that do not carry any charges under the Standard Model. Such neutrinos are called sterile neutrinos or singlet neutrinos.

Without further ado: MiniBooNE, Neutrino Anomalies, KamLAND-Zen, EXO-200, and Day 4.

MiniBooNE

A quick breakdown of what the Miniature Booster Neutrino Experiment, or MiniBooNE for short, is all about can be given in the following two slides. Fermilab accelerates protons into a fixed target to produce pions and kaons, which are focused together by a system of magnetic fields called “the horn.” The pions then fly for a period of time, decaying into muons, electrons, and neutrinos; the muons further decay into more neutrinos and electrons. The electrons are absorbed in the earth while the neutrinos pass through unimpeded. Finally, about half a kilometer downstream, the neutrino beam flies through the MiniBooNE detector and physics is born.

One of MiniBooNE’s chief scientific goals is to confirm or refute the result of a previous experiment, LSND, which observed an excess of neutrinos. The excess was best described by introducing a single sterile neutrino, and we still do not know whether the result was a statistical fluke or something more serious.

Figure 1: Summary of the MiniBooNE Experiment at Fermilab with motivation for its science programme. Credit: POLLY, Chris

Figure 2: Breakdown of the MiniBooNE Experiment at Fermilab. Credit: POLLY, Chris

With its full dataset, the experiment announced for the first time that it has observed an excess of anti-muon-neutrinos converting into anti-electron-neutrinos. This excess is almost entirely in the lower energy range, i.e., smaller energy transfer between neutrino and detector, and the experiment is working vigorously to determine whether it is caused by a previously unknown background.

Figure 3: Results showing an excess in the number of low energy anti-electron neutrinos observed. Credit: POLLY, Chris

When combining the anti-neutrino and neutrino excesses, the overall excess in the number of events grows in significance. The two results are consistent with each other, so no measurable difference between matter and anti-matter has been observed; if one is there, it is beyond the detector's capabilities. There are a few ideas to explain the more-than-expected number of neutrinos, and they are individually being studied as we speak. A VERY preliminary result (so preliminary I am choosing not to put up the plot here) is that the data are somewhat well described by assuming the existence of two sterile neutrinos. This is actually preferred over a single sterile neutrino, so theorists are a bit happy at the moment. 🙂

Figure 4: Results showing an excess in the number of low energy electron-neutrinos and anti-electron neutrinos observed. Credit: POLLY, Chris

Neutrino Anomalies

The prospect of adding a new neutrino to the Standard Model is a tricky issue, let alone adding two. Theoretically it is not terribly difficult, but such a step would have very obvious and quickly testable predictions. The first of several theory talks (I am skipping my synopsis of all the other theory talks) gave a summary of known anomalies from neutrino experiments. LSND and MiniBooNE have already been discussed and are the largest. A rather recently discovered discrepancy concerns the number of neutrinos predicted to be produced by nuclear reactors. The calculation is very well known but had not been updated in years; after redoing it, the predicted rate was found to be larger than the observed rate. Strictly speaking, all results ARE consistent with the Standard Model, and we cannot make any definitive statements based solely on what is listed here.

Figure 5: Summary of known anomalies from neutrino experiments. Credit: LASSERRE, Thierry

KamLAND-Zen

On to KamLAND-Zen, which stands for Kamioka Liquid Scintillator Anti-neutrino Detector – Zero Neutrino Double β-Decay (pronounced: beta-decay). This has got to be the best example of a collaboration just giving up on writing its experiment name as a logical acronym. It is still a wicked-cool name. Nuclear β-decay is one of the most well-studied examples of radioactivity, in which a nucleus disintegrates into a lighter nucleus, plus an electron (or a positron) and an anti-electron-neutrino (or a regular electron-neutrino). Some radioactive elements can also undergo the super-rare double β-decay, where two β-decays occur in the same nucleus simultaneously. If the neutrino is its own anti-particle (a so-called Majorana neutrino), then the even rarer neutrino-less double β-decay should be possible: a nucleus disintegrates into a lighter nucleus and only two electrons (or positrons!), with no neutrinos at all. KamLAND-Zen is looking for such a decay in xenon gas but has had no such luck. It has, however, been able to measure the rate of the still-very-rare two-neutrino double β-decay in xenon, an impressive feat in and of itself. The experiment was also able to disprove a previous measurement of this rate from a different experiment called DAMA. Here are the results.

Figure 6: KamLAND-Zen's measurement of the half-life of double β-decay in xenon gas. Credit: INOUE, Kunio

Figure 7: Summary of KamLAND-Zen's experimental results. Credit: INOUE, Kunio

EXO-200

The Enriched Xenon Observatory Experiment, or EXO-200 for short (the 200 is explained on its wiki page), is KamLAND-Zen's biggest competitor in the race to find neutrino-less double β-decay. The first slide shows how much more data they have since their results were last announced. The second slide shows their background and the fact that they have observed almost 22,000 two-neutrino double β-decay events! I cannot describe how cool that is other than to say just that: it is really cool that they have so many events. Their results are in good agreement with KamLAND-Zen's. So sadly, no neutrino-less events.

Figure 8: Details of the EXO-200 Experiment, its analysis, and differences from its previous analysis. Credit: FARINE, Jacques

Figure 9: Results from EXO-200 Experiment. Credit: FARINE, Jacques

Figure 10: Results from EXO-200 Experiment with comparison to other experiments. Credit: FARINE, Jacques

Day 4

Day 4 was a much needed rest for conference goers. Like most other attendees, I spent the day exploring Kyoto and then working with my adviser on a paper we are hoping to finish soon. In the evening, however, we were treated to a dance performance by real-life geisha dancers. I was unable to get too many photos but below is a good one. The two dancers are both geiko-sans (fully-fledged geisha dancers) but there were also three maikos (apprentice geisha dancers).

Figure 11: Two geiko-san dancers performing at the conference banquet. Credit: Mine

After the short entertainment, the main event began: a public lecture on the importance of neutrinos and their influence on how the Universe evolved, given by Prof. Hitoshi Murayama, Director of the University of Tokyo’s Institute for the Physics and Mathematics of the Universe. Sadly, I was unable to find his slides online, which is especially unfortunate considering his talk was entitled, “Neutrinos May Be Our Mother.” I was able to snap this photo of Prof. Murayama discussing his recent meeting with the Prime Minister of Japan and philanthropist Fred Kavli, of the famed Kavli Foundation. Mr. Kavli’s generous contributions to physics and astronomy have led to the construction of dozens of institutes around the world and have allowed us to concentrate on the most important mysteries of this universe we call home.

Figure 12: Prof. Hitoshi Murayama (Far Left), sharing a picture of his meeting with Mr. Fred Kavli (Second from Right), and Prime Minister Yoshihiko Noda (Far Right). Credit: Mine.

Share

Neutrino 2012: Day 2

Tuesday, June 5th, 2012

When it comes to neutrino experiments, they were all there on Day 2: T2K, MINOS, OPERA, ICARUS, NOνA, and LBNE!

Hi All,

Here is a rundown of what happened Tuesday. I will try to post Day 3 things this afternoon, which should be Wednesday morning in the US. An update on the LBNE is at the bottom of the post.

Happy Colliding

– richard (@bravelittlemuon)

T2K

Figure 1: Update on the T2K experiment after the March 11, 2011 earthquake that struck Japan. Credit: NAKAYA, Tsuyoshi.

The Tokai to Kamioka Experiment, or T2K for short, is one impressive behemoth of an experiment. Much like the MINOS experiment at Fermilab, protons are shot into a target to make pions; the pions decay into neutrinos, and the neutrinos travel 183 mi (295 km) through the Earth to the (Super-, Hyper-) Kamiokande detector in Kamioka, Japan.

When the March 2011 earthquake struck Japan, the proton accelerator at the J-PARC physics lab was heavily damaged, and power throughout the country was effectively shut off. Thanks to the immense leadership of J-PARC’s director, Shoji Nagamiya, the accelerator was back online December 9, 2011, and by December 24, 2011, neutrinos were being observed in Kamioka. It is remarkable that despite all this, the experiment still marched on and announced a herculean result: it has observed 10 events in which a muon-neutrino converted into an electron-neutrino. The predictions were 9.07±0.93 events assuming sin²2θ₁₃ = 0.1, and 2.73±0.37 events assuming sin²2θ₁₃ = 0. Consequently, the experiment was able to measure θ₁₃ itself, finding sin²2θ₁₃ = 0.104 (+0.060/−0.045).
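As a rough, unofficial sketch of why 10 observed events against an expectation of 2.73 is exciting, one can ask how often a plain Poisson process with mean 2.73 would fluctuate up to 10 or more events. (The real T2K analysis is far more sophisticated and includes systematic uncertainties; this is statistics only.)

```python
import math

def poisson_p_at_least(n_obs, mu):
    """P(N >= n_obs) for a Poisson-distributed count with mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))

# Chance of seeing 10 or more events if the true expectation were
# the theta_13 = 0 value of 2.73 events.
p = poisson_p_at_least(10, 2.73)
print(f"{p:.1e}")  # a fraction of a percent
```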

Figure 2: Schematic of T2K (Tokai To Kamioka) Experiment. Image: http://www2.warwick.ac.uk/newsandevents/news/t2k

MINOS

The MINOS Experiment at Fermilab is most simply described as the US version of T2K. It is unfair, and a disservice to both MINOS and T2K, to make that comparison given each experiment's unique features, but I have a lot to write. In 2010, MINOS caused a bit of a stir when it measured the mass difference between two of the three anti-neutrinos. The measurement itself was not at all controversial; the issue was that the result differed from the well-measured mass difference for regular neutrinos. Here is the Fermilab presser that can tell you all about it. On Day 2, MINOS announced that the discrepancy between neutrinos and anti-neutrinos has completely disappeared and that the previous disagreement is believed to have been a statistical fluctuation. It appears that Fermilab released a new press release this morning explaining things in more detail. Below are the main plots. Oh, and MINOS data also slightly favor the inverted hierarchy, for anyone interested in that. Fun fact: in its seven years of running, MINOS has used over 1.5 sextillion protons to produce all of its neutrinos.

Figure 3: Preliminary results from the MINOS detector showing best fit value for neutrino mass splitting and mixing angle. Credit: NICHOL, Ryan

 

Figure 4: Preliminary results from the MINOS detector showing best fit value for anti-neutrino mass splitting and mixing angle. Credit: NICHOL, Ryan

Figure 5: Preliminary results from the MINOS detector showing best fit value for neutrino and anti-neutrino mass splitting and mixing angle. Credit: NICHOL, Ryan

OPERA

The OPERA Experiment, or Oscillation Project with Emulsion-tRacking Apparatus, is a fine and mighty experiment capable of one of the most time-consuming tasks in neutrino physics, one that tests the patience of even sleeping mountains: observing the conversion of muon-neutrinos into tau-neutrinos. Like T2K and MINOS, OPERA gets its neutrinos from pions, which are produced when protons strike a fixed target. Specifically, the experiment uses CERN protons; in its first four years of running it has used about 14.2 × 10¹⁹ protons!

Figure 6: A breakdown, by year, of how many protons the OPERA experiment has used. Credit: NAKAMURA, Mitsuhiro

OPERA’s defining characteristic is how well it is able to extract a signal from everything else. Below is an example of a real event in which a neutrino has collided with a nucleus, producing a charged lepton and nuclear fragments.

Figure 7: An example of a real neutrino event being extracted from the data. Credit: NAKAMURA, Mitsuhiro

The big news from OPERA on Day 2 was the second observation of a muon-neutrino converting into a tau-neutrino! Two events in over four years; I told you this thing requires patience. Here is how the event works.

Figure 8: The OPERA Experiment's second candidate event of a muon-neutrino converting into a tau-neutrino. Credit: NAKAMURA, Mitsuhiro

Here is an explanation of the event.

Figure 9: A breakdown of OPERA's second tau-neutrino candidate. Credit: NAKAMURA, Mitsuhiro

Finally, here is a summary of the status of OPERA’s search for tau-neutrinos. It is worth mentioning that the experiment also announced it has observed 19 instances in which a muon-neutrino converted into an electron-neutrino!

Figure 10: A summary of the current status of the OPERA Experiment's search for appearances of tau-neutrinos. Credit: NAKAMURA, Mitsuhiro

A Few Words on ICARUS and NOνA

Due to lack of time, I will simply say that one can expect big things from ICARUS and NOνA once they both have results. ICARUS has already started running, and the gigantic, LHC-detector-sized NOνA will start running next year when Fermilab flips its proton beam back on. NOνA will be capable of determining whether neutrinos have a normal mass hierarchy or an inverted one.

 

LBNE

Interesting things happen at conferences, like an impromptu talk added the morning of the second day of events. Long Baseline Neutrino Experiment co-spokesperson Robert Svoboda gave a surprise update on the LBNE, the first since its budget was gravely slashed. Much is still being kept internal for another few weeks until the final proposal is submitted, so I will limit what I say. In summary, three options are being considered for phase 1 construction. Beyond that, it is up to the Funding Lords.

Figure 11: Update of the Long Baseline Neutrino Experiment. Credit: SVOBODA, Robert

Figure 12: Update of the Long Baseline Neutrino Experiment. Credit: SVOBODA, Robert

 

Share

Summer is a productive time for us and tends to involve lots of traveling.

 

Fig. 1: My 2010 PDG booklet and my Japan Rail pass. I am not sure which is more important.

Hi All,

As fellow QDer Aidan posted this morning, it is conference season again! Lots and lots of conferences for all the different sub-sub-fields in physics. Two big ones on my plate are Neutrino 2012, which is about ALL things that begin with the letters n-e-u-t-r-i-n-o and end in the letter -s, and ICHEP 2012, which is the mother of all high-energy physics conferences. (Much more on ICHEP in a few weeks, seeing that I have been invited to be a panelist on the “Social Media in Science Communication” session. Trust me, it will be good.)

Neutrinos are all the rage these days: from #FTLneutrinos to θ13, we are determined to know precisely how neutrinos work. Fortunately for us, there is a huge international conference, imaginatively called “Neutrino,” next week in the gorgeous, ancient city of Kyoto, Japan, and you can definitely count on there being a Quantum Diaries presence. QDer Zeynep Isvan will be around, and, at the suggestion of my chief editor, Daisy, I will be live-blogging the plenary sessions when I can. The programme is already online, so feel free to check out the topics.

After the conference, however, is when things kick into high gear for me. A few months ago I won an NSF summer fellowship to research dark matter in Japan. It is now summer, so for the next three months I will be a visitor at the University of Tokyo’s prestigious Institute for the Physics and Mathematics of the Universe, or IPMU for short. I still have plots to make for a meeting today, and my first flight is (literally) 24 hours from now. At least I have my trusty messenger bag already packed with two of the more important things: a Japan Rail pass and my 2010 PDG booklet!

See you in Kyoto!

 

Happy Colliding

– richard (@bravelittlemuon)

PS While adding links and sources to the post, I found my IPMU host on Twitter.

PPS More than 3.6 fb⁻¹ worth of data has already been collected by the collider experiments.

 


Fig. 2: Conference Poster for Neutrino 2012 in Kyoto, Japan (http://neu2012.kek.jp/)

Share

Quarks: Yeah, They Exist

Monday, April 16th, 2012

Physics Fact: 48 years ago, in 1964, quarks were independently proposed by Murray Gell-Mann & George Zweig [1,2]. Gell-Mann called them “quarks” and Zweig called them “aces.”

Hi All,

A question I often get, like really often, especially from other physicists, is “How do we know quarks exist?” In particular,

If (light) quarks cannot be directly observed, due to the phenomenon known as color confinement (or infrared slavery as I like calling it), then how do we know quarks exist?

This is a really good question, and it has a number of different answers. To a physicist, being able to directly observe an object means being able to isolate it and subsequently measure its properties, for example its electric charge. Due to effects associated with the strong nuclear force, quarks lighter than the top quark will bind into composite objects (hadrons) in about 3×10⁻²⁵ seconds. This is pretty fast, much faster than any piece of modern electronics. Consequently, light quarks cannot be directly observed with present technology. However, this inability to isolate quarks does not imply we cannot directly measure their properties (like electric charge!).
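To get a feel for just how hopeless isolating a light quark is, multiply that timescale by the speed of light (a back-of-the-envelope sketch):

```python
# Distance a quark moving at (nearly) the speed of light covers
# before hadronizing.
c_m_per_s = 3.0e8      # speed of light, m/s
t_hadronize_s = 3e-25  # hadronization timescale quoted above, s

distance_m = c_m_per_s * t_hadronize_s
print(distance_m)  # roughly 9e-17 m, i.e. a tenth of a femtometer,
                   # smaller than a proton (~1 fm across)
```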

This brings me to today’s post: How physicists measure quarks’ electric charges!

R

Fig. 1: An electron (e-) and positron (e+) annihilate to produce a virtual photon (γ*) that subsequently decays into a muon (μ-) and anti-muon (μ+). Click for full size.

A very typical calculation done by any student in a course on particle physics (undergraduate or graduate) is to compute the likelihood (called the cross section) of an electron and positron annihilating into a virtual photon, which then decays into a muon and anti-muon (see the diagram to the right). Since electrons, muons, and their anti-matter partners all have so little mass, it is pretty reasonable to just pretend they are all massless; the calculation becomes considerably easier, trust me on this. When all is said and done, we find that the cross section is equal to a bunch of constants (which I am just going to collectively call σ₀), times the square of the electron’s electric charge (Qe²), times the square of the muon’s electric charge (Qμ²):

Likelihood of e⁺e⁻ → μ⁺μ⁻ = σ₀ × Qe² × Qμ²

However, the electric charges of electrons and muons are both 1 (in elementary units), so the likelihood reduces to just σ₀. Convenient, right?

Now, if we replace the muons with quarks, then we find that the cross section is this:

Likelihood of e⁺e⁻ → qq̄ = 3 × σ₀ × Qq²

That’s right: the probability of producing quarks with electrons & positrons is simply three times that for producing muons, scaled by the square of the quarks’ electric charge. (The factor of 3 counts the number of colors each quark can carry.) This amazing result allows us to then define the quantity “R“, which is just the ratio of the two likelihoods:

R = (Likelihood of e⁺e⁻ → qq̄) / (Likelihood of e⁺e⁻ → μ⁺μ⁻) = 3 × Qq²

In other words, by measuring the ratio of how likely it is to produce a particular set of quarks to how likely it is to produce muons, we can directly measure quarks’ electric charge! (BOOYA!)

Measuring R

As far as measuring R goes, it is pretty straightforward. However, there has to be some caveat or complication since this is physics we are talking about. Sure enough there are a few and I am just going to ignore them all, all but one.

In order to determine the probability of producing a particular pair of quarks using electron-positron collisions, experimentalists have to make sure the total energy of the collision is large enough. Simply put, no particle can ever be generated if there is not enough energy to make it; this is an example of the Conservation of Energy. The problem is this: if there is enough energy to make a particular pair of quarks, then there is sufficient energy to produce any quark pair lighter than the original pair. In addition, it is very difficult to distinguish different quark-anti-quark pairs (see the top of this post for why that is).

The solution to this issue is to simply measure the likelihood of producing ALL types of quarks for a particular energy. To do so, all we need is to add up all the individual cross sections for each set of quarks. The total cross section simplifies to this:

Likelihood of e⁺e⁻ → ALL qq̄ = 3 × σ₀ × Qe² × Sum of Qq²

That is to say, the probability of producing ALL quark-anti-quark pairs in electron-positron collisions is equal to a bunch of constants (σ₀), times the square of the electron’s electric charge (Qe²), times the sum of the squares of each quark’s electric charge (Qq²). Consequently, R becomes

R = (Likelihood of e⁺e⁻ → ALL qq̄) / (Likelihood of e⁺e⁻ → μ⁺μ⁻) = 3 × Sum of all Qq²

R may no longer be a direct measurement of a single quark’s electric charge, but it is still a direct measurement of the electric charges of all the quarks combined. Without further ado, here are the predictions:

Table 1: R-values for energies below 200 MeV (0.2 GeV) and above 9 GeV. Click for full size.
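The predicted values follow directly from the formula above; here is a quick sketch that sums the squared charges for each set of kinematically accessible quarks (the factor of 3 is the number of quark colors):

```python
from fractions import Fraction

# Quark electric charges in units of e (up-type +2/3, down-type -1/3).
Q = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
     "c": Fraction(2, 3), "b": Fraction(-1, 3)}

def R(flavors):
    """Leading-order R = 3 x (sum of squared charges of accessible quarks)."""
    return 3 * sum(Q[q] ** 2 for q in flavors)

print(R("uds"))    # 2     (only up, down, strange accessible)
print(R("udsc"))   # 10/3  (above the charm threshold)
print(R("udscb"))  # 11/3  (above the bottom threshold)
```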

 

Here are the data. This plot is taken from my favorite particle physics book, Quarks & Leptons:

Fig. 2: The R value of light quarks versus energy of the quark-anti-quark pair. Click for full size. Credit: F. Halzen and A. Martin, "Quarks and Leptons: An Introductory Course in Modern Particle Physics", Wiley 1984.

That Disagreement Near 5-8 GeV is Not Really a Disagreement

Time for a little extra credit. If you look closely at figure 2, you may notice that between 5 GeV and 8 GeV all the data points are uniformly above the R = 10/3 line. This feature is actually the result of two things: the first is that quarks really do have masses, which cannot be ignored at these energies; the second is that the strong nuclear force surprisingly contributes to this process. I will not say much about the first point other than to mention that, in our quick calculation above, we pretended to ignore all masses because electrons and muons are so light. The mass (in natural units) of the charm quark is about 1.3 GeV, and that is hardly small compared to 5 GeV.

Taking a closer look at where the virtual photon produces a quark and anti-quark pair, we realize that the quark and anti-quark are pretty close together. They are actually close enough to emit and absorb gluons, the particle that mediates the strong nuclear force. This has a very important consequence. Previously, the quark and anti-quark pair could only be produced in such a way that the total momentum of the system was conserved. However, once we consider the fact that the quarks can exchange gluons, and hence exchange momenta, the pair can be produced in an infinite number of configurations that would individually violate the conservation of total momentum, so long as at least one gluon is exchanged between the two to restore it. This amplification in likelihood is sensitive to energy, and it causes about a 20% increase in R between 5 and 8 GeV. This 20% increase is precisely the difference between the data points and the R = 10/3 line.
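For the curious, the leading gluon-exchange piece can be sketched numerically: at lowest order in QCD it multiplies the quark cross section by (1 + αs/π). The αs value below is an illustrative assumption for this energy range, not a fitted number, and this one term captures only part of the full shift; the quark-mass effects discussed above supply the rest.

```python
import math

# Leading-order QCD correction to R: one-gluon exchange multiplies the
# quark-pair cross section by (1 + alpha_s / pi).
alpha_s = 0.2                 # illustrative strong coupling (assumption)
R_parton = 10 / 3             # u, d, s, c accessible in the 5-8 GeV range
R_with_gluons = R_parton * (1 + alpha_s / math.pi)
print(round(R_with_gluons, 2))  # a few percent above 10/3
```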

 

Fig. 3: A Feynman diagram representing the annihilation of an electron (e-) and positron (e+) into a virtual photon (γ*) that decays into a quark (q) and anti-quark (q) pair. The photon-quark-quark vertex is enlarged to highlight the ability for nearby quarks to exchange gluons. Click for full size.

 

 

 

Happy Colliding.

– richard (@bravelittlemuon)

P.S. #PhysicsFact should totally be a trend today. Go! Make it trend!

Share