Archive for the ‘Latest Posts’ Category

René Descartes (1596 – 1650) was an outstanding physicist, mathematician and philosopher. In physics, he laid the groundwork for Isaac Newton’s (1642 – 1727) laws of motion through pioneering work on the concept of inertia. In mathematics, he developed the foundations of analytic geometry, as reflected in the term Cartesian[1] coordinates. However, it is in his role as a philosopher that he is best remembered. Rather ironic, since his breakthrough method was a failure.

Descartes’ goal in philosophy was to develop a sound basis for all knowledge, founded on ideas so obvious they could not be doubted. His touchstone was that anything he perceived clearly and distinctly as being true was true. The archetypal example of this was the famous “I think, therefore I am.” Unfortunately, little else is as obvious as that famous claim, and even it can be, and has been, doubted.

Euclidean geometry provides the illusory ideal to which Descartes and other philosophers have aspired: start with a few self-evident truths and derive a superstructure built on them. Unfortunately, even Euclidean geometry fails that test. The infamous parallel postulate has been regarded as suspect since ancient times, and other Euclidean postulates have been questioned as well; extending a straight line indefinitely depends on the space being continuous, unbounded and infinite.

So how are we to take Euclid’s postulates and axioms? Perhaps we should follow the idea of Sir Karl Popper (1902 – 1994) and consider them to be bold hypotheses. This casts a different light on Euclid and his work; perhaps he was the first outstanding scientist. If we take his basic assumptions as empirical[2] rather than sure and certain knowledge, all we lose is the illusion of certainty. Euclidean geometry then becomes an empirically testable model for the geometry of spacetime. The theorems, derived from the basic assumptions, are predictions that can be checked against observations, satisfying Popper’s demarcation criterion for science. Do the angles in a triangle add up to two right angles or not? If not, then one of the assumptions is false, probably the parallel postulate.

Back to Descartes: he criticized Galileo Galilei (1564 – 1642) for having built without having considered the first causes of nature, for having merely sought reasons for particular effects, and thus for having built without a foundation. In the end, that lack of a foundation turned out to be less of a hindrance than Descartes’ faulty one. To a large extent, science’s lack of the kind of foundation Descartes wished to provide has not proved a significant obstacle to its advance.

Like Euclid, Sir Isaac Newton had his basic assumptions—the three laws of motion and the law of universal gravitation—but he did not believe they were self-evident; he believed he had inferred them by the process of scientific induction. Unfortunately, scientific induction was as flawed a foundation as the self-evident nature of the Euclidean postulates. Connecting the dots between a falling apple and the motion of the Moon was an act of creative genius, a bold hypothesis, not some algorithmic derivation from observation.

It is worth noting that, at the time, Newton’s explanation had a strong competitor in Descartes’ theory that planetary motion was due to vortices, large circulating bands of particles that kept the planets in place. Descartes’ theory had the advantage that it lacked the occult action at a distance that is fundamental to Newton’s law of universal gravitation. In spite of that, Descartes’ vortices are as forgotten today as his claim that the pineal gland is the seat of the soul; so much for what he perceived clearly and distinctly as being true.

Galileo’s approach of solving problems one at a time, rather than trying to solve all problems at once, has paid big dividends. It has allowed science to advance one step at a time, while Descartes’ approach has faded away as failed attempt followed failed attempt. We still do not have a grand theory of everything built on an unshakable foundation, and we probably never will. Rather, we have models of widespread utility. Even if they are built on a shaky foundation, surely that is enough.

Peter Higgs (b. 1929) follows in the tradition of Galileo. He has not, despite his Nobel Prize, succeeded where Descartes failed in producing a foundation for all knowledge; but through creativity, he has proposed a bold hypothesis whose implications have been empirically confirmed. Descartes would probably complain that he has merely sought reasons for a particular effect: mass. The ultimate question about life, the universe and everything remains unanswered, much to Descartes’ chagrin, but as scientists we are satisfied to solve one problem at a time and then move on to the next.

To receive a notice of future posts follow me on Twitter: @musquod.


[1] Cartesian, from Descartes’ Latinized name, Cartesius.

[2] As, in the final analysis, they are.


This article appeared in Fermilab Today on July 30, 2014.

Fermilab physicist Arden Warner revolutionizes oil spill cleanup with magnetizable oil invention. Photo: Hanae Armitage

Four years ago, Fermilab accelerator physicist Arden Warner watched national news of the BP oil spill and found himself frustrated with the cleanup response.

“My wife asked ‘Can you separate oil from water?’ and I said ‘Maybe I could magnetize it!’” Warner recalled. “But that was just something I said. Later that night while I was falling asleep, I thought, you know what, that’s not a bad idea.”

Sleep forgone, Warner began experimenting in his garage. With shavings from his shovel, a splash of engine oil and a refrigerator magnet, Warner witnessed the preliminary success of a concept that could revolutionize the process of oil spill damage control.

Warner has received patent approval on the cleanup method.

The concept is simple: Take iron particles or magnetite dust and add them to oil. It turns out that these particles mix well with oil and form a loose colloidal suspension that floats in water. Mixed with the filings, the suspension is susceptible to magnetic forces. At a barely discernible 2 to 6 microns in size, the particles tend to clump together, and it only takes a sparse dusting for them to bond with the oil. When a magnetic field is applied to the oil and filings, they congeal into a viscous liquid known as a magnetorheological fluid. The fluid’s viscosity allows a magnetic field to pool both filings and oil to a single location, making them easy to remove. (View a 30-second video of the reaction.)

“It doesn’t take long — you add the filings, you pull them out. The entire process is even more efficient with hydrophobic filings. As soon as they hit the oil, they sink in,” said Warner, who works in the Accelerator Division. Hydrophobic filings are those that don’t like to interact with water — think of hydrophobic as water-fearing. “You could essentially have a device that disperses filings and a magnetic conveyor system behind it that picks it up. You don’t need a lot of material.”

Warner tested more than 100 oils, including sweet crude and heavy crude. As it turns out, the crude oils’ natural viscosity makes them fairly easy to magnetize and clear away. Currently, booms, floating devices that corral oil spills, are at best capable of containing the spill; oil removal is an entirely different process. But the iron filings can work in conjunction with an electromagnetic boom to allow tighter constriction and removal of the oil. Using solenoids, metal coils that carry an electrical current, the electromagnetic booms can steer the oil-filing mixture into collector tanks.

Unlike other oil cleanup methods, the magnetized oil technique is far more environmentally sound. There are no harmful chemicals introduced into the ocean — magnetite is a naturally occurring mineral. The filings are added and, briefly after, extracted. While there are some straggling iron particles, the vast majority is removed in one fell, magnetized swoop — the filings can even be dried and reused.

“This technique is more environmentally benign because it’s natural; we’re not adding soaps and chemicals to the ocean,” said Cherri Schmidt, head of Fermilab’s Office of Partnerships and Technology Transfer. “Other ‘cleanup’ techniques disperse the oil and make the droplets smaller or make the oil sink to the bottom. This doesn’t do that.”

Warner’s ideas for potential applications also include wildlife cleanup and the use of chemical sensors. Small devices that “smell” high and low concentrations of oil could be fastened to a motorized electromagnetic boom to direct it to the most oil-contaminated areas.

“I get crazy ideas all the time, but every so often one sticks,” Warner said. “This is one that I think could stick for the benefit of the environment and Fermilab.”

Hanae Armitage


Inspired by the event at the UNESCO headquarters in Paris that celebrated the anniversary of the signing of the CERN Convention, Sophie Redford wrote about her impressions of joining CERN as a young researcher. A CERN fellow designing detectors for the future CLIC accelerator, she did her PhD at the University of Oxford, observing rare B decays with the LHCb experiment.

The “60 years of CERN” celebrations give us all the chance to reflect on the history of our organization. As a young scientist, the early years of CERN might seem remote. However, the continuity of CERN and its values connects this distant past to the present day. At CERN, the past isn’t so far away.

Of course, no matter when you arrive at CERN for the first time, it doesn’t take long to realize that you are in a place with a special history. On the surface, CERN can appear scruffy. Haphazard buildings produce a maze of long corridors, labelled with seemingly random numbers to test the navigation of newcomers. Auditoriums retain original artefacts: ashtrays and blackboards unchanged since the beginning, alongside the modern-day gadgetry of projectors and video-conferencing systems.

The theme of re-use continues underground, where older machines form the injection chain for new ones. It is here, in the tunnels and caverns buried below the French and Swiss countryside, that CERN spends its money. Accelerators and detectors, their immense size juxtaposed with their minute detail, constitute an unparalleled scientific experiment gone global. As a young scientist this is the stuff of dreams, and you can’t help but feel lucky to be a part of it.

If the physical situation of CERN seems unique, so is the sociological. The row of flags flying outside the main entrance is a colourful red herring, for aside from our diverse allegiances during international sporting events, nationality is meaningless inside CERN. Despite its location straddling international borders, despite our wallets containing two currencies and our heads many languages, scientific excellence is the only thing that matters here. This is a community driven by curiosity, where coffee and cooperation result in particle beams. At CERN we question the laws of our universe. Many answers are as yet unknown but our shared goal of discovery bonds us irrespective of age or nationality.

As a young scientist at CERN I feel welcome and valued; this is an environment where reason and logic rule. I feel privileged to profit from the past endeavour of others, and great pride to contribute to the future of that which others have started. I have learnt that together we can achieve extraordinary things, and that seemingly insurmountable problems can be overcome.

In many ways, the second 60 years of CERN will be nothing like the first. But by continuing to build on our past we can carry the founding values of CERN into the future, allowing the next generation of young scientists to pursue knowledge without borders.

By Sophie Redford


What are Sterile Neutrinos?

Sunday, July 27th, 2014

Sterile Neutrinos in Under 500 Words

Hi Folks,

In the Standard Model, we have three groups of particles: (i) force carriers, like photons and gluons; (ii) matter particles, like electrons, neutrinos and quarks; and (iii) the Higgs. Each force carrier is associated with a force. For example: photons are associated with electromagnetism, the W and Z bosons are associated with the weak nuclear force, and gluons are associated with the strong nuclear force. In principle, all particles (matter, force carriers, the Higgs) can carry a charge associated with some force. Whenever this is the case, the charged particle can absorb or radiate the corresponding force carrier.

Credit: Wikipedia

As a concrete example, consider electrons and top quarks. Electrons carry an electric charge of −1, and a top quark carries an electric charge of +2/3. Both the electron and the top quark can absorb/radiate photons, but since the top quark’s electric charge is smaller in magnitude than the electron’s, it will not absorb/emit a photon as often as an electron. In a similar vein, the electron carries no “color charge”, the charge associated with the strong nuclear force, whereas the top quark does carry color and interacts via the strong nuclear force. Thus, electrons have no idea gluons even exist, but top quarks can readily emit/absorb them.

Neutrinos possess a weak nuclear charge and hypercharge, but no electric or color charge. This means that neutrinos can absorb/emit W and Z bosons and nothing else. Neutrinos are invisible to photons (particles of light) as well as gluons (the particles of the color force). This is why it is so difficult to observe neutrinos: the only way to detect a neutrino is through the weak nuclear interactions, which are much feebler than electromagnetism or the strong nuclear force.

Sterile neutrinos are like regular neutrinos: they are massive (spin-1/2) matter particles that do not possess electric or color charge. The difference, however, is that sterile neutrinos do not carry weak nuclear or hypercharge either. In fact, they do not carry any charge, for any force. This is why they are called “sterile”; they are free from the influences of  Standard Model forces.

Credit: somerandompearsonsblog.blogspot.com

The properties of sterile neutrinos are simply astonishing. For example: since they have no charge of any kind, they can in principle be their own antiparticles (the infamous “sterile Majorana neutrino”). As they are not tied to either the strong nuclear scale or the electroweak symmetry breaking scale, sterile neutrinos can in principle have an arbitrarily large or small mass. In fact, very heavy sterile neutrinos might even be dark matter, though this is probably not the case. However, since sterile neutrinos do have mass, and at low energies they act just like regular Standard Model neutrinos, they can participate in neutrino flavor oscillations. It is through this subtle effect that we hope to find sterile neutrinos, if they exist.
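To make that last point a bit more concrete, here is a minimal sketch of the standard two-flavor vacuum oscillation formula, P = sin²(2θ) · sin²(1.27 Δm²[eV²] · L[km] / E[GeV]). The mixing amplitude and mass splitting used below are illustrative placeholders, not measured sterile-neutrino parameters.

```python
import math

def oscillation_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km and E in GeV."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Placeholder parameters (purely illustrative): an eV-scale mass
# splitting and a small effective active-sterile mixing.
sin2_2theta = 0.1
dm2 = 1.0  # eV^2
for L in (0.5, 1.0, 2.0):  # baseline in km
    P = oscillation_probability(sin2_2theta, dm2, L, E_GeV=1.0)
    print(f"L = {L:.1f} km : P(active -> sterile) = {P:.4f}")
```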

Credit: Kamioka Observatory/ICRR/University of Tokyo

Until next time!

Happy Colliding,

Richard (@bravelittlemuon)

 


It’s Saturday, so I’m at the coffee shop working on my thesis again. It’s become a tradition over the last year that I meet a writer friend each week, we catch up, have something to drink, and sit down for a few hours of good-quality writing time.

The work desk at the coffee shop: laptop, steamed pork bun, and rosebud latte.

We’ve gotten to know the coffee shop really well over the course of this year. It’s pretty new in the neighborhood, but dark and hidden enough that business is slow, and we don’t feel bad keeping a table for several hours. We have our favorite menu items, but we’ve tried most everything by now. Some mornings, the owner’s family comes in, and the kids watch cartoons at another table.

I work on my thesis mostly, or sometimes I’ll work on analysis that spills over from the week, or I’ll check on some scheduled jobs running on the computing cluster.

My friend Jason writes short stories, works on revising his novel (magical realism in ancient Egypt in the reign of Rameses XI), or drafts posts for his blog about the puzzles of the British constitution. We trade tips on how to organize notes and citations, and how to stay motivated. So I’ve been hearing a lot about the cultural difference between academic work in the humanities and the sciences. One of the big differences is the level of citation that’s expected.

As a particle physicist, when I write a paper it’s very clear which experiment I’m writing about. I only write about one experiment at a time, and I typically focus on a very small topic. Because of that, I’ve learned that the standard for making new claims is that you usually make one new claim per paper, and it’s highlighted in the abstract, introduction, and conclusion with a clear phrase like “the new contribution of this work is…” It’s easy to separate which work you claim as your own and which work is from others, because anything outside “the new contribution of this work” belongs to others. A single citation for each external experiment should suffice.

For academic work in history, the standard is much different: the writing itself is much closer to the original research. As a start, you’ll need a citation for each quote, going to sources that are as primary as you can get your hands on. The stranger idea for me is that you also need a citation for each and every idea of analysis that someone else has come up with, and that a statement without a citation is automatically claimed as original work. This shows up in the difference between Jason’s posts about modern constitutional issues and historical ones: the historical ones have huge source lists, while the modern ones are content with a few hyperlinks.

In both cases, things that are “common knowledge” don’t need to be cited, like the fact that TeV cosmic rays exist (they do) or the year that Elizabeth I ascended the throne (1558).

There’s a difference in the number of citations between modern physics research and history research. Is that because of the timing (historical versus modern) or the subject matter? Do they have different amounts of common knowledge? For modern topics in both physics and history, the sources are available online, so a hyperlink is a perfect reference, even in a formal post. By that standard, all Quantum Diaries posts should be fine with the hyperlink citation model. But even in those cases, Jason puts footnoted citations to modern articles in the JSTOR database, and uses more citations overall.

Another cool aspect of our coffee shop is that the music is sometimes ridiculous, and it interrupts my thoughts if I get stuck in some esoteric bog. There’s an oddly large sample of German covers of 30s and 40s showtunes. You haven’t lived until you’ve heard “The Lady is a Tramp” in German while calculating oscillation probabilities. I’m kidding. Mostly.

Jason has shown me a different way of handling citations, and I’ve taught him some of the basics of HTML, so now his citations can appear as hyperlinks to the references list!

As habits go, I’m proud of this social coffee shop habit. I default to getting stuff done, even if I’m feeling slightly off or uninspired.  The social reward of hanging out makes up for the slight activation energy of getting off my couch, and once I’m out of the house, it’s always easier to focus.  I miss prime Farmers’ Market time, but I could go before we meet. The friendship has been a wonderful supportive certainty over the last year, plus I get some perspective on my field compared to others.


This article appeared in Fermilab Today on July 24, 2014.

Fermilab engineer Jim Hoff has invented an electronic circuit that can guard against radiation damage. Photo: Hanae Armitage

Fermilab engineer Jim Hoff has received patent approval on a very tiny, very clever invention that could have an impact on aerospace, agriculture and medical imaging industries.

Hoff has engineered a widely adaptable latch — an electronic circuit capable of remembering a logical state — that suppresses a commonly destructive circuit error caused by radiation.

There are two radiation-based errors that can damage a circuit: total dose and single-event upset. In the former, the entire circuit is doused in radiation and damaged; in an SEU, a single particle of radiation delivers its energy to the chip and alters a state of memory, which takes the form of 1s and 0s. An altered state of memory is an unintentional flip between logical 1 and logical 0, and it ultimately leads to loss of data or imaging resolution. Hoff’s design is essentially a chip immunization, preemptively guarding against SEUs.
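As a toy illustration of what is at stake (a simple software sketch, not Hoff’s SEUSS circuit), the snippet below models a set-reset latch that remembers one bit until a simulated radiation hit silently flips it:

```python
class SRLatch:
    """Toy set-reset latch: remembers one bit of state."""

    def __init__(self):
        self.q = 0  # stored logical state (0 or 1)

    def set(self):
        self.q = 1

    def reset(self):
        self.q = 0

    def single_event_upset(self):
        """Simulate a radiation hit that flips the stored bit."""
        self.q ^= 1


latch = SRLatch()
latch.set()
print("stored state:", latch.q)   # 1, as written
latch.single_event_upset()
print("after an SEU:", latch.q)   # 0 -- the data has been silently corrupted
```

An SEU-suppressing latch is built so that a single hit like this cannot change the stored value.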

“There are a lot of applications,” Hoff said. “Anyone who needs to store data for a length of time and keep it in that same state, uncorrupted — anyone flying in a high-altitude plane, anyone using medical imaging technology — could use this.”

Past experimental data showed that, in any given total-ionizing radiation dose, the latch reduces single-event upsets by a factor of about 40. Hoff suspects that the invention’s newer configurations will yield at least two orders of magnitude in single-event upset reduction.

The invention is fondly referred to as SEUSS, which stands for single-event upset suppression system. It’s relatively inexpensive and designed to integrate easily with a multitude of circuits — all that’s needed is a compatible transistor.

Hoff’s line of work lies in chip development, and SEUSS is currently used in some Fermilab-developed chips such as FSSR, which is used in projects at Jefferson Lab, and Phoenix, which is used in the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.

The idea of SEUSS was born out of post-knee-surgery, bed-ridden boredom. On strict bed rest, Hoff’s mind naturally wandered to engineering.

“As I was lying there, leg in pain, back cramping, I started playing with designs of my most recent project at work,” he said. “At one point I stopped and thought, ‘Wow, I just made a single-event upset-tolerant SR flip-flop!’”

While this isn’t the world’s first SEU-tolerant latch, Hoff is the first to create a single-event upset suppression system that is also a set-reset flip-flop, meaning it can take the form of almost any latch. As a flip-flop, the latch’s adaptability is enormous and far exceeds that of its pre-existing latch brethren.

“That’s what makes this a truly special latch — its incredible versatility,” says Hoff.

From a broader vantage point, the invention is exciting for more than just Fermilab employees; it’s one of Fermilab’s first big efforts in pursuing potential licensees from industry.

Cherri Schmidt, head of Fermilab’s Office of Partnerships and Technology Transfer, with the assistance of intern Miguel Marchan, has been developing the marketing plan to reach out to companies who may be interested in licensing the technology for commercial application.

“We’re excited about this one because it could really affect a large number of industries and companies,” Schmidt said. “That, to me, is what makes this invention so interesting and exciting.”

Hanae Armitage


Welcome to Thesisland

Tuesday, July 22nd, 2014

When I joined Quantum Diaries, I did so with trepidation: while it was an exciting opportunity, I was worried that all I could write about was the process of writing a thesis and looking for postdoc jobs. I ended up telling the site admin exactly that: I only had time to work on a thesis and job hunt. I thought I was turning down the offer. But the reply I got was along the lines of “It’s great to know what topics you’ll write about! When can we expect a post?”. So, despite the fact that this is a very different topic from any recent QD posts, I’m starting a series about the process of writing a physics PhD thesis. Welcome.

The main thesis editing desk: laptop, external monitor, keyboard, mouse; coffee, water; notes; and lots of encouragement.

There are as many approaches to writing a PhD thesis as there are PhDs, but they can be broadly described along a spectrum.

On one end is the “constant documentation” approach: spend some fixed fraction of your time on documenting every project you work on. In this approach, the writing phase is completely integrated with the research work, and it’s easy to remember the things you’re writing about. There is a big disadvantage: it’s really easy to write too much, to spend too much time writing and not enough doing, or otherwise un-balance your time. If you keep a constant fraction of your schedule dedicated to writing, and that fraction is (in retrospect) too big, you’ve lost a lot of time. But you have documented everything, which everyone who comes after will be grateful for. If they ever see your work.

The other end of the spectrum is the “write like hell” approach (that is, write as fast as you can), where all the research is completed and approved before writing starts. This has the advantage that if you (and your committee) decide you’ve written enough, you immediately get a PhD! The disadvantage is that if you have to write about old projects, you’ll probably have forgotten a lot. So this approach typically leads to shorter theses.

These two extremes were first described to me (see the effect of thesis writing? It’s making my blog voice go all weird and passive) by two professors who were in grad school together and still work together. Each took one approach, and they both did fine, but the “constant documentation” thesis was at least twice (or was it three times?) as long as the “write like hell” thesis.

Somewhere between those extremes is the funny phenomenon of the “staple thesis”: a thesis primarily composed of all the papers you wrote in grad school, stapled together. A few of my friends have done this, but it’s not common in my research group because our collaboration is so large. I’ll discuss that in more detail later.

I’m going for something in the middle: as soon as I saw a light at the end of the tunnel, I wanted to start writing, so I downloaded the UW LaTeX template for PhD theses and started filling it in. It’s been about 14 months since then, with huge variations in the writing/research balance. To help balance between the two approaches, I’ve found it helpful to keep at least some notes about all the physics I do, but nothing too polished: it’s always easier to start from some notes, however minimal, than to start from nothing.

When I started writing, there were lots of topics available that needed some discussion: history and theory, my detector, all the calibration work I did for my master’s project–I could have gone full-time writing at that point and had plenty to do. But my main research project wasn’t done yet. So for me, it’s not just a matter of balancing “doing” with “documenting”; it’s also a question of balancing old documentation with current documentation. I’ve almost, *almost* finished writing the parts that don’t depend on my work from the last year or so. In the meantime, I’m still finishing the last bits of analysis work.

It’s all a very long process. How many readers are looking towards writing a thesis later on? How many have gone through this and found a method that served them well? If it was fast and relatively low-stress, would you tell me about it?


This article appeared in Fermilab Today on July 21, 2014.

Members of the prototype proton CT scanner collaboration move the detector into the CDH Proton Center in Warrenville. Photo: Reidar Hahn

A prototype proton CT scanner developed by Fermilab and Northern Illinois University could someday reduce the amount of radiation delivered to healthy tissue in a patient undergoing cancer treatment.

The proton CT scanner would better target radiation doses to the cancerous tumors during proton therapy treatment. Physicists recently started testing with beam at the CDH Proton Center in Warrenville.

To create a custom treatment plan for each proton therapy patient, radiation oncologists currently use X-ray CT scanners to develop 3-D images of patient anatomy, including the tumor, to determine the size, shape and density of all organs and tissues in the body. To make sure all the tumor cells are irradiated to the prescribed dose, doctors often set the targeting volume to include a minimal amount of healthy tissue just outside the tumor.

Collaborators believe that the prototype proton CT, which is essentially a particle detector, will provide a more precise 3-D map of the patient anatomy. This allows doctors to more precisely target beam delivery, reducing the amount of radiation to healthy tissue during the CT process and treatment.

“The dose to the patient with this method would be lower than using X-ray CTs while getting better precision on the imaging,” said Fermilab’s Peter Wilson, PPD associate head for engineering and support.

Fermilab became involved in the project in 2011 at the request of NIU’s high-energy physics team because of the laboratory’s detector building expertise.

The project’s goal was a tall order, Wilson explained. The group wanted to build a prototype device, imaging software and computing system that could collect data from 1 billion protons in less than 10 minutes and then produce a 3-D reconstructed image of a human head, also in less than 10 minutes. To do that, they needed to create a device that could read data very quickly, since every second data from 2 million protons would be sent from the device — which detects only one proton at a time — to a computer.
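A quick back-of-the-envelope check of that data rate (plain arithmetic, not project code):

```python
protons = 1_000_000_000      # one billion protons per scan
scan_time_s = 10 * 60        # ten minutes, in seconds

rate = protons / scan_time_s
print(f"required readout rate: {rate:,.0f} protons per second")
# ~1.7 million protons per second, consistent with the
# "2 million protons every second" quoted above.
```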

NIU physicist Victor Rykalin recommended building a scintillating fiber tracker detector with silicon photomultipliers. A similar detector was used in the DZero experiment.

“The new prototype CT is a good example of the technical expertise of our staff in detector technology. Their expertise goes back 35 to 45 years and is really what makes it possible for us to do this,” Wilson said.

In the prototype CT, protons pass through two tracking stations, which track the particles’ trajectories in three dimensions. (See figure.) The protons then pass through the patient and finally through two more tracking stations before stopping in the energy detector, which is used to calculate the total energy loss through the patient. Devices called silicon photomultipliers pick up signals from the light resulting from these interactions and subsequently transmit electronic signals to a data acquisition system.

In the prototype proton CT scanner, protons enter from the left, passing through planes of fibers and the patient’s head. Data from the protons’ trajectories, including the energy deposited in the patient, is collected in a data acquisition system (right), which is then used to map the patient’s tissue. Image courtesy of George Coutrakon, NIU

Scientists use specialized software and a high-performance computer at NIU to accurately map the proton stopping powers in each cubic millimeter of the patient. From this map, visually displayed as conventional CT slices, the physician can outline the margins, dimensions and location of the tumor.

Elements of the prototype were developed at both NIU and Fermilab and then put together at Fermilab. NIU developed the software and computing systems. The teams at Fermilab worked on the design and construction of the tracker and the electronics to read the tracker and energy measurement. The scintillator plates, fibers and trackers were also prepared at Fermilab. A group of about eight NIU students, led by NIU’s Vishnu Zutshi, helped build the detector at Fermilab.

“A project like this requires collaboration across multiple areas of expertise,” said George Coutrakon, medical physicist and co-investigator for the project at NIU. “We’ve built on others’ previous work, and in that sense, the collaboration extends beyond NIU and Fermilab.”

Rhianna Wisniewski


This article appeared in symmetry on July 11, 2014.

Together, the three experiments will search for a variety of types of dark matter particles. Photo: NASA

Two US federal funding agencies announced today which experiments they will support in the next generation of the search for dark matter.

The Department of Energy and National Science Foundation will back the Super Cryogenic Dark Matter Search-SNOLAB, or SuperCDMS; the LUX-Zeplin experiment, or LZ; and the next iteration of the Axion Dark Matter eXperiment, ADMX-Gen2.

“We wanted to pool limited resources to put together the most optimal unified national dark matter program we could create,” says Michael Salamon, who manages DOE’s dark matter program.

Second-generation dark matter experiments are defined as experiments that will be at least 10 times as sensitive as the current crop of dark matter detectors.

Program directors from the two federal funding agencies decided which experiments to pursue based on the advice of a panel of outside experts. Both agencies have committed to working to develop the new projects as expeditiously as possible, says Jim Whitmore, program director for particle astrophysics in the division of physics at NSF.

Physicists have seen plenty of evidence of the existence of dark matter through its strong gravitational influence, but they do not know what it looks like as individual particles. That’s why the funding agencies put together a varied particle-hunting team.

Both LZ and SuperCDMS will look for a type of dark matter particles called WIMPs, or weakly interacting massive particles. ADMX-Gen2 will search for a different kind of dark matter particles called axions.

LZ is capable of identifying WIMPs with a wide range of masses, including those much heavier than any particle the Large Hadron Collider at CERN could produce. SuperCDMS will specialize in looking for light WIMPs with masses lower than 10 GeV. (And of course both LZ and SuperCDMS are willing to stretch their boundaries a bit if called upon to double-check one another’s results.)

If a WIMP hits the LZ detector, a high-tech barrel of liquid xenon, it will produce quanta of light, called photons. If a WIMP hits the SuperCDMS detector, a collection of hockey-puck-sized integrated circuits made with silicon or germanium, it will produce quanta of sound, called phonons.

“But if you detect just one kind of signal, light or sound, you can be fooled,” says LZ spokesperson Harry Nelson of the University of California, Santa Barbara. “A number of things can fake it.”

SuperCDMS and LZ will be located underground—SuperCDMS at SNOLAB in Ontario, Canada, and LZ at the Sanford Underground Research Facility in South Dakota—to shield the detectors from some of the most common fakers: cosmic rays. But they will still need to deal with natural radiation from the decay of uranium and thorium in the rock around them: “One member of the decay chain, lead-210, has a half-life of 22 years,” says SuperCDMS spokesperson Blas Cabrera of Stanford University. “It’s a little hard to wait that one out.”

To combat this, both experiments collect a second signal, in addition to light or sound—charge. The ratio of the two signals lets them know whether the light or sound came from a dark matter particle or something else.
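A purely illustrative sketch of that kind of ratio-based discrimination (the numbers and threshold below are invented for illustration, not calibration values from either experiment): for each event, compare the ionization (charge) signal to the primary light or phonon signal, and keep only events whose ratio falls in the signal-like band.

```python
# Toy two-signal discrimination. In this toy, "signal-like" events are
# taken to have a lower charge-to-primary ratio than ordinary background;
# all values are placeholders.
events = [
    {"primary": 100.0, "charge": 80.0},   # background-like (high ratio)
    {"primary": 100.0, "charge": 25.0},   # signal-like (low ratio)
    {"primary": 40.0,  "charge": 33.0},   # background-like
]

RATIO_CUT = 0.4  # placeholder threshold on charge / primary

for i, ev in enumerate(events):
    ratio = ev["charge"] / ev["primary"]
    label = "candidate" if ratio < RATIO_CUT else "background-like"
    print(f"event {i}: charge/primary = {ratio:.2f} -> {label}")
```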

SuperCDMS will be especially skilled at this kind of differentiation, which is why the experiment should excel at searching for hard-to-hear low-mass particles.

LZ’s strength, on the other hand, stems from its size.

Dark matter particles are constantly flowing through the Earth, so their interaction points in a dark matter detector should be distributed evenly throughout. Quanta of radiation, however, can be stopped by much less significant barriers—alpha particles by a piece of paper, beta particles by a sandwich. Even gamma ray particles, which are harder to stop, cannot reach the center of LZ’s 7-ton detector. When a particle with the right characteristics interacts in the center of LZ, scientists will know to get excited.

The ADMX detector, on the other hand, approaches the dark matter search with a more delicate touch. The dark matter axions ADMX scientists are looking for are too light for even SuperCDMS to find.

If an axion passed through a magnetic field, it could convert into a photon. The ADMX team encourages this subtle transformation by placing their detector within a strong magnetic field, and then tries to detect the change.

“It’s a lot like an AM radio,” says ADMX-Gen2 co-spokesperson Gray Rybka of the University of Washington in Seattle.

The experiment slowly turns the dial, tuning itself to watch for one axion mass at a time. Its main background noise is heat.

“The more noise there is, the harder it is to hear and the slower you have to tune,” Rybka says.

In its current iteration, it would take around 100 years for the experiment to get through all of the possible channels. But with the addition of a super-cooling refrigerator, ADMX-Gen2 will be able to search all of its current channels, plus many more, in the span of just three years.
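A rough way to see why cooling helps so much: for this kind of search, the time needed to reach a given sensitivity at each frequency grows roughly with the square of the system noise temperature (the Dicke radiometer equation). The sketch below only illustrates that scaling; the temperatures are made-up placeholders, not ADMX-Gen2 specifications.

```python
def scan_time_ratio(T_old_K, T_new_K):
    """Relative scan time at fixed sensitivity, assuming time ~ T_sys**2."""
    return (T_new_K / T_old_K) ** 2

# Placeholder noise temperatures, purely for illustration:
T_old, T_new = 2.0, 0.2  # kelvin
print(f"cooling from {T_old} K to {T_new} K cuts the scan time to "
      f"{scan_time_ratio(T_old, T_new):.0%} of its previous value")
# i.e. roughly a factor of 100 faster at the same sensitivity
```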

With SuperCDMS, LZ and ADMX-Gen2 in the works, the next several years of the dark matter search could be some of its most interesting.

Kathryn Jepsen


The 37th International Conference on High Energy Physics has just ended in Valencia, Spain. This year, no big surprise: no new boson, no sign of new particles or phenomena revealing the nature of dark matter or the existence of new theories such as supersymmetry. But as always, a few small anomalies caught people’s attention.

Researchers pay particular attention to any deviation from theoretical predictions, because these small anomalies could reveal the existence of “new physics”. They could provide clues to a more inclusive theory, since everyone realizes that the current theoretical model, the Standard Model, has its limits and must eventually be replaced by a more complete theory.

But one must be careful. Every physicist knows it well: small deviations often appear and vanish just as quickly. All measurements in physics follow statistical laws. Deviations of one standard deviation between the experimentally measured values and those predicted by theory are observed in about three measurements out of ten. Larger deviations are less common but still possible: a two-standard-deviation fluctuation occurs in about 5% of measurements, and a three-standard-deviation one in about 0.3%. There are also systematic errors related to the measuring instruments. These errors are not statistical in nature, but they can be reduced with better knowledge of the detector. The experimental error quoted with each result corresponds to one standard deviation. Here, as examples, are two small anomalies reported during the conference that attracted attention this year.
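For reference, these percentages are just the two-sided tail probabilities of a Gaussian distribution; a quick sketch reproduces them:

```python
import math

def two_sided_p(n_sigma):
    """Probability of a fluctuation of at least n_sigma standard
    deviations, in either direction, for a Gaussian measurement."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (1, 2, 3):
    print(f"{n} sigma: {two_sided_p(n):.2%}")
# 1 sigma: ~31.7%  (about three measurements in ten)
# 2 sigma: ~4.6%   (about 5%)
# 3 sigma: ~0.27%  (about 0.3%)
```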

The ATLAS Collaboration showed a preliminary result on the production of W boson pairs. Measuring this rate allows detailed checks of the Standard Model, since theorists can predict how often W boson pairs are produced when protons collide in the Large Hadron Collider (LHC). The production rate depends on the energy released in these collisions. So far, two measurements are possible, since the LHC has operated at two different energies, 7 and 8 TeV.

The CMS and ATLAS experiments had already published their results based on the data collected at 7 TeV. The measured rates slightly exceeded the theoretical predictions but still remained within the experimental error margins, with deviations of 1.0 and 1.4 standard deviations, respectively. CMS had also published results based on about 20% of all the data accumulated at 8 TeV. The measured rate slightly exceeded the theoretical prediction, by 1.7 standard deviations. The latest ATLAS result adds one more element to the picture. It is based on the full dataset collected at 8 TeV. ATLAS obtains a somewhat larger deviation for the W-pair production rate at 8 TeV, 2.1 standard deviations above the theoretical prediction.

The four experimental measurements of the W boson pair production rate (black points) with their experimental uncertainties (horizontal bars), together with the current theoretical prediction (blue triangle) and its own uncertainty (blue band). All the measurements lie above the current prediction, suggesting that the present theoretical calculation does not yet include everything.

Each of these four measurements agrees reasonably well with the theoretical value, but the fact that they all exceed the prediction is starting to attract attention. Most likely, it means that theorists have not yet taken into account all the small corrections required by the Standard Model to determine this rate precisely enough. It is a bit like forgetting to record a few small expenses in your budget, leading to an unexplained deficit at the end of the month. There could also be common factors in the experimental uncertainties, which would reduce the overall significance of this anomaly. But if the theoretical predictions remain where they are, even after adding all possible small corrections, this would point to the existence of new phenomena, which would be exciting. This measurement will be worth watching after the LHC restarts in 2015 at higher energy, namely 13 TeV.

The CMS Collaboration also presented an intriguing result. A group of researchers found a few events compatible with the decay of a Higgs boson into a tau and a muon. Such decays are forbidden in the Standard Model because they violate the conservation of lepton “flavour”. There are three flavours, or types, of charged leptons (a category of fundamental particles): the electron, the muon and the tau. Each comes with its own type of neutrino. In all observations made so far, leptons are always produced either with their own neutrino or with their own antiparticle. The decay of a Higgs boson into leptons should therefore always produce a charged lepton and its antiparticle, but never two charged leptons of different flavours. Breaking this rule is simply forbidden within the framework of the Standard Model.

All of this will have to be checked with more data, which will be possible after the LHC resumes next year. But other “new physics” models do allow lepton flavour violation: models with several Higgs doublets or composite Higgs bosons, or models involving extra dimensions such as those of Randall-Sundrum. So if, with more data, ATLAS and CMS confirm that this trend corresponds to a real effect, it will be a genuine revolution.

The results obtained by the CMS Collaboration in six different decay channels. All of them yield a non-zero value, contrary to the Standard Model prediction, for the rate of Higgs boson decays into tau-muon pairs.

Pauline Gagnon

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this distribution list.

 
