Laura Gladstone | MIT | USA


Life Underground: Anything Anyone Would Teach Me

Friday, April 17th, 2015

Going underground most days for work is probably the weirdest-sounding thing about this job. At Laboratori Nazionali del Gran Sasso, the lab is underground because of the protection that affords us from cosmic rays, weather, and other disruptions, and with it we get a shorthand description of all the weirdness of lab life. It’s all just “underground.”


The last kilometer of road before reaching the above-ground labs of LNGS

Some labs for low background physics are in mines, like SURF where fellow Quantum Diariest Sally Shaw works. One of the great things about LNGS is that we’re located off a highway tunnel, so it’s relatively easy to reach the lab: we just drive in. There’s a regular shuttle schedule every day, even weekends. When there are snowstorms that close parts of the highway, the shuttle still goes, it just takes a longer route all the way to the next easy exit. The ride is a particularly good time to start drafting blog posts. On days when the shuttle schedule is inconvenient or our work is unpredictable, we can drive individual cars, provided they’ve passed emissions standards.

The guards underground keep a running list of all the people underground at any time, just like in a mine. So, each time I enter or leave, I give my name to the guards. This leads to some fun interactions where Italian speakers try to pronounce names from all over. I didn’t think too much of it before I got here, but in retrospect I had expected that any name of European etymology would be easy, and others somewhat more difficult. In fact, the difficult names are those that don’t end in vowels: “GladStone” becomes “Glad-eh-Stone-eh”. But longer vowel-filled names are fine, and easy to pronounce, even though they’re sometimes just waved off as “the long one” with a gesture.

There’s constantly water dripping in the tunnel. Every experiment has to be housed in something waterproof, and gutters line all the hallways, usually with algae growing in them. The walls are coated with waterproofing, more to keep any potential chemical spill of ours from getting into the local groundwater than to keep the water off our experiments. When we walk from the tunnel entrance to the experimental halls, the cue for me to don a hardhat is the first drip on my head from the ceiling. Somehow, it’s always right next to the shuttle stop, no matter where the shuttle parks.

And, because this is Italy, the side room for emergencies has a bathroom and a coffee machine. There’s probably emergency air tanks too, but the important thing is the coffee machine, to stave off epic caffeine withdrawal headaches. And of course, “coffee” means “espresso” unless otherwise stated– but that’s another whole post right there.

When I meet people in the neighboring villages, at the gym or buying groceries or whatever, they always ask what an “American girl” is doing so far away from the cities, and “lavoro a Laboratorio Gran Sasso” is immediately understood. The lab is even the economic engine that’s kept the nearest village alive: it has restaurants, hotels, and rental apartments all catering to people from the lab (and the local ski lift), but no grocery stores, ATMs, gyms, or post offices that would make life more convenient for long-term residents.

Every once in a while, when someone mentions going underground, I can’t help thinking back to the song “Underground” from the movie Labyrinth that I saw too many times growing up. Labyrinth and The Princess Bride were the “Frozen” of my childhood (despite not passing the Bechdel test).

Just like Sarah, my adventures underground are alternately shocking and exactly what I expected from the stories, and filled with logic puzzles and funny characters. Even my first night here, when I was delirious with jetlag, I saw a black cat scamper across a deserted medieval street, and heard the clock tower strike 13 times. And just like Westley, “it was a fine time for me, I was learning to fence, to fight–anything anyone would teach me–” (except that in my case it’s more soldering, cryogenics plumbing, and ping-pong, and less fighting). The day hasn’t arrived where the Dread Pirate Roberts calls me to his office and gives me a professorship.

And now the shuttle has arrived back to the office, so we’re done. Ciao, a dopo.

(ps the clock striking 13 times was because it has separate tones for the hour and the 15-minute chunks. The 13 was really 11+2 for 11:30.)


CUORE-0 Results Tour Kicks Off

Thursday, April 9th, 2015

The CUORE-0 collaboration just announced a result: a new limit of 2.7×10²⁴ years (90% C.L.) on the half-life of neutrinoless double beta decay in 130Te. Or, if you combine it with the data from Cuoricino, 4.0×10²⁴ years. A paper has been posted to the arXiv preprint server and submitted to the journal Physical Review Letters.
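To get a feel for what a limit like 4.0×10²⁴ years means, you can estimate the expected decay rate from the usual exponential-decay relation, rate = N·ln(2)/T½. A quick sketch with illustrative numbers (my own, not the collaboration’s analysis):

```python
import math

# Illustrative only (not collaboration numbers): expected decays per year
# for roughly one mole of 130Te, if the half-life were exactly 4.0e24 years.
N_A = 6.022e23          # Avogadro's number: atoms in one mole
half_life_years = 4.0e24

decays_per_year = N_A * math.log(2) / half_life_years
print(decays_per_year)  # roughly 0.1: about one decay per decade per mole
```

Even with a whole mole of isotope, you’d expect to wait about a decade between signal events, which is why the experiment needs so much tellurium and such low backgrounds.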


Bottom: Energy spectrum of 0νββ decay candidates in CUORE-0 (data points) and the best-fit model from the UEML analysis (solid blue line). The peak at ∼2507 keV is attributed to 60Co; the dotted black line shows the continuum background component of the best-fit model. Top: The normalized residuals of the best-fit model and the binned data. The vertical dot-dashed black line indicates the position of Qββ. From arXiv.

CUORE-0 is an intermediate step between the upcoming full CUORE detector and its prototype, Cuoricino. The limit from Cuoricino was 2.8×10²⁴ years**, but this was limited by background contamination in the detector, and it took a long time to get to that result. For CUORE, the collaboration developed new and better methods (which are described in detail in an upcoming detector paper) for keeping everything clean and uniform, and increased the amount of tellurium by a factor of 19. The results coming out now test and verify all of that except the increased mass: CUORE-0 uses all the same cleaning and assembly procedures as CUORE, but with only the first of 19 towers of crystals. It took data while the rest of the towers were being built. We stopped taking CUORE-0 data when the sensitivity was slightly better than Cuoricino’s, which took only half the exposure time of the Cuoricino run. The resulting background was 6 times lower in the continuum parts of the spectrum, and all the energy resolutions (which were calibrated individually for each crystal each month) were more uniform. So this is a result to be proud of: even before the CUORE detector starts taking data, we have this result to herald its success.

The energy spectra measured in both Cuoricino and CUORE-0, displaying the factor of 6 improvement in the background rates. From the seminar slides of L. Canonica.


The result was announced in the first seminar in a grand tour of talks about the new result. I got to see the announcement at Gran Sasso today–perhaps you, dear reader, can see one of the talks too! (and if not, there’s video available from the seminar today) Statistically speaking, out of these presentations you’re probably closest to the April APS meeting if you’re reading this, but any of them would be worth the effort to see. There was also a press release today and coverage in the Yale News and Berkeley Lab news, which is why I’m keeping this post pretty short.


The Upcoming Talks:

There are also two more papers in preparation, which I’ll post about when they’re submitted. One describes the background model, and the other describes the technical details of the detector. The most comprehensive coverage of this result will be in a handful of PhD theses that are currently being written.

(post has been revised to include links with the arXiv post number: 1504.02454)

**Comparing the two limits to each other is not as straightforward as one might hope, because there were different statistical methods used to obtain them, which will be covered in detail in the papers. The two limits are roughly similar no matter how you look, and still the new result has better (=lower) backgrounds and took less time to achieve. A rigorous, apples-to-apples comparison of the two datasets would require me to quote internal collaboration numbers.


Sonic Copper Cleaning

Saturday, February 7th, 2015

Today we cleaned parts to go into the detector using a sci-fi piece of machinery called a “sonic bath”.

On CUORE, we’re looking for a faint signal of radioactivity. That means we can’t let anything swamp that signal: we have to clean away the normal low-level of dirt present in the atmosphere and biological systems. Even something as normal as a banana has so much naturally-occurring radiation that the “banana-year” is a (somewhat irreverent and imprecise) unit of measurement for backgrounds of dark matter experiments.

The parts we’re cleaning will be guide tubes for a calibration system. Through them, we’ll place wires close to the detector, then remove them again when it’s time for the main data taking. The calibration wires have a measured amount of radioactivity, and we use that known signal to calibrate the other signals within CUORE.

We used a sonic bath to clean the parts: they’re in a bag with soapy water, inside a larger tub filled with tap water. To agitate everything (like the dasher in a clothes washer) the machine uses sound. It’s a bit like the little machines that some people use to clean their contact lenses, but larger: about the size of a laundry room sink, or a restaurant kitchen sink.

My favorite part of the process was the warning on the side: running with an empty bath could cause burnout of the ultrasonic coupler. “The ultrasonic coupler” sounds like something out of science fiction: like a combination of “sonic screwdriver” and “flux capacitor”. But it’s not fiction– this is just what we need to do for our daily work!

The noise it makes sounds a bit like an electric fly zapper: a low-level electric buzz and crackle, with a faint hiss hinting that there’s something higher pitched above that. It’s practically impossible to hear the main frequency because it’s pitched well above the range of human hearing: the noise is at 30–40 kHz, and a child can usually hear only as high as 20 kHz. Some of the lower resonances fall into an audible range, which is what makes it sound like there’s more going on than I can hear.

In the smaller machine (about the size of a bathroom sink), the agitation noise was more audible, almost headache-inducing in long doses. Since I just watched the fourth Harry Potter movie, it reminded me of the recorded mermaid message: you can only hear it when you’re underwater. If you’re in air, it sounds like a screech instead of a message. Knowing the line between science fiction and fact, I didn’t actually stick my ear in the water (and we wore earplugs in the lab).

There’s a funny effect with some of the bubbles in the tub. They get caught in vibrational nodes within the water, so even though they’re clearly made of air, they don’t rise to the top. It’s like an atom trap made of lasers holding a single atom in place, except this works at a macroscopic level so it’s more intuitive. Seeing the modes in action is a little reward for having worked through all those Jackson problem sets where we decomposed arbitrary functions in various ways.

When the parts come out at the end, and after we repeat the process with some citric acid (like what you find in lemon juice) and then rinse everything, the rods are a completely different color. They’ve gone from a dead-leaf brown to a peachy pink, all shiny and bright and hopeful. It’s a clean start for a new detector. We preserved the clean exteriors by sealing them in vacuum bags, and told the chem lab supervisor we were done for the day.


The Helium Dilution Refrigerator in CUORE

Wednesday, February 4th, 2015

The dilution refrigerator is the coldest cooling stage for the CUORE detector. It keeps the crystals cold enough that the heat can be detected from a single radioactive decay. The purpose of CUORE is to study the energy spectrum of these decays, so it’s vital that the surroundings be cold. Here’s how it works.

The CUORE cryostat dilution unit

The CUORE cryostat dilution unit

What CUORE Does

CUORE is looking for a kind of radioactive decay that’s extremely rare if indeed it happens at all. It’s never been observed before. It’s called “neutrinoless double beta decay:” a decay emitting two electrons but no neutrinos. Lots of radioactive elements undergo beta decay and emit electrons. Some emit two at once, in a double interaction. That’s accompanied by two neutrinos. The special case that CUORE investigates is the theoretical possibility that the neutrinos annihilate each other before the interaction is completed, so no neutrinos come out.

This can happen only if neutrinos are their own antiparticles, which is an amazingly interesting possibility. Whether or not neutrinos are their own antiparticles is one of the great open questions in neutrino physics today. In the process of investigating this, we also will learn about absolute neutrino masses, two neutrino double beta decay, and a whole host of experimental techniques.

How Cold Helps

Making the detector profoundly cold makes this search possible. Heat is the main signal, so any extra heat floating around is background noise. But additionally, the signal itself gets stronger at lower temperatures. The heat capacity is a strong function of temperature: the colder you go, the less heat it takes to create a change in temperature. So by making the detector colder, the amount of heat deposited by a single decay creates a larger change in temperature, making it more distinguishable from background noise.
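As a sketch of that scaling (my own illustration, not the CUORE thermal model): if the lattice heat capacity follows a Debye-like C(T) ∝ T³, then the temperature jump ΔT = E/C(T) from a fixed energy deposit grows as 1/T³ as the crystal gets colder:

```python
# Illustrative sketch, not the CUORE thermal model: with a Debye-like
# heat capacity C(T) = a * T**3, a fixed energy deposit E produces a jump
# delta_T = E / C(T), so colder crystals give bigger signals.

A = 1.0  # arbitrary heat-capacity prefactor; only the T-scaling matters here

def delta_T(energy, temperature, a=A):
    """Temperature rise from depositing `energy` at a given base temperature."""
    return energy / (a * temperature**3)

# Halving the base temperature makes the same decay's signal 2**3 = 8x larger
ratio = delta_T(1.0, 0.010) / delta_T(1.0, 0.020)
print(ratio)
```

That cubic payoff is why the collaboration fights for every last millikelvin.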

The temperatures we’re considering here are measured in millikelvin. For perspective, let’s look at some other cold things. At the South Pole (where my grad school experiment IceCube is) the outdoor temperature ranges between 0 and -100 Fahrenheit, or 255 to 200 Kelvin (K). Liquid argon boils at 87K. Liquid nitrogen boils off at 77K. The cosmic microwave background is at 2.7K. The temperature we’re hoping to use is 10mK or less, which makes CUORE the coldest cubic meter in the universe. The advancement CUORE represents isn’t simply the temperature but also the volume. It might not be the coldest place in the universe, but it’s the coldest place as big as a cubic meter.
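Those Fahrenheit-to-Kelvin numbers are easy to check with the standard conversion K = (°F + 459.67) × 5/9:

```python
# Standard Fahrenheit-to-Kelvin conversion, checking the South Pole
# temperatures quoted above (0 F and -100 F).
def fahrenheit_to_kelvin(f):
    return (f + 459.67) * 5.0 / 9.0

print(round(fahrenheit_to_kelvin(0)))     # 255
print(round(fahrenheit_to_kelvin(-100)))  # 200
```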

A Mixture of Helium Isotopes

"Phase
There’s a trick we use to cool down our helium so much. Helium comes in two isotopes: 3He and 4He. When they get cold enough, both of them become superfluids, but at different temperatures: 2.2K for 4He and 1mK for 3He. The refrigerator operates between these two, so the 4He is superfluid but the 3He is only a regular liquid. By simple 3He evaporation, we can get it down to about 300mK. The key to our trick is that when you mix the two isotopes together, the mixture can become even colder than either would be individually. It splits into two phases, and by making one phase change into the other, we can pump out more heat.

When you cool a mixture of 3He and 4He to very nearly zero (below 867mK), the mixture separates into two different phases. One phase contains more 3He, so we call it the concentrated phase. The other contains less 3He, so we call it the dilute phase. We have a tube going down into the dilute phase and pumping away 3He, shifting the balance of the concentrations in the two phases. As we pump away 3He from the dilute phase, more 3He changes phase from the concentrated phase to take its place and maintain an equilibrium. As each atom changes phase, it absorbs heat because of the lower enthalpy in the dilute phase. The faster we pump out 3He, and the more 3He changes phase, the more cooling power the system has. The power of the cooling engine is limited by the interface area between the two phases, so a larger area gives more power.
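The low-temperature textbooks (e.g. Pobell, referenced at the end of this post) give a standard rule of thumb for this cooling power: Q̇ ≈ 84·ṅ₃·T², with Q̇ in watts, ṅ₃ the 3He circulation rate in mol/s, and T the mixing-chamber temperature in kelvin. A quick sketch with my own illustrative numbers, not CUORE’s:

```python
# Rule-of-thumb dilution-refrigerator cooling power (see e.g. Pobell):
# Q ~ 84 * n3 * T**2, with Q in watts, n3 in mol/s, and T in kelvin.
def cooling_power_watts(n3_mol_per_s, temperature_k):
    return 84.0 * n3_mol_per_s * temperature_k**2

# Example: circulating 3He at 1 mmol/s with a 10 mK mixing chamber
# yields only a few microwatts of cooling power.
q = cooling_power_watts(1e-3, 0.010)
print(q)
```

The T² dependence is the sobering part: at 10 mK you have ten thousand times less cooling power than at 1 K for the same circulation rate, which is why every stray heat leak matters.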

The process of cooling something by pumping away from the dilute phase shows up in another, more familiar context: it’s the same process as when we cool tea by blowing away the steam. More tea can evaporate, cooling what remains.

Throughout this process, we follow the 3He for “dilute” and “concentrated” naming conventions for a couple of reasons. First, the stuff that’s circulating through the system is nearly all 3He and only a little 4He. The 4He stays in its superfluid state within the mixing chamber while the 3He is pumped through condensing lines. There can be a tiny tiny bit of 4He that creeps along the tubes up to the pumps (yes, “creep” is the technical term here, literally), where it can evaporate and be pumped along with the 3He, but that’s usually less than 1%. The second reason we follow the 3He is that it’s extremely expensive and rare, but that’s a different topic, and it involves international politics in the nuclear age.

[1] Image from F. Pobell: Matter and Methods at Low Temperature, 2nd ed., Springer-Verlag, New York (1995), via G. Ventura and L. Risegari: The Art of Cryogenics Low-Temperature Experimental Techniques, Elsevier, Oxford (2008).


Graduating, part 2: Final Thesis Revisions

Wednesday, November 26th, 2014

The doorway to the registrar’s office where the final thesis check takes place

I took an entire month between defending my thesis and depositing it with the grad school. During that month, I mostly revised my thesis, but also I took care of a bunch of logistical things I had been putting off until after the defense: subletting the apartment, selling the car, engaging movers, starting to pack… and of course putting comments into the thesis from the committee. I wrote back to my (now current) new boss who said we should chat again after I “come up for air” (which is a pretty accurate way of describing it). I went grocery shopping, and for the first time in months it was fun to walk around the store imagining and planning the things I could make in my kitchen. I had spare creative energy again!

Partly I needed a full month to revise the thesis because I was making changes to the analysis within the thesis right up to the day before I defended, and I changed the wording on the concluding sentences literally 20 minutes before I presented. I didn’t have time to polish the writing because the analysis was changing so much. The professor who gave me the most detailed comments was justifiably annoyed that he didn’t have sufficient time to read the whole dissertation before the defense. It worked out in the end, because the time he needed to finish reading was a time when I didn’t want to think about my thesis in any way. I even left town and visited friends in Chicago, just to break up the routine that had become so stressful. There’s nothing quite as nice as waking up to a cooked breakfast when you’ve forgotten that cooked breakfasts are an option.

There were still thesis revisions to implement. Some major comments reflected the fact that, while some chapters had been edited within a peer group, no one had read it cover-to-cover until after the defense. The professor who had the most detailed comments wrote a 12-page email detailing his suggestions, many of which were word substitutions and thus easy to implement. Apparently I have some tics in my formal writing style.

I use slightly too many (~1.2) semicolons per page of text; this reflects my inclination to use compound sentences but also avoid parentheses in formal writing. As my high school teacher, Perryman, taught me: if you have to use parentheses you’re not being confidently declarative, and if you ever want to use nested parentheses in a formal setting, figure out what you really want to say and just say it! (subtext: or figure out why you don’t want to say it, and don’t say it. No amount of parentheses can make a statement disappear.) Anyway, I’d rather have too many semicolons than too many parentheses; I’d rather be seen as too formal than too tentative. It’s the same argument, to me, that I’d rather wear too much black than too much pink. So, many of the semicolons stayed in despite the comments. Somehow, in the thesis haze, I didn’t think of the option of many simple single-clause sentences. Single-clause sentences are hard.

I also used the word “setup” over 100 times as a catch-all word to encompass all of the following: apparatus, configuration, software, procedure, hypothesis. I hadn’t noticed that, and I have no good reason for it, so now my thesis doesn’t use the word “setup” at all. I think. And if it does, it’s too late to change it now!
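Catching that kind of tic is easy to automate. A hypothetical sketch (the watchlist and threshold are my own inventions, not part of any real thesis toolchain):

```python
import re
from collections import Counter

# Hypothetical overuse checker: flag catch-all words that appear too often,
# the kind of check that would have caught "setup" appearing 100+ times.
WATCHLIST = {"setup", "very", "basically"}

def overused(text, threshold=2):
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    return {w: counts[w] for w in WATCHLIST if counts[w] >= threshold}

draft = "The setup worked. We changed the setup and reran the setup."
print(overused(draft))  # {'setup': 3}
```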

And of course there was the matter of completing the concluding paragraph so it matched the conclusion I presented in my defense seminar. That took some work. I also tried to produce some numbers to complete the description of my analysis in more detail than I needed for the defense seminar, just for archival completeness. But by the time I had fixed everything else, it was only a few hours until my deposit margin-check appointment (and also 2:30am), so I gave up on getting those numbers.

The deposit appointment was all of 5 minutes long, but marked the line between “almost done” and “DONE!!!”. The reviewing administrator realized this. She shook my hand three times in those 5 minutes. When it was done, I went outside and there were birds singing. I bought celebratory coffee and a new Wisconsin shirt. And then started packing up my apartment for the movers arriving the next morning.

During that month of re-entering society,  I had some weird conversations which reminded me how isolated I had been during the thesis. A friend who used to work in our office had started her own business, but I’d only had time to ask her about it once or perhaps twice. When we had a bit of time to catch up more, I asked how it had been during the last few months, and she replied that it had been a year. A year. It just went by and I didn’t notice, without the regular office interactions.

I’d gotten into a groove of watching a couple episodes each night of long-running TV shows with emotionally predictable episodic plot lines. Star Trek and various murder mysteries were big. The last series was “House, MD” with Hugh Laurie. By coincidence, when I defended my thesis and my stress level started deflating, I was almost exactly at the point in the series where they ran out of mysteries from the original book it was based on, and started going more into a soap-opera style character drama. By the time I wasn’t interested in the soap opera aspects anymore, it was time to start reengaging with my real-life friends.

A few days after I moved away from Madison, when I was staying with my parents, I picked up my high school routine of reading the local paper over breakfast, starting with the comics, then local editorials. I found (or rather, my dad found) myself criticizing the writing from the point of view of a dissertator. It takes more than a few days to get out of thesis-writing mode. The little nagging conscience doesn’t go away, still telling me that the difference between ok writing and great writing is important, more so now than at any point so far in my career. For the last edits of a PhD, it might be important to criticize at that level of detail. But for a local paper, pretty much anything is useful to the community.

At lunch Saturday in a little restaurant in the medieval part of the Italian village of Assergi, I found the antidote. When I can’t read any of the articles and posters on the walls, when I can’t carry on a conversation with more than 3-word sentences, it doesn’t matter anymore if the paragraphs have a clear and concise topic sentence. I need simple text. I’m happy if I can understand the general meaning. The humility of starting over again with Italian is the antidote for the anxiety of a thesis. It’s ok to look like a fool in some ways, because I am a certified non-fool in one small part of physics.

It’s not perfect of course: there’s still a lot of anxiety inherent in living in a country without speaking the language (well enough to get by without English-speaking help). I’ll write more about the cultural transition in another post, since I have so many posts to catch up on from while I was in the thesis-hole, and this post is definitely long enough. But for now, the thesis is over.


Graduating, part 1: The Defense

Tuesday, November 25th, 2014

It’s been a crazy 3 weeks since I officially finished my PhD. I’m in the transition from being a grad student slowly approaching insanity to a postdoc who has everything figured out, and it’s a rocky transition.

The end of the PhD at Wisconsin has two steps. The first is the defense, which is a formal presentation of my research to the professors and committee, our colleagues, and a few friends and family. The second is actually turning in the completed dissertation to the grad school, with the accompanying “margin check” appointment. In between, the professors can send me comments about the thesis. I’ve heard so many stories of different universities setting up the end of a degree differently that it’s not worth going into the details. If you or someone you know is going through this process, you don’t need a comparison of how it works at different schools, you just need a lot of support and coping mechanisms. All the coping mechanisms you can think of, you need them. It’s ok, it’s a limited time, don’t feel guilty, just get through it. There is an end, and you will reach it.

The days surrounding the defense were planned out fairly carefully, including a practice talk with my colleagues, again with my parents (who visited for the defense), and delivery burritos. I ordered coffee and doughnuts for the defense from the places where you get those, and I realized why such an important day has such a surprisingly small variety of foods: because deviating from the traditional food is so very far down my list of priorities when there’s the physics to think about, and the committee, and the writing. The doughnuts just aren’t worth messing with. Plus, the traditional place to get doughnuts is already really good.

We even upheld a tradition the night before the defense. It’s not really a tradition per se, but I’ve seen it once and performed it once, so that makes it a tradition. If you find it useful, you can call it an even stronger tradition! We played an entire soundtrack and sang along, with laptops open working on defense slides. When my friend was defending, we watched “Chicago” the musical, and I was a little hoarse the next day. When I was defending, we listened to Leonard Bernstein’s version of Voltaire’s “Candide,” which has some wonderful wordplay and beautiful writing for choruses. The closing message was the comforting thought that it’s not going to be perfect, but life will go on.

“We’re neither wise nor pure nor good, we’ll do the best we know. We’ll build our house, and chop our wood, and make our garden grow.”

Hearing that at the apex of thesis stress, I think it will always make me cry. By contrast, there’s also a scene in Candide depicting the absurd juxtaposition of a fun-filled fair centered around a religious inquisition and hanging. Every time someone said they were looking forward to seeing my defense, I thought of this hanging-festival scene. I wonder if Pangloss had to provide his own doughnuts.

The defense itself went about as I expected it would. The arguments I presented had been polished over the last year, the slides over the last couple weeks, and the wording over a few days. My outfit was chosen well in advance to be comfortable, professional, and otherwise unremarkable (and keep my hair out of my way). The seminar itself was scheduled for the time when we usually have lab group meetings, so the audience was the regular lab group albeit with a higher attendance-efficiency factor. The committee members were all present, even though one had to switch to a 6am flight into Madison to avoid impending flight cancellations. The questions from the committee mostly focused on understanding the implications of my results for other IceCube results, which I took to mean that my own work was presented well enough to not need further explanation.

It surprised me, in retrospect, how quickly the whole process went. The preparation took so long, but the defense itself went so quickly. From watching other people’s defenses, I knew to expect a few key moments: an introduction from my advisor, handshakes from many people at the end of the public session, the moment of walking out from the closed session to friends waiting in the hallway, and finally the first committee member coming out smiling to tell me they decided to pass me. I knew to look for these moments, and they went by so much faster in my own defense than I remember from my friends. Even though it went by so quickly, it still makes a difference having friends waiting in the hallway.

People asked me if it was a weight off my shoulders when I finally defended my thesis. It was, in a way, but even more it felt like cement shoes off my feet. Towards the end of the process, for the last year or so, a central part of myself felt professionally qualified, happy, and competent. I tried desperately to make that the main part. But until the PhD was finished, that part wasn’t the exterior truth. When I finished, I felt like the qualifications I had on paper matched how qualified I felt about myself. I’m still not an expert on many things, but I do know the dirty details of IceCube software and programming. I have my little corner of expertise, and no one can take that away. Degrees are different from job qualifications that way: if you stop working towards a PhD several years in, it doesn’t count as a fractional part of a degree; it’s just quitting. But if you work at almost any other job for a few years, you can more or less call it a few years of experience. A month before my defense, part of me knew I was so so so close to being done, but that didn’t mean I could take a break.

And now, I can take a break.


A Physicist and Historian Walk Into a Coffee Shop

Saturday, July 26th, 2014

It’s Saturday, so I’m at the coffee shop working on my thesis again. It’s become a tradition over the last year that I meet a writer friend each week, we catch up, have something to drink, and sit down for a few hours of good-quality writing time.


The work desk at the coffee shop: laptop, steamed pork bun, and rosebud latte.

We’ve gotten to know the coffee shop really well over the course of this year. It’s pretty new in the neighborhood, but dark and hidden enough that business is slow, and we don’t feel bad keeping a table for several hours. We have our favorite menu items, but we’ve tried most everything by now. Some mornings, the owner’s family comes in, and the kids watch cartoons at another table.

I work on my thesis mostly, or sometimes I’ll work on analysis that spills over from the week, or I’ll check on some scheduled jobs running on the computing cluster.

My friend Jason writes short stories, works on revising his novel (magical realism in ancient Egypt in the reign of Rameses XI), or drafts posts for his blog about the puzzles of the British constitution. We trade tips on how to organize notes and citations, and how to stay motivated. So I’ve been hearing a lot about the cultural difference between academic work in the humanities and the sciences. One of the big differences is the level of citation that’s expected.

As a particle physicist, when I write a paper it’s very clear which experiment I’m writing about. I only write about one experiment at a time, and I typically focus on a very small topic. Because of that, I’ve learned that the standard for making new claims is that you usually make one new claim per paper, and it’s highlighted in the abstract, introduction, and conclusion with a clear phrase like “the new contribution of this work is…” It’s easy to separate which work you claim as your own and which work is from others, because anything outside “the new contribution of this work” belongs to others. A single citation for each external experiment should suffice.

For academic work in history, the standard is much different: the writing itself is much closer to the original research. As a start, you’ll need a citation for each quote, going to sources that are as primary as you can get your hands on. The stranger idea for me is that you also need a citation for every analytical idea that someone else has come up with, and that a statement without a citation is implicitly claimed as original work. This shows up in the difference between Jason’s posts about modern constitutional issues and historical ones: the historical ones have huge source lists, while the modern ones are content with a few hyperlinks.

In both cases, things that are “common knowledge” don’t need to be cited, like the fact that TeV cosmic rays exist (they do) or the year that Elizabeth I ascended the throne (1558).

There’s a difference in the number of citations between modern physics research and history research. Is that because of the timing (historical versus modern) or the subject matter? Do they have different amounts of common knowledge? For modern topics in physics and in history, the sources are available online, so a hyperlink is a perfect reference, even in a formal post. By that standard, all Quantum Diaries posts should be fine with the hyperlink citation model. But even in those cases, Jason puts footnoted citations to modern articles in the JSTOR database, and uses more citations overall.

Another cool aspect of our coffee shop is that the music is sometimes ridiculous, and it interrupts my thoughts if I get stuck in some esoteric bog. There’s an oddly large sample of German covers of 30s and 40s showtunes. You haven’t lived until you’ve heard “The Lady is a Tramp” in German while calculating oscillation probabilities. I’m kidding. Mostly.

Jason has shown me a different way of handling citations, and I’ve taught him some of the basics of HTML, so now his citations can appear as hyperlinks to the references list!

As habits go, I’m proud of this social coffee shop habit. I default to getting stuff done, even if I’m feeling slightly off or uninspired.  The social reward of hanging out makes up for the slight activation energy of getting off my couch, and once I’m out of the house, it’s always easier to focus.  I miss prime Farmers’ Market time, but I could go before we meet. The friendship has been a wonderful supportive certainty over the last year, plus I get some perspective on my field compared to others.

Share

Welcome to Thesisland

Tuesday, July 22nd, 2014

When I joined Quantum Diaries, I did so with trepidation: while it was an exciting opportunity, I was worried that all I could write about was the process of writing a thesis and looking for postdoc jobs. I ended up telling the site admin exactly that: I only had time to work on a thesis and job hunt. I thought I was turning down the offer. But the reply I got was along the lines of “It’s great to know what topics you’ll write about! When can we expect a post?”. So, despite the fact that this is a very different topic from any recent QD posts, I’m starting a series about the process of writing a physics PhD thesis. Welcome.

The main thesis editing desk: laptop, external monitor keyboard mouse; coffee, water; notes; and lots of encouragement.

There are as many approaches to writing a PhD thesis as there are PhDs, but they can be broadly described along a spectrum.

On one end is the “constant documentation” approach: spend some fixed fraction of your time on documenting every project you work on. In this approach, the writing phase is completely integrated with the research work, and it’s easy to remember the things you’re writing about. There is a big disadvantage: it’s really easy to write too much, to spend too much time writing and not enough doing, or otherwise un-balance your time. If you keep a constant fraction of your schedule dedicated to writing, and that fraction is (in retrospect) too big, you’ve lost a lot of time. But you have documented everything, which everyone who comes after will be grateful for. If they ever see your work.

The other end of the spectrum is the “write like hell” approach (that is, write as fast as you can), where all the research is completed and approved before writing starts. This has the advantage that if you (and your committee) decide you’ve written enough, you immediately get a PhD! The disadvantage is that if you have to write about old projects, you’ll probably have forgotten a lot. So this approach typically leads to shorter theses.

These two extremes were first described to me (see the effect of thesis writing? It’s making my blog voice go all weird and passive) by two professors who were in grad school together and still work together. Each took one approach, and they both did fine, but the “constant documentation” thesis was at least twice (or was it three times?) as long as the “write like hell” thesis.

Somewhere between those extremes is the funny phenomenon of the “staple thesis”: a thesis primarily composed of all the papers you wrote in grad school, stapled together. A few of my friends have done this, but it’s not common in my research group because our collaboration is so large. I’ll discuss that in more detail later.

I’m going for something in the middle: as soon as I saw a light at the end of the tunnel, I wanted to start writing, so I downloaded the UW LaTeX template for PhD theses and started filling it in. It’s been about 14 months since then, with huge variations in the writing/research balance. To help balance between the two approaches, I’ve found it helpful to keep at least some notes about all the physics I do, but nothing too polished: it’s always easier to start from some notes, however minimal, than to start from nothing.

When I started writing, there were lots of topics available that needed some discussion: history and theory, my detector, all the calibration work I did for my master’s project–I could have gone full-time writing at that point and had plenty to do. But my main research project wasn’t done yet. So for me, it’s not just a matter of balancing “doing” with “documenting”; it’s also a question of balancing old documentation with current documentation. I’ve almost, *almost* finished writing the parts that don’t depend on my work from the last year or so. In the meantime, I’m still finishing the last bits of analysis work.

It’s all a very long process. How many readers are looking towards writing a thesis later on? How many have gone through this and found a method that served them well? If it was fast and relatively low-stress, would you tell me about it?

Share

Why We Need an Event Viewer

Monday, June 30th, 2014

There’s a software tool I use almost every day, for almost any work situation. It’s good for designing event selections, for brainstorming about systematic errors, and for mesmerizing kids at outreach events. It’s good anytime you want to build intuition about the detector. It’s our event viewer. In this post, I explain a bit about how I use our event viewer, and also share the perspective of code architect Steve Jackson, who put the code together.

Steamshovel event viewer showing the event Mr. Snuffleupagus

The IceCube detector is buried in the glacier under the South Pole. The signals can only be read out electronically; there’s no way to reach the detector modules after the ice freezes around them. In designing the detector, we carefully considered what readout we would need to describe what happens in the ice, and now we’re at the stage of interpreting that data. A signal from one detector module might tell us the time, amplitude, and duration of light arriving at that detector, and we put those together into a picture of the detector. From five thousand points of light (or darkness), we have to answer: where did this particle come from? Does the random detector noise act the way we think it acts? Is the disruption from dust in the ice the same in all directions? All these questions are answerable, but the answers take some teasing out.

To help build our intuition, we use event viewer software to make animated views of interesting events. It’s one of our most useful tools as physicist-programmers. Like all bits of our software, it’s written within the collaboration, based on lots of open-source software, and unique to our experiment. It’s called “steamshovel,” a joke on the idea that you use it to dig through ice (actually, dig through IceCube data – but that’s the joke).

Meet Steve Jackson and Steamshovel

IceCube data from the event Mr. Snuffleupagus

Steve Jackson’s job on IceCube was originally maintaining the central software, a very broad job description. His background is in software including visualizations, and he’s worked as The Software Guy in several different physics contexts, including medical, nuclear, and astrophysics. After becoming acquainted with IceCube software needs, he narrowed his focus to building an upgraded version of the event viewer from scratch.

The idea of the new viewer, Steamshovel, was to write a general core in the programming language C++, and then higher-level functionality in Python. This splits the problem of drawing physics in the detector into two smaller problems: how to translate physics into easily describable shapes, like spheres and lines, and how to draw those spheres and lines in the most useful way. Separating these two levels makes the code easier to maintain, easier to update at the core, and easier for other people to extend with new physics ideas, but it doesn’t make it easier to write in the first place. (I’ll add: that’s why we hire a professional!) Steve says the process took about as long as he could have expected, considering Hofstadter’s Law, and he’s happy with the final product.

A Layer of Indirection 

As Steve told me, “Every problem in computer science can be addressed by adding a layer of indirection: some sort of intermediate layer where you abstract the relevant concepts into a higher level.” The extra level here is the set of lines and spheres that get passed from the Python code to the C++ code. By separating the defining from the drawing, this intermediate level makes it simpler to define new kinds of objects to draw.

A solid backbone, written with OpenGL in C++, empowers the average grad student to write visualization “artists” as Python classes. These artists can connect novel physics ideas, written in Python, to the C++ backbone, without the grad student having to get into the details of OpenGL or, hopefully, any C++.
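To make that division of labor concrete, here is a minimal sketch of the pattern: a Python “artist” class that translates detector hits into simple drawing primitives (spheres) for a lower-level backbone to render. All class, method, and field names here are invented for illustration; the real Steamshovel API differs.

```python
# Hypothetical sketch of the artist pattern: physics in, primitives out.
# The names (Sphere, HitArtist, draw) are invented for this example.
from dataclasses import dataclass

@dataclass
class Sphere:
    """A drawing primitive the rendering backbone knows how to draw."""
    x: float
    y: float
    z: float
    radius: float
    color: tuple  # (R, G, B), each in [0, 1]

class HitArtist:
    """Turn detector hits into spheres: position from the module,
    radius from deposited charge, color from arrival time."""
    def draw(self, hits):
        primitives = []
        for hit in hits:
            # Earlier hits red, later hits blue (simple linear time ramp).
            t = min(hit["time"] / 1000.0, 1.0)
            primitives.append(Sphere(
                x=hit["x"], y=hit["y"], z=hit["z"],
                radius=0.5 + hit["charge"],   # bigger charge, bigger sphere
                color=(1.0 - t, 0.0, t),
            ))
        return primitives

hits = [{"x": 0.0, "y": 0.0, "z": -450.0, "time": 100.0, "charge": 1.5}]
spheres = HitArtist().draw(hits)
print(len(spheres), spheres[0].radius)  # prints: 1 2.0
```

The point of the pattern is that the artist never touches OpenGL: it only emits a list of primitives, and the backbone decides how to put them on screen.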

Here’s a test of that simplicity: as part of our week-long, whirlwind introduction to IceCube software, we taught new students how to write a new Steamshovel artist. With just a week of software training, they were able to produce working artists, a testament to the usability of the Steamshovel backbone.

This separation also lets the backbone include important design details that might not occur to the average grad student, but make the final product more elegant. One such detail is that the user can specify zoom levels much more easily, so graphics are not limited to the size of your computer screen. Making high-resolution graphics suitable for publication is possible and easy. Using these new views, we’ve made magazine covers, t-shirts, even temporary tattoos.

Many Platforms, Many People

IceCube is in the interesting situation of supporting (and having users run) our software on many different UNIX operating systems: Mac, Ubuntu, Red Hat, Fedora, Scientific Linux, even FreeBSD. But we don’t test our software on Windows, which is the standard for many complex visualization packages: yet another good reason to use the simpler OpenGL. “For cross-platform 3D graphics,” Steve says, “OpenGL is the low-level drawing API.”

As visualization software goes, the IceCube case is relatively simple. You can describe all the interesting things with lines and spheres, like dots for detector modules, lines and cylinders for the cables connecting them or for particle tracks, and spheres of configurable color and size for hits within the detector. There’s relatively little motion beyond appearing, disappearing, and changing sizes. The light source never moves. I would add that this is nothing – nothing! – like Pixar. These simplifications mean that the more complex software packages that Steve had the option to use were unnecessarily complex, full of options that he would never use, and the simple, open-standard OpenGL was perfectly sufficient.

The process of writing Steamshovel wasn’t just a one-man job (even though I only talked to one person for this post). Steve solicited, and received, ideas for features from all over the collaboration. I personally remember that when he started working here, he took the diligent and kind step of sitting and talking to several of us while we used the old event viewer, just to see what the workflow was like, the good parts and the bad. One particularly collaborative sub-project started when one IceCube grad student, Jakob, had the clever idea of displaying Monte Carlo true Cherenkov cones. We know where the simulated light emissions are, and how the light travels through the ice – could we display the light cone arriving at the detector modules and see whether a particular hit occurred at the same time? Putting together the code to make this happen involved several people (mainly Jakob and Steve), and wouldn’t have been possible coding in isolation.

Visual Cortex Processing

The moment that best captured the purpose of a good event viewer, Steve says, was when he animated an event for the first time. Specifically, he made the observed phototube pulses disappear as the charge died away, letting him see what happens on a phototube after the first signal. Animating the signal pulses made the afterpulsing “blindingly obvious.”

We know, on an intellectual level, that phototubes display afterpulsing, and it’s especially strong and likely after a strong signal pulse. But there’s a difference between knowing, intellectually, that a certain fraction of pulses will produce afterpulses and seeing those afterpulses displayed. We process information very differently if we can see it directly than if we have to construct a model in our heads based on interpreting numbers, or even graphs. An animation connects more deeply to our intuition and natural instinctive processes.

As Steve put it: “It brings to sharp relief something you only knew about in sort of a complex, long thought out way. The cool thing about visualization is that you can get things onto a screen that your brain will notice pre-cognitively; you don’t even have to consciously think to distinguish between a red square and a blue square. So even if you know that two things are different, from having looked carefully through the math, if you see those things in a picture, the difference jumps out without you even having to think about it. Your visual cortex does the work for you. […] That was one of the coolest moments for me, when these people who understood the physics in a deep way nonetheless were able to get new insights on it just by seeing the data displayed in a new way. ”

And that’s why we need event viewers.

Share

IceCube DeepCore and Atmospheric Neutrino Mixing

Tuesday, June 3rd, 2014

Today at the Neutrino2014 conference in Boston, the IceCube collaboration showed an analysis looking for standard atmospheric neutrino oscillations in the 20-30 GeV region. Although IceCube has seen oscillations before, and reported them in a poster at the last Neutrino conference, in 2012, this plenary talk showed the first analysis where the IceCube error bands are becoming competitive with other oscillation experiments.

Neutrino oscillation is a phenomenon where neutrinos change from one flavor to another as they travel; it’s a purely quantum phenomenon. It has been observed in several contexts, including particle accelerators, nuclear reactors, cosmic rays hitting the atmosphere, and neutrinos traveling from our Sun. It is the first widely accepted phenomenon in particle physics that requires an extension to the Standard Model, whose capstone was the observation of the Higgs boson at CERN. Neutrinos and neutrino oscillations represent the next stage of particle physics, beyond the Higgs.

Of the parameters used to describe neutrino oscillations, most have been previously measured. The mixing angles that describe oscillations are the most recent focus of measurement. Just two years ago, the last of the neutrino mixing angles was measured by the Daya Bay experiment. Of the remaining mixing angles, the atmospheric angle accessible to IceCube remains the least constrained by experimental measurements.

IceCube, because of its size, is in a unique position to measure the atmospheric mixing angle. Considering neutrinos that traverse the diameter of the Earth, the oscillation effect is the strongest in the energy region from 20 to 30 GeV, and an experiment that can contain a 20 GeV neutrino interaction must be very large. The Super Kamiokande experiment in Japan, for example, also measures atmospheric oscillations, but because of its small size relative to IceCube, Super Kamiokande can’t resolve energies above a few GeV. At any higher energies, the detector is simply saturated. Other experiments can measure the same mixing angle using accelerator beamlines, like the MINOS experiment that sends neutrinos from Fermilab to Minnesota. Corroborating these observations from several experimental methods and separate experiments proves the strength of the oscillation framework.
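As a rough sketch of why that energy band is special: in the two-flavor approximation, the muon-neutrino survival probability is P ≈ 1 − sin²(2θ) · sin²(1.27 Δm² L/E), and an Earth-diameter baseline with typical atmospheric parameters (here I assume Δm² ≈ 2.4×10⁻³ eV² and maximal mixing, for illustration only) puts the first oscillation maximum right in the 20–30 GeV range.

```python
import math

def survival_prob(energy_gev, baseline_km=12742.0,
                  dm2_ev2=2.4e-3, sin2_2theta=1.0):
    """Two-flavor muon-neutrino survival probability,
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with L in km, dm^2 in eV^2, and E in GeV."""
    phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# First oscillation maximum (survival minimum) occurs where the
# phase equals pi/2, i.e. E = 2 * 1.27 * dm^2 * L / pi.
e_max = 2 * 1.27 * 2.4e-3 * 12742.0 / math.pi
print(round(e_max, 1))  # prints: 24.7  -- squarely in the 20-30 GeV band
```

The exact numbers depend on the measured oscillation parameters, but the back-of-the-envelope result shows why containing tens-of-GeV events matters for this measurement.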

The sheer size of IceCube means that neutrinos have many chances to interact and be observed within the detector, giving IceCube a statistical advantage over other oscillation experiments. Even after selecting only the best-reconstructed events, the remaining sample still has over five thousand events from three years of data. Previous atmospheric oscillation experiments based their analyses on hundreds of events or fewer, counting instead on precise understanding of systematic effects.

The IceCube collaboration is composed of more than 250 scientists from about 40 institutions around the world, mostly from the United States and Europe. The current results are possible because of decades of planning and construction, dedicated detector operations, and precise calibrations from all over the IceCube collaboration.

IceCube has several major talks at the Neutrino conference this year, the first time that the collaboration has had such a prominent presence. In addition to the new oscillations result, Gary Hill spoke in the opening session about the high energy astrophysical neutrinos observed over the last few years. Darren Grant spoke about the proposed PINGU infill array, which was officially encouraged in the recent P5 report. IceCube contributed nine posters on far-ranging topics from calibration and reconstruction methods to a neutrino-GRB correlation search. The conference-inspired display at the MIT museum is about half IceCube material, including an 8-foot tall LED model of the detector. One of three public museum talks on Saturday will be from (yours truly) Laura Gladstone about the basics of IceCube science and life at the South Pole.

One new aspect of the new oscillation analysis is that it uses an energy reconstruction designed for the low end of the energy range available to IceCube, in the tens-of-GeV range. In this range, only a handful of hits are visible for each event, and reconstructing directional information can be tricky. “We took a simple but very clever idea from the ANTARES Collaboration, and rehashed it to tackle one of our biggest uncertainties: the optical properties of the ice. It turned out to work surprisingly well,” says IceCuber Juan Pablo Yanez Garza, who brought the new reconstruction to IceCube, and presented the result in Boston.  By considering only the detector hits that arrive without scattering, the reconstruction algorithm is more robust against systematic errors in the understanding of the glacial ice in which IceCube is built. 

Share